Dec 6 01:46:23 localhost kernel: Linux version 5.14.0-284.11.1.el9_2.x86_64 (mockbuild@x86-vm-09.build.eng.bos.redhat.com) (gcc (GCC) 11.3.1 20221121 (Red Hat 11.3.1-4), GNU ld version 2.35.2-37.el9) #1 SMP PREEMPT_DYNAMIC Wed Apr 12 10:45:03 EDT 2023
Dec 6 01:46:23 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Dec 6 01:46:23 localhost kernel: Command line: BOOT_IMAGE=(hd0,gpt3)/vmlinuz-5.14.0-284.11.1.el9_2.x86_64 root=UUID=a3dd82de-ffc6-4652-88b9-80e003b8f20a console=tty0 console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-4G:192M,4G-64G:256M,64G-:512M
Dec 6 01:46:23 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 6 01:46:23 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 6 01:46:23 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 6 01:46:23 localhost kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 6 01:46:23 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Dec 6 01:46:23 localhost kernel: signal: max sigframe size: 1776
Dec 6 01:46:23 localhost kernel: BIOS-provided physical RAM map:
Dec 6 01:46:23 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 6 01:46:23 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 6 01:46:23 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 6 01:46:23 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Dec 6 01:46:23 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Dec 6 01:46:23 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 6 01:46:23 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 6 01:46:23 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000043fffffff] usable
Dec 6 01:46:23 localhost kernel: NX (Execute Disable) protection: active
Dec 6 01:46:23 localhost kernel: SMBIOS 2.8 present.
Dec 6 01:46:23 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Dec 6 01:46:23 localhost kernel: Hypervisor detected: KVM
Dec 6 01:46:23 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 6 01:46:23 localhost kernel: kvm-clock: using sched offset of 1817873558 cycles
Dec 6 01:46:23 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 6 01:46:23 localhost kernel: tsc: Detected 2799.998 MHz processor
Dec 6 01:46:23 localhost kernel: last_pfn = 0x440000 max_arch_pfn = 0x400000000
Dec 6 01:46:23 localhost kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 6 01:46:23 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Dec 6 01:46:23 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Dec 6 01:46:23 localhost kernel: Using GB pages for direct mapping
Dec 6 01:46:23 localhost kernel: RAMDISK: [mem 0x2eef4000-0x33771fff]
Dec 6 01:46:23 localhost kernel: ACPI: Early table checksum verification disabled
Dec 6 01:46:23 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Dec 6 01:46:23 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 6 01:46:23 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 6 01:46:23 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 6 01:46:23 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Dec 6 01:46:23 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 6 01:46:23 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 6 01:46:23 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Dec 6 01:46:23 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Dec 6 01:46:23 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Dec 6 01:46:23 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Dec 6 01:46:23 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Dec 6 01:46:23 localhost kernel: No NUMA configuration found
Dec 6 01:46:23 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000043fffffff]
Dec 6 01:46:23 localhost kernel: NODE_DATA(0) allocated [mem 0x43ffd5000-0x43fffffff]
Dec 6 01:46:23 localhost kernel: Reserving 256MB of memory at 2800MB for crashkernel (System RAM: 16383MB)
Dec 6 01:46:23 localhost kernel: Zone ranges:
Dec 6 01:46:23 localhost kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 6 01:46:23 localhost kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Dec 6 01:46:23 localhost kernel: Normal [mem 0x0000000100000000-0x000000043fffffff]
Dec 6 01:46:23 localhost kernel: Device empty
Dec 6 01:46:23 localhost kernel: Movable zone start for each node
Dec 6 01:46:23 localhost kernel: Early memory node ranges
Dec 6 01:46:23 localhost kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 6 01:46:23 localhost kernel: node 0: [mem 0x0000000000100000-0x00000000bffdafff]
Dec 6 01:46:23 localhost kernel: node 0: [mem 0x0000000100000000-0x000000043fffffff]
Dec 6 01:46:23 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000043fffffff]
Dec 6 01:46:23 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 6 01:46:23 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 6 01:46:23 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Dec 6 01:46:23 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Dec 6 01:46:23 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 6 01:46:23 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 6 01:46:23 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 6 01:46:23 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 6 01:46:23 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 6 01:46:23 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 6 01:46:23 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 6 01:46:23 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 6 01:46:23 localhost kernel: TSC deadline timer available
Dec 6 01:46:23 localhost kernel: smpboot: Allowing 8 CPUs, 0 hotplug CPUs
Dec 6 01:46:23 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Dec 6 01:46:23 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Dec 6 01:46:23 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Dec 6 01:46:23 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Dec 6 01:46:23 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Dec 6 01:46:23 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Dec 6 01:46:23 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Dec 6 01:46:23 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Dec 6 01:46:23 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Dec 6 01:46:23 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Dec 6 01:46:23 localhost kernel: Booting paravirtualized kernel on KVM
Dec 6 01:46:23 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 6 01:46:23 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Dec 6 01:46:23 localhost kernel: percpu: Embedded 55 pages/cpu s188416 r8192 d28672 u262144
Dec 6 01:46:23 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Dec 6 01:46:23 localhost kernel: Fallback order for Node 0: 0
Dec 6 01:46:23 localhost kernel: Built 1 zonelists, mobility grouping on. Total pages: 4128475
Dec 6 01:46:23 localhost kernel: Policy zone: Normal
Dec 6 01:46:23 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,gpt3)/vmlinuz-5.14.0-284.11.1.el9_2.x86_64 root=UUID=a3dd82de-ffc6-4652-88b9-80e003b8f20a console=tty0 console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-4G:192M,4G-64G:256M,64G-:512M
Dec 6 01:46:23 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,gpt3)/vmlinuz-5.14.0-284.11.1.el9_2.x86_64", will be passed to user space.
Dec 6 01:46:23 localhost kernel: Dentry cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Dec 6 01:46:23 localhost kernel: Inode-cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Dec 6 01:46:23 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 6 01:46:23 localhost kernel: software IO TLB: area num 8.
Dec 6 01:46:23 localhost kernel: Memory: 2873456K/16776676K available (14342K kernel code, 5536K rwdata, 10180K rodata, 2792K init, 7524K bss, 741260K reserved, 0K cma-reserved)
Dec 6 01:46:23 localhost kernel: random: get_random_u64 called from kmem_cache_open+0x1e/0x210 with crng_init=0
Dec 6 01:46:23 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Dec 6 01:46:23 localhost kernel: ftrace: allocating 44803 entries in 176 pages
Dec 6 01:46:23 localhost kernel: ftrace: allocated 176 pages with 3 groups
Dec 6 01:46:23 localhost kernel: Dynamic Preempt: voluntary
Dec 6 01:46:23 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 6 01:46:23 localhost kernel: rcu: 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Dec 6 01:46:23 localhost kernel: 	Trampoline variant of Tasks RCU enabled.
Dec 6 01:46:23 localhost kernel: 	Rude variant of Tasks RCU enabled.
Dec 6 01:46:23 localhost kernel: 	Tracing variant of Tasks RCU enabled.
Dec 6 01:46:23 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 6 01:46:23 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Dec 6 01:46:23 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Dec 6 01:46:23 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 6 01:46:23 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Dec 6 01:46:23 localhost kernel: random: crng init done (trusting CPU's manufacturer)
Dec 6 01:46:23 localhost kernel: Console: colour VGA+ 80x25
Dec 6 01:46:23 localhost kernel: printk: console [tty0] enabled
Dec 6 01:46:23 localhost kernel: printk: console [ttyS0] enabled
Dec 6 01:46:23 localhost kernel: ACPI: Core revision 20211217
Dec 6 01:46:23 localhost kernel: APIC: Switch to symmetric I/O mode setup
Dec 6 01:46:23 localhost kernel: x2apic enabled
Dec 6 01:46:23 localhost kernel: Switched APIC routing to physical x2apic.
Dec 6 01:46:23 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Dec 6 01:46:23 localhost kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Dec 6 01:46:23 localhost kernel: pid_max: default: 32768 minimum: 301
Dec 6 01:46:23 localhost kernel: LSM: Security Framework initializing
Dec 6 01:46:23 localhost kernel: Yama: becoming mindful.
Dec 6 01:46:23 localhost kernel: SELinux: Initializing.
Dec 6 01:46:23 localhost kernel: LSM support for eBPF active
Dec 6 01:46:23 localhost kernel: Mount-cache hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 6 01:46:23 localhost kernel: Mountpoint-cache hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 6 01:46:23 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 6 01:46:23 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec 6 01:46:23 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec 6 01:46:23 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 6 01:46:23 localhost kernel: Spectre V2 : Mitigation: Retpolines
Dec 6 01:46:23 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 6 01:46:23 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 6 01:46:23 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Dec 6 01:46:23 localhost kernel: RETBleed: Mitigation: untrained return thunk
Dec 6 01:46:23 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 6 01:46:23 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 6 01:46:23 localhost kernel: Freeing SMP alternatives memory: 36K
Dec 6 01:46:23 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Dec 6 01:46:23 localhost kernel: cblist_init_generic: Setting adjustable number of callback queues.
Dec 6 01:46:23 localhost kernel: cblist_init_generic: Setting shift to 3 and lim to 1.
Dec 6 01:46:23 localhost kernel: cblist_init_generic: Setting shift to 3 and lim to 1.
Dec 6 01:46:23 localhost kernel: cblist_init_generic: Setting shift to 3 and lim to 1.
Dec 6 01:46:23 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec 6 01:46:23 localhost kernel: ... version: 0
Dec 6 01:46:23 localhost kernel: ... bit width: 48
Dec 6 01:46:23 localhost kernel: ... generic registers: 6
Dec 6 01:46:23 localhost kernel: ... value mask: 0000ffffffffffff
Dec 6 01:46:23 localhost kernel: ... max period: 00007fffffffffff
Dec 6 01:46:23 localhost kernel: ... fixed-purpose events: 0
Dec 6 01:46:23 localhost kernel: ... event mask: 000000000000003f
Dec 6 01:46:23 localhost kernel: rcu: Hierarchical SRCU implementation.
Dec 6 01:46:23 localhost kernel: rcu: 	Max phase no-delay instances is 400.
Dec 6 01:46:23 localhost kernel: smp: Bringing up secondary CPUs ...
Dec 6 01:46:23 localhost kernel: x86: Booting SMP configuration:
Dec 6 01:46:23 localhost kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7
Dec 6 01:46:23 localhost kernel: smp: Brought up 1 node, 8 CPUs
Dec 6 01:46:23 localhost kernel: smpboot: Max logical packages: 8
Dec 6 01:46:23 localhost kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Dec 6 01:46:23 localhost kernel: node 0 deferred pages initialised in 23ms
Dec 6 01:46:23 localhost kernel: devtmpfs: initialized
Dec 6 01:46:23 localhost kernel: x86/mm: Memory block size: 128MB
Dec 6 01:46:23 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 6 01:46:23 localhost kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Dec 6 01:46:23 localhost kernel: pinctrl core: initialized pinctrl subsystem
Dec 6 01:46:23 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 6 01:46:23 localhost kernel: DMA: preallocated 2048 KiB GFP_KERNEL pool for atomic allocations
Dec 6 01:46:23 localhost kernel: DMA: preallocated 2048 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 6 01:46:23 localhost kernel: DMA: preallocated 2048 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 6 01:46:23 localhost kernel: audit: initializing netlink subsys (disabled)
Dec 6 01:46:23 localhost kernel: audit: type=2000 audit(1765003581.413:1): state=initialized audit_enabled=0 res=1
Dec 6 01:46:23 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Dec 6 01:46:23 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 6 01:46:23 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 6 01:46:23 localhost kernel: cpuidle: using governor menu
Dec 6 01:46:23 localhost kernel: HugeTLB: can optimize 4095 vmemmap pages for hugepages-1048576kB
Dec 6 01:46:23 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 6 01:46:23 localhost kernel: PCI: Using configuration type 1 for base access
Dec 6 01:46:23 localhost kernel: PCI: Using configuration type 1 for extended access
Dec 6 01:46:23 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 6 01:46:23 localhost kernel: HugeTLB: can optimize 7 vmemmap pages for hugepages-2048kB
Dec 6 01:46:23 localhost kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Dec 6 01:46:23 localhost kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Dec 6 01:46:23 localhost kernel: cryptd: max_cpu_qlen set to 1000
Dec 6 01:46:23 localhost kernel: ACPI: Added _OSI(Module Device)
Dec 6 01:46:23 localhost kernel: ACPI: Added _OSI(Processor Device)
Dec 6 01:46:23 localhost kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 6 01:46:23 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 6 01:46:23 localhost kernel: ACPI: Added _OSI(Linux-Dell-Video)
Dec 6 01:46:23 localhost kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Dec 6 01:46:23 localhost kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Dec 6 01:46:23 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 6 01:46:23 localhost kernel: ACPI: Interpreter enabled
Dec 6 01:46:23 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Dec 6 01:46:23 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Dec 6 01:46:23 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 6 01:46:23 localhost kernel: PCI: Using E820 reservations for host bridge windows
Dec 6 01:46:23 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Dec 6 01:46:23 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 6 01:46:23 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Dec 6 01:46:23 localhost kernel: acpiphp: Slot [3] registered
Dec 6 01:46:23 localhost kernel: acpiphp: Slot [4] registered
Dec 6 01:46:23 localhost kernel: acpiphp: Slot [5] registered
Dec 6 01:46:23 localhost kernel: acpiphp: Slot [6] registered
Dec 6 01:46:23 localhost kernel: acpiphp: Slot [7] registered
Dec 6 01:46:23 localhost kernel: acpiphp: Slot [8] registered
Dec 6 01:46:23 localhost kernel: acpiphp: Slot [9] registered
Dec 6 01:46:23 localhost kernel: acpiphp: Slot [10] registered
Dec 6 01:46:23 localhost kernel: acpiphp: Slot [11] registered
Dec 6 01:46:23 localhost kernel: acpiphp: Slot [12] registered
Dec 6 01:46:23 localhost kernel: acpiphp: Slot [13] registered
Dec 6 01:46:23 localhost kernel: acpiphp: Slot [14] registered
Dec 6 01:46:23 localhost kernel: acpiphp: Slot [15] registered
Dec 6 01:46:23 localhost kernel: acpiphp: Slot [16] registered
Dec 6 01:46:23 localhost kernel: acpiphp: Slot [17] registered
Dec 6 01:46:23 localhost kernel: acpiphp: Slot [18] registered
Dec 6 01:46:23 localhost kernel: acpiphp: Slot [19] registered
Dec 6 01:46:23 localhost kernel: acpiphp: Slot [20] registered
Dec 6 01:46:23 localhost kernel: acpiphp: Slot [21] registered
Dec 6 01:46:23 localhost kernel: acpiphp: Slot [22] registered
Dec 6 01:46:23 localhost kernel: acpiphp: Slot [23] registered
Dec 6 01:46:23 localhost kernel: acpiphp: Slot [24] registered
Dec 6 01:46:23 localhost kernel: acpiphp: Slot [25] registered
Dec 6 01:46:23 localhost kernel: acpiphp: Slot [26] registered
Dec 6 01:46:23 localhost kernel: acpiphp: Slot [27] registered
Dec 6 01:46:23 localhost kernel: acpiphp: Slot [28] registered
Dec 6 01:46:23 localhost kernel: acpiphp: Slot [29] registered
Dec 6 01:46:23 localhost kernel: acpiphp: Slot [30] registered
Dec 6 01:46:23 localhost kernel: acpiphp: Slot [31] registered
Dec 6 01:46:23 localhost kernel: PCI host bridge to bus 0000:00
Dec 6 01:46:23 localhost kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 6 01:46:23 localhost kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 6 01:46:23 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 6 01:46:23 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 6 01:46:23 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x440000000-0x4bfffffff window]
Dec 6 01:46:23 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 6 01:46:23 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Dec 6 01:46:23 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Dec 6 01:46:23 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Dec 6 01:46:23 localhost kernel: pci 0000:00:01.1: reg 0x20: [io 0xc140-0xc14f]
Dec 6 01:46:23 localhost kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Dec 6 01:46:23 localhost kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Dec 6 01:46:23 localhost kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Dec 6 01:46:23 localhost kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Dec 6 01:46:23 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Dec 6 01:46:23 localhost kernel: pci 0000:00:01.2: reg 0x20: [io 0xc100-0xc11f]
Dec 6 01:46:23 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Dec 6 01:46:23 localhost kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Dec 6 01:46:23 localhost kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Dec 6 01:46:23 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Dec 6 01:46:23 localhost kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Dec 6 01:46:23 localhost kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Dec 6 01:46:23 localhost kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff]
Dec 6 01:46:23 localhost kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref]
Dec 6 01:46:23 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 6 01:46:23 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Dec 6 01:46:23 localhost kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf]
Dec 6 01:46:23 localhost kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff]
Dec 6 01:46:23 localhost kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Dec 6 01:46:23 localhost kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref]
Dec 6 01:46:23 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Dec 6 01:46:23 localhost kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Dec 6 01:46:23 localhost kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff]
Dec 6 01:46:23 localhost kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Dec 6 01:46:23 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00
Dec 6 01:46:23 localhost kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff]
Dec 6 01:46:23 localhost kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Dec 6 01:46:23 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00
Dec 6 01:46:23 localhost kernel: pci 0000:00:06.0: reg 0x10: [io 0xc120-0xc13f]
Dec 6 01:46:23 localhost kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Dec 6 01:46:23 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 6 01:46:23 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 6 01:46:23 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 6 01:46:23 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 6 01:46:23 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec 6 01:46:23 localhost kernel: iommu: Default domain type: Translated
Dec 6 01:46:23 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 6 01:46:23 localhost kernel: SCSI subsystem initialized
Dec 6 01:46:23 localhost kernel: ACPI: bus type USB registered
Dec 6 01:46:23 localhost kernel: usbcore: registered new interface driver usbfs
Dec 6 01:46:23 localhost kernel: usbcore: registered new interface driver hub
Dec 6 01:46:23 localhost kernel: usbcore: registered new device driver usb
Dec 6 01:46:23 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 6 01:46:23 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Dec 6 01:46:23 localhost kernel: PTP clock support registered
Dec 6 01:46:23 localhost kernel: EDAC MC: Ver: 3.0.0
Dec 6 01:46:23 localhost kernel: NetLabel: Initializing
Dec 6 01:46:23 localhost kernel: NetLabel: domain hash size = 128
Dec 6 01:46:23 localhost kernel: NetLabel: protocols = UNLABELED CIPSOv4 CALIPSO
Dec 6 01:46:23 localhost kernel: NetLabel: unlabeled traffic allowed by default
Dec 6 01:46:23 localhost kernel: PCI: Using ACPI for IRQ routing
Dec 6 01:46:23 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Dec 6 01:46:23 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Dec 6 01:46:23 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 6 01:46:23 localhost kernel: vgaarb: loaded
Dec 6 01:46:23 localhost kernel: clocksource: Switched to clocksource kvm-clock
Dec 6 01:46:23 localhost kernel: VFS: Disk quotas dquot_6.6.0
Dec 6 01:46:23 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 6 01:46:23 localhost kernel: pnp: PnP ACPI init
Dec 6 01:46:23 localhost kernel: pnp: PnP ACPI: found 5 devices
Dec 6 01:46:23 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 6 01:46:23 localhost kernel: NET: Registered PF_INET protocol family
Dec 6 01:46:23 localhost kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 6 01:46:23 localhost kernel: tcp_listen_portaddr_hash hash table entries: 8192 (order: 5, 131072 bytes, linear)
Dec 6 01:46:23 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 6 01:46:23 localhost kernel: TCP established hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 6 01:46:23 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Dec 6 01:46:23 localhost kernel: TCP: Hash tables configured (established 131072 bind 65536)
Dec 6 01:46:23 localhost kernel: MPTCP token hash table entries: 16384 (order: 6, 393216 bytes, linear)
Dec 6 01:46:23 localhost kernel: UDP hash table entries: 8192 (order: 6, 262144 bytes, linear)
Dec 6 01:46:23 localhost kernel: UDP-Lite hash table entries: 8192 (order: 6, 262144 bytes, linear)
Dec 6 01:46:23 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 6 01:46:23 localhost kernel: NET: Registered PF_XDP protocol family
Dec 6 01:46:23 localhost kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 6 01:46:23 localhost kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 6 01:46:23 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 6 01:46:23 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Dec 6 01:46:23 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x440000000-0x4bfffffff window]
Dec 6 01:46:23 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Dec 6 01:46:23 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec 6 01:46:23 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Dec 6 01:46:23 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x140 took 26855 usecs
Dec 6 01:46:23 localhost kernel: PCI: CLS 0 bytes, default 64
Dec 6 01:46:23 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Dec 6 01:46:23 localhost kernel: Trying to unpack rootfs image as initramfs...
Dec 6 01:46:23 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Dec 6 01:46:23 localhost kernel: ACPI: bus type thunderbolt registered
Dec 6 01:46:23 localhost kernel: Initialise system trusted keyrings
Dec 6 01:46:23 localhost kernel: Key type blacklist registered
Dec 6 01:46:23 localhost kernel: workingset: timestamp_bits=36 max_order=22 bucket_order=0
Dec 6 01:46:23 localhost kernel: zbud: loaded
Dec 6 01:46:23 localhost kernel: integrity: Platform Keyring initialized
Dec 6 01:46:23 localhost kernel: NET: Registered PF_ALG protocol family
Dec 6 01:46:23 localhost kernel: xor: automatically using best checksumming function avx
Dec 6 01:46:23 localhost kernel: Key type asymmetric registered
Dec 6 01:46:23 localhost kernel: Asymmetric key parser 'x509' registered
Dec 6 01:46:23 localhost kernel: Running certificate verification selftests
Dec 6 01:46:23 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Dec 6 01:46:23 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Dec 6 01:46:23 localhost kernel: io scheduler mq-deadline registered
Dec 6 01:46:23 localhost kernel: io scheduler kyber registered
Dec 6 01:46:23 localhost kernel: io scheduler bfq registered
Dec 6 01:46:23 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Dec 6 01:46:23 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Dec 6 01:46:23 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Dec 6 01:46:23 localhost kernel: ACPI: button: Power Button [PWRF]
Dec 6 01:46:23 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Dec 6 01:46:23 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Dec 6 01:46:23 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Dec 6 01:46:23 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 6 01:46:23 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 6 01:46:23 localhost kernel: Non-volatile memory driver v1.3
Dec 6 01:46:23 localhost kernel: rdac: device handler registered
Dec 6 01:46:23 localhost kernel: hp_sw: device handler registered
Dec 6 01:46:23 localhost kernel: emc: device handler registered
Dec 6 01:46:23 localhost kernel: alua: device handler registered
Dec 6 01:46:23 localhost kernel: libphy: Fixed MDIO Bus: probed
Dec 6 01:46:23 localhost kernel: ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
Dec 6 01:46:23 localhost kernel: ehci-pci: EHCI PCI platform driver
Dec 6 01:46:23 localhost kernel: ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
Dec 6 01:46:23 localhost kernel: ohci-pci: OHCI PCI platform driver
Dec 6 01:46:23 localhost kernel: uhci_hcd: USB Universal Host Controller Interface driver
Dec 6 01:46:23 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Dec 6 01:46:23 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Dec 6 01:46:23 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Dec 6 01:46:23 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Dec 6 01:46:23 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Dec 6 01:46:23 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Dec 6 01:46:23 localhost kernel: usb usb1: Product: UHCI Host Controller
Dec 6 01:46:23 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-284.11.1.el9_2.x86_64 uhci_hcd
Dec 6 01:46:23 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Dec 6 01:46:23 localhost kernel: hub 1-0:1.0: USB hub found
Dec 6 01:46:23 localhost kernel: hub 1-0:1.0: 2 ports detected
Dec 6 01:46:23 localhost kernel: usbcore: registered new interface driver usbserial_generic
Dec 6 01:46:23 localhost kernel: usbserial: USB Serial support registered for generic
Dec 6 01:46:23 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 6 01:46:23 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 6 01:46:23 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 6 01:46:23 localhost kernel: mousedev: PS/2 mouse device common for all mice
Dec 6 01:46:23 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Dec 6 01:46:23 localhost kernel: rtc_cmos 00:04: registered as rtc0
Dec 6 01:46:23 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Dec 6 01:46:23 localhost kernel: rtc_cmos 00:04: setting system clock to 2025-12-06T06:46:22 UTC (1765003582)
Dec 6 01:46:23 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Dec 6 01:46:23 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Dec 6 01:46:23 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Dec 6 01:46:23 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 6 01:46:23 localhost kernel: usbcore: registered new interface driver usbhid
Dec 6 01:46:23 localhost kernel: usbhid: USB HID core driver
Dec 6 01:46:23 localhost kernel: drop_monitor: Initializing network drop monitor service
Dec 6 01:46:23 localhost kernel: Initializing XFRM netlink socket
Dec 6 01:46:23 localhost kernel: NET: Registered PF_INET6 protocol family
Dec 6 01:46:23 localhost kernel: Segment Routing with IPv6
Dec 6 01:46:23 localhost kernel: NET: Registered PF_PACKET protocol family
Dec 6 01:46:23 localhost kernel: mpls_gso: MPLS GSO support
Dec 6 01:46:23 localhost kernel: IPI shorthand broadcast: enabled
Dec 6 01:46:23 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Dec 6 01:46:23 localhost kernel: AES CTR mode by8 optimization enabled
Dec 6 01:46:23 localhost kernel: sched_clock: Marking stable (824422579, 175960483)->(1128676633, -128293571)
Dec 6 01:46:23 localhost kernel: registered taskstats version 1
Dec 6 01:46:23 localhost kernel: Loading compiled-in X.509 certificates
Dec 6 01:46:23 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kernel signing key: aaec4b640ef162b54684864066c7d4ffd428cd72'
Dec 6 01:46:23 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Dec 6 01:46:23 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Dec 6 01:46:23 localhost kernel: zswap: loaded using pool lzo/zbud
Dec 6 01:46:23 localhost kernel: page_owner is disabled
Dec 6 01:46:23 localhost kernel: Key type big_key registered
Dec 6 01:46:23 localhost kernel: Freeing initrd memory: 74232K
Dec 6 01:46:23 localhost kernel: Key type encrypted registered
Dec 6 01:46:23 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 6 01:46:23 localhost kernel: Loading compiled-in module X.509 certificates
Dec 6 01:46:23 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kernel signing key: aaec4b640ef162b54684864066c7d4ffd428cd72'
Dec 6 01:46:23 localhost kernel: ima: Allocated hash algorithm: sha256
Dec 6 01:46:23 localhost kernel: ima: No architecture policies found
Dec 6 01:46:23 localhost kernel: evm: Initialising EVM extended attributes:
Dec 6 01:46:23 localhost kernel: evm: security.selinux
Dec 6 01:46:23 localhost kernel: evm: security.SMACK64 (disabled)
Dec 6 01:46:23 localhost kernel: evm: security.SMACK64EXEC (disabled)
Dec 6 01:46:23 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Dec 6 01:46:23 localhost kernel: evm: security.SMACK64MMAP (disabled)
Dec 6 01:46:23 localhost kernel: evm: security.apparmor (disabled)
Dec 6 01:46:23 localhost kernel: evm: security.ima
Dec 6 01:46:23 localhost kernel: evm: security.capability
Dec 6 01:46:23 localhost kernel: evm: HMAC attrs: 0x1
Dec 6 01:46:23 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Dec 6 01:46:23 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Dec 6 01:46:23 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Dec 6 01:46:23 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Dec 6 01:46:23 localhost kernel: usb 1-1: Manufacturer: QEMU
Dec 6 01:46:23 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Dec 6 01:46:23 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Dec 6 01:46:23 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Dec 6 01:46:23 localhost kernel: Freeing unused decrypted memory: 2036K
Dec 6 01:46:23 localhost kernel: Freeing unused kernel image (initmem) memory: 2792K
Dec 6 01:46:23 localhost kernel: Write protecting the kernel read-only data: 26624k
Dec 6 01:46:23 localhost kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Dec 6 01:46:23 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 60K
Dec 6 01:46:23 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Dec 6 01:46:23 localhost kernel: Run /init as init process
Dec 6 01:46:23 localhost systemd[1]: systemd 252-13.el9_2 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 6 01:46:23 localhost systemd[1]: Detected virtualization kvm.
Dec 6 01:46:23 localhost systemd[1]: Detected architecture x86-64.
Dec 6 01:46:23 localhost systemd[1]: Running in initrd.
Dec 6 01:46:23 localhost systemd[1]: No hostname configured, using default hostname.
Dec 6 01:46:23 localhost systemd[1]: Hostname set to .
Dec 6 01:46:23 localhost systemd[1]: Initializing machine ID from VM UUID.
Dec 6 01:46:23 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Dec 6 01:46:23 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Dec 6 01:46:23 localhost systemd[1]: Reached target Local Encrypted Volumes.
Dec 6 01:46:23 localhost systemd[1]: Reached target Initrd /usr File System.
Dec 6 01:46:23 localhost systemd[1]: Reached target Local File Systems.
Dec 6 01:46:23 localhost systemd[1]: Reached target Path Units.
Dec 6 01:46:23 localhost systemd[1]: Reached target Slice Units.
Dec 6 01:46:23 localhost systemd[1]: Reached target Swaps.
Dec 6 01:46:23 localhost systemd[1]: Reached target Timer Units.
Dec 6 01:46:23 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Dec 6 01:46:23 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Dec 6 01:46:23 localhost systemd[1]: Listening on Journal Socket.
Dec 6 01:46:23 localhost systemd[1]: Listening on udev Control Socket.
Dec 6 01:46:23 localhost systemd[1]: Listening on udev Kernel Socket.
Dec 6 01:46:23 localhost systemd[1]: Reached target Socket Units.
Dec 6 01:46:23 localhost systemd[1]: Starting Create List of Static Device Nodes...
Dec 6 01:46:23 localhost systemd[1]: Starting Journal Service...
Dec 6 01:46:23 localhost systemd[1]: Starting Load Kernel Modules...
Dec 6 01:46:23 localhost systemd[1]: Starting Create System Users...
Dec 6 01:46:23 localhost systemd[1]: Starting Setup Virtual Console...
Dec 6 01:46:23 localhost systemd[1]: Finished Create List of Static Device Nodes.
Dec 6 01:46:23 localhost systemd-journald[284]: Journal started
Dec 6 01:46:23 localhost systemd-journald[284]: Runtime Journal (/run/log/journal/0b20d7bd13414912afa7eec4e2b0c648) is 8.0M, max 314.7M, 306.7M free.
Dec 6 01:46:23 localhost systemd-modules-load[285]: Module 'msr' is built in
Dec 6 01:46:23 localhost systemd[1]: Started Journal Service.
Dec 6 01:46:23 localhost systemd[1]: Finished Load Kernel Modules.
Dec 6 01:46:23 localhost systemd[1]: Finished Setup Virtual Console.
Dec 6 01:46:23 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Dec 6 01:46:23 localhost systemd[1]: Starting dracut cmdline hook...
Dec 6 01:46:23 localhost systemd[1]: Starting Apply Kernel Variables...
Dec 6 01:46:23 localhost systemd-sysusers[286]: Creating group 'sgx' with GID 997.
Dec 6 01:46:23 localhost systemd-sysusers[286]: Creating group 'users' with GID 100.
Dec 6 01:46:23 localhost systemd-sysusers[286]: Creating group 'dbus' with GID 81.
Dec 6 01:46:23 localhost systemd-sysusers[286]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Dec 6 01:46:23 localhost systemd[1]: Finished Create System Users.
Dec 6 01:46:23 localhost systemd[1]: Finished Apply Kernel Variables.
Dec 6 01:46:23 localhost dracut-cmdline[289]: dracut-9.2 (Plow) dracut-057-21.git20230214.el9
Dec 6 01:46:23 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Dec 6 01:46:23 localhost systemd[1]: Starting Create Volatile Files and Directories...
Dec 6 01:46:23 localhost dracut-cmdline[289]: Using kernel command line parameters: BOOT_IMAGE=(hd0,gpt3)/vmlinuz-5.14.0-284.11.1.el9_2.x86_64 root=UUID=a3dd82de-ffc6-4652-88b9-80e003b8f20a console=tty0 console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-4G:192M,4G-64G:256M,64G-:512M
Dec 6 01:46:23 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Dec 6 01:46:23 localhost systemd[1]: Finished Create Volatile Files and Directories.
Dec 6 01:46:23 localhost systemd[1]: Finished dracut cmdline hook.
Dec 6 01:46:23 localhost systemd[1]: Starting dracut pre-udev hook...
Dec 6 01:46:23 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 6 01:46:23 localhost kernel: device-mapper: uevent: version 1.0.3
Dec 6 01:46:23 localhost kernel: device-mapper: ioctl: 4.47.0-ioctl (2022-07-28) initialised: dm-devel@redhat.com
Dec 6 01:46:23 localhost kernel: RPC: Registered named UNIX socket transport module.
Dec 6 01:46:23 localhost kernel: RPC: Registered udp transport module.
Dec 6 01:46:23 localhost kernel: RPC: Registered tcp transport module.
Dec 6 01:46:23 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Dec 6 01:46:23 localhost rpc.statd[407]: Version 2.5.4 starting
Dec 6 01:46:23 localhost rpc.statd[407]: Initializing NSM state
Dec 6 01:46:23 localhost rpc.idmapd[412]: Setting log level to 0
Dec 6 01:46:23 localhost systemd[1]: Finished dracut pre-udev hook.
Dec 6 01:46:23 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Dec 6 01:46:23 localhost systemd-udevd[425]: Using default interface naming scheme 'rhel-9.0'.
Dec 6 01:46:23 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Dec 6 01:46:23 localhost systemd[1]: Starting dracut pre-trigger hook...
Dec 6 01:46:23 localhost systemd[1]: Finished dracut pre-trigger hook.
Dec 6 01:46:23 localhost systemd[1]: Starting Coldplug All udev Devices...
Dec 6 01:46:23 localhost systemd[1]: Finished Coldplug All udev Devices.
Dec 6 01:46:23 localhost systemd[1]: Reached target System Initialization.
Dec 6 01:46:23 localhost systemd[1]: Reached target Basic System.
Dec 6 01:46:23 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Dec 6 01:46:23 localhost systemd[1]: Reached target Network.
Dec 6 01:46:23 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Dec 6 01:46:23 localhost systemd[1]: Starting dracut initqueue hook...
Dec 6 01:46:23 localhost kernel: virtio_blk virtio2: [vda] 838860800 512-byte logical blocks (429 GB/400 GiB)
Dec 6 01:46:23 localhost kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 6 01:46:23 localhost kernel: scsi host0: ata_piix
Dec 6 01:46:23 localhost kernel: GPT:20971519 != 838860799
Dec 6 01:46:23 localhost kernel: scsi host1: ata_piix
Dec 6 01:46:23 localhost kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 6 01:46:23 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14
Dec 6 01:46:23 localhost kernel: GPT:20971519 != 838860799
Dec 6 01:46:23 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15
Dec 6 01:46:24 localhost systemd-udevd[457]: Network interface NamePolicy= disabled on kernel command line.
Dec 6 01:46:24 localhost kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 6 01:46:24 localhost kernel: vda: vda1 vda2 vda3 vda4
Dec 6 01:46:24 localhost systemd[1]: Found device /dev/disk/by-uuid/a3dd82de-ffc6-4652-88b9-80e003b8f20a.
Dec 6 01:46:24 localhost systemd[1]: Reached target Initrd Root Device.
Dec 6 01:46:24 localhost kernel: ata1: found unknown device (class 0)
Dec 6 01:46:24 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Dec 6 01:46:24 localhost kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Dec 6 01:46:24 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Dec 6 01:46:24 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Dec 6 01:46:24 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec 6 01:46:24 localhost systemd[1]: Finished dracut initqueue hook.
Dec 6 01:46:24 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Dec 6 01:46:24 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Dec 6 01:46:24 localhost systemd[1]: Reached target Remote File Systems.
Dec 6 01:46:24 localhost systemd[1]: Starting dracut pre-mount hook...
Dec 6 01:46:24 localhost systemd[1]: Finished dracut pre-mount hook.
Dec 6 01:46:24 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/a3dd82de-ffc6-4652-88b9-80e003b8f20a...
Dec 6 01:46:24 localhost systemd-fsck[513]: /usr/sbin/fsck.xfs: XFS file system.
Dec 6 01:46:24 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/a3dd82de-ffc6-4652-88b9-80e003b8f20a.
Dec 6 01:46:24 localhost systemd[1]: Mounting /sysroot...
Dec 6 01:46:24 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Dec 6 01:46:24 localhost kernel: XFS (vda4): Mounting V5 Filesystem
Dec 6 01:46:24 localhost kernel: XFS (vda4): Ending clean mount
Dec 6 01:46:24 localhost systemd[1]: Mounted /sysroot.
Dec 6 01:46:24 localhost systemd[1]: Reached target Initrd Root File System.
Dec 6 01:46:24 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Dec 6 01:46:24 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 6 01:46:24 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Dec 6 01:46:24 localhost systemd[1]: Reached target Initrd File Systems.
Dec 6 01:46:24 localhost systemd[1]: Reached target Initrd Default Target.
Dec 6 01:46:24 localhost systemd[1]: Starting dracut mount hook...
Dec 6 01:46:24 localhost systemd[1]: Finished dracut mount hook.
Dec 6 01:46:24 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Dec 6 01:46:24 localhost rpc.idmapd[412]: exiting on signal 15
Dec 6 01:46:24 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Dec 6 01:46:24 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Dec 6 01:46:24 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Dec 6 01:46:24 localhost systemd[1]: Stopped target Network.
Dec 6 01:46:24 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Dec 6 01:46:24 localhost systemd[1]: Stopped target Timer Units.
Dec 6 01:46:24 localhost systemd[1]: dbus.socket: Deactivated successfully.
Dec 6 01:46:24 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Dec 6 01:46:24 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 6 01:46:24 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Dec 6 01:46:24 localhost systemd[1]: Stopped target Initrd Default Target.
Dec 6 01:46:24 localhost systemd[1]: Stopped target Basic System.
Dec 6 01:46:24 localhost systemd[1]: Stopped target Initrd Root Device.
Dec 6 01:46:24 localhost systemd[1]: Stopped target Initrd /usr File System.
Dec 6 01:46:24 localhost systemd[1]: Stopped target Path Units.
Dec 6 01:46:24 localhost systemd[1]: Stopped target Remote File Systems.
Dec 6 01:46:24 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Dec 6 01:46:24 localhost systemd[1]: Stopped target Slice Units.
Dec 6 01:46:24 localhost systemd[1]: Stopped target Socket Units.
Dec 6 01:46:24 localhost systemd[1]: Stopped target System Initialization.
Dec 6 01:46:24 localhost systemd[1]: Stopped target Local File Systems.
Dec 6 01:46:24 localhost systemd[1]: Stopped target Swaps.
Dec 6 01:46:24 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Dec 6 01:46:24 localhost systemd[1]: Stopped dracut mount hook.
Dec 6 01:46:24 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 6 01:46:24 localhost systemd[1]: Stopped dracut pre-mount hook.
Dec 6 01:46:24 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Dec 6 01:46:24 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 6 01:46:24 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Dec 6 01:46:24 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 6 01:46:24 localhost systemd[1]: Stopped dracut initqueue hook.
Dec 6 01:46:24 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 6 01:46:24 localhost systemd[1]: Stopped Apply Kernel Variables.
Dec 6 01:46:24 localhost systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 6 01:46:24 localhost systemd[1]: Stopped Load Kernel Modules.
Dec 6 01:46:24 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 6 01:46:24 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Dec 6 01:46:24 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 6 01:46:24 localhost systemd[1]: Stopped Coldplug All udev Devices.
Dec 6 01:46:24 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 6 01:46:24 localhost systemd[1]: Stopped dracut pre-trigger hook.
Dec 6 01:46:24 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Dec 6 01:46:24 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 6 01:46:24 localhost systemd[1]: Stopped Setup Virtual Console.
Dec 6 01:46:24 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Dec 6 01:46:24 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 6 01:46:24 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 6 01:46:24 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Dec 6 01:46:24 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 6 01:46:24 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Dec 6 01:46:24 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 6 01:46:24 localhost systemd[1]: Closed udev Control Socket.
Dec 6 01:46:24 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 6 01:46:24 localhost systemd[1]: Closed udev Kernel Socket.
Dec 6 01:46:24 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 6 01:46:24 localhost systemd[1]: Stopped dracut pre-udev hook.
Dec 6 01:46:24 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 6 01:46:24 localhost systemd[1]: Stopped dracut cmdline hook.
Dec 6 01:46:25 localhost systemd[1]: Starting Cleanup udev Database...
Dec 6 01:46:25 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 6 01:46:25 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Dec 6 01:46:25 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 6 01:46:25 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Dec 6 01:46:25 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Dec 6 01:46:25 localhost systemd[1]: Stopped Create System Users.
Dec 6 01:46:25 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 6 01:46:25 localhost systemd[1]: Finished Cleanup udev Database.
Dec 6 01:46:25 localhost systemd[1]: Reached target Switch Root.
Dec 6 01:46:25 localhost systemd[1]: Starting Switch Root...
Dec 6 01:46:25 localhost systemd[1]: Switching root.
Dec 6 01:46:25 localhost systemd-journald[284]: Journal stopped
Dec 6 01:46:25 localhost systemd-journald[284]: Received SIGTERM from PID 1 (systemd).
Dec 6 01:46:25 localhost kernel: audit: type=1404 audit(1765003585.122:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Dec 6 01:46:25 localhost kernel: SELinux: policy capability network_peer_controls=1
Dec 6 01:46:25 localhost kernel: SELinux: policy capability open_perms=1
Dec 6 01:46:25 localhost kernel: SELinux: policy capability extended_socket_class=1
Dec 6 01:46:25 localhost kernel: SELinux: policy capability always_check_network=0
Dec 6 01:46:25 localhost kernel: SELinux: policy capability cgroup_seclabel=1
Dec 6 01:46:25 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 6 01:46:25 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1
Dec 6 01:46:25 localhost kernel: audit: type=1403 audit(1765003585.221:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 6 01:46:25 localhost systemd[1]: Successfully loaded SELinux policy in 100.414ms.
Dec 6 01:46:25 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.172ms.
Dec 6 01:46:25 localhost systemd[1]: systemd 252-13.el9_2 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 6 01:46:25 localhost systemd[1]: Detected virtualization kvm.
Dec 6 01:46:25 localhost systemd[1]: Detected architecture x86-64.
Dec 6 01:46:25 localhost systemd-rc-local-generator[583]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 6 01:46:25 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 6 01:46:25 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 6 01:46:25 localhost systemd[1]: Stopped Switch Root.
Dec 6 01:46:25 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 6 01:46:25 localhost systemd[1]: Created slice Slice /system/getty.
Dec 6 01:46:25 localhost systemd[1]: Created slice Slice /system/modprobe.
Dec 6 01:46:25 localhost systemd[1]: Created slice Slice /system/serial-getty.
Dec 6 01:46:25 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Dec 6 01:46:25 localhost systemd[1]: Created slice Slice /system/systemd-fsck.
Dec 6 01:46:25 localhost systemd[1]: Created slice User and Session Slice.
Dec 6 01:46:25 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Dec 6 01:46:25 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Dec 6 01:46:25 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Dec 6 01:46:25 localhost systemd[1]: Reached target Local Encrypted Volumes.
Dec 6 01:46:25 localhost systemd[1]: Stopped target Switch Root.
Dec 6 01:46:25 localhost systemd[1]: Stopped target Initrd File Systems.
Dec 6 01:46:25 localhost systemd[1]: Stopped target Initrd Root File System.
Dec 6 01:46:25 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Dec 6 01:46:25 localhost systemd[1]: Reached target Path Units.
Dec 6 01:46:25 localhost systemd[1]: Reached target rpc_pipefs.target.
Dec 6 01:46:25 localhost systemd[1]: Reached target Slice Units.
Dec 6 01:46:25 localhost systemd[1]: Reached target Swaps.
Dec 6 01:46:25 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Dec 6 01:46:25 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Dec 6 01:46:25 localhost systemd[1]: Reached target RPC Port Mapper.
Dec 6 01:46:25 localhost systemd[1]: Listening on Process Core Dump Socket.
Dec 6 01:46:25 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Dec 6 01:46:25 localhost systemd[1]: Listening on udev Control Socket.
Dec 6 01:46:25 localhost systemd[1]: Listening on udev Kernel Socket.
Dec 6 01:46:25 localhost systemd[1]: Mounting Huge Pages File System...
Dec 6 01:46:25 localhost systemd[1]: Mounting POSIX Message Queue File System...
Dec 6 01:46:25 localhost systemd[1]: Mounting Kernel Debug File System...
Dec 6 01:46:25 localhost systemd[1]: Mounting Kernel Trace File System...
Dec 6 01:46:25 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Dec 6 01:46:25 localhost systemd[1]: Starting Create List of Static Device Nodes...
Dec 6 01:46:25 localhost systemd[1]: Starting Load Kernel Module configfs...
Dec 6 01:46:25 localhost systemd[1]: Starting Load Kernel Module drm...
Dec 6 01:46:25 localhost systemd[1]: Starting Load Kernel Module fuse...
Dec 6 01:46:25 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Dec 6 01:46:25 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 6 01:46:25 localhost systemd[1]: Stopped File System Check on Root Device.
Dec 6 01:46:25 localhost systemd[1]: Stopped Journal Service.
Dec 6 01:46:25 localhost kernel: fuse: init (API version 7.36)
Dec 6 01:46:25 localhost systemd[1]: Starting Journal Service...
Dec 6 01:46:25 localhost systemd[1]: Starting Load Kernel Modules...
Dec 6 01:46:25 localhost systemd[1]: Starting Generate network units from Kernel command line...
Dec 6 01:46:25 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Dec 6 01:46:25 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 6 01:46:25 localhost systemd[1]: Starting Coldplug All udev Devices...
Dec 6 01:46:25 localhost systemd[1]: Mounted Huge Pages File System.
Dec 6 01:46:25 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Dec 6 01:46:25 localhost systemd-journald[619]: Journal started
Dec 6 01:46:25 localhost systemd-journald[619]: Runtime Journal (/run/log/journal/4b30904fc4748c16d0c72dbebcabab49) is 8.0M, max 314.7M, 306.7M free.
Dec 6 01:46:25 localhost systemd[1]: Queued start job for default target Multi-User System.
Dec 6 01:46:25 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 6 01:46:25 localhost systemd-modules-load[620]: Module 'msr' is built in
Dec 6 01:46:25 localhost systemd[1]: Started Journal Service.
Dec 6 01:46:25 localhost kernel: ACPI: bus type drm_connector registered
Dec 6 01:46:25 localhost systemd[1]: Mounted POSIX Message Queue File System.
Dec 6 01:46:25 localhost systemd[1]: Mounted Kernel Debug File System.
Dec 6 01:46:25 localhost systemd[1]: Mounted Kernel Trace File System.
Dec 6 01:46:25 localhost systemd[1]: Finished Create List of Static Device Nodes.
Dec 6 01:46:25 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 6 01:46:25 localhost systemd[1]: Finished Load Kernel Module configfs.
Dec 6 01:46:25 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 6 01:46:25 localhost systemd[1]: Finished Load Kernel Module drm.
Dec 6 01:46:25 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 6 01:46:25 localhost systemd[1]: Finished Load Kernel Module fuse.
Dec 6 01:46:25 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Dec 6 01:46:25 localhost systemd[1]: Finished Load Kernel Modules.
Dec 6 01:46:25 localhost systemd[1]: Finished Generate network units from Kernel command line.
Dec 6 01:46:25 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Dec 6 01:46:25 localhost systemd[1]: Mounting FUSE Control File System...
Dec 6 01:46:25 localhost systemd[1]: Mounting Kernel Configuration File System...
Dec 6 01:46:25 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Dec 6 01:46:25 localhost systemd[1]: Starting Rebuild Hardware Database...
Dec 6 01:46:25 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Dec 6 01:46:25 localhost systemd[1]: Starting Load/Save Random Seed...
Dec 6 01:46:25 localhost systemd[1]: Starting Apply Kernel Variables...
Dec 6 01:46:25 localhost systemd-journald[619]: Runtime Journal (/run/log/journal/4b30904fc4748c16d0c72dbebcabab49) is 8.0M, max 314.7M, 306.7M free.
Dec 6 01:46:25 localhost systemd-journald[619]: Received client request to flush runtime journal.
Dec 6 01:46:25 localhost systemd[1]: Starting Create System Users...
Dec 6 01:46:25 localhost systemd[1]: Finished Coldplug All udev Devices.
Dec 6 01:46:25 localhost systemd[1]: Mounted FUSE Control File System.
Dec 6 01:46:25 localhost systemd[1]: Mounted Kernel Configuration File System.
Dec 6 01:46:25 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Dec 6 01:46:25 localhost systemd[1]: Finished Load/Save Random Seed.
Dec 6 01:46:25 localhost systemd[1]: Finished Apply Kernel Variables.
Dec 6 01:46:25 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Dec 6 01:46:25 localhost systemd-sysusers[632]: Creating group 'sgx' with GID 989.
Dec 6 01:46:25 localhost systemd-sysusers[632]: Creating group 'systemd-oom' with GID 988.
Dec 6 01:46:25 localhost systemd-sysusers[632]: Creating user 'systemd-oom' (systemd Userspace OOM Killer) with UID 988 and GID 988.
Dec 6 01:46:25 localhost systemd[1]: Finished Create System Users.
Dec 6 01:46:25 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Dec 6 01:46:25 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Dec 6 01:46:25 localhost systemd[1]: Reached target Preparation for Local File Systems.
Dec 6 01:46:25 localhost systemd[1]: Set up automount EFI System Partition Automount.
Dec 6 01:46:26 localhost systemd[1]: Finished Rebuild Hardware Database.
Dec 6 01:46:26 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Dec 6 01:46:26 localhost systemd-udevd[636]: Using default interface naming scheme 'rhel-9.0'.
Dec 6 01:46:26 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Dec 6 01:46:26 localhost systemd[1]: Starting Load Kernel Module configfs...
Dec 6 01:46:26 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 6 01:46:26 localhost systemd[1]: Finished Load Kernel Module configfs.
Dec 6 01:46:26 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Dec 6 01:46:26 localhost systemd-udevd[640]: Network interface NamePolicy= disabled on kernel command line.
Dec 6 01:46:26 localhost systemd[1]: Condition check resulted in /dev/disk/by-uuid/b141154b-6a70-437a-a97f-d160c9ba37eb being skipped.
Dec 6 01:46:26 localhost systemd[1]: Mounting /boot...
Dec 6 01:46:26 localhost kernel: XFS (vda3): Mounting V5 Filesystem
Dec 6 01:46:26 localhost systemd[1]: Condition check resulted in /dev/disk/by-uuid/7B77-95E7 being skipped.
Dec 6 01:46:26 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/7B77-95E7...
Dec 6 01:46:26 localhost kernel: XFS (vda3): Ending clean mount
Dec 6 01:46:26 localhost kernel: xfs filesystem being mounted at /boot supports timestamps until 2038 (0x7fffffff)
Dec 6 01:46:26 localhost systemd[1]: Mounted /boot.
Dec 6 01:46:26 localhost systemd-fsck[688]: fsck.fat 4.2 (2021-01-31)
Dec 6 01:46:26 localhost systemd-fsck[688]: /dev/vda2: 12 files, 1782/51145 clusters
Dec 6 01:46:26 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/7B77-95E7.
Dec 6 01:46:26 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Dec 6 01:46:26 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Dec 6 01:46:26 localhost kernel: SVM: TSC scaling supported
Dec 6 01:46:26 localhost kernel: kvm: Nested Virtualization enabled
Dec 6 01:46:26 localhost kernel: SVM: kvm: Nested Paging enabled
Dec 6 01:46:26 localhost kernel: SVM: LBR virtualization supported
Dec 6 01:46:26 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Dec 6 01:46:26 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Dec 6 01:46:26 localhost kernel: Console: switching to colour dummy device 80x25
Dec 6 01:46:26 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Dec 6 01:46:26 localhost kernel: [drm] features: -context_init
Dec 6 01:46:26 localhost kernel: [drm] number of scanouts: 1
Dec 6 01:46:26 localhost kernel: [drm] number of cap sets: 0
Dec 6 01:46:26 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 0 for virtio0 on minor 0
Dec 6 01:46:26 localhost kernel: virtio_gpu virtio0: [drm] drm_plane_enable_fb_damage_clips() not called
Dec 6 01:46:26 localhost kernel: Console: switching to colour frame buffer device 128x48
Dec 6 01:46:26 localhost kernel: virtio_gpu virtio0: [drm] fb0: virtio_gpudrmfb frame buffer device
Dec 6 01:46:26 localhost systemd[1]: Mounting /boot/efi...
Dec 6 01:46:26 localhost systemd[1]: Mounted /boot/efi.
Dec 6 01:46:26 localhost systemd[1]: Reached target Local File Systems.
Dec 6 01:46:26 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Dec 6 01:46:26 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Dec 6 01:46:26 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 6 01:46:26 localhost systemd[1]: Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 6 01:46:26 localhost systemd[1]: Starting Automatic Boot Loader Update...
Dec 6 01:46:26 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Dec 6 01:46:26 localhost systemd[1]: Starting Create Volatile Files and Directories...
Dec 6 01:46:26 localhost systemd[1]: efi.automount: Got automount request for /efi, triggered by 708 (bootctl)
Dec 6 01:46:26 localhost systemd[1]: Starting File System Check on /dev/vda2...
Dec 6 01:46:26 localhost systemd[1]: Finished File System Check on /dev/vda2.
Dec 6 01:46:26 localhost systemd[1]: Mounting EFI System Partition Automount...
Dec 6 01:46:26 localhost systemd[1]: Mounted EFI System Partition Automount.
Dec 6 01:46:26 localhost systemd[1]: Finished Automatic Boot Loader Update.
Dec 6 01:46:26 localhost systemd[1]: Finished Create Volatile Files and Directories.
Dec 6 01:46:26 localhost systemd[1]: Starting Security Auditing Service...
Dec 6 01:46:26 localhost systemd[1]: Starting RPC Bind...
Dec 6 01:46:26 localhost systemd[1]: Starting Rebuild Journal Catalog...
Dec 6 01:46:26 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Dec 6 01:46:26 localhost auditd[725]: audit dispatcher initialized with q_depth=1200 and 1 active plugins
Dec 6 01:46:26 localhost auditd[725]: Init complete, auditd 3.0.7 listening for events (startup state enable)
Dec 6 01:46:26 localhost systemd[1]: Finished Rebuild Journal Catalog.
Dec 6 01:46:26 localhost systemd[1]: Starting Update is Completed...
Dec 6 01:46:26 localhost systemd[1]: Started RPC Bind.
Dec 6 01:46:26 localhost systemd[1]: Finished Update is Completed.
Dec 6 01:46:26 localhost augenrules[731]: /sbin/augenrules: No change
Dec 6 01:46:26 localhost augenrules[743]: No rules
Dec 6 01:46:26 localhost augenrules[743]: enabled 1
Dec 6 01:46:26 localhost augenrules[743]: failure 1
Dec 6 01:46:26 localhost augenrules[743]: pid 725
Dec 6 01:46:26 localhost augenrules[743]: rate_limit 0
Dec 6 01:46:26 localhost augenrules[743]: backlog_limit 8192
Dec 6 01:46:26 localhost augenrules[743]: lost 0
Dec 6 01:46:26 localhost augenrules[743]: backlog 4
Dec 6 01:46:26 localhost augenrules[743]: backlog_wait_time 60000
Dec 6 01:46:26 localhost augenrules[743]: backlog_wait_time_actual 0
Dec 6 01:46:26 localhost augenrules[743]: enabled 1
Dec 6 01:46:26 localhost augenrules[743]: failure 1
Dec 6 01:46:26 localhost augenrules[743]: pid 725
Dec 6 01:46:26 localhost augenrules[743]: rate_limit 0
Dec 6 01:46:26 localhost augenrules[743]: backlog_limit 8192
Dec 6 01:46:26 localhost augenrules[743]: lost 0
Dec 6 01:46:26 localhost augenrules[743]: backlog 3
Dec 6 01:46:26 localhost augenrules[743]: backlog_wait_time 60000
Dec 6 01:46:26 localhost augenrules[743]: backlog_wait_time_actual 0
Dec 6 01:46:26 localhost augenrules[743]: enabled 1
Dec 6 01:46:26 localhost augenrules[743]: failure 1
Dec 6 01:46:26 localhost augenrules[743]: pid 725
Dec 6 01:46:26 localhost augenrules[743]: rate_limit 0
Dec 6 01:46:26 localhost augenrules[743]: backlog_limit 8192
Dec 6 01:46:26 localhost augenrules[743]: lost 0
Dec 6 01:46:26 localhost augenrules[743]: backlog 4
Dec 6 01:46:26 localhost augenrules[743]: backlog_wait_time 60000
Dec 6 01:46:26 localhost augenrules[743]: backlog_wait_time_actual 0
Dec 6 01:46:26 localhost systemd[1]: Started Security Auditing Service.
Dec 6 01:46:26 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Dec 6 01:46:26 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Dec 6 01:46:26 localhost systemd[1]: Reached target System Initialization.
Dec 6 01:46:26 localhost systemd[1]: Started dnf makecache --timer.
Dec 6 01:46:26 localhost systemd[1]: Started Daily rotation of log files.
Dec 6 01:46:26 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Dec 6 01:46:26 localhost systemd[1]: Reached target Timer Units.
Dec 6 01:46:26 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Dec 6 01:46:26 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Dec 6 01:46:26 localhost systemd[1]: Reached target Socket Units.
Dec 6 01:46:26 localhost systemd[1]: Starting Initial cloud-init job (pre-networking)...
Dec 6 01:46:27 localhost systemd[1]: Starting D-Bus System Message Bus...
Dec 6 01:46:27 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 6 01:46:27 localhost systemd[1]: Started D-Bus System Message Bus.
Dec 6 01:46:27 localhost journal[752]: Ready
Dec 6 01:46:27 localhost systemd[1]: Reached target Basic System.
Dec 6 01:46:27 localhost systemd[1]: Starting NTP client/server...
Dec 6 01:46:27 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Dec 6 01:46:27 localhost systemd[1]: Started irqbalance daemon.
Dec 6 01:46:27 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Dec 6 01:46:27 localhost systemd[1]: Starting System Logging Service...
Dec 6 01:46:27 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 6 01:46:27 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 6 01:46:27 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 6 01:46:27 localhost systemd[1]: Reached target sshd-keygen.target.
Dec 6 01:46:27 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Dec 6 01:46:27 localhost systemd[1]: Reached target User and Group Name Lookups.
Dec 6 01:46:27 localhost chronyd[762]: chronyd version 4.3 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Dec 6 01:46:27 localhost rsyslogd[760]: [origin software="rsyslogd" swVersion="8.2102.0-111.el9" x-pid="760" x-info="https://www.rsyslog.com"] start
Dec 6 01:46:27 localhost rsyslogd[760]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2102.0-111.el9 try https://www.rsyslog.com/e/2040 ]
Dec 6 01:46:27 localhost chronyd[762]: Using right/UTC timezone to obtain leap second data
Dec 6 01:46:27 localhost chronyd[762]: Loaded seccomp filter (level 2)
Dec 6 01:46:27 localhost systemd[1]: Starting User Login Management...
Dec 6 01:46:27 localhost systemd[1]: Started System Logging Service.
Dec 6 01:46:27 localhost systemd[1]: Started NTP client/server.
Dec 6 01:46:27 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Dec 6 01:46:27 localhost rsyslogd[760]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Dec 6 01:46:27 localhost systemd-logind[766]: New seat seat0.
Dec 6 01:46:27 localhost systemd-logind[766]: Watching system buttons on /dev/input/event0 (Power Button)
Dec 6 01:46:27 localhost systemd-logind[766]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Dec 6 01:46:27 localhost systemd[1]: Started User Login Management.
Dec 6 01:46:27 localhost cloud-init[771]: Cloud-init v. 22.1-9.el9 running 'init-local' at Sat, 06 Dec 2025 06:46:27 +0000. Up 5.65 seconds.
Dec 6 01:46:27 localhost systemd[1]: Starting Hostname Service...
Dec 6 01:46:27 localhost systemd[1]: run-cloud\x2dinit-tmp-tmppp438cfj.mount: Deactivated successfully.
Dec 6 01:46:27 localhost systemd[1]: Started Hostname Service.
Dec 6 01:46:27 localhost systemd-hostnamed[785]: Hostname set to (static)
Dec 6 01:46:27 localhost systemd[1]: Finished Initial cloud-init job (pre-networking).
Dec 6 01:46:27 localhost systemd[1]: Reached target Preparation for Network.
Dec 6 01:46:27 localhost systemd[1]: Starting Network Manager...
Dec 6 01:46:27 localhost NetworkManager[790]: [1765003587.8621] NetworkManager (version 1.42.2-1.el9) is starting... (boot:a2c5bf5a-4be9-4ef7-a12e-aeb290b897cb)
Dec 6 01:46:27 localhost NetworkManager[790]: [1765003587.8627] Read config: /etc/NetworkManager/NetworkManager.conf (run: 15-carrier-timeout.conf)
Dec 6 01:46:27 localhost systemd[1]: Started Network Manager.
Dec 6 01:46:27 localhost NetworkManager[790]: [1765003587.8652] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec 6 01:46:27 localhost systemd[1]: Reached target Network.
Dec 6 01:46:27 localhost NetworkManager[790]: [1765003587.8707] manager[0x563828ced020]: monitoring kernel firmware directory '/lib/firmware'.
Dec 6 01:46:27 localhost systemd[1]: Starting Network Manager Wait Online...
Dec 6 01:46:27 localhost systemd[1]: Starting GSSAPI Proxy Daemon...
Dec 6 01:46:27 localhost systemd[1]: Starting Enable periodic update of entitlement certificates....
Dec 6 01:46:27 localhost NetworkManager[790]: [1765003587.8833] hostname: hostname: using hostnamed
Dec 6 01:46:27 localhost NetworkManager[790]: [1765003587.8833] hostname: static hostname changed from (none) to "np0005548789.novalocal"
Dec 6 01:46:27 localhost NetworkManager[790]: [1765003587.8838] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec 6 01:46:27 localhost systemd[1]: Starting Dynamic System Tuning Daemon...
Dec 6 01:46:27 localhost systemd[1]: Started Enable periodic update of entitlement certificates..
Dec 6 01:46:27 localhost NetworkManager[790]: [1765003587.8975] manager[0x563828ced020]: rfkill: Wi-Fi hardware radio set enabled
Dec 6 01:46:27 localhost NetworkManager[790]: [1765003587.8977] manager[0x563828ced020]: rfkill: WWAN hardware radio set enabled
Dec 6 01:46:27 localhost systemd[1]: Started GSSAPI Proxy Daemon.
Dec 6 01:46:27 localhost NetworkManager[790]: [1765003587.9039] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.42.2-1.el9/libnm-device-plugin-team.so)
Dec 6 01:46:27 localhost NetworkManager[790]: [1765003587.9040] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec 6 01:46:27 localhost NetworkManager[790]: [1765003587.9042] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec 6 01:46:27 localhost NetworkManager[790]: [1765003587.9046] manager: Networking is enabled by state file
Dec 6 01:46:27 localhost NetworkManager[790]: [1765003587.9059] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.42.2-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec 6 01:46:27 localhost NetworkManager[790]: [1765003587.9060] settings: Loaded settings plugin: keyfile (internal)
Dec 6 01:46:27 localhost NetworkManager[790]: [1765003587.9084] dhcp: init: Using DHCP client 'internal'
Dec 6 01:46:27 localhost NetworkManager[790]: [1765003587.9090] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec 6 01:46:27 localhost systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Dec 6 01:46:27 localhost NetworkManager[790]: [1765003587.9104] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external')
Dec 6 01:46:27 localhost NetworkManager[790]: [1765003587.9108] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', sys-iface-state: 'external')
Dec 6 01:46:27 localhost NetworkManager[790]: [1765003587.9116] device (lo): Activation: starting connection 'lo' (1c0ca10a-4a5b-41dd-9a55-58f9b21f8cc0)
Dec 6 01:46:27 localhost NetworkManager[790]: [1765003587.9124] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec 6 01:46:27 localhost NetworkManager[790]: [1765003587.9128] device (eth0): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 'external')
Dec 6 01:46:27 localhost NetworkManager[790]: [1765003587.9160] device (lo): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'external')
Dec 6 01:46:27 localhost NetworkManager[790]: [1765003587.9163] device (lo): state change: prepare -> config (reason 'none', sys-iface-state: 'external')
Dec 6 01:46:27 localhost NetworkManager[790]: [1765003587.9164] device (lo): state change: config -> ip-config (reason 'none', sys-iface-state: 'external')
Dec 6 01:46:27 localhost NetworkManager[790]: [1765003587.9166] device (eth0): carrier: link connected
Dec 6 01:46:27 localhost NetworkManager[790]: [1765003587.9168] device (lo): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'external')
Dec 6 01:46:27 localhost NetworkManager[790]: [1765003587.9172] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', sys-iface-state: 'managed')
Dec 6 01:46:27 localhost NetworkManager[790]: [1765003587.9177] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 6 01:46:27 localhost systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 6 01:46:27 localhost NetworkManager[790]: [1765003587.9183] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 6 01:46:27 localhost NetworkManager[790]: [1765003587.9184] device (eth0): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed')
Dec 6 01:46:27 localhost NetworkManager[790]: [1765003587.9187] manager: NetworkManager state is now CONNECTING
Dec 6 01:46:27 localhost NetworkManager[790]: [1765003587.9189] device (eth0): state change: prepare -> config (reason 'none', sys-iface-state: 'managed')
Dec 6 01:46:27 localhost NetworkManager[790]: [1765003587.9194] device (eth0): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed')
Dec 6 01:46:27 localhost NetworkManager[790]: [1765003587.9197] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 6 01:46:27 localhost systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Dec 6 01:46:27 localhost systemd[1]: Reached target NFS client services.
Dec 6 01:46:27 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Dec 6 01:46:27 localhost NetworkManager[790]: [1765003587.9270] dhcp4 (eth0): state changed new lease, address=38.102.83.150
Dec 6 01:46:27 localhost NetworkManager[790]: [1765003587.9273] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec 6 01:46:27 localhost NetworkManager[790]: [1765003587.9291] device (eth0): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'managed')
Dec 6 01:46:27 localhost systemd[1]: Reached target Remote File Systems.
Dec 6 01:46:27 localhost systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 6 01:46:27 localhost systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 6 01:46:27 localhost NetworkManager[790]: [1765003587.9423] device (lo): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'external')
Dec 6 01:46:27 localhost NetworkManager[790]: [1765003587.9424] device (lo): state change: secondaries -> activated (reason 'none', sys-iface-state: 'external')
Dec 6 01:46:27 localhost NetworkManager[790]: [1765003587.9429] device (lo): Activation: successful, device activated.
Dec 6 01:46:27 localhost NetworkManager[790]: [1765003587.9433] device (eth0): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'managed')
Dec 6 01:46:27 localhost NetworkManager[790]: [1765003587.9435] device (eth0): state change: secondaries -> activated (reason 'none', sys-iface-state: 'managed')
Dec 6 01:46:27 localhost NetworkManager[790]: [1765003587.9437] manager: NetworkManager state is now CONNECTED_SITE
Dec 6 01:46:27 localhost NetworkManager[790]: [1765003587.9440] device (eth0): Activation: successful, device activated.
Dec 6 01:46:27 localhost NetworkManager[790]: [1765003587.9443] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec 6 01:46:27 localhost NetworkManager[790]: [1765003587.9445] manager: startup complete
Dec 6 01:46:27 localhost systemd[1]: Finished Network Manager Wait Online.
Dec 6 01:46:27 localhost systemd[1]: Starting Initial cloud-init job (metadata service crawler)...
Dec 6 01:46:28 localhost cloud-init[1030]: Cloud-init v. 22.1-9.el9 running 'init' at Sat, 06 Dec 2025 06:46:28 +0000. Up 6.44 seconds.
Dec 6 01:46:28 localhost systemd[1]: Starting Authorization Manager...
Dec 6 01:46:28 localhost polkitd[1032]: Started polkitd version 0.117
Dec 6 01:46:28 localhost cloud-init[1030]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Dec 6 01:46:28 localhost cloud-init[1030]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec 6 01:46:28 localhost cloud-init[1030]: ci-info: | Device | Up | Address | Mask | Scope | Hw-Address |
Dec 6 01:46:28 localhost cloud-init[1030]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec 6 01:46:28 localhost cloud-init[1030]: ci-info: | eth0 | True | 38.102.83.150 | 255.255.255.0 | global | fa:16:3e:11:88:44 |
Dec 6 01:46:28 localhost cloud-init[1030]: ci-info: | eth0 | True | fe80::f816:3eff:fe11:8844/64 | . | link | fa:16:3e:11:88:44 |
Dec 6 01:46:28 localhost cloud-init[1030]: ci-info: | lo | True | 127.0.0.1 | 255.0.0.0 | host | . |
Dec 6 01:46:28 localhost cloud-init[1030]: ci-info: | lo | True | ::1/128 | . | host | . |
Dec 6 01:46:28 localhost cloud-init[1030]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec 6 01:46:28 localhost cloud-init[1030]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Dec 6 01:46:28 localhost cloud-init[1030]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec 6 01:46:28 localhost cloud-init[1030]: ci-info: | Route | Destination | Gateway | Genmask | Interface | Flags |
Dec 6 01:46:28 localhost cloud-init[1030]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec 6 01:46:28 localhost cloud-init[1030]: ci-info: | 0 | 0.0.0.0 | 38.102.83.1 | 0.0.0.0 | eth0 | UG |
Dec 6 01:46:28 localhost cloud-init[1030]: ci-info: | 1 | 38.102.83.0 | 0.0.0.0 | 255.255.255.0 | eth0 | U |
Dec 6 01:46:28 localhost cloud-init[1030]: ci-info: | 2 | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 | eth0 | UGH |
Dec 6 01:46:28 localhost cloud-init[1030]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec 6 01:46:28 localhost cloud-init[1030]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Dec 6 01:46:28 localhost cloud-init[1030]: ci-info: +-------+-------------+---------+-----------+-------+
Dec 6 01:46:28 localhost cloud-init[1030]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Dec 6 01:46:28 localhost cloud-init[1030]: ci-info: +-------+-------------+---------+-----------+-------+
Dec 6 01:46:28 localhost cloud-init[1030]: ci-info: | 1 | fe80::/64 | :: | eth0 | U |
Dec 6 01:46:28 localhost cloud-init[1030]: ci-info: | 3 | multicast | :: | eth0 | U |
Dec 6 01:46:28 localhost cloud-init[1030]: ci-info: +-------+-------------+---------+-----------+-------+
Dec 6 01:46:28 localhost systemd[1]: Started Dynamic System Tuning Daemon.
Dec 6 01:46:28 localhost systemd[1]: Started Authorization Manager.
Dec 6 01:46:31 localhost cloud-init[1030]: Generating public/private rsa key pair.
Dec 6 01:46:31 localhost cloud-init[1030]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Dec 6 01:46:31 localhost cloud-init[1030]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Dec 6 01:46:31 localhost cloud-init[1030]: The key fingerprint is:
Dec 6 01:46:31 localhost cloud-init[1030]: SHA256:afUs/aXJem1MbQYsKivIGrnY561c0pW4IR2IXc3Bbfk root@np0005548789.novalocal
Dec 6 01:46:31 localhost cloud-init[1030]: The key's randomart image is:
Dec 6 01:46:31 localhost cloud-init[1030]: +---[RSA 3072]----+
Dec 6 01:46:31 localhost cloud-init[1030]: | .+.o . |
Dec 6 01:46:31 localhost cloud-init[1030]: | o o + + |
Dec 6 01:46:31 localhost cloud-init[1030]: | . o . ... . |
Dec 6 01:46:31 localhost cloud-init[1030]: | . o + +E o |
Dec 6 01:46:31 localhost cloud-init[1030]: | . + S ..+. .o|
Dec 6 01:46:31 localhost cloud-init[1030]: | . o =. .. o +=|
Dec 6 01:46:31 localhost cloud-init[1030]: | o...+ o =* |
Dec 6 01:46:31 localhost cloud-init[1030]: | o +++. . .. +|
Dec 6 01:46:31 localhost cloud-init[1030]: |. +++... .. . |
Dec 6 01:46:31 localhost cloud-init[1030]: +----[SHA256]-----+
Dec 6 01:46:31 localhost cloud-init[1030]: Generating public/private ecdsa key pair.
Dec 6 01:46:31 localhost cloud-init[1030]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Dec 6 01:46:31 localhost cloud-init[1030]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Dec 6 01:46:31 localhost cloud-init[1030]: The key fingerprint is:
Dec 6 01:46:31 localhost cloud-init[1030]: SHA256:7xY1g5s/YjqrzdVuxiBI0g2l850jNJ3wntt6EYV6+ys root@np0005548789.novalocal
Dec 6 01:46:31 localhost cloud-init[1030]: The key's randomart image is:
Dec 6 01:46:31 localhost cloud-init[1030]: +---[ECDSA 256]---+
Dec 6 01:46:31 localhost cloud-init[1030]: | .o . |
Dec 6 01:46:31 localhost cloud-init[1030]: | .. + .. . |
Dec 6 01:46:31 localhost cloud-init[1030]: | .ooo +o . |
Dec 6 01:46:31 localhost cloud-init[1030]: | . o+.+oo* |
Dec 6 01:46:31 localhost cloud-init[1030]: | o .S *= = |
Dec 6 01:46:31 localhost cloud-init[1030]: | . .o==o |
Dec 6 01:46:31 localhost cloud-init[1030]: | .+=oo |
Dec 6 01:46:31 localhost cloud-init[1030]: | o.o+oE . |
Dec 6 01:46:31 localhost cloud-init[1030]: | ..=*o=.o.. |
Dec 6 01:46:31 localhost cloud-init[1030]: +----[SHA256]-----+
Dec 6 01:46:31 localhost cloud-init[1030]: Generating public/private ed25519 key pair.
Dec 6 01:46:31 localhost cloud-init[1030]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Dec 6 01:46:31 localhost cloud-init[1030]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Dec 6 01:46:31 localhost cloud-init[1030]: The key fingerprint is:
Dec 6 01:46:31 localhost cloud-init[1030]: SHA256:qtMm4Ybt7afrbmQTpdQ7d3hongkiN8O5Y7mswrjOXBw root@np0005548789.novalocal
Dec 6 01:46:31 localhost cloud-init[1030]: The key's randomart image is:
Dec 6 01:46:31 localhost cloud-init[1030]: +--[ED25519 256]--+
Dec 6 01:46:31 localhost cloud-init[1030]: | . |
Dec 6 01:46:31 localhost cloud-init[1030]: | . o |
Dec 6 01:46:31 localhost cloud-init[1030]: | o + . o |
Dec 6 01:46:31 localhost cloud-init[1030]: | . X + = o |
Dec 6 01:46:31 localhost cloud-init[1030]: | E o BS* = |
Dec 6 01:46:31 localhost cloud-init[1030]: | . o O. + |
Dec 6 01:46:31 localhost cloud-init[1030]: | o * B.+ |
Dec 6 01:46:31 localhost cloud-init[1030]: |+ = *o* . |
Dec 6 01:46:31 localhost cloud-init[1030]: |o= ++XB+ |
Dec 6 01:46:31 localhost cloud-init[1030]: +----[SHA256]-----+
Dec 6 01:46:32 localhost systemd[1]: Finished Initial cloud-init job (metadata service crawler).
Dec 6 01:46:32 localhost systemd[1]: Reached target Cloud-config availability.
Dec 6 01:46:32 localhost systemd[1]: Reached target Network is Online.
Dec 6 01:46:32 localhost systemd[1]: Starting Apply the settings specified in cloud-config...
Dec 6 01:46:32 localhost systemd[1]: Run Insights Client at boot was skipped because of an unmet condition check (ConditionPathExists=/etc/insights-client/.run_insights_client_next_boot).
Dec 6 01:46:32 localhost systemd[1]: Starting Crash recovery kernel arming...
Dec 6 01:46:32 localhost systemd[1]: Starting Notify NFS peers of a restart...
Dec 6 01:46:32 localhost systemd[1]: Starting OpenSSH server daemon...
Dec 6 01:46:32 localhost systemd[1]: Starting Permit User Sessions...
Dec 6 01:46:32 localhost sm-notify[1129]: Version 2.5.4 starting
Dec 6 01:46:32 localhost systemd[1]: Started Notify NFS peers of a restart.
Dec 6 01:46:32 localhost systemd[1]: Finished Permit User Sessions.
Dec 6 01:46:32 localhost sshd[1130]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 01:46:32 localhost systemd[1]: Started Command Scheduler.
Dec 6 01:46:32 localhost systemd[1]: Started Getty on tty1.
Dec 6 01:46:32 localhost systemd[1]: Started Serial Getty on ttyS0.
Dec 6 01:46:32 localhost systemd[1]: Reached target Login Prompts.
Dec 6 01:46:32 localhost systemd[1]: Started OpenSSH server daemon.
Dec 6 01:46:32 localhost systemd[1]: Reached target Multi-User System.
Dec 6 01:46:32 localhost systemd[1]: Starting Record Runlevel Change in UTMP...
Dec 6 01:46:32 localhost systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Dec 6 01:46:32 localhost systemd[1]: Finished Record Runlevel Change in UTMP.
Dec 6 01:46:32 localhost kdumpctl[1133]: kdump: No kdump initial ramdisk found.
Dec 6 01:46:32 localhost kdumpctl[1133]: kdump: Rebuilding /boot/initramfs-5.14.0-284.11.1.el9_2.x86_64kdump.img
Dec 6 01:46:32 localhost cloud-init[1226]: Cloud-init v. 22.1-9.el9 running 'modules:config' at Sat, 06 Dec 2025 06:46:32 +0000. Up 10.47 seconds.
Dec 6 01:46:32 localhost systemd[1]: Finished Apply the settings specified in cloud-config.
Dec 6 01:46:32 localhost systemd[1]: Starting Execute cloud user/final scripts...
Dec 6 01:46:32 localhost sshd[1334]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 01:46:32 localhost sshd[1355]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 01:46:32 localhost sshd[1376]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 01:46:32 localhost sshd[1385]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 01:46:32 localhost sshd[1396]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 01:46:32 localhost sshd[1408]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 01:46:32 localhost sshd[1425]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 01:46:32 localhost cloud-init[1430]: Cloud-init v. 22.1-9.el9 running 'modules:final' at Sat, 06 Dec 2025 06:46:32 +0000. Up 10.86 seconds.
Dec 6 01:46:32 localhost dracut[1432]: dracut-057-21.git20230214.el9
Dec 6 01:46:32 localhost sshd[1433]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 01:46:32 localhost sshd[1449]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 01:46:32 localhost cloud-init[1453]: #############################################################
Dec 6 01:46:32 localhost cloud-init[1454]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Dec 6 01:46:32 localhost cloud-init[1456]: 256 SHA256:7xY1g5s/YjqrzdVuxiBI0g2l850jNJ3wntt6EYV6+ys root@np0005548789.novalocal (ECDSA)
Dec 6 01:46:32 localhost cloud-init[1459]: 256 SHA256:qtMm4Ybt7afrbmQTpdQ7d3hongkiN8O5Y7mswrjOXBw root@np0005548789.novalocal (ED25519)
Dec 6 01:46:32 localhost cloud-init[1464]: 3072 SHA256:afUs/aXJem1MbQYsKivIGrnY561c0pW4IR2IXc3Bbfk root@np0005548789.novalocal (RSA)
Dec 6 01:46:32 localhost cloud-init[1468]: -----END SSH HOST KEY FINGERPRINTS-----
Dec 6 01:46:32 localhost cloud-init[1471]: #############################################################
Dec 6 01:46:32 localhost dracut[1435]: Executing: /usr/bin/dracut --add kdumpbase --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics -o "plymouth resume ifcfg earlykdump" --mount "/dev/disk/by-uuid/a3dd82de-ffc6-4652-88b9-80e003b8f20a /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device -f /boot/initramfs-5.14.0-284.11.1.el9_2.x86_64kdump.img 5.14.0-284.11.1.el9_2.x86_64
Dec 6 01:46:32 localhost cloud-init[1430]: Cloud-init v. 22.1-9.el9 finished at Sat, 06 Dec 2025 06:46:32 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0]. Up 11.11 seconds
Dec 6 01:46:32 localhost chronyd[762]: Selected source 23.159.16.194 (2.rhel.pool.ntp.org)
Dec 6 01:46:32 localhost chronyd[762]: System clock TAI offset set to 37 seconds
Dec 6 01:46:32 localhost systemd[1]: Reloading Network Manager...
Dec 6 01:46:32 localhost dracut[1435]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Dec 6 01:46:32 localhost dracut[1435]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Dec 6 01:46:32 localhost dracut[1435]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Dec 6 01:46:33 localhost dracut[1435]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Dec 6 01:46:33 localhost NetworkManager[790]: [1765003593.0082] audit: op="reload" arg="0" pid=1573 uid=0 result="success"
Dec 6 01:46:33 localhost NetworkManager[790]: [1765003593.0092] config: signal: SIGHUP (no changes from disk)
Dec 6 01:46:33 localhost dracut[1435]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Dec 6 01:46:33 localhost systemd[1]: Reloaded Network Manager.
Dec 6 01:46:33 localhost systemd[1]: Finished Execute cloud user/final scripts.
Dec 6 01:46:33 localhost systemd[1]: Reached target Cloud-init target.
Dec 6 01:46:33 localhost dracut[1435]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Dec 6 01:46:33 localhost dracut[1435]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Dec 6 01:46:33 localhost dracut[1435]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Dec 6 01:46:33 localhost dracut[1435]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Dec 6 01:46:33 localhost dracut[1435]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Dec 6 01:46:33 localhost dracut[1435]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Dec 6 01:46:33 localhost dracut[1435]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Dec 6 01:46:33 localhost dracut[1435]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Dec 6 01:46:33 localhost dracut[1435]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Dec 6 01:46:33 localhost dracut[1435]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Dec 6 01:46:33 localhost dracut[1435]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Dec 6 01:46:33 localhost dracut[1435]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Dec 6 01:46:33 localhost dracut[1435]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Dec 6 01:46:33 localhost dracut[1435]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Dec 6 01:46:33 localhost dracut[1435]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Dec 6 01:46:33 localhost dracut[1435]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Dec 6 01:46:33 localhost dracut[1435]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Dec 6 01:46:33 localhost dracut[1435]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Dec 6 01:46:33 localhost dracut[1435]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Dec 6 01:46:33 localhost dracut[1435]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Dec 6 01:46:33 localhost dracut[1435]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Dec 6 01:46:33 localhost dracut[1435]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Dec 6 01:46:33 localhost dracut[1435]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Dec 6 01:46:33 localhost dracut[1435]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Dec 6 01:46:33 localhost dracut[1435]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Dec 6 01:46:33 localhost dracut[1435]: memstrack is not available
Dec 6 01:46:33 localhost dracut[1435]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Dec 6 01:46:33 localhost dracut[1435]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Dec 6 01:46:33 localhost dracut[1435]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Dec 6 01:46:33 localhost dracut[1435]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Dec 6 01:46:33 localhost dracut[1435]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Dec 6 01:46:33 localhost dracut[1435]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Dec 6 01:46:33 localhost dracut[1435]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Dec 6 01:46:33 localhost dracut[1435]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Dec 6 01:46:33 localhost dracut[1435]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Dec 6 01:46:33 localhost dracut[1435]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Dec 6 01:46:33 localhost dracut[1435]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Dec 6 01:46:33 localhost dracut[1435]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Dec 6 01:46:33 localhost dracut[1435]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Dec 6 01:46:33 localhost dracut[1435]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Dec 6 01:46:33 localhost dracut[1435]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Dec 6 01:46:33 localhost dracut[1435]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Dec 6 01:46:33 localhost dracut[1435]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Dec 6 01:46:33 localhost dracut[1435]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Dec 6 01:46:33 localhost dracut[1435]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Dec 6 01:46:33 localhost dracut[1435]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Dec 6 01:46:33 localhost dracut[1435]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Dec 6 01:46:33 localhost dracut[1435]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Dec 6 01:46:33 localhost dracut[1435]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Dec 6 01:46:33 localhost dracut[1435]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Dec 6 01:46:33 localhost dracut[1435]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Dec 6 01:46:33 localhost dracut[1435]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Dec 6 01:46:33 localhost dracut[1435]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Dec 6 01:46:33 localhost dracut[1435]: memstrack is not available
Dec 6 01:46:33 localhost dracut[1435]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Dec 6 01:46:33 localhost dracut[1435]: *** Including module: systemd ***
Dec 6 01:46:34 localhost dracut[1435]: *** Including module: systemd-initrd ***
Dec 6 01:46:34 localhost dracut[1435]: *** Including module: i18n ***
Dec 6 01:46:34 localhost dracut[1435]: No KEYMAP configured.
Dec 6 01:46:34 localhost dracut[1435]: *** Including module: drm ***
Dec 6 01:46:34 localhost dracut[1435]: *** Including module: prefixdevname ***
Dec 6 01:46:34 localhost dracut[1435]: *** Including module: kernel-modules ***
Dec 6 01:46:35 localhost dracut[1435]: *** Including module: kernel-modules-extra ***
Dec 6 01:46:35 localhost dracut[1435]: *** Including module: qemu ***
Dec 6 01:46:35 localhost dracut[1435]: *** Including module: fstab-sys ***
Dec 6 01:46:35 localhost dracut[1435]: *** Including module: rootfs-block ***
Dec 6 01:46:35 localhost dracut[1435]: *** Including module: terminfo ***
Dec 6 01:46:35 localhost dracut[1435]: *** Including module: udev-rules ***
Dec 6 01:46:35 localhost dracut[1435]: Skipping udev rule: 91-permissions.rules
Dec 6 01:46:35 localhost dracut[1435]: Skipping udev rule: 80-drivers-modprobe.rules
Dec 6 01:46:35 localhost dracut[1435]: *** Including module: virtiofs ***
Dec 6 01:46:35 localhost dracut[1435]: *** Including module: dracut-systemd ***
Dec 6 01:46:35 localhost dracut[1435]: *** Including module: usrmount ***
Dec 6 01:46:35 localhost dracut[1435]: *** Including module: base ***
Dec 6 01:46:36 localhost dracut[1435]: *** Including module: fs-lib ***
Dec 6 01:46:36 localhost dracut[1435]: *** Including module: kdumpbase ***
Dec 6 01:46:36 localhost dracut[1435]: *** Including module: microcode_ctl-fw_dir_override ***
Dec 6 01:46:36 localhost dracut[1435]: microcode_ctl module: mangling fw_dir
Dec 6 01:46:36 localhost dracut[1435]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Dec 6 01:46:36 localhost dracut[1435]: microcode_ctl: configuration "intel" is ignored
Dec 6 01:46:36 localhost dracut[1435]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Dec 6 01:46:36 localhost dracut[1435]: microcode_ctl: configuration "intel-06-2d-07" is ignored
Dec 6 01:46:36 localhost dracut[1435]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Dec 6 01:46:36 localhost dracut[1435]: microcode_ctl: configuration "intel-06-4e-03" is ignored
Dec 6 01:46:36 localhost dracut[1435]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Dec 6 01:46:36 localhost dracut[1435]: microcode_ctl: configuration "intel-06-4f-01" is ignored
Dec 6 01:46:36 localhost dracut[1435]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Dec 6 01:46:36 localhost dracut[1435]: microcode_ctl: configuration "intel-06-55-04" is ignored
Dec 6 01:46:36 localhost dracut[1435]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Dec 6 01:46:36 localhost dracut[1435]: microcode_ctl: configuration "intel-06-5e-03" is ignored
Dec 6 01:46:36 localhost dracut[1435]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Dec 6 01:46:36 localhost dracut[1435]: microcode_ctl: configuration "intel-06-8c-01" is ignored
Dec 6 01:46:36 localhost dracut[1435]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Dec 6 01:46:36 localhost dracut[1435]: microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Dec 6 01:46:36 localhost dracut[1435]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Dec 6 01:46:36 localhost dracut[1435]: microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Dec 6 01:46:36 localhost dracut[1435]: microcode_ctl: final fw_dir: "/lib/firmware/updates/5.14.0-284.11.1.el9_2.x86_64 /lib/firmware/updates /lib/firmware/5.14.0-284.11.1.el9_2.x86_64 /lib/firmware"
Dec 6 01:46:36 localhost dracut[1435]: *** Including module: shutdown ***
Dec 6 01:46:36 localhost dracut[1435]: *** Including module: squash ***
Dec 6 01:46:36 localhost dracut[1435]: *** Including modules done ***
Dec 6 01:46:36 localhost dracut[1435]: *** Installing kernel module dependencies ***
Dec 6 01:46:37 localhost dracut[1435]: *** Installing kernel module dependencies done ***
Dec 6 01:46:37 localhost dracut[1435]: *** Resolving executable dependencies ***
Dec 6 01:46:38 localhost systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 6 01:46:38 localhost dracut[1435]: *** Resolving executable dependencies done ***
Dec 6 01:46:38 localhost dracut[1435]: *** Hardlinking files ***
Dec 6 01:46:38 localhost dracut[1435]: Mode: real
Dec 6 01:46:38 localhost dracut[1435]: Files: 1099
Dec 6 01:46:38 localhost dracut[1435]: Linked: 3 files
Dec 6 01:46:38 localhost dracut[1435]: Compared: 0 xattrs
Dec 6 01:46:38 localhost dracut[1435]: Compared: 373 files
Dec 6 01:46:38 localhost dracut[1435]: Saved: 61.04 KiB
Dec 6 01:46:38 localhost dracut[1435]: Duration: 0.025114 seconds
Dec 6 01:46:38 localhost dracut[1435]: *** Hardlinking files done ***
Dec 6 01:46:38 localhost dracut[1435]: Could not find 'strip'. Not stripping the initramfs.
Dec 6 01:46:38 localhost dracut[1435]: *** Generating early-microcode cpio image ***
Dec 6 01:46:38 localhost dracut[1435]: *** Constructing AuthenticAMD.bin ***
Dec 6 01:46:38 localhost dracut[1435]: *** Store current command line parameters ***
Dec 6 01:46:38 localhost dracut[1435]: Stored kernel commandline:
Dec 6 01:46:38 localhost dracut[1435]: No dracut internal kernel commandline stored in the initramfs
Dec 6 01:46:38 localhost dracut[1435]: *** Install squash loader ***
Dec 6 01:46:39 localhost dracut[1435]: *** Squashing the files inside the initramfs ***
Dec 6 01:46:40 localhost dracut[1435]: *** Squashing the files inside the initramfs done ***
Dec 6 01:46:40 localhost dracut[1435]: *** Creating image file '/boot/initramfs-5.14.0-284.11.1.el9_2.x86_64kdump.img' ***
Dec 6 01:46:40 localhost dracut[1435]: *** Creating initramfs image file '/boot/initramfs-5.14.0-284.11.1.el9_2.x86_64kdump.img' done ***
Dec 6 01:46:41 localhost kdumpctl[1133]: kdump: kexec: loaded kdump kernel
Dec 6 01:46:41 localhost kdumpctl[1133]: kdump: Starting kdump: [OK]
Dec 6 01:46:41 localhost systemd[1]: Finished Crash recovery kernel arming.
Dec 6 01:46:41 localhost systemd[1]: Startup finished in 1.265s (kernel) + 2.105s (initrd) + 15.985s (userspace) = 19.356s.
Dec 6 01:46:57 localhost systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 6 01:48:24 localhost sshd[4175]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 01:48:24 localhost systemd[1]: Created slice User Slice of UID 1000.
Dec 6 01:48:24 localhost systemd[1]: Starting User Runtime Directory /run/user/1000...
Dec 6 01:48:24 localhost systemd-logind[766]: New session 1 of user zuul.
Dec 6 01:48:24 localhost systemd[1]: Finished User Runtime Directory /run/user/1000.
Dec 6 01:48:24 localhost systemd[1]: Starting User Manager for UID 1000...
Dec 6 01:48:24 localhost systemd[4179]: Queued start job for default target Main User Target.
Dec 6 01:48:24 localhost systemd[4179]: Created slice User Application Slice.
Dec 6 01:48:24 localhost systemd[4179]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 6 01:48:24 localhost systemd[4179]: Started Daily Cleanup of User's Temporary Directories.
Dec 6 01:48:24 localhost systemd[4179]: Reached target Paths.
Dec 6 01:48:24 localhost systemd[4179]: Reached target Timers.
Dec 6 01:48:24 localhost systemd[4179]: Starting D-Bus User Message Bus Socket...
Dec 6 01:48:24 localhost systemd[4179]: Starting Create User's Volatile Files and Directories...
Dec 6 01:48:24 localhost systemd[4179]: Finished Create User's Volatile Files and Directories.
Dec 6 01:48:24 localhost systemd[4179]: Listening on D-Bus User Message Bus Socket.
Dec 6 01:48:24 localhost systemd[4179]: Reached target Sockets.
Dec 6 01:48:24 localhost systemd[4179]: Reached target Basic System.
Dec 6 01:48:24 localhost systemd[4179]: Reached target Main User Target.
Dec 6 01:48:24 localhost systemd[4179]: Startup finished in 109ms.
Dec 6 01:48:24 localhost systemd[1]: Started User Manager for UID 1000.
Dec 6 01:48:24 localhost systemd[1]: Started Session 1 of User zuul.
Dec 6 01:48:25 localhost python3[4231]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 6 01:48:35 localhost python3[4250]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 6 01:48:42 localhost python3[4303]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 6 01:48:43 localhost python3[4333]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Dec 6 01:48:46 localhost python3[4349]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDVgIoETU+ZMXzSQYJdf7tKLhQsLaB9easlDHbhHsBFXd1+Axjoyg338dVOvCx68r/a15lecdlSwbLqd4GXxUOdHnWLa1I9u6bd6azOwE0Dd6ZjnquN3BRq9dLJXMlKHhXMddL6WHNfxT/JOL+gKp0CM74naUBGqrzV05qlb19n7xZJtmxVohAGGeQdFwQJBVoQ6yZOjcJZ5CpbWCs4pFXZT/31fA0KIAJkrzAeUGRRkQEnzXY1riF0RHwvXaNJ0ZoAYfT7q263Pd5gnQEmpiBirUBH6CXJn4lIQyNMyVRbnKWemW9P1kyv2bjZUPg2b1xWBE7MBTs/wMt1RjdO9p+sxtwOd2IQMf1t3JLa2p3xqgxtGTMugpJUBr1TWwdLoHl+eAMuWZwAWofLWICHUlPzyTN8L8acu0im2eR60FEl9XdUjp8DYCBGxhhIVx+xZxj6nTnNc5T7GJpJlCTF+9YPlDVrLg8y/YXly0BoOqr7p+RaqMAJnoZymNDbuu9V3Vs= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 6 01:48:47 localhost python3[4363]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 01:48:48 localhost python3[4422]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 6 01:48:48 localhost python3[4463]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765003728.1588957-393-159816955677880/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=00c07cb1874e45b180d5c151333e96b1_id_rsa follow=False checksum=59556e0a2f4b936183817041ae1f59f0f3c92dd9 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 01:48:51 localhost python3[4536]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 6 01:48:51 localhost python3[4577]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765003730.9362123-494-144815595449017/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=00c07cb1874e45b180d5c151333e96b1_id_rsa.pub follow=False checksum=2b77fe3fb3441abe077d8d93b68745bd8f418f92 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 01:48:53 localhost python3[4605]: ansible-ping Invoked with data=pong
Dec 6 01:48:55 localhost python3[4619]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 6 01:48:58 localhost python3[4672]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Dec 6 01:49:01 localhost python3[4694]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 01:49:02 localhost python3[4708]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 01:49:02 localhost python3[4722]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 01:49:04 localhost python3[4736]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 01:49:04 localhost python3[4750]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 01:49:04 localhost python3[4764]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 01:49:07 localhost python3[4780]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 01:49:08 localhost python3[4829]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 6 01:49:08 localhost python3[4872]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765003748.294465-103-75605470268299/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 01:49:16 localhost python3[4900]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 6 01:49:16 localhost python3[4914]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 6 01:49:17 localhost python3[4928]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 6 01:49:17 localhost python3[4942]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 6 01:49:17 localhost python3[4956]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 6 01:49:17 localhost python3[4970]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 6 01:49:18 localhost python3[4984]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 6 01:49:18 localhost python3[4998]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 6 01:49:18 localhost python3[5012]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 6 01:49:18 localhost python3[5026]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 6 01:49:19 localhost python3[5040]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 6 01:49:19 localhost python3[5054]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 6 01:49:19 localhost python3[5068]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 6 01:49:20 localhost python3[5082]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 6 01:49:20 localhost python3[5096]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 6 01:49:20 localhost python3[5110]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 6 01:49:20 localhost python3[5124]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 6 01:49:21 localhost python3[5138]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 6 01:49:21 localhost python3[5152]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 6 01:49:21 localhost python3[5166]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 6 01:49:21 localhost python3[5180]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 6 01:49:22 localhost python3[5194]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 6 01:49:22 localhost python3[5208]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 6 01:49:22 localhost python3[5222]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 6 01:49:23 localhost python3[5236]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 6 01:49:23 localhost python3[5250]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 6 01:49:25 localhost python3[5266]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec 6 01:49:25 localhost systemd[1]: Starting Time & Date Service...
Dec 6 01:49:25 localhost systemd[1]: Started Time & Date Service.
Dec 6 01:49:25 localhost systemd-timedated[5268]: Changed time zone to 'UTC' (UTC).
Dec 6 01:49:26 localhost python3[5287]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 01:49:28 localhost python3[5333]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 6 01:49:28 localhost python3[5374]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1765003767.8538969-495-205773419454080/source _original_basename=tmplovyvf94 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 01:49:29 localhost python3[5434]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 6 01:49:30 localhost python3[5475]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1765003769.4579241-584-257946452858827/source _original_basename=tmp299r9vl9 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 01:49:31 localhost python3[5537]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 6 01:49:32 localhost python3[5580]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1765003771.5337546-729-220628508346141/source _original_basename=tmpivkrylp5 follow=False checksum=12efaaf67f4d002c9317067f1840bb831c38c306 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 01:49:33 localhost python3[5608]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 6 01:49:33 localhost python3[5624]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 6 01:49:34 localhost python3[5674]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 6 01:49:35 localhost python3[5717]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1765003774.4274964-849-148715715435939/source _original_basename=tmp98l69hd1 follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 01:49:36 localhost python3[5748]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163ef9-e89a-8d81-2216-000000000023-1-overcloudnovacompute1 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 6 01:49:38 localhost python3[5766]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-8d81-2216-000000000024-1-overcloudnovacompute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Dec 6 01:49:39 localhost python3[5784]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 01:49:55 localhost systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec 6 01:50:21 localhost sshd[5788]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 01:50:37 localhost python3[5805]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 01:51:16 localhost systemd[4179]: Starting Mark boot as successful...
Dec 6 01:51:16 localhost systemd[4179]: Finished Mark boot as successful.
Dec 6 01:51:37 localhost systemd-logind[766]: Session 1 logged out. Waiting for processes to exit.
Dec 6 01:51:48 localhost systemd[1]: Unmounting EFI System Partition Automount...
Dec 6 01:51:48 localhost systemd[1]: efi.mount: Deactivated successfully.
Dec 6 01:51:48 localhost systemd[1]: Unmounted EFI System Partition Automount.
Dec 6 01:53:09 localhost sshd[5811]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 01:54:16 localhost systemd[4179]: Created slice User Background Tasks Slice.
Dec 6 01:54:16 localhost systemd[4179]: Starting Cleanup of User's Temporary Files and Directories...
Dec 6 01:54:16 localhost systemd[4179]: Finished Cleanup of User's Temporary Files and Directories.
Dec 6 01:54:19 localhost kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000
Dec 6 01:54:19 localhost kernel: pci 0000:00:07.0: reg 0x10: [io 0x0000-0x003f]
Dec 6 01:54:19 localhost kernel: pci 0000:00:07.0: reg 0x14: [mem 0x00000000-0x00000fff]
Dec 6 01:54:19 localhost kernel: pci 0000:00:07.0: reg 0x20: [mem 0x00000000-0x00003fff 64bit pref]
Dec 6 01:54:19 localhost kernel: pci 0000:00:07.0: reg 0x30: [mem 0x00000000-0x0007ffff pref]
Dec 6 01:54:19 localhost kernel: pci 0000:00:07.0: BAR 6: assigned [mem 0xc0000000-0xc007ffff pref]
Dec 6 01:54:19 localhost kernel: pci 0000:00:07.0: BAR 4: assigned [mem 0x440000000-0x440003fff 64bit pref]
Dec 6 01:54:19 localhost kernel: pci 0000:00:07.0: BAR 1: assigned [mem 0xc0080000-0xc0080fff]
Dec 6 01:54:19 localhost kernel: pci 0000:00:07.0: BAR 0: assigned [io 0x1000-0x103f]
Dec 6 01:54:19 localhost kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Dec 6 01:54:19 localhost NetworkManager[790]: [1765004059.2424] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec 6 01:54:19 localhost systemd-udevd[5815]: Network interface NamePolicy= disabled on kernel command line.
Dec 6 01:54:19 localhost NetworkManager[790]: [1765004059.2528] device (eth1): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 'external')
Dec 6 01:54:19 localhost NetworkManager[790]: [1765004059.2550] settings: (eth1): created default wired connection 'Wired connection 1'
Dec 6 01:54:19 localhost NetworkManager[790]: [1765004059.2552] device (eth1): carrier: link connected
Dec 6 01:54:19 localhost NetworkManager[790]: [1765004059.2553] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', sys-iface-state: 'managed')
Dec 6 01:54:19 localhost NetworkManager[790]: [1765004059.2556] policy: auto-activating connection 'Wired connection 1' (c69a1528-e75c-3f2e-b5a3-724110d3f450)
Dec 6 01:54:19 localhost NetworkManager[790]: [1765004059.2559] device (eth1): Activation: starting connection 'Wired connection 1' (c69a1528-e75c-3f2e-b5a3-724110d3f450)
Dec 6 01:54:19 localhost NetworkManager[790]: [1765004059.2559] device (eth1): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed')
Dec 6 01:54:19 localhost NetworkManager[790]: [1765004059.2561] device (eth1): state change: prepare -> config (reason 'none', sys-iface-state: 'managed')
Dec 6 01:54:19 localhost NetworkManager[790]: [1765004059.2564] device (eth1): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed')
Dec 6 01:54:19 localhost NetworkManager[790]: [1765004059.2566] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec 6 01:54:19 localhost sshd[5818]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 01:54:20 localhost systemd-logind[766]: New session 3 of user zuul.
Dec 6 01:54:20 localhost systemd[1]: Started Session 3 of User zuul.
Dec 6 01:54:20 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth1: link becomes ready
Dec 6 01:54:20 localhost python3[5835]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163ef9-e89a-1ece-0164-000000000408-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 6 01:54:33 localhost python3[5885]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 6 01:54:33 localhost python3[5928]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765004073.3000777-486-197346107908609/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=ee1ff0d02b7f6e5013c40075618b5eb9b72f06b2 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 01:54:34 localhost python3[5958]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 6 01:54:34 localhost systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Dec 6 01:54:34 localhost systemd[1]: Stopped Network Manager Wait Online.
Dec 6 01:54:34 localhost systemd[1]: Stopping Network Manager Wait Online...
Dec 6 01:54:34 localhost systemd[1]: Stopping Network Manager...
Dec 6 01:54:34 localhost NetworkManager[790]: [1765004074.6038] caught SIGTERM, shutting down normally.
Dec 6 01:54:34 localhost NetworkManager[790]: [1765004074.6178] dhcp4 (eth0): canceled DHCP transaction
Dec 6 01:54:34 localhost NetworkManager[790]: [1765004074.6178] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 6 01:54:34 localhost NetworkManager[790]: [1765004074.6178] dhcp4 (eth0): state changed no lease
Dec 6 01:54:34 localhost NetworkManager[790]: [1765004074.6183] manager: NetworkManager state is now CONNECTING
Dec 6 01:54:34 localhost systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 6 01:54:34 localhost NetworkManager[790]: [1765004074.6368] dhcp4 (eth1): canceled DHCP transaction
Dec 6 01:54:34 localhost NetworkManager[790]: [1765004074.6369] dhcp4 (eth1): state changed no lease
Dec 6 01:54:34 localhost systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 6 01:54:34 localhost NetworkManager[790]: [1765004074.6439] exiting (success)
Dec 6 01:54:34 localhost systemd[1]: NetworkManager.service: Deactivated successfully.
Dec 6 01:54:34 localhost systemd[1]: Stopped Network Manager.
Dec 6 01:54:34 localhost systemd[1]: NetworkManager.service: Consumed 2.627s CPU time.
Dec 6 01:54:34 localhost systemd[1]: Starting Network Manager...
Dec 6 01:54:34 localhost NetworkManager[5973]: [1765004074.6950] NetworkManager (version 1.42.2-1.el9) is starting... (after a restart, boot:a2c5bf5a-4be9-4ef7-a12e-aeb290b897cb)
Dec 6 01:54:34 localhost NetworkManager[5973]: [1765004074.6951] Read config: /etc/NetworkManager/NetworkManager.conf (run: 15-carrier-timeout.conf)
Dec 6 01:54:34 localhost systemd[1]: Started Network Manager.
Dec 6 01:54:34 localhost NetworkManager[5973]: [1765004074.6983] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec 6 01:54:34 localhost systemd[1]: Starting Network Manager Wait Online...
Dec 6 01:54:34 localhost NetworkManager[5973]: [1765004074.7044] manager[0x561cd06d7090]: monitoring kernel firmware directory '/lib/firmware'.
Dec 6 01:54:34 localhost systemd[1]: Starting Hostname Service...
Dec 6 01:54:34 localhost systemd[1]: Started Hostname Service.
Dec 6 01:54:34 localhost NetworkManager[5973]: [1765004074.7642] hostname: hostname: using hostnamed
Dec 6 01:54:34 localhost NetworkManager[5973]: [1765004074.7643] hostname: static hostname changed from (none) to "np0005548789.novalocal"
Dec 6 01:54:34 localhost NetworkManager[5973]: [1765004074.7649] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec 6 01:54:34 localhost NetworkManager[5973]: [1765004074.7655] manager[0x561cd06d7090]: rfkill: Wi-Fi hardware radio set enabled
Dec 6 01:54:34 localhost NetworkManager[5973]: [1765004074.7656] manager[0x561cd06d7090]: rfkill: WWAN hardware radio set enabled
Dec 6 01:54:34 localhost NetworkManager[5973]: [1765004074.7706] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.42.2-1.el9/libnm-device-plugin-team.so)
Dec 6 01:54:34 localhost NetworkManager[5973]: [1765004074.7707] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec 6 01:54:34 localhost NetworkManager[5973]: [1765004074.7709] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec 6 01:54:34 localhost NetworkManager[5973]: [1765004074.7710] manager: Networking is enabled by state file
Dec 6 01:54:34 localhost NetworkManager[5973]: [1765004074.7723] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.42.2-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec 6 01:54:34 localhost NetworkManager[5973]: [1765004074.7724] settings: Loaded settings plugin: keyfile (internal)
Dec 6 01:54:34 localhost NetworkManager[5973]: [1765004074.7773] dhcp: init: Using DHCP client 'internal'
Dec 6 01:54:34 localhost NetworkManager[5973]: [1765004074.7778] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec 6 01:54:34 localhost NetworkManager[5973]: [1765004074.7789] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external')
Dec 6 01:54:34 localhost NetworkManager[5973]: [1765004074.7795] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', sys-iface-state: 'external')
Dec 6 01:54:34 localhost NetworkManager[5973]: [1765004074.7809] device (lo): Activation: starting connection 'lo' (1c0ca10a-4a5b-41dd-9a55-58f9b21f8cc0)
Dec 6 01:54:34 localhost NetworkManager[5973]: [1765004074.7819] device (eth0): carrier: link connected
Dec 6 01:54:34 localhost NetworkManager[5973]: [1765004074.7826] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec 6 01:54:34 localhost NetworkManager[5973]: [1765004074.7833] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Dec 6 01:54:34 localhost NetworkManager[5973]: [1765004074.7834] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'assume')
Dec 6 01:54:34 localhost NetworkManager[5973]: [1765004074.7844] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', sys-iface-state: 'assume')
Dec 6 01:54:34 localhost NetworkManager[5973]: [1765004074.7855] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 6 01:54:34 localhost NetworkManager[5973]: [1765004074.7863] device (eth1): carrier: link connected
Dec 6 01:54:34 localhost NetworkManager[5973]: [1765004074.7870] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec 6 01:54:34 localhost NetworkManager[5973]: [1765004074.7878] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (c69a1528-e75c-3f2e-b5a3-724110d3f450) (indicated)
Dec 6 01:54:34 localhost NetworkManager[5973]: [1765004074.7878] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'assume')
Dec 6 01:54:34 localhost NetworkManager[5973]: [1765004074.7885] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', sys-iface-state: 'assume')
Dec 6 01:54:34 localhost NetworkManager[5973]: [1765004074.7896] device (eth1): Activation: starting connection 'Wired connection 1' (c69a1528-e75c-3f2e-b5a3-724110d3f450)
Dec 6 01:54:34 localhost NetworkManager[5973]: [1765004074.7925] device (lo): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'external')
Dec 6 01:54:34 localhost NetworkManager[5973]: [1765004074.7931] device (lo): state change: prepare -> config (reason 'none', sys-iface-state: 'external')
Dec 6 01:54:34 localhost NetworkManager[5973]: [1765004074.7945] device (lo): state change: config -> ip-config (reason 'none', sys-iface-state: 'external')
Dec 6 01:54:34 localhost NetworkManager[5973]: [1765004074.7948] device (eth0): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'assume')
Dec 6 01:54:34 localhost NetworkManager[5973]: [1765004074.7952] device (eth0): state change: prepare -> config (reason 'none', sys-iface-state: 'assume')
Dec 6 01:54:34 localhost NetworkManager[5973]: [1765004074.7954] device (eth1): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'assume')
Dec 6 01:54:34 localhost NetworkManager[5973]: [1765004074.7957] device (eth1): state change: prepare -> config (reason 'none', sys-iface-state: 'assume')
Dec 6 01:54:34 localhost NetworkManager[5973]: [1765004074.7998] device (lo): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'external')
Dec 6 01:54:34 localhost NetworkManager[5973]: [1765004074.8002] device (eth0): state change: config -> ip-config (reason 'none', sys-iface-state: 'assume')
Dec 6 01:54:34 localhost NetworkManager[5973]: [1765004074.8006] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 6 01:54:34 localhost NetworkManager[5973]: [1765004074.8014] device (eth1): state change: config -> ip-config (reason 'none', sys-iface-state: 'assume')
Dec 6 01:54:34 localhost NetworkManager[5973]: [1765004074.8016] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec 6 01:54:34 localhost NetworkManager[5973]: [1765004074.8029] device (lo): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'external')
Dec 6 01:54:34 localhost NetworkManager[5973]: [1765004074.8034] device (lo): state change: secondaries -> activated (reason 'none', sys-iface-state: 'external')
Dec 6 01:54:34 localhost NetworkManager[5973]: [1765004074.8039] device (lo): Activation: successful, device activated.
Dec 6 01:54:34 localhost NetworkManager[5973]: [1765004074.8096] dhcp4 (eth0): state changed new lease, address=38.102.83.150
Dec 6 01:54:34 localhost NetworkManager[5973]: [1765004074.8101] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec 6 01:54:34 localhost NetworkManager[5973]: [1765004074.8171] device (eth0): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'assume')
Dec 6 01:54:34 localhost NetworkManager[5973]: [1765004074.8193] device (eth0): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'assume')
Dec 6 01:54:34 localhost NetworkManager[5973]: [1765004074.8196] device (eth0): state change: secondaries -> activated (reason 'none', sys-iface-state: 'assume')
Dec 6 01:54:34 localhost NetworkManager[5973]: [1765004074.8201] manager: NetworkManager state is now CONNECTED_SITE
Dec 6 01:54:34 localhost NetworkManager[5973]: [1765004074.8205] device (eth0): Activation: successful, device activated.
Dec 6 01:54:34 localhost NetworkManager[5973]: [1765004074.8211] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec 6 01:54:35 localhost python3[6039]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163ef9-e89a-1ece-0164-00000000012b-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 6 01:54:44 localhost systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 6 01:55:04 localhost systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 6 01:55:19 localhost NetworkManager[5973]: [1765004119.7517] device (eth1): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'assume')
Dec 6 01:55:19 localhost systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 6 01:55:19 localhost systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 6 01:55:19 localhost NetworkManager[5973]: [1765004119.7741] device (eth1): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'assume')
Dec 6 01:55:19 localhost NetworkManager[5973]: [1765004119.7743] device (eth1): state change: secondaries -> activated (reason 'none', sys-iface-state: 'assume')
Dec 6 01:55:19 localhost NetworkManager[5973]: [1765004119.7750] device (eth1): Activation: successful, device activated.
Dec 6 01:55:19 localhost NetworkManager[5973]: [1765004119.7755] manager: startup complete
Dec 6 01:55:19 localhost systemd[1]: Finished Network Manager Wait Online.
Dec 6 01:55:29 localhost systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 6 01:55:35 localhost systemd[1]: session-3.scope: Deactivated successfully.
Dec 6 01:55:35 localhost systemd[1]: session-3.scope: Consumed 1.515s CPU time.
Dec 6 01:55:35 localhost systemd-logind[766]: Session 3 logged out. Waiting for processes to exit.
Dec 6 01:55:35 localhost systemd-logind[766]: Removed session 3.
Dec 6 01:56:17 localhost sshd[6058]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 01:56:18 localhost sshd[6060]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 01:56:18 localhost systemd-logind[766]: New session 4 of user zuul.
Dec 6 01:56:18 localhost systemd[1]: Started Session 4 of User zuul.
Dec 6 01:56:19 localhost python3[6111]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 6 01:56:19 localhost python3[6154]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765004178.949045-628-261181115769909/source _original_basename=tmp2c4451xj follow=False checksum=301833a7e04d955921816dd6c79e775f1a8a19aa backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 01:56:20 localhost sshd[6169]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 01:56:22 localhost sshd[6171]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 01:56:22 localhost systemd[1]: session-4.scope: Deactivated successfully.
Dec 6 01:56:22 localhost systemd-logind[766]: Session 4 logged out. Waiting for processes to exit.
Dec 6 01:56:22 localhost systemd-logind[766]: Removed session 4.
Dec 6 01:56:25 localhost sshd[6174]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 01:56:27 localhost sshd[6176]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:00:16 localhost systemd[1]: Starting dnf makecache...
Dec 6 02:00:17 localhost dnf[6178]: Failed determining last makecache time.
Dec 6 02:00:17 localhost dnf[6178]: There are no enabled repositories in "/etc/yum.repos.d", "/etc/yum/repos.d", "/etc/distro.repos.d".
Dec 6 02:00:17 localhost systemd[1]: dnf-makecache.service: Deactivated successfully.
Dec 6 02:00:17 localhost systemd[1]: Finished dnf makecache.
Dec 6 02:01:43 localhost sshd[6195]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:01:43 localhost systemd[1]: Starting Cleanup of Temporary Directories...
Dec 6 02:01:43 localhost systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Dec 6 02:01:43 localhost systemd[1]: Finished Cleanup of Temporary Directories.
Dec 6 02:01:43 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Dec 6 02:01:46 localhost sshd[6199]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:01:48 localhost sshd[6201]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:01:50 localhost sshd[6203]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:01:52 localhost sshd[6205]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:04:39 localhost sshd[6209]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:04:39 localhost systemd-logind[766]: New session 5 of user zuul.
Dec 6 02:04:39 localhost systemd[1]: Started Session 5 of User zuul.
Dec 6 02:04:40 localhost python3[6228]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-e5b2-9de0-000000001d10-1-overcloudnovacompute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 6 02:04:41 localhost python3[6246]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 02:04:41 localhost python3[6262]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 02:04:41 localhost python3[6278]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 02:04:42 localhost python3[6294]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 02:04:43 localhost python3[6310]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 02:04:44 localhost python3[6358]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 6 02:04:44 localhost python3[6401]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765004684.0802486-648-270810667483841/source _original_basename=tmp1g0xjkfa follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 02:04:46 localhost python3[6431]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 6 02:04:46 localhost systemd[1]: Reloading.
Dec 6 02:04:46 localhost systemd-rc-local-generator[6449]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 6 02:04:46 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 6 02:04:48 localhost python3[6477]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Dec 6 02:04:49 localhost python3[6493]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0 riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 6 02:04:49 localhost python3[6511]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0 riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 6 02:04:50 localhost python3[6529]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0 riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 6 02:04:50 localhost python3[6547]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0 riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 6 02:05:01 localhost python3[6565]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init"; cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system"; cat /sys/fs/cgroup/system.slice/io.max; echo "user"; cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-e5b2-9de0-000000001d17-1-overcloudnovacompute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 6 02:05:02 localhost python3[6584]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 6 02:05:05 localhost systemd[1]: session-5.scope: Deactivated successfully.
Dec 6 02:05:05 localhost systemd[1]: session-5.scope: Consumed 3.908s CPU time.
Dec 6 02:05:05 localhost systemd-logind[766]: Session 5 logged out. Waiting for processes to exit.
Dec 6 02:05:05 localhost systemd-logind[766]: Removed session 5.
Dec 6 02:06:59 localhost sshd[6593]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:06:59 localhost systemd-logind[766]: New session 6 of user zuul.
Dec 6 02:06:59 localhost systemd[1]: Started Session 6 of User zuul.
Dec 6 02:06:59 localhost systemd[1]: Starting RHSM dbus service...
Dec 6 02:07:00 localhost systemd[1]: Started RHSM dbus service.
Dec 6 02:07:00 localhost rhsm-service[6617]: INFO [subscription_manager.i18n:169] Could not import locale for C: [Errno 2] No translation file found for domain: 'rhsm'
Dec 6 02:07:00 localhost rhsm-service[6617]: INFO [subscription_manager.i18n:139] Could not import locale either for C_C: [Errno 2] No translation file found for domain: 'rhsm'
Dec 6 02:07:00 localhost rhsm-service[6617]: INFO [subscription_manager.i18n:169] Could not import locale for C: [Errno 2] No translation file found for domain: 'rhsm'
Dec 6 02:07:00 localhost rhsm-service[6617]: INFO [subscription_manager.i18n:139] Could not import locale either for C_C: [Errno 2] No translation file found for domain: 'rhsm'
Dec 6 02:07:01 localhost rhsm-service[6617]: INFO [subscription_manager.managerlib:90] Consumer created: np0005548789.novalocal (49b9d3d6-359c-4738-9880-6751941cc8f8)
Dec 6 02:07:01 localhost subscription-manager[6617]: Registered system with identity: 49b9d3d6-359c-4738-9880-6751941cc8f8
Dec 6 02:07:02 localhost rhsm-service[6617]: INFO [subscription_manager.entcertlib:131] certs updated:
Dec 6 02:07:02 localhost rhsm-service[6617]: Total updates: 1
Dec 6 02:07:02 localhost rhsm-service[6617]: Found (local) serial# []
Dec 6 02:07:02 localhost rhsm-service[6617]: Expected (UEP) serial# [4524945705155541200]
Dec 6 02:07:02 localhost rhsm-service[6617]: Added (new)
Dec 6 02:07:02 localhost rhsm-service[6617]: [sn:4524945705155541200 ( Content Access,) @ /etc/pki/entitlement/4524945705155541200.pem]
Dec 6 02:07:02 localhost rhsm-service[6617]: Deleted (rogue):
Dec 6 02:07:02 localhost rhsm-service[6617]:
Dec 6 02:07:02 localhost subscription-manager[6617]: Added subscription for 'Content Access' contract 'None'
Dec 6 02:07:02 localhost subscription-manager[6617]: Added subscription for product ' Content Access'
Dec 6 02:07:03 localhost rhsm-service[6617]: INFO [subscription_manager.i18n:169] Could not import locale for C: [Errno 2] No translation file found for domain: 'rhsm'
Dec 6 02:07:03 localhost rhsm-service[6617]: INFO [subscription_manager.i18n:139] Could not import locale either for C_C: [Errno 2] No translation file found for domain: 'rhsm'
Dec 6 02:07:03 localhost rhsm-service[6617]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 6 02:07:03 localhost rhsm-service[6617]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 6 02:07:03 localhost rhsm-service[6617]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 6 02:07:03 localhost rhsm-service[6617]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 6 02:07:04 localhost rhsm-service[6617]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 6 02:07:11 localhost python3[6708]: ansible-ansible.legacy.command Invoked with _raw_params=cat /etc/redhat-release zuul_log_id=fa163ef9-e89a-ea42-bf82-00000000000d-1-overcloudnovacompute1 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 6 02:07:13 localhost python3[6727]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 6 02:07:44 localhost setsebool[6802]: The virt_use_nfs policy boolean was changed to 1 by root
Dec 6 02:07:44 localhost setsebool[6802]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Dec 6 02:07:52 localhost kernel: SELinux: Converting 409 SID table entries...
Dec 6 02:07:52 localhost kernel: SELinux: policy capability network_peer_controls=1
Dec 6 02:07:52 localhost kernel: SELinux: policy capability open_perms=1
Dec 6 02:07:52 localhost kernel: SELinux: policy capability extended_socket_class=1
Dec 6 02:07:52 localhost kernel: SELinux: policy capability always_check_network=0
Dec 6 02:07:52 localhost kernel: SELinux: policy capability cgroup_seclabel=1
Dec 6 02:07:52 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 6 02:07:52 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1
Dec 6 02:08:05 localhost dbus-broker-launch[756]: avc: op=load_policy lsm=selinux seqno=3 res=1
Dec 6 02:08:05 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 6 02:08:05 localhost systemd[1]: Starting man-db-cache-update.service...
Dec 6 02:08:05 localhost systemd[1]: Reloading.
Dec 6 02:08:05 localhost systemd-rc-local-generator[7669]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 6 02:08:06 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 6 02:08:06 localhost systemd[1]: Queuing reload/restart jobs for marked units…
Dec 6 02:08:07 localhost rhsm-service[6617]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 6 02:08:07 localhost rhsm-service[6617]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 6 02:08:14 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 6 02:08:14 localhost systemd[1]: Finished man-db-cache-update.service.
Dec 6 02:08:14 localhost systemd[1]: man-db-cache-update.service: Consumed 10.578s CPU time.
Dec 6 02:08:14 localhost systemd[1]: run-r009f101a74f34b9b987df03572949b1b.service: Deactivated successfully.
Dec 6 02:08:59 localhost systemd[1]: var-lib-containers-storage-overlay-opaque\x2dbug\x2dcheck3706556388-merged.mount: Deactivated successfully.
Dec 6 02:08:59 localhost podman[18395]: 2025-12-06 07:08:59.238406899 +0000 UTC m=+0.125981404 system refresh
Dec 6 02:09:00 localhost systemd[4179]: Starting D-Bus User Message Bus...
Dec 6 02:09:00 localhost dbus-broker-launch[18452]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Dec 6 02:09:00 localhost dbus-broker-launch[18452]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Dec 6 02:09:00 localhost systemd[4179]: Started D-Bus User Message Bus.
Dec 6 02:09:00 localhost journal[18452]: Ready
Dec 6 02:09:00 localhost systemd[4179]: selinux: avc: op=load_policy lsm=selinux seqno=3 res=1
Dec 6 02:09:00 localhost systemd[4179]: Created slice Slice /user.
Dec 6 02:09:00 localhost systemd[4179]: podman-18435.scope: unit configures an IP firewall, but not running as root.
Dec 6 02:09:00 localhost systemd[4179]: (This warning is only shown for the first unit using IP firewalling.)
Dec 6 02:09:00 localhost systemd[4179]: Started podman-18435.scope.
Dec 6 02:09:00 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 6 02:09:00 localhost systemd[4179]: Started podman-pause-927ce357.scope.
Dec 6 02:09:02 localhost systemd[1]: session-6.scope: Deactivated successfully.
Dec 6 02:09:02 localhost systemd[1]: session-6.scope: Consumed 50.791s CPU time.
Dec 6 02:09:02 localhost systemd-logind[766]: Session 6 logged out. Waiting for processes to exit.
Dec 6 02:09:02 localhost systemd-logind[766]: Removed session 6.
Dec 6 02:09:17 localhost sshd[18455]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:09:17 localhost sshd[18456]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:09:17 localhost sshd[18459]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:09:17 localhost sshd[18457]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:09:17 localhost sshd[18458]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:09:22 localhost sshd[18465]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:09:22 localhost systemd-logind[766]: New session 7 of user zuul.
Dec 6 02:09:22 localhost systemd[1]: Started Session 7 of User zuul.
Dec 6 02:09:22 localhost python3[18482]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEYVtM235X0xWH2FKli0CUGpvCLQnDDtCI4yCYqNdWcGuxt1LThsgCBuwYYpkvH+K5VLRKMEyM949Yu6yQU/mgI= zuul@np0005548782.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 6 02:09:23 localhost python3[18498]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEYVtM235X0xWH2FKli0CUGpvCLQnDDtCI4yCYqNdWcGuxt1LThsgCBuwYYpkvH+K5VLRKMEyM949Yu6yQU/mgI= zuul@np0005548782.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 6 02:09:25 localhost systemd[1]: session-7.scope: Deactivated successfully.
Dec 6 02:09:25 localhost systemd-logind[766]: Session 7 logged out. Waiting for processes to exit.
Dec 6 02:09:25 localhost systemd-logind[766]: Removed session 7.
Dec 6 02:11:02 localhost sshd[18500]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:11:02 localhost systemd-logind[766]: New session 8 of user zuul.
Dec 6 02:11:02 localhost systemd[1]: Started Session 8 of User zuul.
Dec 6 02:11:03 localhost python3[18519]: ansible-authorized_key Invoked with user=root manage_dir=True key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDVgIoETU+ZMXzSQYJdf7tKLhQsLaB9easlDHbhHsBFXd1+Axjoyg338dVOvCx68r/a15lecdlSwbLqd4GXxUOdHnWLa1I9u6bd6azOwE0Dd6ZjnquN3BRq9dLJXMlKHhXMddL6WHNfxT/JOL+gKp0CM74naUBGqrzV05qlb19n7xZJtmxVohAGGeQdFwQJBVoQ6yZOjcJZ5CpbWCs4pFXZT/31fA0KIAJkrzAeUGRRkQEnzXY1riF0RHwvXaNJ0ZoAYfT7q263Pd5gnQEmpiBirUBH6CXJn4lIQyNMyVRbnKWemW9P1kyv2bjZUPg2b1xWBE7MBTs/wMt1RjdO9p+sxtwOd2IQMf1t3JLa2p3xqgxtGTMugpJUBr1TWwdLoHl+eAMuWZwAWofLWICHUlPzyTN8L8acu0im2eR60FEl9XdUjp8DYCBGxhhIVx+xZxj6nTnNc5T7GJpJlCTF+9YPlDVrLg8y/YXly0BoOqr7p+RaqMAJnoZymNDbuu9V3Vs= zuul-build-sshkey state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 6 02:11:04 localhost python3[18535]: ansible-user Invoked with name=root state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005548789.novalocal update_password=always uid=None group=None groups=None comment=None home=None shell=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Dec 6 02:11:05 localhost python3[18585]: ansible-ansible.legacy.stat Invoked with path=/root/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 6 02:11:05 localhost python3[18628]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765005065.3021338-136-108546485667979/source dest=/root/.ssh/id_rsa mode=384 owner=root force=False _original_basename=00c07cb1874e45b180d5c151333e96b1_id_rsa follow=False checksum=59556e0a2f4b936183817041ae1f59f0f3c92dd9 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 02:11:07 localhost python3[18690]: ansible-ansible.legacy.stat Invoked with path=/root/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 6 02:11:07 localhost python3[18733]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765005066.9738903-222-9153435052993/source dest=/root/.ssh/id_rsa.pub mode=420 owner=root force=False _original_basename=00c07cb1874e45b180d5c151333e96b1_id_rsa.pub follow=False checksum=2b77fe3fb3441abe077d8d93b68745bd8f418f92 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 02:11:09 localhost python3[18763]: ansible-ansible.builtin.file Invoked with path=/etc/nodepool state=directory mode=0777 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 02:11:10 localhost python3[18809]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 6 02:11:11 localhost python3[18825]: ansible-ansible.legacy.file Invoked with dest=/etc/nodepool/sub_nodes _original_basename=tmp9y3n8ftm recurse=False state=file path=/etc/nodepool/sub_nodes force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 02:11:12 localhost python3[18885]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 6 02:11:12 localhost python3[18901]: ansible-ansible.legacy.file Invoked with dest=/etc/nodepool/sub_nodes_private _original_basename=tmpm7rf67np recurse=False state=file path=/etc/nodepool/sub_nodes_private force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 02:11:14 localhost python3[18961]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 6 02:11:14 localhost python3[18977]: ansible-ansible.legacy.file Invoked with dest=/etc/nodepool/node_private _original_basename=tmpb9v6_722 recurse=False state=file path=/etc/nodepool/node_private force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 02:11:14 localhost systemd[1]: session-8.scope: Deactivated successfully.
Dec 6 02:11:14 localhost systemd[1]: session-8.scope: Consumed 3.516s CPU time.
Dec 6 02:11:14 localhost systemd-logind[766]: Session 8 logged out. Waiting for processes to exit.
Dec 6 02:11:14 localhost systemd-logind[766]: Removed session 8.
Dec 6 02:12:50 localhost sshd[18994]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:13:28 localhost sshd[18996]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:13:28 localhost systemd[1]: Started Session 9 of User zuul.
Dec 6 02:13:28 localhost systemd-logind[766]: New session 9 of user zuul.
Dec 6 02:13:28 localhost python3[19042]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 6 02:18:28 localhost systemd[1]: session-9.scope: Deactivated successfully.
Dec 6 02:18:28 localhost systemd-logind[766]: Session 9 logged out. Waiting for processes to exit.
Dec 6 02:18:28 localhost systemd-logind[766]: Removed session 9.
Dec 6 02:19:06 localhost sshd[19048]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:19:08 localhost sshd[19050]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:19:11 localhost sshd[19052]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:19:13 localhost sshd[19054]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:19:15 localhost sshd[19056]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:24:15 localhost sshd[19059]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:24:17 localhost sshd[19061]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:24:19 localhost sshd[19063]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:24:22 localhost sshd[19065]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:24:25 localhost sshd[19067]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:29:18 localhost sshd[19070]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:29:41 localhost sshd[19072]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:30:02 localhost sshd[19074]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:30:55 localhost sshd[19078]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:31:06 localhost sshd[19079]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:31:06 localhost sshd[19080]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:31:31 localhost sshd[19083]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:31:31 localhost systemd-logind[766]: New session 10 of user zuul.
Dec 6 02:31:31 localhost systemd[1]: Started Session 10 of User zuul.
Dec 6 02:31:32 localhost python3[19100]: ansible-ansible.legacy.command Invoked with _raw_params=cat /etc/redhat-release zuul_log_id=fa163ef9-e89a-1bf3-6840-00000000000c-1-overcloudnovacompute1 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 6 02:31:32 localhost sshd[19103]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:31:33 localhost python3[19122]: ansible-ansible.legacy.command Invoked with _raw_params=yum clean all zuul_log_id=fa163ef9-e89a-1bf3-6840-00000000000d-1-overcloudnovacompute1 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 6 02:31:38 localhost python3[19141]: ansible-community.general.rhsm_repository Invoked with name=['rhel-9-for-x86_64-baseos-eus-rpms'] state=enabled purge=False
Dec 6 02:31:39 localhost sshd[19143]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:31:41 localhost rhsm-service[6617]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 6 02:31:41 localhost rhsm-service[6617]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 6 02:31:44 localhost sshd[19273]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:31:46 localhost sshd[19275]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:31:48 localhost sshd[19277]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:31:51 localhost sshd[19283]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:31:51 localhost sshd[19286]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:31:53 localhost sshd[19291]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:32:36 localhost python3[19313]: ansible-community.general.rhsm_repository Invoked with name=['rhel-9-for-x86_64-appstream-eus-rpms'] state=enabled purge=False
Dec 6 02:32:39 localhost rhsm-service[6617]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 6 02:32:39 localhost rhsm-service[6617]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 6 02:32:48 localhost python3[19454]: ansible-community.general.rhsm_repository Invoked with name=['rhel-9-for-x86_64-highavailability-eus-rpms'] state=enabled purge=False
Dec 6 02:32:50 localhost rhsm-service[6617]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 6 02:32:50 localhost rhsm-service[6617]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 6 02:32:55 localhost rhsm-service[6617]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 6 02:32:55 localhost rhsm-service[6617]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 6 02:33:21 localhost python3[19730]: ansible-community.general.rhsm_repository Invoked with name=['fast-datapath-for-rhel-9-x86_64-rpms'] state=enabled purge=False
Dec 6 02:33:23 localhost sshd[19733]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:33:24 localhost rhsm-service[6617]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 6 02:33:24 localhost rhsm-service[6617]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 6 02:33:24 localhost sshd[19857]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:33:29 localhost rhsm-service[6617]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 6 02:33:30 localhost rhsm-service[6617]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 6 02:33:38 localhost sshd[20115]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:33:41 localhost sshd[20117]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:33:52 localhost python3[20134]: ansible-community.general.rhsm_repository Invoked with name=['openstack-17.1-for-rhel-9-x86_64-rpms'] state=enabled purge=False
Dec 6 02:33:54 localhost rhsm-service[6617]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 6 02:33:55 localhost rhsm-service[6617]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 6 02:34:00 localhost rhsm-service[6617]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 6 02:34:00 localhost rhsm-service[6617]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 6 02:34:24 localhost python3[20472]: ansible-ansible.legacy.command Invoked with _raw_params=yum repolist --enabled#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-1bf3-6840-000000000013-1-overcloudnovacompute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 6 02:34:27 localhost sshd[20476]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:34:29 localhost python3[20493]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch', 'os-net-config', 'ansible-core'] state=present update_cache=True allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 6 02:34:48 localhost sshd[20586]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:34:48 localhost sshd[20589]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:34:49 localhost kernel: SELinux: Converting 490 SID table entries...
Dec 6 02:34:49 localhost kernel: SELinux: policy capability network_peer_controls=1
Dec 6 02:34:49 localhost kernel: SELinux: policy capability open_perms=1
Dec 6 02:34:49 localhost kernel: SELinux: policy capability extended_socket_class=1
Dec 6 02:34:49 localhost kernel: SELinux: policy capability always_check_network=0
Dec 6 02:34:49 localhost kernel: SELinux: policy capability cgroup_seclabel=1
Dec 6 02:34:49 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 6 02:34:49 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1
Dec 6 02:34:49 localhost dbus-broker-launch[756]: avc: op=load_policy lsm=selinux seqno=4 res=1
Dec 6 02:34:49 localhost systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Dec 6 02:34:49 localhost sshd[20646]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:34:53 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 6 02:34:53 localhost systemd[1]: Starting man-db-cache-update.service...
Dec 6 02:34:53 localhost systemd[1]: Reloading.
Dec 6 02:34:53 localhost systemd-rc-local-generator[21158]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 6 02:34:53 localhost systemd-sysv-generator[21164]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 6 02:34:53 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 6 02:34:53 localhost systemd[1]: Queuing reload/restart jobs for marked units…
Dec 6 02:34:54 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 6 02:34:54 localhost systemd[1]: Finished man-db-cache-update.service.
Dec 6 02:34:54 localhost systemd[1]: run-r3049330e21e64e75be809c4d43891857.service: Deactivated successfully.
Dec 6 02:34:55 localhost rhsm-service[6617]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 6 02:34:55 localhost rhsm-service[6617]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 6 02:34:55 localhost sshd[21694]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:35:02 localhost sshd[21696]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:35:22 localhost python3[21714]: ansible-ansible.legacy.command Invoked with _raw_params=ansible-galaxy collection install ansible.posix#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-1bf3-6840-000000000015-1-overcloudnovacompute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 6 02:35:35 localhost sshd[21718]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:35:53 localhost python3[21736]: ansible-ansible.builtin.file Invoked with path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 02:35:54 localhost python3[21784]: ansible-ansible.legacy.stat Invoked with path=/etc/os-net-config/tripleo_config.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 6 02:35:54 localhost python3[21827]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765006553.982665-295-118966497809452/source dest=/etc/os-net-config/tripleo_config.yaml mode=None follow=False _original_basename=overcloud_net_config.j2 checksum=9333f42ac4b9baf349a5c32f7bcba3335b5912e0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 02:35:56 localhost python3[21857]: ansible-community.general.nmcli Invoked with conn_name=ci-private-network state=absent ignore_unsupported_suboptions=False autoconnect=True gw4_ignore_auto=False never_default4=False dns4_ignore_auto=False may_fail4=True gw6_ignore_auto=False dns6_ignore_auto=False mode=balance-rr stp=True priority=128 slavepriority=32 forwarddelay=15 hellotime=2 maxage=20 ageingtime=300 hairpin=False path_cost=100 runner=roundrobin master=None slave_type=None ifname=None type=None ip4=None gw4=None routes4=None routes4_extended=None route_metric4=None routing_rules4=None dns4=None dns4_search=None dns4_options=None method4=None dhcp_client_id=None ip6=None gw6=None dns6=None dns6_search=None dns6_options=None routes6=None routes6_extended=None route_metric6=None method6=None ip_privacy6=None addr_gen_mode6=None miimon=None downdelay=None updelay=None xmit_hash_policy=None arp_interval=None arp_ip_target=None primary=None mtu=None mac=None zone=None runner_hwaddr_policy=None runner_fast_rate=None vlanid=None vlandev=None flags=None ingress=None egress=None vxlan_id=None vxlan_local=None vxlan_remote=None ip_tunnel_dev=None ip_tunnel_local=None ip_tunnel_remote=None ip_tunnel_input_key=NOT_LOGGING_PARAMETER ip_tunnel_output_key=NOT_LOGGING_PARAMETER ssid=None wifi=None wifi_sec=NOT_LOGGING_PARAMETER gsm=None macvlan=None wireguard=None vpn=None transport_mode=None
Dec 6 02:35:56 localhost systemd-journald[619]: Field hash table of /run/log/journal/4b30904fc4748c16d0c72dbebcabab49/system.journal has a fill level at 89.2 (297 of 333 items), suggesting rotation.
Dec 6 02:35:56 localhost systemd-journald[619]: /run/log/journal/4b30904fc4748c16d0c72dbebcabab49/system.journal: Journal header limits reached or header out-of-date, rotating. Dec 6 02:35:56 localhost rsyslogd[760]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Dec 6 02:35:56 localhost rsyslogd[760]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Dec 6 02:35:56 localhost python3[21878]: ansible-community.general.nmcli Invoked with conn_name=ci-private-network-20 state=absent ignore_unsupported_suboptions=False autoconnect=True gw4_ignore_auto=False never_default4=False dns4_ignore_auto=False may_fail4=True gw6_ignore_auto=False dns6_ignore_auto=False mode=balance-rr stp=True priority=128 slavepriority=32 forwarddelay=15 hellotime=2 maxage=20 ageingtime=300 hairpin=False path_cost=100 runner=roundrobin master=None slave_type=None ifname=None type=None ip4=None gw4=None routes4=None routes4_extended=None route_metric4=None routing_rules4=None dns4=None dns4_search=None dns4_options=None method4=None dhcp_client_id=None ip6=None gw6=None dns6=None dns6_search=None dns6_options=None routes6=None routes6_extended=None route_metric6=None method6=None ip_privacy6=None addr_gen_mode6=None miimon=None downdelay=None updelay=None xmit_hash_policy=None arp_interval=None arp_ip_target=None primary=None mtu=None mac=None zone=None runner_hwaddr_policy=None runner_fast_rate=None vlanid=None vlandev=None flags=None ingress=None egress=None vxlan_id=None vxlan_local=None vxlan_remote=None ip_tunnel_dev=None ip_tunnel_local=None ip_tunnel_remote=None ip_tunnel_input_key=NOT_LOGGING_PARAMETER ip_tunnel_output_key=NOT_LOGGING_PARAMETER ssid=None wifi=None wifi_sec=NOT_LOGGING_PARAMETER gsm=None macvlan=None wireguard=None vpn=None transport_mode=None Dec 6 02:35:56 localhost python3[21898]: ansible-community.general.nmcli Invoked with conn_name=ci-private-network-21 
state=absent ignore_unsupported_suboptions=False autoconnect=True gw4_ignore_auto=False never_default4=False dns4_ignore_auto=False may_fail4=True gw6_ignore_auto=False dns6_ignore_auto=False mode=balance-rr stp=True priority=128 slavepriority=32 forwarddelay=15 hellotime=2 maxage=20 ageingtime=300 hairpin=False path_cost=100 runner=roundrobin master=None slave_type=None ifname=None type=None ip4=None gw4=None routes4=None routes4_extended=None route_metric4=None routing_rules4=None dns4=None dns4_search=None dns4_options=None method4=None dhcp_client_id=None ip6=None gw6=None dns6=None dns6_search=None dns6_options=None routes6=None routes6_extended=None route_metric6=None method6=None ip_privacy6=None addr_gen_mode6=None miimon=None downdelay=None updelay=None xmit_hash_policy=None arp_interval=None arp_ip_target=None primary=None mtu=None mac=None zone=None runner_hwaddr_policy=None runner_fast_rate=None vlanid=None vlandev=None flags=None ingress=None egress=None vxlan_id=None vxlan_local=None vxlan_remote=None ip_tunnel_dev=None ip_tunnel_local=None ip_tunnel_remote=None ip_tunnel_input_key=NOT_LOGGING_PARAMETER ip_tunnel_output_key=NOT_LOGGING_PARAMETER ssid=None wifi=None wifi_sec=NOT_LOGGING_PARAMETER gsm=None macvlan=None wireguard=None vpn=None transport_mode=None Dec 6 02:35:56 localhost python3[21918]: ansible-community.general.nmcli Invoked with conn_name=ci-private-network-22 state=absent ignore_unsupported_suboptions=False autoconnect=True gw4_ignore_auto=False never_default4=False dns4_ignore_auto=False may_fail4=True gw6_ignore_auto=False dns6_ignore_auto=False mode=balance-rr stp=True priority=128 slavepriority=32 forwarddelay=15 hellotime=2 maxage=20 ageingtime=300 hairpin=False path_cost=100 runner=roundrobin master=None slave_type=None ifname=None type=None ip4=None gw4=None routes4=None routes4_extended=None route_metric4=None routing_rules4=None dns4=None dns4_search=None dns4_options=None method4=None dhcp_client_id=None ip6=None gw6=None 
dns6=None dns6_search=None dns6_options=None routes6=None routes6_extended=None route_metric6=None method6=None ip_privacy6=None addr_gen_mode6=None miimon=None downdelay=None updelay=None xmit_hash_policy=None arp_interval=None arp_ip_target=None primary=None mtu=None mac=None zone=None runner_hwaddr_policy=None runner_fast_rate=None vlanid=None vlandev=None flags=None ingress=None egress=None vxlan_id=None vxlan_local=None vxlan_remote=None ip_tunnel_dev=None ip_tunnel_local=None ip_tunnel_remote=None ip_tunnel_input_key=NOT_LOGGING_PARAMETER ip_tunnel_output_key=NOT_LOGGING_PARAMETER ssid=None wifi=None wifi_sec=NOT_LOGGING_PARAMETER gsm=None macvlan=None wireguard=None vpn=None transport_mode=None Dec 6 02:35:57 localhost python3[21938]: ansible-community.general.nmcli Invoked with conn_name=ci-private-network-23 state=absent ignore_unsupported_suboptions=False autoconnect=True gw4_ignore_auto=False never_default4=False dns4_ignore_auto=False may_fail4=True gw6_ignore_auto=False dns6_ignore_auto=False mode=balance-rr stp=True priority=128 slavepriority=32 forwarddelay=15 hellotime=2 maxage=20 ageingtime=300 hairpin=False path_cost=100 runner=roundrobin master=None slave_type=None ifname=None type=None ip4=None gw4=None routes4=None routes4_extended=None route_metric4=None routing_rules4=None dns4=None dns4_search=None dns4_options=None method4=None dhcp_client_id=None ip6=None gw6=None dns6=None dns6_search=None dns6_options=None routes6=None routes6_extended=None route_metric6=None method6=None ip_privacy6=None addr_gen_mode6=None miimon=None downdelay=None updelay=None xmit_hash_policy=None arp_interval=None arp_ip_target=None primary=None mtu=None mac=None zone=None runner_hwaddr_policy=None runner_fast_rate=None vlanid=None vlandev=None flags=None ingress=None egress=None vxlan_id=None vxlan_local=None vxlan_remote=None ip_tunnel_dev=None ip_tunnel_local=None ip_tunnel_remote=None ip_tunnel_input_key=NOT_LOGGING_PARAMETER 
ip_tunnel_output_key=NOT_LOGGING_PARAMETER ssid=None wifi=None wifi_sec=NOT_LOGGING_PARAMETER gsm=None macvlan=None wireguard=None vpn=None transport_mode=None Dec 6 02:35:57 localhost sshd[21943]: main: sshd: ssh-rsa algorithm is disabled Dec 6 02:35:58 localhost sshd[21944]: main: sshd: ssh-rsa algorithm is disabled Dec 6 02:35:58 localhost sshd[21945]: main: sshd: ssh-rsa algorithm is disabled Dec 6 02:36:00 localhost python3[21962]: ansible-ansible.builtin.systemd Invoked with name=network state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Dec 6 02:36:01 localhost systemd[1]: Starting LSB: Bring up/down networking... Dec 6 02:36:01 localhost network[21965]: WARN : [network] You are using 'network' service provided by 'network-scripts', which are now deprecated. Dec 6 02:36:01 localhost network[21976]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Dec 6 02:36:01 localhost network[21965]: WARN : [network] 'network-scripts' will be removed from distribution in near future. Dec 6 02:36:01 localhost network[21977]: 'network-scripts' will be removed from distribution in near future. Dec 6 02:36:01 localhost network[21965]: WARN : [network] It is advised to switch to 'NetworkManager' instead for network management. Dec 6 02:36:01 localhost network[21978]: It is advised to switch to 'NetworkManager' instead for network management. Dec 6 02:36:01 localhost NetworkManager[5973]: [1765006561.6917] audit: op="connections-reload" pid=22006 uid=0 result="success" Dec 6 02:36:01 localhost network[21965]: Bringing up loopback interface: [ OK ] Dec 6 02:36:01 localhost NetworkManager[5973]: [1765006561.8795] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth0" pid=22094 uid=0 result="success" Dec 6 02:36:01 localhost network[21965]: Bringing up interface eth0: [ OK ] Dec 6 02:36:01 localhost systemd[1]: Started LSB: Bring up/down networking. 
Dec 6 02:36:02 localhost python3[22135]: ansible-ansible.builtin.systemd Invoked with name=openvswitch state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Dec 6 02:36:02 localhost systemd[1]: Starting Open vSwitch Database Unit... Dec 6 02:36:02 localhost chown[22139]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory Dec 6 02:36:02 localhost ovs-ctl[22144]: /etc/openvswitch/conf.db does not exist ... (warning). Dec 6 02:36:02 localhost ovs-ctl[22144]: Creating empty database /etc/openvswitch/conf.db [ OK ] Dec 6 02:36:02 localhost ovs-ctl[22144]: Starting ovsdb-server [ OK ] Dec 6 02:36:02 localhost ovs-vsctl[22193]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1 Dec 6 02:36:02 localhost ovs-vsctl[22213]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.6-141.el9fdp "external-ids:system-id=\"b142a5ef-fbed-4e92-aa78-e3ad080c6370\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"rhel\"" "system-version=\"9.2\"" Dec 6 02:36:02 localhost ovs-ctl[22144]: Configuring Open vSwitch system IDs [ OK ] Dec 6 02:36:02 localhost ovs-ctl[22144]: Enabling remote OVSDB managers [ OK ] Dec 6 02:36:02 localhost ovs-vsctl[22219]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=np0005548789.novalocal Dec 6 02:36:02 localhost systemd[1]: Started Open vSwitch Database Unit. Dec 6 02:36:02 localhost systemd[1]: Starting Open vSwitch Delete Transient Ports... Dec 6 02:36:02 localhost systemd[1]: Finished Open vSwitch Delete Transient Ports. Dec 6 02:36:02 localhost systemd[1]: Starting Open vSwitch Forwarding Unit... 
Dec 6 02:36:02 localhost kernel: openvswitch: Open vSwitch switching datapath Dec 6 02:36:02 localhost ovs-ctl[22263]: Inserting openvswitch module [ OK ] Dec 6 02:36:02 localhost ovs-ctl[22232]: Starting ovs-vswitchd [ OK ] Dec 6 02:36:02 localhost ovs-ctl[22232]: Enabling remote OVSDB managers [ OK ] Dec 6 02:36:02 localhost systemd[1]: Started Open vSwitch Forwarding Unit. Dec 6 02:36:02 localhost ovs-vsctl[22282]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=np0005548789.novalocal Dec 6 02:36:02 localhost systemd[1]: Starting Open vSwitch... Dec 6 02:36:02 localhost systemd[1]: Finished Open vSwitch. Dec 6 02:36:05 localhost python3[22300]: ansible-ansible.legacy.command Invoked with _raw_params=os-net-config -c /etc/os-net-config/tripleo_config.yaml#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-1bf3-6840-00000000001a-1-overcloudnovacompute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 02:36:06 localhost NetworkManager[5973]: [1765006566.3443] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22458 uid=0 result="success" Dec 6 02:36:06 localhost ifup[22459]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated. Dec 6 02:36:06 localhost ifup[22460]: 'network-scripts' will be removed from distribution in near future. Dec 6 02:36:06 localhost ifup[22461]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well. 
Dec 6 02:36:06 localhost NetworkManager[5973]: [1765006566.3710] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22467 uid=0 result="success" Dec 6 02:36:06 localhost ovs-vsctl[22469]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --may-exist add-br br-ex -- set bridge br-ex other-config:mac-table-size=50000 -- set bridge br-ex other-config:hwaddr=fa:16:3e:66:7f:12 -- set bridge br-ex fail_mode=standalone -- del-controller br-ex Dec 6 02:36:06 localhost kernel: device ovs-system entered promiscuous mode Dec 6 02:36:06 localhost NetworkManager[5973]: [1765006566.3981] manager: (ovs-system): new Generic device (/org/freedesktop/NetworkManager/Devices/4) Dec 6 02:36:06 localhost systemd-udevd[22471]: Network interface NamePolicy= disabled on kernel command line. Dec 6 02:36:06 localhost kernel: Timeout policy base is empty Dec 6 02:36:06 localhost kernel: Failed to associated timeout policy `ovs_test_tp' Dec 6 02:36:06 localhost kernel: device br-ex entered promiscuous mode Dec 6 02:36:06 localhost systemd-udevd[22485]: Network interface NamePolicy= disabled on kernel command line. 
Dec 6 02:36:06 localhost NetworkManager[5973]: [1765006566.4412] manager: (br-ex): new Generic device (/org/freedesktop/NetworkManager/Devices/5) Dec 6 02:36:06 localhost NetworkManager[5973]: [1765006566.4648] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22497 uid=0 result="success" Dec 6 02:36:06 localhost NetworkManager[5973]: [1765006566.4860] device (br-ex): carrier: link connected Dec 6 02:36:09 localhost sshd[22517]: main: sshd: ssh-rsa algorithm is disabled Dec 6 02:36:09 localhost NetworkManager[5973]: [1765006569.5386] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22528 uid=0 result="success" Dec 6 02:36:09 localhost NetworkManager[5973]: [1765006569.5826] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22543 uid=0 result="success" Dec 6 02:36:09 localhost NET[22568]: /etc/sysconfig/network-scripts/ifup-post : updated /etc/resolv.conf Dec 6 02:36:09 localhost NetworkManager[5973]: [1765006569.6532] device (eth1): state change: activated -> unmanaged (reason 'unmanaged', sys-iface-state: 'managed') Dec 6 02:36:09 localhost NetworkManager[5973]: [1765006569.6718] dhcp4 (eth1): canceled DHCP transaction Dec 6 02:36:09 localhost NetworkManager[5973]: [1765006569.6719] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds) Dec 6 02:36:09 localhost NetworkManager[5973]: [1765006569.6719] dhcp4 (eth1): state changed no lease Dec 6 02:36:09 localhost NetworkManager[5973]: [1765006569.6746] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth1" pid=22577 uid=0 result="success" Dec 6 02:36:09 localhost ifup[22578]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated. Dec 6 02:36:09 localhost systemd[1]: Starting Network Manager Script Dispatcher Service... Dec 6 02:36:09 localhost ifup[22580]: 'network-scripts' will be removed from distribution in near future. 
Dec 6 02:36:09 localhost ifup[22581]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well. Dec 6 02:36:09 localhost systemd[1]: Started Network Manager Script Dispatcher Service. Dec 6 02:36:09 localhost NetworkManager[5973]: [1765006569.7050] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth1" pid=22594 uid=0 result="success" Dec 6 02:36:09 localhost NetworkManager[5973]: [1765006569.7477] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth1" pid=22605 uid=0 result="success" Dec 6 02:36:09 localhost NetworkManager[5973]: [1765006569.7541] device (eth1): carrier: link connected Dec 6 02:36:09 localhost NetworkManager[5973]: [1765006569.7752] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth1" pid=22614 uid=0 result="success" Dec 6 02:36:09 localhost ipv6_wait_tentative[22626]: Waiting for interface eth1 IPv6 address(es) to leave the 'tentative' state Dec 6 02:36:10 localhost ipv6_wait_tentative[22631]: Waiting for interface eth1 IPv6 address(es) to leave the 'tentative' state Dec 6 02:36:11 localhost NetworkManager[5973]: [1765006571.8408] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth1" pid=22640 uid=0 result="success" Dec 6 02:36:11 localhost ovs-vsctl[22655]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex eth1 -- add-port br-ex eth1 Dec 6 02:36:11 localhost kernel: device eth1 entered promiscuous mode Dec 6 02:36:11 localhost NetworkManager[5973]: [1765006571.9124] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22663 uid=0 result="success" Dec 6 02:36:11 localhost ifup[22664]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated. Dec 6 02:36:11 localhost ifup[22665]: 'network-scripts' will be removed from distribution in near future. 
Dec 6 02:36:11 localhost ifup[22666]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well. Dec 6 02:36:11 localhost NetworkManager[5973]: [1765006571.9434] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22672 uid=0 result="success" Dec 6 02:36:11 localhost NetworkManager[5973]: [1765006571.9878] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=22682 uid=0 result="success" Dec 6 02:36:11 localhost ifup[22683]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated. Dec 6 02:36:11 localhost ifup[22684]: 'network-scripts' will be removed from distribution in near future. Dec 6 02:36:11 localhost ifup[22685]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well. Dec 6 02:36:12 localhost NetworkManager[5973]: [1765006572.0211] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=22691 uid=0 result="success" Dec 6 02:36:12 localhost ovs-vsctl[22694]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan23 -- add-port br-ex vlan23 tag=23 -- set Interface vlan23 type=internal Dec 6 02:36:12 localhost kernel: device vlan23 entered promiscuous mode Dec 6 02:36:12 localhost NetworkManager[5973]: [1765006572.0631] manager: (vlan23): new Generic device (/org/freedesktop/NetworkManager/Devices/6) Dec 6 02:36:12 localhost systemd-udevd[22696]: Network interface NamePolicy= disabled on kernel command line. 
Dec 6 02:36:12 localhost NetworkManager[5973]: [1765006572.0939] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=22705 uid=0 result="success" Dec 6 02:36:12 localhost NetworkManager[5973]: [1765006572.1154] device (vlan23): carrier: link connected Dec 6 02:36:15 localhost NetworkManager[5973]: [1765006575.1742] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=22734 uid=0 result="success" Dec 6 02:36:15 localhost NetworkManager[5973]: [1765006575.2199] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=22749 uid=0 result="success" Dec 6 02:36:15 localhost NetworkManager[5973]: [1765006575.2813] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=22770 uid=0 result="success" Dec 6 02:36:15 localhost ifup[22771]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated. Dec 6 02:36:15 localhost ifup[22772]: 'network-scripts' will be removed from distribution in near future. Dec 6 02:36:15 localhost ifup[22773]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well. Dec 6 02:36:15 localhost NetworkManager[5973]: [1765006575.3149] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=22779 uid=0 result="success" Dec 6 02:36:15 localhost ovs-vsctl[22782]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan20 -- add-port br-ex vlan20 tag=20 -- set Interface vlan20 type=internal Dec 6 02:36:15 localhost kernel: device vlan20 entered promiscuous mode Dec 6 02:36:15 localhost NetworkManager[5973]: [1765006575.3653] manager: (vlan20): new Generic device (/org/freedesktop/NetworkManager/Devices/7) Dec 6 02:36:15 localhost systemd-udevd[22784]: Network interface NamePolicy= disabled on kernel command line. 
Dec 6 02:36:15 localhost NetworkManager[5973]: [1765006575.3915] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=22794 uid=0 result="success" Dec 6 02:36:15 localhost NetworkManager[5973]: [1765006575.4122] device (vlan20): carrier: link connected Dec 6 02:36:18 localhost NetworkManager[5973]: [1765006578.4605] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=22824 uid=0 result="success" Dec 6 02:36:18 localhost NetworkManager[5973]: [1765006578.5093] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=22839 uid=0 result="success" Dec 6 02:36:18 localhost NetworkManager[5973]: [1765006578.5693] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=22860 uid=0 result="success" Dec 6 02:36:18 localhost ifup[22861]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated. Dec 6 02:36:18 localhost ifup[22862]: 'network-scripts' will be removed from distribution in near future. Dec 6 02:36:18 localhost ifup[22863]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well. Dec 6 02:36:18 localhost NetworkManager[5973]: [1765006578.6011] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=22869 uid=0 result="success" Dec 6 02:36:18 localhost ovs-vsctl[22872]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan22 -- add-port br-ex vlan22 tag=22 -- set Interface vlan22 type=internal Dec 6 02:36:18 localhost kernel: device vlan22 entered promiscuous mode Dec 6 02:36:18 localhost NetworkManager[5973]: [1765006578.6383] manager: (vlan22): new Generic device (/org/freedesktop/NetworkManager/Devices/8) Dec 6 02:36:18 localhost systemd-udevd[22875]: Network interface NamePolicy= disabled on kernel command line. 
Dec 6 02:36:18 localhost NetworkManager[5973]: [1765006578.6618] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=22884 uid=0 result="success" Dec 6 02:36:18 localhost NetworkManager[5973]: [1765006578.6794] device (vlan22): carrier: link connected Dec 6 02:36:19 localhost systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully. Dec 6 02:36:20 localhost sshd[22903]: main: sshd: ssh-rsa algorithm is disabled Dec 6 02:36:21 localhost NetworkManager[5973]: [1765006581.7267] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=22916 uid=0 result="success" Dec 6 02:36:21 localhost NetworkManager[5973]: [1765006581.7727] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=22931 uid=0 result="success" Dec 6 02:36:21 localhost NetworkManager[5973]: [1765006581.8227] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=22952 uid=0 result="success" Dec 6 02:36:21 localhost ifup[22953]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated. Dec 6 02:36:21 localhost ifup[22954]: 'network-scripts' will be removed from distribution in near future. Dec 6 02:36:21 localhost ifup[22955]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well. 
Dec 6 02:36:21 localhost NetworkManager[5973]: [1765006581.8549] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=22961 uid=0 result="success" Dec 6 02:36:21 localhost ovs-vsctl[22964]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan44 -- add-port br-ex vlan44 tag=44 -- set Interface vlan44 type=internal Dec 6 02:36:21 localhost NetworkManager[5973]: [1765006581.8975] manager: (vlan44): new Generic device (/org/freedesktop/NetworkManager/Devices/9) Dec 6 02:36:21 localhost kernel: device vlan44 entered promiscuous mode Dec 6 02:36:21 localhost systemd-udevd[22966]: Network interface NamePolicy= disabled on kernel command line. Dec 6 02:36:21 localhost NetworkManager[5973]: [1765006581.9248] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=22976 uid=0 result="success" Dec 6 02:36:21 localhost NetworkManager[5973]: [1765006581.9474] device (vlan44): carrier: link connected Dec 6 02:36:22 localhost sshd[22994]: main: sshd: ssh-rsa algorithm is disabled Dec 6 02:36:25 localhost NetworkManager[5973]: [1765006585.0091] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23008 uid=0 result="success" Dec 6 02:36:25 localhost NetworkManager[5973]: [1765006585.0550] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23023 uid=0 result="success" Dec 6 02:36:25 localhost NetworkManager[5973]: [1765006585.1159] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23044 uid=0 result="success" Dec 6 02:36:25 localhost ifup[23045]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated. Dec 6 02:36:25 localhost ifup[23046]: 'network-scripts' will be removed from distribution in near future. Dec 6 02:36:25 localhost ifup[23047]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well. 
Dec 6 02:36:25 localhost NetworkManager[5973]: [1765006585.1495] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23053 uid=0 result="success" Dec 6 02:36:25 localhost ovs-vsctl[23056]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan21 -- add-port br-ex vlan21 tag=21 -- set Interface vlan21 type=internal Dec 6 02:36:25 localhost kernel: device vlan21 entered promiscuous mode Dec 6 02:36:25 localhost systemd-udevd[23058]: Network interface NamePolicy= disabled on kernel command line. Dec 6 02:36:25 localhost NetworkManager[5973]: [1765006585.1914] manager: (vlan21): new Generic device (/org/freedesktop/NetworkManager/Devices/10) Dec 6 02:36:25 localhost NetworkManager[5973]: [1765006585.2155] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23068 uid=0 result="success" Dec 6 02:36:25 localhost NetworkManager[5973]: [1765006585.2374] device (vlan21): carrier: link connected Dec 6 02:36:28 localhost NetworkManager[5973]: [1765006588.3012] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23098 uid=0 result="success" Dec 6 02:36:28 localhost NetworkManager[5973]: [1765006588.3525] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23113 uid=0 result="success" Dec 6 02:36:28 localhost NetworkManager[5973]: [1765006588.4138] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23134 uid=0 result="success" Dec 6 02:36:28 localhost ifup[23135]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated. Dec 6 02:36:28 localhost ifup[23136]: 'network-scripts' will be removed from distribution in near future. Dec 6 02:36:28 localhost ifup[23137]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well. 
Dec 6 02:36:28 localhost NetworkManager[5973]: [1765006588.4472] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23143 uid=0 result="success" Dec 6 02:36:28 localhost ovs-vsctl[23146]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan44 -- add-port br-ex vlan44 tag=44 -- set Interface vlan44 type=internal Dec 6 02:36:28 localhost NetworkManager[5973]: [1765006588.5070] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23153 uid=0 result="success" Dec 6 02:36:29 localhost NetworkManager[5973]: [1765006589.5688] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23180 uid=0 result="success" Dec 6 02:36:29 localhost NetworkManager[5973]: [1765006589.6088] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23195 uid=0 result="success" Dec 6 02:36:29 localhost NetworkManager[5973]: [1765006589.6674] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23216 uid=0 result="success" Dec 6 02:36:29 localhost ifup[23217]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated. Dec 6 02:36:29 localhost ifup[23218]: 'network-scripts' will be removed from distribution in near future. Dec 6 02:36:29 localhost ifup[23219]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well. 
Dec 6 02:36:29 localhost NetworkManager[5973]: [1765006589.6982] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23225 uid=0 result="success" Dec 6 02:36:29 localhost ovs-vsctl[23228]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan20 -- add-port br-ex vlan20 tag=20 -- set Interface vlan20 type=internal Dec 6 02:36:29 localhost NetworkManager[5973]: [1765006589.7454] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23235 uid=0 result="success" Dec 6 02:36:30 localhost NetworkManager[5973]: [1765006590.8111] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23263 uid=0 result="success" Dec 6 02:36:30 localhost NetworkManager[5973]: [1765006590.8602] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23278 uid=0 result="success" Dec 6 02:36:30 localhost NetworkManager[5973]: [1765006590.9241] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23299 uid=0 result="success" Dec 6 02:36:30 localhost ifup[23300]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated. Dec 6 02:36:30 localhost ifup[23301]: 'network-scripts' will be removed from distribution in near future. Dec 6 02:36:30 localhost ifup[23302]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well. 
Dec 6 02:36:30 localhost NetworkManager[5973]: [1765006590.9603] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23308 uid=0 result="success" Dec 6 02:36:30 localhost ovs-vsctl[23311]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan21 -- add-port br-ex vlan21 tag=21 -- set Interface vlan21 type=internal Dec 6 02:36:31 localhost NetworkManager[5973]: [1765006591.0214] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23318 uid=0 result="success" Dec 6 02:36:32 localhost NetworkManager[5973]: [1765006592.0815] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23346 uid=0 result="success" Dec 6 02:36:32 localhost NetworkManager[5973]: [1765006592.1279] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23361 uid=0 result="success" Dec 6 02:36:32 localhost NetworkManager[5973]: [1765006592.1870] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23382 uid=0 result="success" Dec 6 02:36:32 localhost ifup[23383]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated. Dec 6 02:36:32 localhost ifup[23384]: 'network-scripts' will be removed from distribution in near future. Dec 6 02:36:32 localhost ifup[23385]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well. 
Dec 6 02:36:32 localhost NetworkManager[5973]: [1765006592.2203] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23391 uid=0 result="success" Dec 6 02:36:32 localhost ovs-vsctl[23394]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan23 -- add-port br-ex vlan23 tag=23 -- set Interface vlan23 type=internal Dec 6 02:36:32 localhost NetworkManager[5973]: [1765006592.2798] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23401 uid=0 result="success" Dec 6 02:36:33 localhost NetworkManager[5973]: [1765006593.3435] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23429 uid=0 result="success" Dec 6 02:36:33 localhost NetworkManager[5973]: [1765006593.3979] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23444 uid=0 result="success" Dec 6 02:36:33 localhost NetworkManager[5973]: [1765006593.4510] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23465 uid=0 result="success" Dec 6 02:36:33 localhost ifup[23466]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated. Dec 6 02:36:33 localhost ifup[23467]: 'network-scripts' will be removed from distribution in near future. Dec 6 02:36:33 localhost ifup[23468]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well. 
Dec 6 02:36:33 localhost NetworkManager[5973]: [1765006593.4784] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23474 uid=0 result="success"
Dec 6 02:36:33 localhost ovs-vsctl[23477]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan22 -- add-port br-ex vlan22 tag=22 -- set Interface vlan22 type=internal
Dec 6 02:36:33 localhost NetworkManager[5973]: [1765006593.5291] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23484 uid=0 result="success"
Dec 6 02:36:34 localhost NetworkManager[5973]: [1765006594.5922] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23512 uid=0 result="success"
Dec 6 02:36:34 localhost NetworkManager[5973]: [1765006594.6385] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23527 uid=0 result="success"
Dec 6 02:36:43 localhost sshd[23545]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:37:09 localhost sshd[23547]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:37:27 localhost python3[23563]: ansible-ansible.legacy.command Invoked with _raw_params=ip a#012ping -c 2 -W 2 192.168.122.10#012ping -c 2 -W 2 192.168.122.11#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-1bf3-6840-00000000001b-1-overcloudnovacompute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 6 02:37:32 localhost sshd[23569]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:37:33 localhost python3[23584]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDVgIoETU+ZMXzSQYJdf7tKLhQsLaB9easlDHbhHsBFXd1+Axjoyg338dVOvCx68r/a15lecdlSwbLqd4GXxUOdHnWLa1I9u6bd6azOwE0Dd6ZjnquN3BRq9dLJXMlKHhXMddL6WHNfxT/JOL+gKp0CM74naUBGqrzV05qlb19n7xZJtmxVohAGGeQdFwQJBVoQ6yZOjcJZ5CpbWCs4pFXZT/31fA0KIAJkrzAeUGRRkQEnzXY1riF0RHwvXaNJ0ZoAYfT7q263Pd5gnQEmpiBirUBH6CXJn4lIQyNMyVRbnKWemW9P1kyv2bjZUPg2b1xWBE7MBTs/wMt1RjdO9p+sxtwOd2IQMf1t3JLa2p3xqgxtGTMugpJUBr1TWwdLoHl+eAMuWZwAWofLWICHUlPzyTN8L8acu0im2eR60FEl9XdUjp8DYCBGxhhIVx+xZxj6nTnNc5T7GJpJlCTF+9YPlDVrLg8y/YXly0BoOqr7p+RaqMAJnoZymNDbuu9V3Vs= zuul-build-sshkey manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 6 02:37:34 localhost python3[23600]: ansible-ansible.posix.authorized_key Invoked with user=root key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDVgIoETU+ZMXzSQYJdf7tKLhQsLaB9easlDHbhHsBFXd1+Axjoyg338dVOvCx68r/a15lecdlSwbLqd4GXxUOdHnWLa1I9u6bd6azOwE0Dd6ZjnquN3BRq9dLJXMlKHhXMddL6WHNfxT/JOL+gKp0CM74naUBGqrzV05qlb19n7xZJtmxVohAGGeQdFwQJBVoQ6yZOjcJZ5CpbWCs4pFXZT/31fA0KIAJkrzAeUGRRkQEnzXY1riF0RHwvXaNJ0ZoAYfT7q263Pd5gnQEmpiBirUBH6CXJn4lIQyNMyVRbnKWemW9P1kyv2bjZUPg2b1xWBE7MBTs/wMt1RjdO9p+sxtwOd2IQMf1t3JLa2p3xqgxtGTMugpJUBr1TWwdLoHl+eAMuWZwAWofLWICHUlPzyTN8L8acu0im2eR60FEl9XdUjp8DYCBGxhhIVx+xZxj6nTnNc5T7GJpJlCTF+9YPlDVrLg8y/YXly0BoOqr7p+RaqMAJnoZymNDbuu9V3Vs= zuul-build-sshkey manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 6 02:37:35 localhost python3[23614]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDVgIoETU+ZMXzSQYJdf7tKLhQsLaB9easlDHbhHsBFXd1+Axjoyg338dVOvCx68r/a15lecdlSwbLqd4GXxUOdHnWLa1I9u6bd6azOwE0Dd6ZjnquN3BRq9dLJXMlKHhXMddL6WHNfxT/JOL+gKp0CM74naUBGqrzV05qlb19n7xZJtmxVohAGGeQdFwQJBVoQ6yZOjcJZ5CpbWCs4pFXZT/31fA0KIAJkrzAeUGRRkQEnzXY1riF0RHwvXaNJ0ZoAYfT7q263Pd5gnQEmpiBirUBH6CXJn4lIQyNMyVRbnKWemW9P1kyv2bjZUPg2b1xWBE7MBTs/wMt1RjdO9p+sxtwOd2IQMf1t3JLa2p3xqgxtGTMugpJUBr1TWwdLoHl+eAMuWZwAWofLWICHUlPzyTN8L8acu0im2eR60FEl9XdUjp8DYCBGxhhIVx+xZxj6nTnNc5T7GJpJlCTF+9YPlDVrLg8y/YXly0BoOqr7p+RaqMAJnoZymNDbuu9V3Vs= zuul-build-sshkey manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 6 02:37:36 localhost python3[23630]: ansible-ansible.posix.authorized_key Invoked with user=root key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDVgIoETU+ZMXzSQYJdf7tKLhQsLaB9easlDHbhHsBFXd1+Axjoyg338dVOvCx68r/a15lecdlSwbLqd4GXxUOdHnWLa1I9u6bd6azOwE0Dd6ZjnquN3BRq9dLJXMlKHhXMddL6WHNfxT/JOL+gKp0CM74naUBGqrzV05qlb19n7xZJtmxVohAGGeQdFwQJBVoQ6yZOjcJZ5CpbWCs4pFXZT/31fA0KIAJkrzAeUGRRkQEnzXY1riF0RHwvXaNJ0ZoAYfT7q263Pd5gnQEmpiBirUBH6CXJn4lIQyNMyVRbnKWemW9P1kyv2bjZUPg2b1xWBE7MBTs/wMt1RjdO9p+sxtwOd2IQMf1t3JLa2p3xqgxtGTMugpJUBr1TWwdLoHl+eAMuWZwAWofLWICHUlPzyTN8L8acu0im2eR60FEl9XdUjp8DYCBGxhhIVx+xZxj6nTnNc5T7GJpJlCTF+9YPlDVrLg8y/YXly0BoOqr7p+RaqMAJnoZymNDbuu9V3Vs= zuul-build-sshkey manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 6 02:37:37 localhost python3[23644]: ansible-ansible.builtin.slurp Invoked with path=/etc/hostname src=/etc/hostname
Dec 6 02:37:37 localhost python3[23659]: ansible-ansible.legacy.command Invoked with _raw_params=hostname="np0005548789.novalocal"#012hostname_str_array=(${hostname//./ })#012echo ${hostname_str_array[0]} > /home/zuul/ansible_hostname#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-1bf3-6840-000000000022-1-overcloudnovacompute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 6 02:37:38 localhost python3[23679]: ansible-ansible.legacy.command Invoked with _raw_params=hostname=$(cat /home/zuul/ansible_hostname)#012hostnamectl hostname "$hostname.localdomain"#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-1bf3-6840-000000000023-1-overcloudnovacompute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 6 02:37:38 localhost systemd[1]: Starting Hostname Service...
Dec 6 02:37:38 localhost systemd[1]: Started Hostname Service.
Dec 6 02:37:38 localhost systemd-hostnamed[23683]: Hostname set to (static)
Dec 6 02:37:38 localhost NetworkManager[5973]: [1765006658.7404] hostname: static hostname changed from "np0005548789.novalocal" to "np0005548789.localdomain"
Dec 6 02:37:38 localhost systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 6 02:37:38 localhost systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 6 02:37:40 localhost systemd[1]: session-10.scope: Deactivated successfully.
Dec 6 02:37:40 localhost systemd[1]: session-10.scope: Consumed 1min 44.156s CPU time.
Dec 6 02:37:40 localhost systemd-logind[766]: Session 10 logged out. Waiting for processes to exit.
Dec 6 02:37:40 localhost systemd-logind[766]: Removed session 10.
Dec 6 02:37:43 localhost sshd[23694]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:37:43 localhost systemd-logind[766]: New session 11 of user zuul.
Dec 6 02:37:43 localhost systemd[1]: Started Session 11 of User zuul.
Dec 6 02:37:43 localhost python3[23711]: ansible-ansible.builtin.slurp Invoked with path=/home/zuul/ansible_hostname src=/home/zuul/ansible_hostname
Dec 6 02:37:45 localhost systemd[1]: session-11.scope: Deactivated successfully.
Dec 6 02:37:45 localhost systemd-logind[766]: Session 11 logged out. Waiting for processes to exit.
Dec 6 02:37:45 localhost systemd-logind[766]: Removed session 11.
Dec 6 02:37:47 localhost sshd[23713]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:37:48 localhost sshd[23715]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:37:48 localhost systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 6 02:37:52 localhost sshd[23717]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:38:08 localhost systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 6 02:38:21 localhost sshd[23725]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:38:31 localhost sshd[23727]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:38:31 localhost systemd-logind[766]: New session 12 of user zuul.
Dec 6 02:38:31 localhost systemd[1]: Started Session 12 of User zuul.
Dec 6 02:38:31 localhost python3[23746]: ansible-ansible.legacy.dnf Invoked with name=['lvm2', 'jq'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 6 02:38:35 localhost systemd[1]: Reloading.
Dec 6 02:38:35 localhost systemd-rc-local-generator[23784]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 6 02:38:35 localhost systemd-sysv-generator[23788]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 6 02:38:35 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 6 02:38:35 localhost systemd[1]: Listening on Device-mapper event daemon FIFOs.
Dec 6 02:38:35 localhost systemd[1]: Reloading.
Dec 6 02:38:35 localhost systemd-sysv-generator[23831]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 6 02:38:35 localhost systemd-rc-local-generator[23827]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 6 02:38:35 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 6 02:38:35 localhost systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Dec 6 02:38:35 localhost systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Dec 6 02:38:35 localhost systemd[1]: Reloading.
Dec 6 02:38:35 localhost systemd-rc-local-generator[23865]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 6 02:38:35 localhost systemd-sysv-generator[23869]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 6 02:38:35 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 6 02:38:36 localhost systemd[1]: Listening on LVM2 poll daemon socket.
Dec 6 02:38:36 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 6 02:38:36 localhost systemd[1]: Starting man-db-cache-update.service...
Dec 6 02:38:36 localhost systemd[1]: Reloading.
Dec 6 02:38:36 localhost systemd-rc-local-generator[23931]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 6 02:38:36 localhost systemd-sysv-generator[23934]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 6 02:38:36 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 6 02:38:36 localhost systemd[1]: Queuing reload/restart jobs for marked units…
Dec 6 02:38:36 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 6 02:38:37 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 6 02:38:37 localhost systemd[1]: Finished man-db-cache-update.service.
Dec 6 02:38:37 localhost systemd[1]: run-r87c78052be9c4c00b5254abdbd491c77.service: Deactivated successfully.
Dec 6 02:38:37 localhost systemd[1]: run-r180c600c3766474fa0509bd24a3f2262.service: Deactivated successfully.
Dec 6 02:38:54 localhost sshd[24519]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:39:01 localhost sshd[24521]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:39:20 localhost sshd[24523]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:39:26 localhost sshd[24525]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:39:30 localhost sshd[24527]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:39:37 localhost systemd[1]: session-12.scope: Deactivated successfully.
Dec 6 02:39:37 localhost systemd[1]: session-12.scope: Consumed 4.664s CPU time.
Dec 6 02:39:37 localhost systemd-logind[766]: Session 12 logged out. Waiting for processes to exit.
Dec 6 02:39:37 localhost systemd-logind[766]: Removed session 12.
Dec 6 02:40:08 localhost sshd[24529]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:40:13 localhost sshd[24531]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:40:39 localhost sshd[24533]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:40:48 localhost sshd[24535]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:40:56 localhost sshd[24537]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:41:14 localhost sshd[24539]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:41:34 localhost sshd[24541]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:41:45 localhost sshd[24543]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:42:15 localhost sshd[24545]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:42:18 localhost sshd[24547]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:42:26 localhost sshd[24549]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:42:30 localhost sshd[24551]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:42:32 localhost sshd[24553]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:42:35 localhost sshd[24555]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:42:38 localhost sshd[24557]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:42:41 localhost sshd[24559]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:42:50 localhost sshd[24561]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:42:51 localhost sshd[24563]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:43:24 localhost sshd[24565]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:43:42 localhost sshd[24567]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:43:55 localhost sshd[24569]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:43:58 localhost sshd[24571]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:44:11 localhost sshd[24573]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:44:32 localhost sshd[24576]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:45:11 localhost sshd[24578]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:45:14 localhost sshd[24580]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:45:29 localhost sshd[24583]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:45:34 localhost sshd[24585]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:45:42 localhost sshd[24587]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:46:22 localhost sshd[24589]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:46:43 localhost sshd[24591]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:46:45 localhost sshd[24593]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:46:45 localhost sshd[24595]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:46:47 localhost sshd[24597]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:46:49 localhost sshd[24599]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:46:50 localhost sshd[24601]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:46:51 localhost sshd[24603]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:46:56 localhost sshd[24606]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:47:01 localhost sshd[24608]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:47:32 localhost sshd[24610]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:47:57 localhost sshd[24612]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:48:16 localhost sshd[24614]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:48:18 localhost sshd[24616]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:48:36 localhost sshd[24619]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:48:43 localhost sshd[24621]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:49:04 localhost sshd[24623]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:49:38 localhost sshd[24625]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:49:46 localhost sshd[24627]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:49:53 localhost sshd[24630]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:50:10 localhost sshd[24632]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:50:13 localhost sshd[24634]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:51:04 localhost sshd[24636]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:51:05 localhost sshd[24638]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:51:19 localhost sshd[24640]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:51:24 localhost sshd[24642]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:51:45 localhost sshd[24644]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:52:18 localhost sshd[24646]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:52:30 localhost sshd[24648]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:52:34 localhost sshd[24650]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:52:53 localhost sshd[24653]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:53:22 localhost sshd[24655]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:53:31 localhost sshd[24657]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:53:46 localhost sshd[24659]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:53:58 localhost sshd[24661]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:54:27 localhost sshd[24663]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:54:39 localhost sshd[24665]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:54:51 localhost sshd[24667]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:54:56 localhost sshd[24669]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:55:00 localhost sshd[24671]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:55:02 localhost sshd[24673]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:55:04 localhost sshd[24675]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:55:07 localhost sshd[24677]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:55:09 localhost sshd[24679]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:55:17 localhost sshd[24682]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:55:17 localhost systemd-logind[766]: New session 13 of user zuul.
Dec 6 02:55:17 localhost systemd[1]: Started Session 13 of User zuul.
Dec 6 02:55:18 localhost python3[24730]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 6 02:55:19 localhost sshd[24818]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:55:19 localhost python3[24817]: ansible-ansible.builtin.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 6 02:55:23 localhost python3[24836]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 6 02:55:23 localhost python3[24852]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=7G#012losetup /dev/loop3 /var/lib/ceph-osd-0.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 6 02:55:23 localhost kernel: loop: module loaded
Dec 6 02:55:23 localhost kernel: loop3: detected capacity change from 0 to 14680064
Dec 6 02:55:24 localhost python3[24877]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3#012vgcreate ceph_vg0 /dev/loop3#012lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 6 02:55:24 localhost lvm[24880]: PV /dev/loop3 not used.
Dec 6 02:55:24 localhost lvm[24882]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 6 02:55:24 localhost systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Dec 6 02:55:24 localhost lvm[24885]: 1 logical volume(s) in volume group "ceph_vg0" now active
Dec 6 02:55:24 localhost lvm[24892]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 6 02:55:24 localhost lvm[24892]: VG ceph_vg0 finished
Dec 6 02:55:24 localhost systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Dec 6 02:55:25 localhost python3[24940]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 6 02:55:25 localhost python3[24984]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765007724.768987-54544-221435459495844/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 02:55:26 localhost python3[25014]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 6 02:55:26 localhost systemd[1]: Reloading.
Dec 6 02:55:26 localhost systemd-sysv-generator[25040]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 6 02:55:26 localhost systemd-rc-local-generator[25036]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 6 02:55:26 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 6 02:55:26 localhost systemd[1]: Starting Ceph OSD losetup...
Dec 6 02:55:26 localhost bash[25055]: /dev/loop3: [64516]:8400144 (/var/lib/ceph-osd-0.img)
Dec 6 02:55:26 localhost systemd[1]: Finished Ceph OSD losetup.
Dec 6 02:55:26 localhost lvm[25056]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 6 02:55:26 localhost lvm[25056]: VG ceph_vg0 finished
Dec 6 02:55:27 localhost python3[25072]: ansible-ansible.builtin.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 6 02:55:30 localhost python3[25089]: ansible-ansible.builtin.stat Invoked with path=/dev/loop4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 6 02:55:30 localhost python3[25105]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-1.img bs=1 count=0 seek=7G#012losetup /dev/loop4 /var/lib/ceph-osd-1.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 6 02:55:30 localhost kernel: loop4: detected capacity change from 0 to 14680064
Dec 6 02:55:31 localhost python3[25127]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop4#012vgcreate ceph_vg1 /dev/loop4#012lvcreate -n ceph_lv1 -l +100%FREE ceph_vg1#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 6 02:55:31 localhost lvm[25130]: PV /dev/loop4 not used.
Dec 6 02:55:31 localhost lvm[25140]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 6 02:55:31 localhost systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg1.
Dec 6 02:55:31 localhost lvm[25142]: 1 logical volume(s) in volume group "ceph_vg1" now active
Dec 6 02:55:31 localhost systemd[1]: lvm-activate-ceph_vg1.service: Deactivated successfully.
Dec 6 02:55:32 localhost python3[25190]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-1.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 6 02:55:32 localhost python3[25233]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765007731.8777587-54738-165649410462717/source dest=/etc/systemd/system/ceph-osd-losetup-1.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=19612168ea279db4171b94ee1f8625de1ec44b58 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 02:55:33 localhost python3[25263]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-1.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 6 02:55:33 localhost systemd[1]: Reloading.
Dec 6 02:55:33 localhost systemd-sysv-generator[25290]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 6 02:55:33 localhost systemd-rc-local-generator[25287]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 6 02:55:33 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 6 02:55:33 localhost systemd[1]: Starting Ceph OSD losetup...
Dec 6 02:55:33 localhost bash[25303]: /dev/loop4: [64516]:8399529 (/var/lib/ceph-osd-1.img)
Dec 6 02:55:33 localhost systemd[1]: Finished Ceph OSD losetup.
Dec 6 02:55:33 localhost lvm[25304]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 6 02:55:33 localhost lvm[25304]: VG ceph_vg1 finished
Dec 6 02:55:42 localhost python3[25350]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all', 'min'] gather_timeout=45 filter=[] fact_path=/etc/ansible/facts.d
Dec 6 02:55:43 localhost python3[25370]: ansible-hostname Invoked with name=np0005548789.localdomain use=None
Dec 6 02:55:43 localhost systemd[1]: Starting Hostname Service...
Dec 6 02:55:43 localhost systemd[1]: Started Hostname Service.
Dec 6 02:55:45 localhost python3[25393]: ansible-tempfile Invoked with state=file suffix=tmphosts prefix=ansible. path=None
Dec 6 02:55:45 localhost sshd[25426]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:55:46 localhost python3[25443]: ansible-ansible.legacy.copy Invoked with remote_src=True src=/etc/hosts dest=/tmp/ansible.0gnzpzdmtmphosts mode=preserve backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 02:55:46 localhost python3[25473]: ansible-blockinfile Invoked with state=absent path=/tmp/ansible.0gnzpzdmtmphosts block= marker=# {mark} marker_begin=HEAT_HOSTS_START - Do not edit manually within this section! marker_end=HEAT_HOSTS_END create=False backup=False unsafe_writes=False insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 02:55:47 localhost python3[25489]: ansible-blockinfile Invoked with create=True path=/tmp/ansible.0gnzpzdmtmphosts insertbefore=BOF block=192.168.122.106 np0005548788.localdomain np0005548788#012192.168.122.106 np0005548788.ctlplane.localdomain np0005548788.ctlplane#012192.168.122.107 np0005548789.localdomain np0005548789#012192.168.122.107 np0005548789.ctlplane.localdomain np0005548789.ctlplane#012192.168.122.108 np0005548790.localdomain np0005548790#012192.168.122.108 np0005548790.ctlplane.localdomain np0005548790.ctlplane#012192.168.122.103 np0005548785.localdomain np0005548785#012192.168.122.103 np0005548785.ctlplane.localdomain np0005548785.ctlplane#012192.168.122.104 np0005548786.localdomain np0005548786#012192.168.122.104 np0005548786.ctlplane.localdomain np0005548786.ctlplane#012192.168.122.105 np0005548787.localdomain np0005548787#012192.168.122.105 np0005548787.ctlplane.localdomain np0005548787.ctlplane#012#012192.168.122.100 undercloud.ctlplane.localdomain undercloud.ctlplane#012 marker=# {mark} marker_begin=START_HOST_ENTRIES_FOR_STACK: overcloud marker_end=END_HOST_ENTRIES_FOR_STACK: overcloud state=present backup=False unsafe_writes=False insertafter=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 02:55:47 localhost python3[25505]: ansible-ansible.legacy.command Invoked with _raw_params=cp "/tmp/ansible.0gnzpzdmtmphosts" "/etc/hosts" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 6 02:55:48 localhost python3[25522]: ansible-file Invoked with path=/tmp/ansible.0gnzpzdmtmphosts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 02:55:50 localhost python3[25538]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-active ntpd.service || systemctl is-enabled ntpd.service _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 6 02:55:51 localhost python3[25556]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 6 02:55:55 localhost python3[25606]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 6 02:55:56 localhost python3[25651]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765007755.1418512-55569-180179163738915/source dest=/etc/chrony.conf owner=root group=root mode=420 follow=False _original_basename=chrony.conf.j2 checksum=4fd4fbbb2de00c70a54478b7feb8ef8adf6a3362 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 02:55:56 localhost sshd[25666]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:55:57 localhost sshd[25668]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:55:57 localhost python3[25685]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 6 02:55:58 localhost python3[25703]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 6 02:55:58 localhost chronyd[762]: chronyd exiting
Dec 6 02:55:58 localhost systemd[1]: Stopping NTP client/server...
Dec 6 02:55:58 localhost systemd[1]: chronyd.service: Deactivated successfully.
Dec 6 02:55:58 localhost systemd[1]: Stopped NTP client/server.
Dec 6 02:55:58 localhost systemd[1]: chronyd.service: Consumed 97ms CPU time, read 1.9M from disk, written 4.0K to disk.
Dec 6 02:55:58 localhost systemd[1]: Starting NTP client/server...
Dec 6 02:55:59 localhost chronyd[25710]: chronyd version 4.3 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG) Dec 6 02:55:59 localhost chronyd[25710]: Frequency -30.154 +/- 0.056 ppm read from /var/lib/chrony/drift Dec 6 02:55:59 localhost chronyd[25710]: Loaded seccomp filter (level 2) Dec 6 02:55:59 localhost systemd[1]: Started NTP client/server. Dec 6 02:55:59 localhost python3[25759]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/chrony-online.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 6 02:56:00 localhost python3[25802]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765007759.5466952-55759-222539856691277/source dest=/etc/systemd/system/chrony-online.service _original_basename=chrony-online.service follow=False checksum=d4d85e046d61f558ac7ec8178c6d529d893e81e1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 02:56:00 localhost python3[25832]: ansible-systemd Invoked with state=started name=chrony-online.service enabled=True daemon-reload=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Dec 6 02:56:00 localhost systemd[1]: Reloading. Dec 6 02:56:00 localhost systemd-sysv-generator[25859]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 6 02:56:00 localhost systemd-rc-local-generator[25856]: /etc/rc.d/rc.local is not marked executable, skipping. 
Dec 6 02:56:00 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 6 02:56:01 localhost systemd[1]: Reloading. Dec 6 02:56:01 localhost systemd-rc-local-generator[25893]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 6 02:56:01 localhost systemd-sysv-generator[25898]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 6 02:56:01 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 6 02:56:01 localhost systemd[1]: Starting chronyd online sources service... Dec 6 02:56:01 localhost chronyc[25908]: 200 OK Dec 6 02:56:01 localhost systemd[1]: chrony-online.service: Deactivated successfully. Dec 6 02:56:01 localhost systemd[1]: Finished chronyd online sources service. Dec 6 02:56:02 localhost python3[25924]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc makestep _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 02:56:02 localhost chronyd[25710]: System clock was stepped by 0.000000 seconds Dec 6 02:56:02 localhost python3[25941]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc waitsync 30 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 02:56:03 localhost chronyd[25710]: Selected source 51.222.111.13 (pool.ntp.org) Dec 6 02:56:13 localhost python3[25958]: ansible-timezone Invoked with name=UTC hwclock=None Dec 6 02:56:13 localhost systemd[1]: Starting Time & Date Service... 
Dec 6 02:56:13 localhost systemd[1]: Started Time & Date Service. Dec 6 02:56:13 localhost systemd[1]: systemd-hostnamed.service: Deactivated successfully. Dec 6 02:56:14 localhost python3[25981]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Dec 6 02:56:14 localhost chronyd[25710]: chronyd exiting Dec 6 02:56:14 localhost systemd[1]: Stopping NTP client/server... Dec 6 02:56:14 localhost systemd[1]: chronyd.service: Deactivated successfully. Dec 6 02:56:14 localhost systemd[1]: Stopped NTP client/server. Dec 6 02:56:14 localhost systemd[1]: Starting NTP client/server... Dec 6 02:56:14 localhost chronyd[25988]: chronyd version 4.3 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG) Dec 6 02:56:14 localhost chronyd[25988]: Frequency -30.154 +/- 0.056 ppm read from /var/lib/chrony/drift Dec 6 02:56:14 localhost chronyd[25988]: Loaded seccomp filter (level 2) Dec 6 02:56:14 localhost systemd[1]: Started NTP client/server. Dec 6 02:56:18 localhost chronyd[25988]: Selected source 192.95.27.155 (pool.ntp.org) Dec 6 02:56:35 localhost sshd[26183]: main: sshd: ssh-rsa algorithm is disabled Dec 6 02:56:41 localhost sshd[26185]: main: sshd: ssh-rsa algorithm is disabled Dec 6 02:56:43 localhost systemd[1]: systemd-timedated.service: Deactivated successfully. 
Dec 6 02:56:57 localhost sshd[26189]: main: sshd: ssh-rsa algorithm is disabled Dec 6 02:57:04 localhost sshd[26191]: main: sshd: ssh-rsa algorithm is disabled Dec 6 02:57:28 localhost sshd[26193]: main: sshd: ssh-rsa algorithm is disabled Dec 6 02:58:03 localhost sshd[26197]: main: sshd: ssh-rsa algorithm is disabled Dec 6 02:58:07 localhost sshd[26199]: main: sshd: ssh-rsa algorithm is disabled Dec 6 02:58:09 localhost sshd[26201]: main: sshd: ssh-rsa algorithm is disabled Dec 6 02:58:13 localhost sshd[26203]: main: sshd: ssh-rsa algorithm is disabled Dec 6 02:58:24 localhost sshd[26205]: main: sshd: ssh-rsa algorithm is disabled Dec 6 02:58:24 localhost systemd-logind[766]: New session 14 of user ceph-admin. Dec 6 02:58:24 localhost systemd[1]: Created slice User Slice of UID 1002. Dec 6 02:58:24 localhost systemd[1]: Starting User Runtime Directory /run/user/1002... Dec 6 02:58:24 localhost systemd[1]: Finished User Runtime Directory /run/user/1002. Dec 6 02:58:24 localhost systemd[1]: Starting User Manager for UID 1002... Dec 6 02:58:24 localhost sshd[26222]: main: sshd: ssh-rsa algorithm is disabled Dec 6 02:58:24 localhost systemd[26209]: Queued start job for default target Main User Target. Dec 6 02:58:24 localhost systemd[26209]: Created slice User Application Slice. Dec 6 02:58:24 localhost systemd[26209]: Started Mark boot as successful after the user session has run 2 minutes. Dec 6 02:58:24 localhost systemd[26209]: Started Daily Cleanup of User's Temporary Directories. Dec 6 02:58:24 localhost systemd[26209]: Reached target Paths. Dec 6 02:58:24 localhost systemd[26209]: Reached target Timers. Dec 6 02:58:24 localhost systemd[26209]: Starting D-Bus User Message Bus Socket... Dec 6 02:58:24 localhost systemd[26209]: Starting Create User's Volatile Files and Directories... Dec 6 02:58:24 localhost systemd[26209]: Listening on D-Bus User Message Bus Socket. Dec 6 02:58:24 localhost systemd[26209]: Reached target Sockets. 
Dec 6 02:58:24 localhost systemd[26209]: Finished Create User's Volatile Files and Directories. Dec 6 02:58:24 localhost systemd[26209]: Reached target Basic System. Dec 6 02:58:24 localhost systemd[26209]: Reached target Main User Target. Dec 6 02:58:24 localhost systemd[26209]: Startup finished in 114ms. Dec 6 02:58:24 localhost systemd[1]: Started User Manager for UID 1002. Dec 6 02:58:24 localhost systemd[1]: Started Session 14 of User ceph-admin. Dec 6 02:58:24 localhost systemd-logind[766]: New session 16 of user ceph-admin. Dec 6 02:58:24 localhost systemd[1]: Started Session 16 of User ceph-admin. Dec 6 02:58:24 localhost sshd[26244]: main: sshd: ssh-rsa algorithm is disabled Dec 6 02:58:24 localhost systemd-logind[766]: New session 17 of user ceph-admin. Dec 6 02:58:24 localhost systemd[1]: Started Session 17 of User ceph-admin. Dec 6 02:58:25 localhost sshd[26263]: main: sshd: ssh-rsa algorithm is disabled Dec 6 02:58:25 localhost systemd-logind[766]: New session 18 of user ceph-admin. Dec 6 02:58:25 localhost systemd[1]: Started Session 18 of User ceph-admin. Dec 6 02:58:25 localhost sshd[26282]: main: sshd: ssh-rsa algorithm is disabled Dec 6 02:58:25 localhost systemd-logind[766]: New session 19 of user ceph-admin. Dec 6 02:58:25 localhost systemd[1]: Started Session 19 of User ceph-admin. Dec 6 02:58:25 localhost sshd[26301]: main: sshd: ssh-rsa algorithm is disabled Dec 6 02:58:26 localhost systemd-logind[766]: New session 20 of user ceph-admin. Dec 6 02:58:26 localhost systemd[1]: Started Session 20 of User ceph-admin. Dec 6 02:58:26 localhost sshd[26320]: main: sshd: ssh-rsa algorithm is disabled Dec 6 02:58:26 localhost systemd-logind[766]: New session 21 of user ceph-admin. Dec 6 02:58:26 localhost systemd[1]: Started Session 21 of User ceph-admin. Dec 6 02:58:26 localhost sshd[26339]: main: sshd: ssh-rsa algorithm is disabled Dec 6 02:58:26 localhost systemd-logind[766]: New session 22 of user ceph-admin. 
Dec 6 02:58:26 localhost systemd[1]: Started Session 22 of User ceph-admin. Dec 6 02:58:27 localhost sshd[26358]: main: sshd: ssh-rsa algorithm is disabled Dec 6 02:58:27 localhost systemd-logind[766]: New session 23 of user ceph-admin. Dec 6 02:58:27 localhost systemd[1]: Started Session 23 of User ceph-admin. Dec 6 02:58:27 localhost sshd[26377]: main: sshd: ssh-rsa algorithm is disabled Dec 6 02:58:27 localhost systemd-logind[766]: New session 24 of user ceph-admin. Dec 6 02:58:27 localhost systemd[1]: Started Session 24 of User ceph-admin. Dec 6 02:58:28 localhost sshd[26394]: main: sshd: ssh-rsa algorithm is disabled Dec 6 02:58:28 localhost systemd-logind[766]: New session 25 of user ceph-admin. Dec 6 02:58:28 localhost systemd[1]: Started Session 25 of User ceph-admin. Dec 6 02:58:28 localhost sshd[26413]: main: sshd: ssh-rsa algorithm is disabled Dec 6 02:58:28 localhost systemd-logind[766]: New session 26 of user ceph-admin. Dec 6 02:58:28 localhost systemd[1]: Started Session 26 of User ceph-admin. Dec 6 02:58:28 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Dec 6 02:58:47 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Dec 6 02:58:48 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Dec 6 02:58:48 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Dec 6 02:58:48 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Dec 6 02:58:48 localhost systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 26629 (sysctl) Dec 6 02:58:48 localhost systemd[1]: Mounting Arbitrary Executable File Formats File System... Dec 6 02:58:48 localhost systemd[1]: Mounted Arbitrary Executable File Formats File System. Dec 6 02:58:49 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. 
Dec 6 02:58:50 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Dec 6 02:58:50 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Dec 6 02:58:53 localhost kernel: VFS: idmapped mount is not enabled. Dec 6 02:59:02 localhost sshd[26853]: main: sshd: ssh-rsa algorithm is disabled Dec 6 02:59:13 localhost podman[26770]: Dec 6 02:59:13 localhost podman[26770]: 2025-12-06 07:59:13.531811098 +0000 UTC m=+23.102943100 container create 7350105fe9b4d3bcc45729d2644680810591819ba664a3ae0750045a241db623 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=distracted_lewin, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, ceph=True, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, release=1763362218, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 6 02:59:13 localhost podman[26770]: 2025-12-06 07:58:50.470185507 +0000 UTC m=+0.041317539 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 6 02:59:13 localhost systemd[1]: Created slice Slice /machine. 
Dec 6 02:59:13 localhost systemd[1]: Started libpod-conmon-7350105fe9b4d3bcc45729d2644680810591819ba664a3ae0750045a241db623.scope. Dec 6 02:59:13 localhost systemd[1]: Started libcrun container. Dec 6 02:59:13 localhost podman[26770]: 2025-12-06 07:59:13.619856271 +0000 UTC m=+23.190988313 container init 7350105fe9b4d3bcc45729d2644680810591819ba664a3ae0750045a241db623 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=distracted_lewin, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, release=1763362218, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, GIT_CLEAN=True, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) 
Dec 6 02:59:13 localhost podman[26770]: 2025-12-06 07:59:13.629673162 +0000 UTC m=+23.200805174 container start 7350105fe9b4d3bcc45729d2644680810591819ba664a3ae0750045a241db623 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=distracted_lewin, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, vcs-type=git, GIT_CLEAN=True, GIT_BRANCH=main, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, RELEASE=main, architecture=x86_64, build-date=2025-11-26T19:44:28Z, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7) Dec 6 02:59:13 localhost podman[26770]: 2025-12-06 07:59:13.629894739 +0000 UTC m=+23.201026831 container attach 7350105fe9b4d3bcc45729d2644680810591819ba664a3ae0750045a241db623 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=distracted_lewin, architecture=x86_64, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, 
build-date=2025-11-26T19:44:28Z, RELEASE=main, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, io.openshift.expose-services=, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph) Dec 6 02:59:13 localhost distracted_lewin[26872]: 167 167 Dec 6 02:59:13 localhost systemd[1]: libpod-7350105fe9b4d3bcc45729d2644680810591819ba664a3ae0750045a241db623.scope: Deactivated successfully. Dec 6 02:59:13 localhost podman[26770]: 2025-12-06 07:59:13.634156 +0000 UTC m=+23.205288042 container died 7350105fe9b4d3bcc45729d2644680810591819ba664a3ae0750045a241db623 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=distracted_lewin, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, GIT_BRANCH=main, maintainer=Guillaume Abrioux , name=rhceph, architecture=x86_64, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, description=Red 
Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, version=7, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main) Dec 6 02:59:13 localhost podman[26877]: 2025-12-06 07:59:13.721854951 +0000 UTC m=+0.075957472 container remove 7350105fe9b4d3bcc45729d2644680810591819ba664a3ae0750045a241db623 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=distracted_lewin, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, GIT_CLEAN=True, CEPH_POINT_RELEASE=, vcs-type=git, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, RELEASE=main, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git) Dec 6 02:59:13 localhost systemd[1]: libpod-conmon-7350105fe9b4d3bcc45729d2644680810591819ba664a3ae0750045a241db623.scope: Deactivated successfully. 
Dec 6 02:59:13 localhost podman[26897]: Dec 6 02:59:14 localhost podman[26897]: 2025-12-06 07:59:13.972943278 +0000 UTC m=+0.074718195 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 6 02:59:14 localhost systemd[1]: var-lib-containers-storage-overlay-b5ce0689e8642a6d5ec9da0f1e03028fe29ddd2780e4a00484a4b95724ab05f5-merged.mount: Deactivated successfully. Dec 6 02:59:17 localhost podman[26897]: 2025-12-06 07:59:17.695444377 +0000 UTC m=+3.797219294 container create fb83a09b9cfd2438e292df7e2c48345cadffd8f1eda05dae7ec2baae9e21431b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_franklin, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, GIT_CLEAN=True, com.redhat.component=rhceph-container, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, description=Red Hat Ceph Storage 7, io.openshift.expose-services=) Dec 6 02:59:17 localhost systemd[1]: Started libpod-conmon-fb83a09b9cfd2438e292df7e2c48345cadffd8f1eda05dae7ec2baae9e21431b.scope. Dec 6 02:59:17 localhost systemd[1]: Started libcrun container. 
Dec 6 02:59:17 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d347132d95c91b44e65918585cb82ec910914e068081c068616733ad518cb1c/merged/rootfs supports timestamps until 2038 (0x7fffffff) Dec 6 02:59:17 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d347132d95c91b44e65918585cb82ec910914e068081c068616733ad518cb1c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Dec 6 02:59:17 localhost podman[26897]: 2025-12-06 07:59:17.782661733 +0000 UTC m=+3.884436650 container init fb83a09b9cfd2438e292df7e2c48345cadffd8f1eda05dae7ec2baae9e21431b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_franklin, vendor=Red Hat, Inc., GIT_CLEAN=True, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, ceph=True, name=rhceph, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, architecture=x86_64, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, distribution-scope=public, vcs-type=git, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 6 02:59:17 localhost podman[26897]: 2025-12-06 07:59:17.794786175 +0000 UTC m=+3.896561092 container start fb83a09b9cfd2438e292df7e2c48345cadffd8f1eda05dae7ec2baae9e21431b 
(image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_franklin, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, GIT_BRANCH=main, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , architecture=x86_64, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, GIT_CLEAN=True, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, version=7) Dec 6 02:59:17 localhost podman[26897]: 2025-12-06 07:59:17.798867751 +0000 UTC m=+3.900642718 container attach fb83a09b9cfd2438e292df7e2c48345cadffd8f1eda05dae7ec2baae9e21431b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_franklin, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, maintainer=Guillaume Abrioux , build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, vendor=Red Hat, Inc., RELEASE=main, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, ceph=True, name=rhceph, distribution-scope=public, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Dec 6 02:59:17 localhost sshd[27172]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:59:18 localhost pedantic_franklin[27167]: [
Dec 6 02:59:18 localhost pedantic_franklin[27167]: {
Dec 6 02:59:18 localhost pedantic_franklin[27167]: "available": false,
Dec 6 02:59:18 localhost pedantic_franklin[27167]: "ceph_device": false,
Dec 6 02:59:18 localhost pedantic_franklin[27167]: "device_id": "QEMU_DVD-ROM_QM00001",
Dec 6 02:59:18 localhost pedantic_franklin[27167]: "lsm_data": {},
Dec 6 02:59:18 localhost pedantic_franklin[27167]: "lvs": [],
Dec 6 02:59:18 localhost pedantic_franklin[27167]: "path": "/dev/sr0",
Dec 6 02:59:18 localhost pedantic_franklin[27167]: "rejected_reasons": [
Dec 6 02:59:18 localhost pedantic_franklin[27167]: "Has a FileSystem",
Dec 6 02:59:18 localhost pedantic_franklin[27167]: "Insufficient space (<5GB)"
Dec 6 02:59:18 localhost pedantic_franklin[27167]: ],
Dec 6 02:59:18 localhost pedantic_franklin[27167]: "sys_api": {
Dec 6 02:59:18 localhost pedantic_franklin[27167]: "actuators": null,
Dec 6 02:59:18 localhost pedantic_franklin[27167]: "device_nodes": "sr0",
Dec 6 02:59:18 localhost pedantic_franklin[27167]: "human_readable_size": "482.00 KB",
Dec 6 02:59:18 localhost pedantic_franklin[27167]: "id_bus": "ata",
Dec 6 02:59:18 localhost pedantic_franklin[27167]: "model": "QEMU DVD-ROM",
Dec 6 02:59:18 localhost pedantic_franklin[27167]: "nr_requests": "2",
Dec 6 02:59:18 localhost pedantic_franklin[27167]: "partitions": {},
Dec 6 02:59:18 localhost pedantic_franklin[27167]: "path": "/dev/sr0",
Dec 6 02:59:18 localhost pedantic_franklin[27167]: "removable": "1",
Dec 6 02:59:18 localhost pedantic_franklin[27167]: "rev": "2.5+",
Dec 6 02:59:18 localhost pedantic_franklin[27167]: "ro": "0",
Dec 6 02:59:18 localhost pedantic_franklin[27167]: "rotational": "1",
Dec 6 02:59:18 localhost pedantic_franklin[27167]: "sas_address": "",
Dec 6 02:59:18 localhost pedantic_franklin[27167]: "sas_device_handle": "",
Dec 6 02:59:18 localhost pedantic_franklin[27167]: "scheduler_mode": "mq-deadline",
Dec 6 02:59:18 localhost pedantic_franklin[27167]: "sectors": 0,
Dec 6 02:59:18 localhost pedantic_franklin[27167]: "sectorsize": "2048",
Dec 6 02:59:18 localhost pedantic_franklin[27167]: "size": 493568.0,
Dec 6 02:59:18 localhost pedantic_franklin[27167]: "support_discard": "0",
Dec 6 02:59:18 localhost pedantic_franklin[27167]: "type": "disk",
Dec 6 02:59:18 localhost pedantic_franklin[27167]: "vendor": "QEMU"
Dec 6 02:59:18 localhost pedantic_franklin[27167]: }
Dec 6 02:59:18 localhost pedantic_franklin[27167]: }
Dec 6 02:59:18 localhost pedantic_franklin[27167]: ]
Dec 6 02:59:18 localhost systemd[1]: libpod-fb83a09b9cfd2438e292df7e2c48345cadffd8f1eda05dae7ec2baae9e21431b.scope: Deactivated successfully.
Dec 6 02:59:18 localhost podman[26897]: 2025-12-06 07:59:18.638878601 +0000 UTC m=+4.740653548 container died fb83a09b9cfd2438e292df7e2c48345cadffd8f1eda05dae7ec2baae9e21431b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_franklin, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, architecture=x86_64, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, distribution-scope=public, maintainer=Guillaume Abrioux , io.openshift.expose-services=, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, release=1763362218, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc.) Dec 6 02:59:18 localhost systemd[1]: var-lib-containers-storage-overlay-7d347132d95c91b44e65918585cb82ec910914e068081c068616733ad518cb1c-merged.mount: Deactivated successfully. 
Dec 6 02:59:18 localhost podman[28474]: 2025-12-06 07:59:18.746885766 +0000 UTC m=+0.093099488 container remove fb83a09b9cfd2438e292df7e2c48345cadffd8f1eda05dae7ec2baae9e21431b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_franklin, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, name=rhceph, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, version=7, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, ceph=True, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, release=1763362218, vcs-type=git, build-date=2025-11-26T19:44:28Z)
Dec 6 02:59:18 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 6 02:59:18 localhost systemd[1]: libpod-conmon-fb83a09b9cfd2438e292df7e2c48345cadffd8f1eda05dae7ec2baae9e21431b.scope: Deactivated successfully.
Dec 6 02:59:19 localhost systemd[1]: systemd-coredump.socket: Deactivated successfully.
Dec 6 02:59:19 localhost systemd[1]: Closed Process Core Dump Socket.
Dec 6 02:59:19 localhost systemd[1]: Stopping Process Core Dump Socket...
Dec 6 02:59:19 localhost systemd[1]: Listening on Process Core Dump Socket.
Dec 6 02:59:19 localhost systemd[1]: Reloading.
Dec 6 02:59:19 localhost systemd-sysv-generator[28560]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 6 02:59:19 localhost systemd-rc-local-generator[28557]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 6 02:59:19 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 6 02:59:19 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 6 02:59:19 localhost systemd[1]: Reloading.
Dec 6 02:59:19 localhost systemd-rc-local-generator[28596]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 6 02:59:19 localhost systemd-sysv-generator[28599]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 6 02:59:19 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 6 02:59:21 localhost sshd[28605]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:59:25 localhost sshd[28607]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:59:42 localhost sshd[28609]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 02:59:49 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 6 02:59:50 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 6 02:59:50 localhost podman[28683]:
Dec 6 02:59:50 localhost podman[28683]: 2025-12-06 07:59:50.236578506 +0000 UTC m=+0.109937237 container create 143a75030e53358180320dd3b7e07301732dd53d6350cf4ab50c5c8c7136e96d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=angry_wilson, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, distribution-scope=public, version=7, GIT_CLEAN=True, name=rhceph, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, architecture=x86_64, ceph=True, io.openshift.tags=rhceph ceph)
Dec 6 02:59:50 localhost podman[28683]: 2025-12-06 07:59:50.168906675 +0000 UTC m=+0.042265396 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 6 02:59:50 localhost systemd[1]: Started libpod-conmon-143a75030e53358180320dd3b7e07301732dd53d6350cf4ab50c5c8c7136e96d.scope.
Dec 6 02:59:50 localhost systemd[1]: Started libcrun container.
Dec 6 02:59:50 localhost podman[28683]: 2025-12-06 07:59:50.3043168 +0000 UTC m=+0.177675521 container init 143a75030e53358180320dd3b7e07301732dd53d6350cf4ab50c5c8c7136e96d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=angry_wilson, release=1763362218, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, CEPH_POINT_RELEASE=, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, GIT_BRANCH=main, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, name=rhceph, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc.) 
Dec 6 02:59:50 localhost podman[28683]: 2025-12-06 07:59:50.315590151 +0000 UTC m=+0.188948872 container start 143a75030e53358180320dd3b7e07301732dd53d6350cf4ab50c5c8c7136e96d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=angry_wilson, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, io.openshift.expose-services=, GIT_CLEAN=True, CEPH_POINT_RELEASE=, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, com.redhat.component=rhceph-container, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64)
Dec 6 02:59:50 localhost podman[28683]: 2025-12-06 07:59:50.315904614 +0000 UTC m=+0.189263385 container attach 143a75030e53358180320dd3b7e07301732dd53d6350cf4ab50c5c8c7136e96d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=angry_wilson, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, RELEASE=main, architecture=x86_64, GIT_BRANCH=main, io.buildah.version=1.41.4, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, release=1763362218, io.openshift.expose-services=, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 6 02:59:50 localhost angry_wilson[28698]: 167 167
Dec 6 02:59:50 localhost systemd[1]: libpod-143a75030e53358180320dd3b7e07301732dd53d6350cf4ab50c5c8c7136e96d.scope: Deactivated successfully.
Dec 6 02:59:50 localhost podman[28683]: 2025-12-06 07:59:50.319983963 +0000 UTC m=+0.193342684 container died 143a75030e53358180320dd3b7e07301732dd53d6350cf4ab50c5c8c7136e96d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=angry_wilson, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , io.openshift.expose-services=, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, architecture=x86_64, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 6 02:59:50 localhost podman[28703]: 2025-12-06 07:59:50.40669703 +0000 UTC m=+0.075423646 container remove 143a75030e53358180320dd3b7e07301732dd53d6350cf4ab50c5c8c7136e96d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=angry_wilson, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, ceph=True, release=1763362218, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, name=rhceph, build-date=2025-11-26T19:44:28Z, distribution-scope=public, RELEASE=main, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 6 02:59:50 localhost systemd[1]: libpod-conmon-143a75030e53358180320dd3b7e07301732dd53d6350cf4ab50c5c8c7136e96d.scope: Deactivated successfully.
Dec 6 02:59:50 localhost systemd[1]: Reloading.
Dec 6 02:59:50 localhost systemd-rc-local-generator[28742]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 6 02:59:50 localhost systemd-sysv-generator[28747]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 6 02:59:50 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 6 02:59:50 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 6 02:59:50 localhost systemd[1]: Reloading.
Dec 6 02:59:50 localhost systemd-rc-local-generator[28782]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 6 02:59:50 localhost systemd-sysv-generator[28785]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 6 02:59:50 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 6 02:59:51 localhost systemd[1]: Reached target All Ceph clusters and services.
Dec 6 02:59:51 localhost systemd[1]: Reloading.
Dec 6 02:59:51 localhost systemd-rc-local-generator[28821]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 6 02:59:51 localhost systemd-sysv-generator[28826]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 6 02:59:51 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 6 02:59:51 localhost systemd[1]: Reached target Ceph cluster 1939e851-b10c-5c3b-9bb7-8e7f380233e8.
Dec 6 02:59:51 localhost systemd[1]: Reloading.
Dec 6 02:59:51 localhost systemd-sysv-generator[28862]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 6 02:59:51 localhost systemd-rc-local-generator[28859]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 6 02:59:51 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 6 02:59:51 localhost systemd[1]: Reloading.
Dec 6 02:59:51 localhost systemd-rc-local-generator[28900]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 6 02:59:51 localhost systemd-sysv-generator[28906]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 6 02:59:51 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 6 02:59:51 localhost systemd[1]: Created slice Slice /system/ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8.
Dec 6 02:59:51 localhost systemd[1]: Reached target System Time Set.
Dec 6 02:59:51 localhost systemd[1]: Reached target System Time Synchronized.
Dec 6 02:59:51 localhost systemd[1]: Starting Ceph crash.np0005548789 for 1939e851-b10c-5c3b-9bb7-8e7f380233e8...
Dec 6 02:59:51 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 6 02:59:51 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 6 02:59:52 localhost podman[28965]:
Dec 6 02:59:52 localhost podman[28965]: 2025-12-06 07:59:52.077199824 +0000 UTC m=+0.063493538 container create ecefb99ff092b207f760d60e8fd97152c624df0e1016549024435ae641ee2e06 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548789, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, ceph=True, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, GIT_BRANCH=main, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Dec 6 02:59:52 localhost podman[28965]: 2025-12-06 07:59:52.048423397 +0000 UTC m=+0.034717121 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 6 02:59:52 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f75e1c5dc68106d13b590be81c783884e46aa279b6185456f5563774cd3c14da/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 6 02:59:52 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f75e1c5dc68106d13b590be81c783884e46aa279b6185456f5563774cd3c14da/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 6 02:59:52 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f75e1c5dc68106d13b590be81c783884e46aa279b6185456f5563774cd3c14da/merged/etc/ceph/ceph.client.crash.np0005548789.keyring supports timestamps until 2038 (0x7fffffff)
Dec 6 02:59:52 localhost podman[28965]: 2025-12-06 07:59:52.191496151 +0000 UTC m=+0.177789855 container init ecefb99ff092b207f760d60e8fd97152c624df0e1016549024435ae641ee2e06 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548789, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, release=1763362218, ceph=True, RELEASE=main, com.redhat.component=rhceph-container, distribution-scope=public, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, description=Red Hat Ceph Storage 7)
Dec 6 02:59:52 localhost podman[28965]: 2025-12-06 07:59:52.201891488 +0000 UTC m=+0.188185192 container start ecefb99ff092b207f760d60e8fd97152c624df0e1016549024435ae641ee2e06 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548789, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, name=rhceph, com.redhat.component=rhceph-container, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , ceph=True, version=7, distribution-scope=public)
Dec 6 02:59:52 localhost bash[28965]: ecefb99ff092b207f760d60e8fd97152c624df0e1016549024435ae641ee2e06
Dec 6 02:59:52 localhost systemd[1]: Started Ceph crash.np0005548789 for 1939e851-b10c-5c3b-9bb7-8e7f380233e8.
Dec 6 02:59:52 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548789[28980]: INFO:ceph-crash:pinging cluster to exercise our key
Dec 6 02:59:52 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548789[28980]: 2025-12-06T07:59:52.381+0000 7f7965a25640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Dec 6 02:59:52 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548789[28980]: 2025-12-06T07:59:52.381+0000 7f7965a25640 -1 AuthRegistry(0x7f79600680d0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Dec 6 02:59:52 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548789[28980]: 2025-12-06T07:59:52.381+0000 7f7965a25640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Dec 6 02:59:52 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548789[28980]: 2025-12-06T07:59:52.381+0000 7f7965a25640 -1 AuthRegistry(0x7f7965a24000) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Dec 6 02:59:52 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548789[28980]: 2025-12-06T07:59:52.393+0000 7f795ffff640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Dec 6 02:59:52 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548789[28980]: 2025-12-06T07:59:52.395+0000 7f7964a23640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Dec 6 02:59:52 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548789[28980]: 2025-12-06T07:59:52.395+0000 7f7965224640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Dec 6 02:59:52 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548789[28980]: 2025-12-06T07:59:52.395+0000 7f7965a25640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Dec 6 02:59:52 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548789[28980]: [errno 13] RADOS permission denied (error connecting to the cluster)
Dec 6 02:59:52 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548789[28980]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Dec 6 02:59:53 localhost podman[29067]:
Dec 6 02:59:53 localhost podman[29067]: 2025-12-06 07:59:53.012377205 +0000 UTC m=+0.075934775 container create 5ed0100e3c2733c3eb992470839044245fc813f312d73820944c47774a776eb8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nice_davinci, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, vcs-type=git, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, maintainer=Guillaume Abrioux , RELEASE=main, description=Red Hat Ceph Storage 7, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, ceph=True, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, GIT_BRANCH=main, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Dec 6 02:59:53 localhost systemd[1]: Started libpod-conmon-5ed0100e3c2733c3eb992470839044245fc813f312d73820944c47774a776eb8.scope.
Dec 6 02:59:53 localhost systemd[1]: Started libcrun container.
Dec 6 02:59:53 localhost podman[29067]: 2025-12-06 07:59:53.079608488 +0000 UTC m=+0.143166058 container init 5ed0100e3c2733c3eb992470839044245fc813f312d73820944c47774a776eb8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nice_davinci, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, RELEASE=main, name=rhceph, architecture=x86_64, maintainer=Guillaume Abrioux , GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, release=1763362218, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Dec 6 02:59:53 localhost podman[29067]: 2025-12-06 07:59:52.983080018 +0000 UTC m=+0.046637578 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 6 02:59:53 localhost systemd[1]: tmp-crun.fZ2w30.mount: Deactivated successfully.
Dec 6 02:59:53 localhost podman[29067]: 2025-12-06 07:59:53.090523426 +0000 UTC m=+0.154080996 container start 5ed0100e3c2733c3eb992470839044245fc813f312d73820944c47774a776eb8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nice_davinci, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, ceph=True, io.openshift.expose-services=, vcs-type=git, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, com.redhat.component=rhceph-container, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7)
Dec 6 02:59:53 localhost podman[29067]: 2025-12-06 07:59:53.090974654 +0000 UTC m=+0.154532254 container attach 5ed0100e3c2733c3eb992470839044245fc813f312d73820944c47774a776eb8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nice_davinci, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, ceph=True, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, distribution-scope=public, io.buildah.version=1.41.4, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7)
Dec 6 02:59:53 localhost nice_davinci[29082]: 167 167
Dec 6 02:59:53 localhost systemd[1]: libpod-5ed0100e3c2733c3eb992470839044245fc813f312d73820944c47774a776eb8.scope: Deactivated successfully.
Dec 6 02:59:53 localhost podman[29067]: 2025-12-06 07:59:53.09520936 +0000 UTC m=+0.158766930 container died 5ed0100e3c2733c3eb992470839044245fc813f312d73820944c47774a776eb8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nice_davinci, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, release=1763362218, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, name=rhceph, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc.)
Dec 6 02:59:53 localhost podman[29087]: 2025-12-06 07:59:53.180059413 +0000 UTC m=+0.074093483 container remove 5ed0100e3c2733c3eb992470839044245fc813f312d73820944c47774a776eb8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nice_davinci, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, release=1763362218, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, ceph=True, com.redhat.component=rhceph-container, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, distribution-scope=public, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=)
Dec 6 02:59:53 localhost systemd[1]: libpod-conmon-5ed0100e3c2733c3eb992470839044245fc813f312d73820944c47774a776eb8.scope: Deactivated successfully.
Dec 6 02:59:53 localhost podman[29106]: Dec 6 02:59:53 localhost podman[29106]: 2025-12-06 07:59:53.380311428 +0000 UTC m=+0.069341088 container create 38a13d0e460f417fe530dd46a18274b8389e4682afbb9b75997400958b78b3d1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=angry_greider, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, name=rhceph, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, vendor=Red Hat, Inc., GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, ceph=True, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, version=7) Dec 6 02:59:53 localhost systemd[1]: Started libpod-conmon-38a13d0e460f417fe530dd46a18274b8389e4682afbb9b75997400958b78b3d1.scope. Dec 6 02:59:53 localhost systemd[1]: Started libcrun container. 
Dec 6 02:59:53 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2dec8f4be64f7107f934f752e8830e402d5e21097387621af1ed9f64e6da86cc/merged/rootfs supports timestamps until 2038 (0x7fffffff) Dec 6 02:59:53 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2dec8f4be64f7107f934f752e8830e402d5e21097387621af1ed9f64e6da86cc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Dec 6 02:59:53 localhost podman[29106]: 2025-12-06 07:59:53.352529759 +0000 UTC m=+0.041559429 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 6 02:59:53 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2dec8f4be64f7107f934f752e8830e402d5e21097387621af1ed9f64e6da86cc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Dec 6 02:59:53 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2dec8f4be64f7107f934f752e8830e402d5e21097387621af1ed9f64e6da86cc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Dec 6 02:59:53 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2dec8f4be64f7107f934f752e8830e402d5e21097387621af1ed9f64e6da86cc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff) Dec 6 02:59:53 localhost podman[29106]: 2025-12-06 07:59:53.498478756 +0000 UTC m=+0.187508396 container init 38a13d0e460f417fe530dd46a18274b8389e4682afbb9b75997400958b78b3d1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=angry_greider, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, 
vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, GIT_CLEAN=True, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, vcs-type=git, ceph=True, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, version=7, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, RELEASE=main, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4) Dec 6 02:59:53 localhost podman[29106]: 2025-12-06 07:59:53.508439997 +0000 UTC m=+0.197469647 container start 38a13d0e460f417fe530dd46a18274b8389e4682afbb9b75997400958b78b3d1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=angry_greider, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, release=1763362218, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, RELEASE=main, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , GIT_CLEAN=True, version=7, architecture=x86_64, 
vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 6 02:59:53 localhost podman[29106]: 2025-12-06 07:59:53.508708997 +0000 UTC m=+0.197738667 container attach 38a13d0e460f417fe530dd46a18274b8389e4682afbb9b75997400958b78b3d1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=angry_greider, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, architecture=x86_64, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, name=rhceph, io.openshift.expose-services=, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, maintainer=Guillaume Abrioux , GIT_BRANCH=main, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=) Dec 6 02:59:53 localhost angry_greider[29122]: --> passed data devices: 0 physical, 2 LVM Dec 6 02:59:53 localhost angry_greider[29122]: --> relative data size: 1.0 Dec 6 02:59:54 localhost systemd[1]: var-lib-containers-storage-overlay-c5a62fb99d29291a28bbfc397e985ab005ba51a5933a7594dd4a4809fd49c8b1-merged.mount: Deactivated successfully. 
Dec 6 02:59:54 localhost angry_greider[29122]: Running command: /usr/bin/ceph-authtool --gen-print-key Dec 6 02:59:54 localhost angry_greider[29122]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 1f710487-3a3c-4f3d-8622-d6fac6224470 Dec 6 02:59:54 localhost lvm[29176]: PV /dev/loop3 online, VG ceph_vg0 is complete. Dec 6 02:59:54 localhost lvm[29176]: VG ceph_vg0 finished Dec 6 02:59:54 localhost angry_greider[29122]: Running command: /usr/bin/ceph-authtool --gen-print-key Dec 6 02:59:54 localhost angry_greider[29122]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1 Dec 6 02:59:54 localhost angry_greider[29122]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0 Dec 6 02:59:54 localhost angry_greider[29122]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0 Dec 6 02:59:54 localhost angry_greider[29122]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block Dec 6 02:59:54 localhost angry_greider[29122]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap Dec 6 02:59:55 localhost angry_greider[29122]: stderr: got monmap epoch 3 Dec 6 02:59:55 localhost angry_greider[29122]: --> Creating keyring file for osd.1 Dec 6 02:59:55 localhost angry_greider[29122]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring Dec 6 02:59:55 localhost angry_greider[29122]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/ Dec 6 02:59:55 localhost angry_greider[29122]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid 1f710487-3a3c-4f3d-8622-d6fac6224470 --setuser 
ceph --setgroup ceph Dec 6 02:59:57 localhost angry_greider[29122]: stderr: 2025-12-06T07:59:55.210+0000 7f71cd170a80 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3] Dec 6 02:59:57 localhost angry_greider[29122]: stderr: 2025-12-06T07:59:55.210+0000 7f71cd170a80 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid Dec 6 02:59:57 localhost angry_greider[29122]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0 Dec 6 02:59:57 localhost angry_greider[29122]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1 Dec 6 02:59:57 localhost angry_greider[29122]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-1 --no-mon-config Dec 6 02:59:57 localhost angry_greider[29122]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block Dec 6 02:59:57 localhost angry_greider[29122]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block Dec 6 02:59:57 localhost angry_greider[29122]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0 Dec 6 02:59:57 localhost angry_greider[29122]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1 Dec 6 02:59:57 localhost angry_greider[29122]: --> ceph-volume lvm activate successful for osd ID: 1 Dec 6 02:59:57 localhost angry_greider[29122]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0 Dec 6 02:59:57 localhost angry_greider[29122]: Running command: /usr/bin/ceph-authtool --gen-print-key Dec 6 02:59:57 localhost angry_greider[29122]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 876fe068-f1aa-42bd-a56b-91d35874dd8e Dec 6 02:59:58 localhost 
lvm[30120]: PV /dev/loop4 online, VG ceph_vg1 is complete. Dec 6 02:59:58 localhost lvm[30120]: VG ceph_vg1 finished Dec 6 02:59:58 localhost angry_greider[29122]: Running command: /usr/bin/ceph-authtool --gen-print-key Dec 6 02:59:58 localhost angry_greider[29122]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-4 Dec 6 02:59:58 localhost angry_greider[29122]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg1/ceph_lv1 Dec 6 02:59:58 localhost angry_greider[29122]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1 Dec 6 02:59:58 localhost angry_greider[29122]: Running command: /usr/bin/ln -s /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-4/block Dec 6 02:59:58 localhost angry_greider[29122]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-4/activate.monmap Dec 6 02:59:58 localhost angry_greider[29122]: stderr: got monmap epoch 3 Dec 6 02:59:58 localhost angry_greider[29122]: --> Creating keyring file for osd.4 Dec 6 02:59:58 localhost angry_greider[29122]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4/keyring Dec 6 02:59:58 localhost angry_greider[29122]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4/ Dec 6 02:59:58 localhost angry_greider[29122]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 4 --monmap /var/lib/ceph/osd/ceph-4/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-4/ --osd-uuid 876fe068-f1aa-42bd-a56b-91d35874dd8e --setuser ceph --setgroup ceph Dec 6 03:00:01 localhost angry_greider[29122]: stderr: 2025-12-06T07:59:59.043+0000 7fa55524ca80 -1 bluestore(/var/lib/ceph/osd/ceph-4//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed 
input [buffer:3] Dec 6 03:00:01 localhost angry_greider[29122]: stderr: 2025-12-06T07:59:59.043+0000 7fa55524ca80 -1 bluestore(/var/lib/ceph/osd/ceph-4/) _read_fsid unparsable uuid Dec 6 03:00:01 localhost angry_greider[29122]: --> ceph-volume lvm prepare successful for: ceph_vg1/ceph_lv1 Dec 6 03:00:01 localhost angry_greider[29122]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4 Dec 6 03:00:01 localhost angry_greider[29122]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-4 --no-mon-config Dec 6 03:00:01 localhost angry_greider[29122]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-4/block Dec 6 03:00:01 localhost angry_greider[29122]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-4/block Dec 6 03:00:01 localhost angry_greider[29122]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1 Dec 6 03:00:01 localhost angry_greider[29122]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4 Dec 6 03:00:01 localhost angry_greider[29122]: --> ceph-volume lvm activate successful for osd ID: 4 Dec 6 03:00:01 localhost angry_greider[29122]: --> ceph-volume lvm create successful for: ceph_vg1/ceph_lv1 Dec 6 03:00:01 localhost systemd[1]: libpod-38a13d0e460f417fe530dd46a18274b8389e4682afbb9b75997400958b78b3d1.scope: Deactivated successfully. Dec 6 03:00:01 localhost systemd[1]: libpod-38a13d0e460f417fe530dd46a18274b8389e4682afbb9b75997400958b78b3d1.scope: Consumed 3.719s CPU time. 
Dec 6 03:00:01 localhost podman[29106]: 2025-12-06 08:00:01.878714914 +0000 UTC m=+8.567744594 container died 38a13d0e460f417fe530dd46a18274b8389e4682afbb9b75997400958b78b3d1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=angry_greider, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., RELEASE=main, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, version=7, io.openshift.tags=rhceph ceph, name=rhceph, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, vcs-type=git, description=Red Hat Ceph Storage 7, distribution-scope=public, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux , release=1763362218, com.redhat.component=rhceph-container) Dec 6 03:00:01 localhost systemd[1]: var-lib-containers-storage-overlay-2dec8f4be64f7107f934f752e8830e402d5e21097387621af1ed9f64e6da86cc-merged.mount: Deactivated successfully. 
Dec 6 03:00:01 localhost podman[31036]: 2025-12-06 08:00:01.952673211 +0000 UTC m=+0.066956233 container remove 38a13d0e460f417fe530dd46a18274b8389e4682afbb9b75997400958b78b3d1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=angry_greider, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, GIT_CLEAN=True, io.openshift.expose-services=, vcs-type=git, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, version=7, com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, name=rhceph, distribution-scope=public) Dec 6 03:00:01 localhost systemd[1]: libpod-conmon-38a13d0e460f417fe530dd46a18274b8389e4682afbb9b75997400958b78b3d1.scope: Deactivated successfully. 
Dec 6 03:00:02 localhost podman[31119]: Dec 6 03:00:02 localhost podman[31119]: 2025-12-06 08:00:02.638183943 +0000 UTC m=+0.072520481 container create a631457bcfd3b6fe04c2edf5730c5696b322d44436f9d12eaabbb7030ad24b53 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=amazing_cori, maintainer=Guillaume Abrioux , version=7, GIT_BRANCH=main, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, RELEASE=main, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, io.openshift.expose-services=, GIT_CLEAN=True, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph) Dec 6 03:00:02 localhost systemd[1]: Started libpod-conmon-a631457bcfd3b6fe04c2edf5730c5696b322d44436f9d12eaabbb7030ad24b53.scope. Dec 6 03:00:02 localhost systemd[1]: Started libcrun container. 
Dec 6 03:00:02 localhost podman[31119]: 2025-12-06 08:00:02.606507332 +0000 UTC m=+0.040843890 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 6 03:00:02 localhost podman[31119]: 2025-12-06 08:00:02.707246588 +0000 UTC m=+0.141583126 container init a631457bcfd3b6fe04c2edf5730c5696b322d44436f9d12eaabbb7030ad24b53 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=amazing_cori, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, RELEASE=main, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, version=7, name=rhceph, GIT_CLEAN=True, distribution-scope=public, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux ) Dec 6 03:00:02 localhost podman[31119]: 2025-12-06 08:00:02.716135226 +0000 UTC m=+0.150471754 container start a631457bcfd3b6fe04c2edf5730c5696b322d44436f9d12eaabbb7030ad24b53 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=amazing_cori, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, 
io.openshift.expose-services=, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, version=7, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=) Dec 6 03:00:02 localhost podman[31119]: 2025-12-06 08:00:02.71724304 +0000 UTC m=+0.151579628 container attach a631457bcfd3b6fe04c2edf5730c5696b322d44436f9d12eaabbb7030ad24b53 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=amazing_cori, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, distribution-scope=public, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, 
maintainer=Guillaume Abrioux , RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, release=1763362218) Dec 6 03:00:02 localhost amazing_cori[31134]: 167 167 Dec 6 03:00:02 localhost systemd[1]: libpod-a631457bcfd3b6fe04c2edf5730c5696b322d44436f9d12eaabbb7030ad24b53.scope: Deactivated successfully. Dec 6 03:00:02 localhost podman[31119]: 2025-12-06 08:00:02.721497706 +0000 UTC m=+0.155834734 container died a631457bcfd3b6fe04c2edf5730c5696b322d44436f9d12eaabbb7030ad24b53 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=amazing_cori, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, description=Red Hat Ceph Storage 7, architecture=x86_64, release=1763362218, vendor=Red Hat, Inc., distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, vcs-type=git, com.redhat.component=rhceph-container, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, io.openshift.tags=rhceph ceph) Dec 6 03:00:02 localhost podman[31139]: 2025-12-06 08:00:02.809130409 +0000 UTC m=+0.075535959 container remove a631457bcfd3b6fe04c2edf5730c5696b322d44436f9d12eaabbb7030ad24b53 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=amazing_cori, architecture=x86_64, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, distribution-scope=public, CEPH_POINT_RELEASE=, vcs-type=git, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, GIT_BRANCH=main, io.openshift.expose-services=, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, com.redhat.component=rhceph-container, vendor=Red Hat, Inc.) Dec 6 03:00:02 localhost systemd[1]: libpod-conmon-a631457bcfd3b6fe04c2edf5730c5696b322d44436f9d12eaabbb7030ad24b53.scope: Deactivated successfully. Dec 6 03:00:02 localhost systemd[1]: var-lib-containers-storage-overlay-475b25ef132999e27939deb7fa4afcdff68a8ceb5311a9c4de79b332733b9ac0-merged.mount: Deactivated successfully. 
Dec 6 03:00:03 localhost podman[31160]: Dec 6 03:00:03 localhost podman[31160]: 2025-12-06 08:00:03.01977939 +0000 UTC m=+0.068872029 container create 8e539d9b44cc3e4b9b91830dfe33909cac74a401560f3ec34187289a7ead2b21 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=optimistic_sinoussi, release=1763362218, description=Red Hat Ceph Storage 7, distribution-scope=public, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, ceph=True, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, architecture=x86_64, GIT_CLEAN=True, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container) Dec 6 03:00:03 localhost systemd[1]: Started libpod-conmon-8e539d9b44cc3e4b9b91830dfe33909cac74a401560f3ec34187289a7ead2b21.scope. Dec 6 03:00:03 localhost systemd[1]: Started libcrun container. 
Dec 6 03:00:03 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e4e551b6994d78ba73ccd0eb46cfbea5f6021c8361b5d3699f7f0bb35bd30f4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 6 03:00:03 localhost podman[31160]: 2025-12-06 08:00:02.994163027 +0000 UTC m=+0.043255696 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 6 03:00:03 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e4e551b6994d78ba73ccd0eb46cfbea5f6021c8361b5d3699f7f0bb35bd30f4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 6 03:00:03 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e4e551b6994d78ba73ccd0eb46cfbea5f6021c8361b5d3699f7f0bb35bd30f4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 6 03:00:03 localhost podman[31160]: 2025-12-06 08:00:03.112087676 +0000 UTC m=+0.161180335 container init 8e539d9b44cc3e4b9b91830dfe33909cac74a401560f3ec34187289a7ead2b21 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=optimistic_sinoussi, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, version=7, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, release=1763362218, RELEASE=main, io.openshift.expose-services=, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, distribution-scope=public, GIT_CLEAN=True)
Dec 6 03:00:03 localhost podman[31160]: 2025-12-06 08:00:03.122026775 +0000 UTC m=+0.171119444 container start 8e539d9b44cc3e4b9b91830dfe33909cac74a401560f3ec34187289a7ead2b21 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=optimistic_sinoussi, build-date=2025-11-26T19:44:28Z, RELEASE=main, version=7, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, vcs-type=git, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, name=rhceph, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 6 03:00:03 localhost podman[31160]: 2025-12-06 08:00:03.122363988 +0000 UTC m=+0.171456717 container attach 8e539d9b44cc3e4b9b91830dfe33909cac74a401560f3ec34187289a7ead2b21 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=optimistic_sinoussi, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, GIT_BRANCH=main, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=)
Dec 6 03:00:03 localhost optimistic_sinoussi[31175]: {
Dec 6 03:00:03 localhost optimistic_sinoussi[31175]: "1": [
Dec 6 03:00:03 localhost optimistic_sinoussi[31175]: {
Dec 6 03:00:03 localhost optimistic_sinoussi[31175]: "devices": [
Dec 6 03:00:03 localhost optimistic_sinoussi[31175]: "/dev/loop3"
Dec 6 03:00:03 localhost optimistic_sinoussi[31175]: ],
Dec 6 03:00:03 localhost optimistic_sinoussi[31175]: "lv_name": "ceph_lv0",
Dec 6 03:00:03 localhost optimistic_sinoussi[31175]: "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 6 03:00:03 localhost optimistic_sinoussi[31175]: "lv_size": "7511998464",
Dec 6 03:00:03 localhost optimistic_sinoussi[31175]: "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=jxsEmx-Am0R-cg61-xEnr-yg8f-zYJK-uQKeR2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=1939e851-b10c-5c3b-9bb7-8e7f380233e8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1f710487-3a3c-4f3d-8622-d6fac6224470,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 6 03:00:03 localhost optimistic_sinoussi[31175]: "lv_uuid": "jxsEmx-Am0R-cg61-xEnr-yg8f-zYJK-uQKeR2",
Dec 6 03:00:03 localhost optimistic_sinoussi[31175]: "name": "ceph_lv0",
Dec 6 03:00:03 localhost optimistic_sinoussi[31175]: "path": "/dev/ceph_vg0/ceph_lv0",
Dec 6 03:00:03 localhost optimistic_sinoussi[31175]: "tags": {
Dec 6 03:00:03 localhost optimistic_sinoussi[31175]: "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 6 03:00:03 localhost optimistic_sinoussi[31175]: "ceph.block_uuid": "jxsEmx-Am0R-cg61-xEnr-yg8f-zYJK-uQKeR2",
Dec 6 03:00:03 localhost optimistic_sinoussi[31175]: "ceph.cephx_lockbox_secret": "",
Dec 6 03:00:03 localhost optimistic_sinoussi[31175]: "ceph.cluster_fsid": "1939e851-b10c-5c3b-9bb7-8e7f380233e8",
Dec 6 03:00:03 localhost optimistic_sinoussi[31175]: "ceph.cluster_name": "ceph",
Dec 6 03:00:03 localhost optimistic_sinoussi[31175]: "ceph.crush_device_class": "",
Dec 6 03:00:03 localhost optimistic_sinoussi[31175]: "ceph.encrypted": "0",
Dec 6 03:00:03 localhost optimistic_sinoussi[31175]: "ceph.osd_fsid": "1f710487-3a3c-4f3d-8622-d6fac6224470",
Dec 6 03:00:03 localhost optimistic_sinoussi[31175]: "ceph.osd_id": "1",
Dec 6 03:00:03 localhost optimistic_sinoussi[31175]: "ceph.osdspec_affinity": "default_drive_group",
Dec 6 03:00:03 localhost optimistic_sinoussi[31175]: "ceph.type": "block",
Dec 6 03:00:03 localhost optimistic_sinoussi[31175]: "ceph.vdo": "0"
Dec 6 03:00:03 localhost optimistic_sinoussi[31175]: },
Dec 6 03:00:03 localhost optimistic_sinoussi[31175]: "type": "block",
Dec 6 03:00:03 localhost optimistic_sinoussi[31175]: "vg_name": "ceph_vg0"
Dec 6 03:00:03 localhost optimistic_sinoussi[31175]: }
Dec 6 03:00:03 localhost optimistic_sinoussi[31175]: ],
Dec 6 03:00:03 localhost optimistic_sinoussi[31175]: "4": [
Dec 6 03:00:03 localhost optimistic_sinoussi[31175]: {
Dec 6 03:00:03 localhost optimistic_sinoussi[31175]: "devices": [
Dec 6 03:00:03 localhost optimistic_sinoussi[31175]: "/dev/loop4"
Dec 6 03:00:03 localhost optimistic_sinoussi[31175]: ],
Dec 6 03:00:03 localhost optimistic_sinoussi[31175]: "lv_name": "ceph_lv1",
Dec 6 03:00:03 localhost optimistic_sinoussi[31175]: "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 6 03:00:03 localhost optimistic_sinoussi[31175]: "lv_size": "7511998464",
Dec 6 03:00:03 localhost optimistic_sinoussi[31175]: "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=HIwJoG-9m23-NWng-tx4I-d4U0-RA89-Raie0n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=1939e851-b10c-5c3b-9bb7-8e7f380233e8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=876fe068-f1aa-42bd-a56b-91d35874dd8e,ceph.osd_id=4,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 6 03:00:03 localhost optimistic_sinoussi[31175]: "lv_uuid": "HIwJoG-9m23-NWng-tx4I-d4U0-RA89-Raie0n",
Dec 6 03:00:03 localhost optimistic_sinoussi[31175]: "name": "ceph_lv1",
Dec 6 03:00:03 localhost optimistic_sinoussi[31175]: "path": "/dev/ceph_vg1/ceph_lv1",
Dec 6 03:00:03 localhost optimistic_sinoussi[31175]: "tags": {
Dec 6 03:00:03 localhost optimistic_sinoussi[31175]: "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 6 03:00:03 localhost optimistic_sinoussi[31175]: "ceph.block_uuid": "HIwJoG-9m23-NWng-tx4I-d4U0-RA89-Raie0n",
Dec 6 03:00:03 localhost optimistic_sinoussi[31175]: "ceph.cephx_lockbox_secret": "",
Dec 6 03:00:03 localhost optimistic_sinoussi[31175]: "ceph.cluster_fsid": "1939e851-b10c-5c3b-9bb7-8e7f380233e8",
Dec 6 03:00:03 localhost optimistic_sinoussi[31175]: "ceph.cluster_name": "ceph",
Dec 6 03:00:03 localhost optimistic_sinoussi[31175]: "ceph.crush_device_class": "",
Dec 6 03:00:03 localhost optimistic_sinoussi[31175]: "ceph.encrypted": "0",
Dec 6 03:00:03 localhost optimistic_sinoussi[31175]: "ceph.osd_fsid": "876fe068-f1aa-42bd-a56b-91d35874dd8e",
Dec 6 03:00:03 localhost optimistic_sinoussi[31175]: "ceph.osd_id": "4",
Dec 6 03:00:03 localhost optimistic_sinoussi[31175]: "ceph.osdspec_affinity": "default_drive_group",
Dec 6 03:00:03 localhost optimistic_sinoussi[31175]: "ceph.type": "block",
Dec 6 03:00:03 localhost optimistic_sinoussi[31175]: "ceph.vdo": "0"
Dec 6 03:00:03 localhost optimistic_sinoussi[31175]: },
Dec 6 03:00:03 localhost optimistic_sinoussi[31175]: "type": "block",
Dec 6 03:00:03 localhost optimistic_sinoussi[31175]: "vg_name": "ceph_vg1"
Dec 6 03:00:03 localhost optimistic_sinoussi[31175]: }
Dec 6 03:00:03 localhost optimistic_sinoussi[31175]: ]
Dec 6 03:00:03 localhost optimistic_sinoussi[31175]: }
Dec 6 03:00:03 localhost systemd[1]: libpod-8e539d9b44cc3e4b9b91830dfe33909cac74a401560f3ec34187289a7ead2b21.scope: Deactivated successfully.
Dec 6 03:00:03 localhost podman[31160]: 2025-12-06 08:00:03.457667492 +0000 UTC m=+0.506760191 container died 8e539d9b44cc3e4b9b91830dfe33909cac74a401560f3ec34187289a7ead2b21 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=optimistic_sinoussi, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, distribution-scope=public, version=7, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, build-date=2025-11-26T19:44:28Z, vcs-type=git, io.openshift.expose-services=, name=rhceph, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph)
Dec 6 03:00:03 localhost podman[31184]: 2025-12-06 08:00:03.549213399 +0000 UTC m=+0.083034774 container remove 8e539d9b44cc3e4b9b91830dfe33909cac74a401560f3ec34187289a7ead2b21 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=optimistic_sinoussi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, maintainer=Guillaume Abrioux , GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, distribution-scope=public, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, ceph=True, vcs-type=git, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, release=1763362218, architecture=x86_64, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7)
Dec 6 03:00:03 localhost systemd[1]: libpod-conmon-8e539d9b44cc3e4b9b91830dfe33909cac74a401560f3ec34187289a7ead2b21.scope: Deactivated successfully.
Dec 6 03:00:03 localhost systemd[1]: tmp-crun.uplUrX.mount: Deactivated successfully.
Dec 6 03:00:03 localhost systemd[1]: var-lib-containers-storage-overlay-5e4e551b6994d78ba73ccd0eb46cfbea5f6021c8361b5d3699f7f0bb35bd30f4-merged.mount: Deactivated successfully.
Dec 6 03:00:04 localhost podman[31271]:
Dec 6 03:00:04 localhost podman[31271]: 2025-12-06 08:00:04.305574315 +0000 UTC m=+0.076495948 container create 82cbcc4fedc793ef32a1fe7ddba6188791bd9aaca1b2bf6bd03aac7253f50ec0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=musing_wozniak, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, vcs-type=git, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, version=7, name=rhceph, architecture=x86_64, GIT_CLEAN=True, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Dec 6 03:00:04 localhost systemd[1]: Started libpod-conmon-82cbcc4fedc793ef32a1fe7ddba6188791bd9aaca1b2bf6bd03aac7253f50ec0.scope.
Dec 6 03:00:04 localhost systemd[1]: Started libcrun container.
Dec 6 03:00:04 localhost podman[31271]: 2025-12-06 08:00:04.355703829 +0000 UTC m=+0.126625502 container init 82cbcc4fedc793ef32a1fe7ddba6188791bd9aaca1b2bf6bd03aac7253f50ec0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=musing_wozniak, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, ceph=True, version=7, GIT_BRANCH=main, distribution-scope=public, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , RELEASE=main)
Dec 6 03:00:04 localhost podman[31271]: 2025-12-06 08:00:04.365290574 +0000 UTC m=+0.136212227 container start 82cbcc4fedc793ef32a1fe7ddba6188791bd9aaca1b2bf6bd03aac7253f50ec0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=musing_wozniak, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, name=rhceph, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., RELEASE=main, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, GIT_BRANCH=main, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, ceph=True, architecture=x86_64, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7)
Dec 6 03:00:04 localhost podman[31271]: 2025-12-06 08:00:04.365516523 +0000 UTC m=+0.136438176 container attach 82cbcc4fedc793ef32a1fe7ddba6188791bd9aaca1b2bf6bd03aac7253f50ec0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=musing_wozniak, distribution-scope=public, maintainer=Guillaume Abrioux , RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, CEPH_POINT_RELEASE=, vcs-type=git, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, io.buildah.version=1.41.4, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Dec 6 03:00:04 localhost musing_wozniak[31287]: 167 167
Dec 6 03:00:04 localhost systemd[1]: libpod-82cbcc4fedc793ef32a1fe7ddba6188791bd9aaca1b2bf6bd03aac7253f50ec0.scope: Deactivated successfully.
Dec 6 03:00:04 localhost podman[31271]: 2025-12-06 08:00:04.367713309 +0000 UTC m=+0.138634972 container died 82cbcc4fedc793ef32a1fe7ddba6188791bd9aaca1b2bf6bd03aac7253f50ec0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=musing_wozniak, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, io.openshift.expose-services=, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, version=7, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, distribution-scope=public, description=Red Hat Ceph Storage 7, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Dec 6 03:00:04 localhost podman[31271]: 2025-12-06 08:00:04.269501712 +0000 UTC m=+0.040423365 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 6 03:00:04 localhost podman[31292]: 2025-12-06 08:00:04.455804279 +0000 UTC m=+0.074635303 container remove 82cbcc4fedc793ef32a1fe7ddba6188791bd9aaca1b2bf6bd03aac7253f50ec0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=musing_wozniak, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, version=7, ceph=True, vendor=Red Hat, Inc., RELEASE=main, io.openshift.expose-services=, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=)
Dec 6 03:00:04 localhost systemd[1]: libpod-conmon-82cbcc4fedc793ef32a1fe7ddba6188791bd9aaca1b2bf6bd03aac7253f50ec0.scope: Deactivated successfully.
Dec 6 03:00:04 localhost podman[31319]:
Dec 6 03:00:04 localhost podman[31319]: 2025-12-06 08:00:04.790557792 +0000 UTC m=+0.081722332 container create af83484c350c05d44ef4f982d1e4a01668bba121a73a2450b299db2ab8aec446 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-1-activate-test, CEPH_POINT_RELEASE=, ceph=True, version=7, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, architecture=x86_64, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux , name=rhceph, build-date=2025-11-26T19:44:28Z, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc.)
Dec 6 03:00:04 localhost systemd[1]: Started libpod-conmon-af83484c350c05d44ef4f982d1e4a01668bba121a73a2450b299db2ab8aec446.scope.
Dec 6 03:00:04 localhost systemd[1]: Started libcrun container.
Dec 6 03:00:04 localhost podman[31319]: 2025-12-06 08:00:04.762357708 +0000 UTC m=+0.053522248 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 6 03:00:04 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbe5d02d81f41d6af86961773c0bd6b6b7e53f323918505650ca531e08ed8274/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 6 03:00:04 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbe5d02d81f41d6af86961773c0bd6b6b7e53f323918505650ca531e08ed8274/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 6 03:00:04 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbe5d02d81f41d6af86961773c0bd6b6b7e53f323918505650ca531e08ed8274/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 6 03:00:04 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbe5d02d81f41d6af86961773c0bd6b6b7e53f323918505650ca531e08ed8274/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 6 03:00:04 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbe5d02d81f41d6af86961773c0bd6b6b7e53f323918505650ca531e08ed8274/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Dec 6 03:00:04 localhost podman[31319]: 2025-12-06 08:00:04.909036683 +0000 UTC m=+0.200201223 container init af83484c350c05d44ef4f982d1e4a01668bba121a73a2450b299db2ab8aec446 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-1-activate-test, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, ceph=True, GIT_CLEAN=True, vcs-type=git, description=Red Hat Ceph Storage 7, release=1763362218, io.openshift.expose-services=)
Dec 6 03:00:04 localhost podman[31319]: 2025-12-06 08:00:04.919520004 +0000 UTC m=+0.210684574 container start af83484c350c05d44ef4f982d1e4a01668bba121a73a2450b299db2ab8aec446 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-1-activate-test, GIT_CLEAN=True, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, version=7, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, description=Red Hat Ceph Storage 7, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , io.buildah.version=1.41.4, RELEASE=main, build-date=2025-11-26T19:44:28Z, architecture=x86_64, io.openshift.tags=rhceph ceph, ceph=True)
Dec 6 03:00:04 localhost podman[31319]: 2025-12-06 08:00:04.919883758 +0000 UTC m=+0.211048378 container attach af83484c350c05d44ef4f982d1e4a01668bba121a73a2450b299db2ab8aec446 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-1-activate-test, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, maintainer=Guillaume Abrioux , GIT_BRANCH=main, vcs-type=git, description=Red Hat Ceph Storage 7, architecture=x86_64, version=7, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public)
Dec 6 03:00:04 localhost systemd[1]: var-lib-containers-storage-overlay-e107116fe6780fe23b2e832ceddab708ea4d834cfb946d87980d14f7cb6dc17c-merged.mount: Deactivated successfully.
Dec 6 03:00:05 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-1-activate-test[31335]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Dec 6 03:00:05 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-1-activate-test[31335]: [--no-systemd] [--no-tmpfs]
Dec 6 03:00:05 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-1-activate-test[31335]: ceph-volume activate: error: unrecognized arguments: --bad-option
Dec 6 03:00:05 localhost systemd[1]: libpod-af83484c350c05d44ef4f982d1e4a01668bba121a73a2450b299db2ab8aec446.scope: Deactivated successfully.
Dec 6 03:00:05 localhost podman[31319]: 2025-12-06 08:00:05.144158783 +0000 UTC m=+0.435323353 container died af83484c350c05d44ef4f982d1e4a01668bba121a73a2450b299db2ab8aec446 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-1-activate-test, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , RELEASE=main, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=)
Dec 6 03:00:05 localhost systemd[1]: var-lib-containers-storage-overlay-dbe5d02d81f41d6af86961773c0bd6b6b7e53f323918505650ca531e08ed8274-merged.mount: Deactivated successfully.
Dec 6 03:00:05 localhost systemd-journald[619]: Field hash table of /run/log/journal/4b30904fc4748c16d0c72dbebcabab49/system.journal has a fill level at 75.1 (250 of 333 items), suggesting rotation.
Dec 6 03:00:05 localhost systemd-journald[619]: /run/log/journal/4b30904fc4748c16d0c72dbebcabab49/system.journal: Journal header limits reached or header out-of-date, rotating.
Dec 6 03:00:05 localhost rsyslogd[760]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Dec 6 03:00:05 localhost podman[31340]: 2025-12-06 08:00:05.210047384 +0000 UTC m=+0.059378998 container remove af83484c350c05d44ef4f982d1e4a01668bba121a73a2450b299db2ab8aec446 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-1-activate-test, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, version=7, vcs-type=git, ceph=True, CEPH_POINT_RELEASE=, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., distribution-scope=public, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, io.buildah.version=1.41.4, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git)
Dec 6 03:00:05 localhost systemd[1]: libpod-conmon-af83484c350c05d44ef4f982d1e4a01668bba121a73a2450b299db2ab8aec446.scope: Deactivated successfully.
Dec 6 03:00:05 localhost rsyslogd[760]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Dec 6 03:00:05 localhost systemd[1]: Reloading.
Dec 6 03:00:05 localhost systemd-rc-local-generator[31396]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 6 03:00:05 localhost systemd-sysv-generator[31401]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 6 03:00:05 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 6 03:00:05 localhost systemd[1]: Reloading.
Dec 6 03:00:05 localhost systemd-sysv-generator[31445]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 6 03:00:05 localhost systemd-rc-local-generator[31442]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 6 03:00:05 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 6 03:00:05 localhost systemd[1]: Starting Ceph osd.1 for 1939e851-b10c-5c3b-9bb7-8e7f380233e8...
Dec 6 03:00:06 localhost podman[31504]:
Dec 6 03:00:06 localhost podman[31504]: 2025-12-06 08:00:06.223117526 +0000 UTC m=+0.073936347 container create 597c74aa715fc44def4998c1135bac2b866e166c5a0712bcbc80ba247f8c8eab (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-1-activate, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , ceph=True, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, architecture=x86_64, version=7, GIT_BRANCH=main, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, CEPH_POINT_RELEASE=)
Dec 6 03:00:06 localhost systemd[1]: Started libcrun container.
Dec 6 03:00:06 localhost podman[31504]: 2025-12-06 08:00:06.193146871 +0000 UTC m=+0.043965692 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 6 03:00:06 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36e6fdcc8fbd0db6e6b9bebfdebafeabb7e03697f9c1110028f80fb3a60bc94b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 6 03:00:06 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36e6fdcc8fbd0db6e6b9bebfdebafeabb7e03697f9c1110028f80fb3a60bc94b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 6 03:00:06 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36e6fdcc8fbd0db6e6b9bebfdebafeabb7e03697f9c1110028f80fb3a60bc94b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 6 03:00:06 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36e6fdcc8fbd0db6e6b9bebfdebafeabb7e03697f9c1110028f80fb3a60bc94b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 6 03:00:06 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36e6fdcc8fbd0db6e6b9bebfdebafeabb7e03697f9c1110028f80fb3a60bc94b/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Dec 6 03:00:06 localhost podman[31504]: 2025-12-06 08:00:06.358667265 +0000 UTC m=+0.209486076 container init 597c74aa715fc44def4998c1135bac2b866e166c5a0712bcbc80ba247f8c8eab (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-1-activate, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, distribution-scope=public, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, com.redhat.component=rhceph-container, io.openshift.expose-services=, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux , name=rhceph, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc.)
Dec 6 03:00:06 localhost systemd[1]: tmp-crun.ZpMZlz.mount: Deactivated successfully.
Dec 6 03:00:06 localhost podman[31504]: 2025-12-06 08:00:06.372863351 +0000 UTC m=+0.223682162 container start 597c74aa715fc44def4998c1135bac2b866e166c5a0712bcbc80ba247f8c8eab (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-1-activate, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, name=rhceph, CEPH_POINT_RELEASE=, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, release=1763362218, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph)
Dec 6 03:00:06 localhost podman[31504]: 2025-12-06 08:00:06.373126271 +0000 UTC m=+0.223945082 container attach 597c74aa715fc44def4998c1135bac2b866e166c5a0712bcbc80ba247f8c8eab (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-1-activate, build-date=2025-11-26T19:44:28Z, RELEASE=main, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, release=1763362218, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 6 03:00:06 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-1-activate[31518]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 6 03:00:06 localhost bash[31504]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 6 03:00:06 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-1-activate[31518]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Dec 6 03:00:06 localhost bash[31504]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Dec 6 03:00:07 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-1-activate[31518]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Dec 6 03:00:07 localhost bash[31504]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Dec 6 03:00:07 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-1-activate[31518]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec 6 03:00:07 localhost bash[31504]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec 6 03:00:07 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-1-activate[31518]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Dec 6 03:00:07 localhost bash[31504]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Dec 6 03:00:07 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-1-activate[31518]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 6 03:00:07 localhost bash[31504]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 6 03:00:07 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-1-activate[31518]: --> ceph-volume raw activate successful for osd ID: 1
Dec 6 03:00:07 localhost bash[31504]: --> ceph-volume raw activate successful for osd ID: 1
Dec 6 03:00:07 localhost podman[31504]: 2025-12-06 08:00:07.056370074 +0000 UTC m=+0.907188865 container died 597c74aa715fc44def4998c1135bac2b866e166c5a0712bcbc80ba247f8c8eab (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-1-activate, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., GIT_CLEAN=True, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, CEPH_POINT_RELEASE=, version=7, architecture=x86_64, release=1763362218, io.openshift.tags=rhceph ceph, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, com.redhat.component=rhceph-container)
Dec 6 03:00:07 localhost systemd[1]: libpod-597c74aa715fc44def4998c1135bac2b866e166c5a0712bcbc80ba247f8c8eab.scope: Deactivated successfully.
Dec 6 03:00:07 localhost podman[31648]: 2025-12-06 08:00:07.140624474 +0000 UTC m=+0.067894131 container remove 597c74aa715fc44def4998c1135bac2b866e166c5a0712bcbc80ba247f8c8eab (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-1-activate, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, vcs-type=git, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, io.openshift.expose-services=, name=rhceph, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container)
Dec 6 03:00:07 localhost systemd[1]: var-lib-containers-storage-overlay-36e6fdcc8fbd0db6e6b9bebfdebafeabb7e03697f9c1110028f80fb3a60bc94b-merged.mount: Deactivated successfully.
Dec 6 03:00:07 localhost podman[31708]:
Dec 6 03:00:07 localhost podman[31708]: 2025-12-06 08:00:07.441979518 +0000 UTC m=+0.068981783 container create f0af0a8a2c0d4c2d38536a2fae1423e2ee2dcdd544ad6a01502152adae4fd66a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-1, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, distribution-scope=public, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., architecture=x86_64, RELEASE=main, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, version=7, name=rhceph, maintainer=Guillaume Abrioux , build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git)
Dec 6 03:00:07 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/626593515b4b4f1d9cc9f07e015e464e5aa32faa3ff81c2051697af9e4db66cd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 6 03:00:07 localhost podman[31708]: 2025-12-06 08:00:07.416053453 +0000 UTC m=+0.043055728 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 6 03:00:07 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/626593515b4b4f1d9cc9f07e015e464e5aa32faa3ff81c2051697af9e4db66cd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 6 03:00:07 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/626593515b4b4f1d9cc9f07e015e464e5aa32faa3ff81c2051697af9e4db66cd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 6 03:00:07 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/626593515b4b4f1d9cc9f07e015e464e5aa32faa3ff81c2051697af9e4db66cd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 6 03:00:07 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/626593515b4b4f1d9cc9f07e015e464e5aa32faa3ff81c2051697af9e4db66cd/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Dec 6 03:00:07 localhost podman[31708]: 2025-12-06 08:00:07.553677653 +0000 UTC m=+0.180679918 container init f0af0a8a2c0d4c2d38536a2fae1423e2ee2dcdd544ad6a01502152adae4fd66a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-1, io.openshift.expose-services=, version=7, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, vcs-type=git, architecture=x86_64, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, vendor=Red Hat, Inc., name=rhceph, release=1763362218, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Dec 6 03:00:07 localhost podman[31708]: 2025-12-06 08:00:07.562481558 +0000 UTC m=+0.189483823 container start f0af0a8a2c0d4c2d38536a2fae1423e2ee2dcdd544ad6a01502152adae4fd66a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-1, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, release=1763362218, GIT_CLEAN=True, version=7, distribution-scope=public, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, maintainer=Guillaume Abrioux , RELEASE=main, name=rhceph, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, GIT_BRANCH=main)
Dec 6 03:00:07 localhost bash[31708]: f0af0a8a2c0d4c2d38536a2fae1423e2ee2dcdd544ad6a01502152adae4fd66a
Dec 6 03:00:07 localhost systemd[1]: Started Ceph osd.1 for 1939e851-b10c-5c3b-9bb7-8e7f380233e8.
Dec 6 03:00:07 localhost ceph-osd[31726]: set uid:gid to 167:167 (ceph:ceph)
Dec 6 03:00:07 localhost ceph-osd[31726]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-osd, pid 2
Dec 6 03:00:07 localhost ceph-osd[31726]: pidfile_write: ignore empty --pid-file
Dec 6 03:00:07 localhost ceph-osd[31726]: bdev(0x55d1180dee00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 6 03:00:07 localhost ceph-osd[31726]: bdev(0x55d1180dee00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 6 03:00:07 localhost ceph-osd[31726]: bdev(0x55d1180dee00 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 6 03:00:07 localhost ceph-osd[31726]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06
Dec 6 03:00:07 localhost ceph-osd[31726]: bdev(0x55d1180df180 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 6 03:00:07 localhost ceph-osd[31726]: bdev(0x55d1180df180 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 6 03:00:07 localhost ceph-osd[31726]: bdev(0x55d1180df180 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 6 03:00:07 localhost ceph-osd[31726]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 7.0 GiB
Dec 6 03:00:07 localhost ceph-osd[31726]: bdev(0x55d1180df180 /var/lib/ceph/osd/ceph-1/block) close
Dec 6 03:00:07 localhost ceph-osd[31726]: bdev(0x55d1180dee00 /var/lib/ceph/osd/ceph-1/block) close
Dec 6 03:00:08 localhost ceph-osd[31726]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Dec 6 03:00:08 localhost ceph-osd[31726]: load: jerasure load: lrc
Dec 6 03:00:08 localhost ceph-osd[31726]: bdev(0x55d1180dee00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 6 03:00:08 localhost ceph-osd[31726]: bdev(0x55d1180dee00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 6 03:00:08 localhost ceph-osd[31726]: bdev(0x55d1180dee00 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 6 03:00:08 localhost ceph-osd[31726]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06
Dec 6 03:00:08 localhost ceph-osd[31726]: bdev(0x55d1180dee00 /var/lib/ceph/osd/ceph-1/block) close
Dec 6 03:00:08 localhost podman[31819]:
Dec 6 03:00:08 localhost podman[31819]: 2025-12-06 08:00:08.381850414 +0000 UTC m=+0.064726617 container create 6c336cb3e5387a7b758f25f809560528f6a37e213826d47eed9754d05d8d14c9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=interesting_cannon, CEPH_POINT_RELEASE=, architecture=x86_64, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, GIT_CLEAN=True, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, vcs-type=git, vendor=Red Hat, Inc., version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 6 03:00:08 localhost ceph-osd[31726]: bdev(0x55d1180dee00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 6 03:00:08 localhost ceph-osd[31726]: bdev(0x55d1180dee00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 6 03:00:08 localhost ceph-osd[31726]: bdev(0x55d1180dee00 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 6 03:00:08 localhost ceph-osd[31726]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06
Dec 6 03:00:08 localhost ceph-osd[31726]: bdev(0x55d1180dee00 /var/lib/ceph/osd/ceph-1/block) close
Dec 6 03:00:08 localhost systemd[1]: Started libpod-conmon-6c336cb3e5387a7b758f25f809560528f6a37e213826d47eed9754d05d8d14c9.scope.
Dec 6 03:00:08 localhost systemd[1]: Started libcrun container.
Dec 6 03:00:08 localhost podman[31819]: 2025-12-06 08:00:08.352359948 +0000 UTC m=+0.035236151 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 6 03:00:08 localhost podman[31819]: 2025-12-06 08:00:08.461875098 +0000 UTC m=+0.144751301 container init 6c336cb3e5387a7b758f25f809560528f6a37e213826d47eed9754d05d8d14c9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=interesting_cannon, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, io.openshift.expose-services=, GIT_CLEAN=True, vcs-type=git, RELEASE=main, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7)
Dec 6 03:00:08 localhost systemd[1]: tmp-crun.yuQ3KR.mount: Deactivated successfully.
Dec 6 03:00:08 localhost interesting_cannon[31839]: 167 167
Dec 6 03:00:08 localhost systemd[1]: libpod-6c336cb3e5387a7b758f25f809560528f6a37e213826d47eed9754d05d8d14c9.scope: Deactivated successfully.
Dec 6 03:00:08 localhost podman[31819]: 2025-12-06 08:00:08.479019659 +0000 UTC m=+0.161895852 container start 6c336cb3e5387a7b758f25f809560528f6a37e213826d47eed9754d05d8d14c9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=interesting_cannon, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , io.buildah.version=1.41.4, com.redhat.component=rhceph-container, GIT_BRANCH=main, name=rhceph, build-date=2025-11-26T19:44:28Z, vcs-type=git, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, version=7, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 6 03:00:08 localhost podman[31819]: 2025-12-06 08:00:08.479291031 +0000 UTC m=+0.162167274 container attach 6c336cb3e5387a7b758f25f809560528f6a37e213826d47eed9754d05d8d14c9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=interesting_cannon, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, release=1763362218, maintainer=Guillaume Abrioux , name=rhceph, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, GIT_CLEAN=True, CEPH_POINT_RELEASE=, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc.)
Dec 6 03:00:08 localhost podman[31819]: 2025-12-06 08:00:08.481249717 +0000 UTC m=+0.164125910 container died 6c336cb3e5387a7b758f25f809560528f6a37e213826d47eed9754d05d8d14c9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=interesting_cannon, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, release=1763362218, vcs-type=git, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, io.openshift.expose-services=, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main)
Dec 6 03:00:08 localhost podman[31846]: 2025-12-06 08:00:08.562217928 +0000 UTC m=+0.075226928 container remove 6c336cb3e5387a7b758f25f809560528f6a37e213826d47eed9754d05d8d14c9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=interesting_cannon, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, vcs-type=git, com.redhat.component=rhceph-container, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.openshift.tags=rhceph ceph, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, version=7, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, ceph=True, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Dec 6 03:00:08 localhost systemd[1]: libpod-conmon-6c336cb3e5387a7b758f25f809560528f6a37e213826d47eed9754d05d8d14c9.scope: Deactivated successfully.
Dec 6 03:00:08 localhost ceph-osd[31726]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Dec 6 03:00:08 localhost ceph-osd[31726]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Dec 6 03:00:08 localhost ceph-osd[31726]: bdev(0x55d1180dee00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 6 03:00:08 localhost ceph-osd[31726]: bdev(0x55d1180dee00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 6 03:00:08 localhost ceph-osd[31726]: bdev(0x55d1180dee00 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 6 03:00:08 localhost ceph-osd[31726]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06
Dec 6 03:00:08 localhost ceph-osd[31726]: bdev(0x55d1180df180 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 6 03:00:08 localhost ceph-osd[31726]: bdev(0x55d1180df180 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 6 03:00:08 localhost ceph-osd[31726]: bdev(0x55d1180df180 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 6 03:00:08 localhost ceph-osd[31726]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 7.0 GiB
Dec 6 03:00:08 localhost ceph-osd[31726]: bluefs mount
Dec 6 03:00:08 localhost ceph-osd[31726]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Dec 6 03:00:08 localhost ceph-osd[31726]: bluefs mount shared_bdev_used = 0
Dec 6 03:00:08 localhost ceph-osd[31726]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: RocksDB version: 7.9.2
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Git sha 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Compile date 2025-09-23 00:00:00
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: DB SUMMARY
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: DB Session ID: 94CK83XUDEHNZM6YXUMG
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: CURRENT file: CURRENT
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: IDENTITY file: IDENTITY
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: MANIFEST file: MANIFEST-000032 size: 1007 Bytes
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: SST files in db.slow dir, Total Num: 0, files:
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ;
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.error_if_exists: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.create_if_missing: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.paranoid_checks: 1
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.flush_verify_memtable_count: 1
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.track_and_verify_wals_in_manifest: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.env: 0x55d118372cb0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.fs: LegacyFileSystem
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.info_log: 0x55d119086b80
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_file_opening_threads: 16
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.statistics: (nil)
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.use_fsync: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_log_file_size: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_manifest_file_size: 1073741824
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.log_file_time_to_roll: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.keep_log_file_num: 1000
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.recycle_log_file_num: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.allow_fallocate: 1
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.allow_mmap_reads: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.allow_mmap_writes: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.use_direct_reads: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.create_missing_column_families: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.db_log_dir:
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.wal_dir: db.wal
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.table_cache_numshardbits: 6
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.WAL_ttl_seconds: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.WAL_size_limit_MB: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.manifest_preallocation_size: 4194304
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.is_fd_close_on_exec: 1
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.advise_random_on_open: 1
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.db_write_buffer_size: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.write_buffer_manager: 0x55d1180c8140
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.access_hint_on_compaction_start: 1
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.random_access_max_buffer_size:
1048576 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.use_adaptive_mutex: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.rate_limiter: (nil) Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.wal_recovery_mode: 2 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.enable_thread_tracking: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.enable_pipelined_write: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.unordered_write: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.allow_concurrent_memtable_write: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.enable_write_thread_adaptive_yield: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.write_thread_max_yield_usec: 100 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.write_thread_slow_yield_usec: 3 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.row_cache: None Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.wal_filter: None Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.avoid_flush_during_recovery: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.allow_ingest_behind: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.two_write_queues: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.manual_wal_flush: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.wal_compression: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.atomic_flush: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.avoid_unnecessary_blocking_io: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.persist_stats_to_disk: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.write_dbid_to_manifest: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.log_readahead_size: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: 
Options.file_checksum_gen_factory: Unknown Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.best_efforts_recovery: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bgerror_resume_count: 2147483647 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bgerror_resume_retry_interval: 1000000 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.allow_data_in_errors: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.db_host_id: __hostname__ Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.enforce_single_del_contracts: true Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_background_jobs: 4 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_background_compactions: -1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_subcompactions: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.avoid_flush_during_shutdown: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.writable_file_max_buffer_size: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.delayed_write_rate : 16777216 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_total_wal_size: 1073741824 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.stats_dump_period_sec: 600 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.stats_persist_period_sec: 600 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.stats_history_buffer_size: 1048576 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_open_files: -1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bytes_per_sync: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.wal_bytes_per_sync: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.strict_bytes_per_sync: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_readahead_size: 2097152 Dec 6 
03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_background_flushes: -1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Compression algorithms supported: Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: #011kZSTD supported: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: #011kXpressCompression supported: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: #011kBZip2Compression supported: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: #011kZSTDNotFinalCompression supported: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: #011kLZ4Compression supported: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: #011kZlibCompression supported: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: #011kLZ4HCCompression supported: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: #011kSnappyCompression supported: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Fast CRC32 supported: Supported on x86 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: DMutex implementation: pthread_mutex_t Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 0, name: default) Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.merge_operator: .T:int64_array.b:bitwise_xor Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_filter: None Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_filter_factory: None Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: 
Options.sst_partitioner_factory: None Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.memtable_factory: SkipListFactory Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.table_factory: BlockBasedTable Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d119086d40)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55d1180b6850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.write_buffer_size: 16777216 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_write_buffer_number: 64 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression: LZ4 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression: Disabled Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.prefix_extractor: nullptr Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: 
Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.num_levels: 7 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.window_bits: -14 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.level: 32767 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.strategy: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: 
Options.compression_opts.parallel_threads: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.enabled: false Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.target_file_size_base: 67108864 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.target_file_size_multiplier: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true 
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.arena_block_size: 1048576
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.disable_auto_compactions: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.table_properties_collectors:
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.inplace_update_support: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.inplace_update_num_locks: 10000
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.memtable_whole_key_filtering: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.memtable_huge_page_size: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bloom_locality: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_successive_merges: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.optimize_filters_for_hits: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.paranoid_file_checks: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.force_consistency_checks: 1
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.report_bg_io_stats: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.ttl: 2592000
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.periodic_compaction_seconds: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.preclude_last_level_data_seconds: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.preserve_internal_time_seconds: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.enable_blob_files: false
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.min_blob_size: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.blob_file_size: 268435456
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.blob_compression_type: NoCompression
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.enable_blob_garbage_collection: false
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.blob_compaction_readahead_size: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.blob_file_starting_level: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 1, name: m-0)
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.merge_operator: None
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_filter: None
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_filter_factory: None
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.sst_partitioner_factory: None
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.memtable_factory: SkipListFactory
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.table_factory: BlockBasedTable
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d119086d40)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55d1180b6850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.write_buffer_size: 16777216
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_write_buffer_number: 64
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression: LZ4
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression: Disabled
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.prefix_extractor: nullptr
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.num_levels: 7
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.level: 32767
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.enabled: false
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.window_bits: -14
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.level: 32767
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.strategy: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.parallel_threads: 1
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.enabled: false
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.level0_stop_writes_trigger: 36
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.target_file_size_base: 67108864
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.target_file_size_multiplier: 1
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_compaction_bytes: 1677721600
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.arena_block_size: 1048576
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.disable_auto_compactions: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.inplace_update_support: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.inplace_update_num_locks: 10000
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.memtable_whole_key_filtering: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.memtable_huge_page_size: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bloom_locality: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_successive_merges: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.optimize_filters_for_hits: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.paranoid_file_checks: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.force_consistency_checks: 1
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.report_bg_io_stats: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.ttl: 2592000
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.periodic_compaction_seconds: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.preclude_last_level_data_seconds: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.preserve_internal_time_seconds: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.enable_blob_files: false
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.min_blob_size: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.blob_file_size: 268435456
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.blob_compression_type: NoCompression
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.enable_blob_garbage_collection: false
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.blob_compaction_readahead_size: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.blob_file_starting_level: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 2, name: m-1)
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.merge_operator: None
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_filter: None
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_filter_factory: None
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.sst_partitioner_factory: None
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.memtable_factory: SkipListFactory
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.table_factory: BlockBasedTable
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d119086d40)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55d1180b6850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.write_buffer_size: 16777216
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_write_buffer_number: 64
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression: LZ4
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression: Disabled
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.prefix_extractor: nullptr
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.num_levels: 7
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.level: 32767
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.enabled: false
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.window_bits: -14
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.level: 32767
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.strategy: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.parallel_threads: 1
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.enabled: false
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.level0_stop_writes_trigger: 36
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.target_file_size_base: 67108864
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.target_file_size_multiplier: 1
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_compaction_bytes: 1677721600
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.arena_block_size: 1048576
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.disable_auto_compactions: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.inplace_update_support: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.inplace_update_num_locks: 10000
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.memtable_whole_key_filtering: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.memtable_huge_page_size: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bloom_locality: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_successive_merges: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.optimize_filters_for_hits: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.paranoid_file_checks: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.force_consistency_checks: 1
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.report_bg_io_stats: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.ttl: 2592000
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.periodic_compaction_seconds: 0
Dec 6 03:00:08 localhost
ceph-osd[31726]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.enable_blob_files: false Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.min_blob_size: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.blob_file_size: 268435456 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.blob_compression_type: NoCompression Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.enable_blob_garbage_collection: false Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.blob_file_starting_level: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 3, name: m-2) Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]: Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.merge_operator: None Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_filter: None Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_filter_factory: None Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.sst_partitioner_factory: None Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.memtable_factory: SkipListFactory Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.table_factory: BlockBasedTable Dec 6 
03:00:08 localhost ceph-osd[31726]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d119086d40)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55d1180b6850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.write_buffer_size: 16777216 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_write_buffer_number: 64 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression: LZ4 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression: Disabled Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.prefix_extractor: nullptr Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.num_levels: 7 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 6 03:00:08 
localhost ceph-osd[31726]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.window_bits: -14 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.level: 32767 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.strategy: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.enabled: false Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 6 
03:00:08 localhost ceph-osd[31726]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.target_file_size_base: 67108864 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.target_file_size_multiplier: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.arena_block_size: 1048576 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: 
Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.disable_auto_compactions: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.inplace_update_support: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.memtable_huge_page_size: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bloom_locality: 0 Dec 6 03:00:08 
localhost ceph-osd[31726]: rocksdb: Options.max_successive_merges: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.paranoid_file_checks: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.force_consistency_checks: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.report_bg_io_stats: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.ttl: 2592000 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.enable_blob_files: false Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.min_blob_size: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.blob_file_size: 268435456 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.blob_compression_type: NoCompression Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.enable_blob_garbage_collection: false Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.blob_file_starting_level: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 4, name: p-0) Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]: Dec 6 03:00:08 localhost 
ceph-osd[31726]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.merge_operator: None Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_filter: None Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_filter_factory: None Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.sst_partitioner_factory: None Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.memtable_factory: SkipListFactory Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.table_factory: BlockBasedTable Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d119086d40)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55d1180b6850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.write_buffer_size: 16777216 Dec 6 03:00:08 localhost 
ceph-osd[31726]: rocksdb: Options.max_write_buffer_number: 64 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression: LZ4 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression: Disabled Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.prefix_extractor: nullptr Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.num_levels: 7 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.window_bits: -14 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.level: 32767 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: 
Options.compression_opts.strategy: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.enabled: false Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.target_file_size_base: 67108864 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.target_file_size_multiplier: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 6 
03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.arena_block_size: 1048576 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.disable_auto_compactions: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.table_properties_collectors: 
CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.inplace_update_support: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.memtable_huge_page_size: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bloom_locality: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_successive_merges: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.paranoid_file_checks: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.force_consistency_checks: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.report_bg_io_stats: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.ttl: 2592000 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.enable_blob_files: false Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.min_blob_size: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.blob_file_size: 268435456 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.blob_compression_type: NoCompression Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.enable_blob_garbage_collection: false Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: 
Options.blob_garbage_collection_force_threshold: 1.000000 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.blob_file_starting_level: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 5, name: p-1) Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]: Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.merge_operator: None Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_filter: None Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_filter_factory: None Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.sst_partitioner_factory: None Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.memtable_factory: SkipListFactory Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.table_factory: BlockBasedTable Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d119086d40)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55d1180b6850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 
4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.write_buffer_size: 16777216 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_write_buffer_number: 64 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression: LZ4 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression: Disabled Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.prefix_extractor: nullptr Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.num_levels: 7 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: 
Options.bottommost_compression_opts.parallel_threads: 1
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.enabled: false
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.window_bits: -14
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.level: 32767
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.strategy: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.parallel_threads: 1
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.enabled: false
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.level0_stop_writes_trigger: 36
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.target_file_size_base: 67108864
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.target_file_size_multiplier: 1
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_compaction_bytes: 1677721600
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.arena_block_size: 1048576
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.disable_auto_compactions: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.inplace_update_support: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.inplace_update_num_locks: 10000
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.memtable_whole_key_filtering: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.memtable_huge_page_size: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bloom_locality: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_successive_merges: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.optimize_filters_for_hits: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.paranoid_file_checks: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.force_consistency_checks: 1
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.report_bg_io_stats: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.ttl: 2592000
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.periodic_compaction_seconds: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.preclude_last_level_data_seconds: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.preserve_internal_time_seconds: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.enable_blob_files: false
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.min_blob_size: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.blob_file_size: 268435456
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.blob_compression_type: NoCompression
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.enable_blob_garbage_collection: false
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.blob_compaction_readahead_size: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.blob_file_starting_level: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 6, name: p-2)
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.merge_operator: None
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_filter: None
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_filter_factory: None
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.sst_partitioner_factory: None
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.memtable_factory: SkipListFactory
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.table_factory: BlockBasedTable
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d119086d40)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55d1180b6850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.write_buffer_size: 16777216
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_write_buffer_number: 64
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression: LZ4
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression: Disabled
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.prefix_extractor: nullptr
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.num_levels: 7
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.level: 32767
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.enabled: false
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.window_bits: -14
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.level: 32767
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.strategy: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.parallel_threads: 1
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.enabled: false
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.level0_stop_writes_trigger: 36
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.target_file_size_base: 67108864
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.target_file_size_multiplier: 1
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_compaction_bytes: 1677721600
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.arena_block_size: 1048576
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.disable_auto_compactions: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.inplace_update_support: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.inplace_update_num_locks: 10000
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.memtable_whole_key_filtering: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.memtable_huge_page_size: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bloom_locality: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_successive_merges: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.optimize_filters_for_hits: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.paranoid_file_checks: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.force_consistency_checks: 1
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.report_bg_io_stats: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.ttl: 2592000
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.periodic_compaction_seconds: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.preclude_last_level_data_seconds: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.preserve_internal_time_seconds: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.enable_blob_files: false
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.min_blob_size: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.blob_file_size: 268435456
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.blob_compression_type: NoCompression
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.enable_blob_garbage_collection: false
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.blob_compaction_readahead_size: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.blob_file_starting_level: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 7, name: O-0)
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.merge_operator: None
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_filter: None
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_filter_factory: None
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.sst_partitioner_factory: None
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.memtable_factory: SkipListFactory
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.table_factory: BlockBasedTable
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d119086f60)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55d1180b62d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.write_buffer_size: 16777216
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_write_buffer_number: 64
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression: LZ4
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression: Disabled
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.prefix_extractor: nullptr
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.num_levels: 7
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.level: 32767
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.enabled: false
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.window_bits: -14
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.level: 32767
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.strategy: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.parallel_threads: 1
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.enabled: false
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.level0_stop_writes_trigger: 36
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.target_file_size_base: 67108864
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.target_file_size_multiplier: 1
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_compaction_bytes: 1677721600
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.arena_block_size: 1048576
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.disable_auto_compactions: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.inplace_update_support: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.inplace_update_num_locks: 10000
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.memtable_whole_key_filtering: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.memtable_huge_page_size: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bloom_locality: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_successive_merges: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.optimize_filters_for_hits: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.paranoid_file_checks: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.force_consistency_checks: 1
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.report_bg_io_stats: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.ttl: 2592000
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.periodic_compaction_seconds: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.preclude_last_level_data_seconds: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.preserve_internal_time_seconds: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.enable_blob_files: false
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.min_blob_size: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.blob_file_size: 268435456
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.blob_compression_type: NoCompression
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.enable_blob_garbage_collection: false
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.blob_compaction_readahead_size: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.blob_file_starting_level: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 8, name: O-1)
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.merge_operator: None
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_filter: None
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_filter_factory: None
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.sst_partitioner_factory: None
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.memtable_factory: SkipListFactory
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.table_factory: BlockBasedTable
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d119086f60)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55d1180b62d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.write_buffer_size: 16777216
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_write_buffer_number: 64
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression: LZ4
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression: Disabled
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.prefix_extractor: nullptr
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.num_levels: 7
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.level: 32767
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.enabled: false
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.window_bits: -14
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.level: 32767
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.strategy: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.parallel_threads: 1
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.enabled: false
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.level0_stop_writes_trigger: 36
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.target_file_size_base: 67108864
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.target_file_size_multiplier: 1
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_compaction_bytes: 1677721600
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.arena_block_size: 1048576
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.disable_auto_compactions: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.inplace_update_support: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.inplace_update_num_locks: 10000
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.memtable_whole_key_filtering: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.memtable_huge_page_size: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bloom_locality: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_successive_merges: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.optimize_filters_for_hits: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.paranoid_file_checks: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.force_consistency_checks: 1
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.report_bg_io_stats: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.ttl: 2592000
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.periodic_compaction_seconds: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.preclude_last_level_data_seconds: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.preserve_internal_time_seconds: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.enable_blob_files: false
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.min_blob_size: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.blob_file_size: 268435456
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.blob_compression_type: NoCompression
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.enable_blob_garbage_collection: false
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.blob_compaction_readahead_size: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.blob_file_starting_level: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 9, name: O-2)
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.merge_operator: None
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_filter: None
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_filter_factory: None
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.sst_partitioner_factory: None
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.memtable_factory: SkipListFactory
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.table_factory: BlockBasedTable
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d119086f60)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55d1180b62d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.write_buffer_size: 16777216
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_write_buffer_number: 64
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression: LZ4
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression: Disabled
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.prefix_extractor: nullptr
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.num_levels: 7
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Dec 6
03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.window_bits: -14 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.level: 32767 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.strategy: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.enabled: false Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 6 03:00:08 
localhost ceph-osd[31726]: rocksdb: Options.target_file_size_base: 67108864 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.target_file_size_multiplier: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.arena_block_size: 1048576 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.disable_auto_compactions: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 6 03:00:08 localhost ceph-osd[31726]: 
rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.inplace_update_support: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.memtable_huge_page_size: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bloom_locality: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_successive_merges: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.paranoid_file_checks: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: 
Options.force_consistency_checks: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.report_bg_io_stats: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.ttl: 2592000 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.enable_blob_files: false Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.min_blob_size: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.blob_file_size: 268435456 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.blob_compression_type: NoCompression Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.enable_blob_garbage_collection: false Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.blob_file_starting_level: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 10, name: L) Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: [db/column_family.cc:635] #011(skipping printing options) Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 11, name: P) Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: [db/column_family.cc:635] #011(skipping printing options) Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: [db/version_set.cc:5566] Recovered from 
manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: a8776748-4cbd-414c-ba30-4c8b866ee02f Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008008721082, "job": 1, "event": "recovery_started", "wal_files": [31]} Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: 
[db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008008721779, "job": 1, "event": "recovery_finished"} Dec 6 03:00:08 localhost ceph-osd[31726]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0 Dec 6 03:00:08 localhost ceph-osd[31726]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025 Dec 6 03:00:08 localhost ceph-osd[31726]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240 Dec 6 03:00:08 localhost ceph-osd[31726]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3 Dec 6 03:00:08 localhost ceph-osd[31726]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000 Dec 6 03:00:08 localhost ceph-osd[31726]: freelist init Dec 6 03:00:08 localhost ceph-osd[31726]: freelist _read_cfg Dec 6 03:00:08 localhost ceph-osd[31726]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 7.0 GiB in 2 extents, allocator type hybrid, capacity 0x1bfc00000, block size 0x1000, free 0x1bfbfd000, fragmentation 5.5e-07 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete Dec 6 03:00:08 localhost ceph-osd[31726]: bluefs umount Dec 6 03:00:08 localhost ceph-osd[31726]: bdev(0x55d1180df180 /var/lib/ceph/osd/ceph-1/block) close Dec 6 03:00:08 localhost podman[32070]: Dec 6 03:00:08 localhost podman[32070]: 2025-12-06 08:00:08.854346001 +0000 
UTC m=+0.058999642 container create c060757ee533e0eeaee36da875f269e6a9be2f9d4f8f6f19a4c4c083c8503eb8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-4-activate-test, architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, build-date=2025-11-26T19:44:28Z, vcs-type=git, ceph=True, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, GIT_BRANCH=main, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, GIT_CLEAN=True, io.openshift.tags=rhceph ceph) Dec 6 03:00:08 localhost systemd[1]: Started libpod-conmon-c060757ee533e0eeaee36da875f269e6a9be2f9d4f8f6f19a4c4c083c8503eb8.scope. Dec 6 03:00:08 localhost systemd[1]: Started libcrun container. 
Dec 6 03:00:08 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2a1bb509f16d22ba1841ed29c5a3f6f38c01b4714016f09e3c4d5a545c16b96/merged/rootfs supports timestamps until 2038 (0x7fffffff) Dec 6 03:00:08 localhost podman[32070]: 2025-12-06 08:00:08.828171606 +0000 UTC m=+0.032825247 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 6 03:00:08 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2a1bb509f16d22ba1841ed29c5a3f6f38c01b4714016f09e3c4d5a545c16b96/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Dec 6 03:00:08 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2a1bb509f16d22ba1841ed29c5a3f6f38c01b4714016f09e3c4d5a545c16b96/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Dec 6 03:00:08 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2a1bb509f16d22ba1841ed29c5a3f6f38c01b4714016f09e3c4d5a545c16b96/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Dec 6 03:00:08 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2a1bb509f16d22ba1841ed29c5a3f6f38c01b4714016f09e3c4d5a545c16b96/merged/var/lib/ceph/osd/ceph-4 supports timestamps until 2038 (0x7fffffff) Dec 6 03:00:08 localhost ceph-osd[31726]: bdev(0x55d1180df180 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block Dec 6 03:00:08 localhost ceph-osd[31726]: bdev(0x55d1180df180 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument Dec 6 03:00:08 localhost ceph-osd[31726]: bdev(0x55d1180df180 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported Dec 6 03:00:08 localhost ceph-osd[31726]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 7.0 GiB Dec 6 03:00:08 localhost 
ceph-osd[31726]: bluefs mount Dec 6 03:00:08 localhost podman[32070]: 2025-12-06 08:00:08.981027614 +0000 UTC m=+0.185681245 container init c060757ee533e0eeaee36da875f269e6a9be2f9d4f8f6f19a4c4c083c8503eb8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-4-activate-test, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, RELEASE=main, GIT_CLEAN=True, architecture=x86_64, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, release=1763362218, distribution-scope=public) Dec 6 03:00:08 localhost ceph-osd[31726]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000 Dec 6 03:00:08 localhost ceph-osd[31726]: bluefs mount shared_bdev_used = 4718592 Dec 6 03:00:08 localhost ceph-osd[31726]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: RocksDB version: 7.9.2 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Git sha 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Compile date 2025-09-23 00:00:00 Dec 6 03:00:08 
localhost ceph-osd[31726]: rocksdb: DB SUMMARY Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: DB Session ID: 94CK83XUDEHNZM6YXUMH Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: CURRENT file: CURRENT Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: IDENTITY file: IDENTITY Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: MANIFEST file: MANIFEST-000032 size: 1007 Bytes Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: SST files in db.slow dir, Total Num: 0, files: Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.error_if_exists: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.create_if_missing: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.paranoid_checks: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.flush_verify_memtable_count: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.track_and_verify_wals_in_manifest: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.env: 0x55d1182044d0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.fs: LegacyFileSystem Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.info_log: 0x55d1190eb7c0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_file_opening_threads: 16 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.statistics: (nil) Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.use_fsync: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_log_file_size: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_manifest_file_size: 1073741824 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.log_file_time_to_roll: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: 
Options.keep_log_file_num: 1000 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.recycle_log_file_num: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.allow_fallocate: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.allow_mmap_reads: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.allow_mmap_writes: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.use_direct_reads: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.create_missing_column_families: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.db_log_dir: Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.wal_dir: db.wal Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.table_cache_numshardbits: 6 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.WAL_ttl_seconds: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.WAL_size_limit_MB: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.manifest_preallocation_size: 4194304 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.is_fd_close_on_exec: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.advise_random_on_open: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.db_write_buffer_size: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.write_buffer_manager: 0x55d1180c95e0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.access_hint_on_compaction_start: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.random_access_max_buffer_size: 1048576 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.use_adaptive_mutex: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.rate_limiter: (nil) Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: 
Options.sst_file_manager.rate_bytes_per_sec: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.wal_recovery_mode: 2 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.enable_thread_tracking: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.enable_pipelined_write: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.unordered_write: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.allow_concurrent_memtable_write: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.enable_write_thread_adaptive_yield: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.write_thread_max_yield_usec: 100 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.write_thread_slow_yield_usec: 3 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.row_cache: None Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.wal_filter: None Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.avoid_flush_during_recovery: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.allow_ingest_behind: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.two_write_queues: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.manual_wal_flush: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.wal_compression: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.atomic_flush: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.avoid_unnecessary_blocking_io: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.persist_stats_to_disk: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.write_dbid_to_manifest: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.log_readahead_size: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.file_checksum_gen_factory: Unknown Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.best_efforts_recovery: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bgerror_resume_count: 2147483647 
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bgerror_resume_retry_interval: 1000000 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.allow_data_in_errors: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.db_host_id: __hostname__ Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.enforce_single_del_contracts: true Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_background_jobs: 4 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_background_compactions: -1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_subcompactions: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.avoid_flush_during_shutdown: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.writable_file_max_buffer_size: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.delayed_write_rate : 16777216 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_total_wal_size: 1073741824 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.stats_dump_period_sec: 600 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.stats_persist_period_sec: 600 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.stats_history_buffer_size: 1048576 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_open_files: -1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bytes_per_sync: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.wal_bytes_per_sync: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.strict_bytes_per_sync: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_readahead_size: 2097152 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_background_flushes: -1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Compression algorithms supported: Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: #011kZSTD 
supported: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: #011kXpressCompression supported: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: #011kBZip2Compression supported: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: #011kZSTDNotFinalCompression supported: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: #011kLZ4Compression supported: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: #011kZlibCompression supported: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: #011kLZ4HCCompression supported: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: #011kSnappyCompression supported: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Fast CRC32 supported: Supported on x86 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: DMutex implementation: pthread_mutex_t Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 0, name: default) Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.merge_operator: .T:int64_array.b:bitwise_xor Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_filter: None Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_filter_factory: None Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.sst_partitioner_factory: None Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.memtable_factory: SkipListFactory Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.table_factory: BlockBasedTable Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory 
(0x55d1190eab00)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55d1180b62d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.write_buffer_size: 16777216 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_write_buffer_number: 64 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression: LZ4 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression: Disabled Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.prefix_extractor: nullptr Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.num_levels: 7 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: 
rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.window_bits: -14 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.level: 32767 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.strategy: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.enabled: false Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 6 03:00:08 localhost 
ceph-osd[31726]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.target_file_size_base: 67108864 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.target_file_size_multiplier: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.arena_block_size: 1048576 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: 
Options.disable_auto_compactions: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.table_properties_collectors: Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.inplace_update_support: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.memtable_huge_page_size: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bloom_locality: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_successive_merges: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: 
Options.paranoid_file_checks: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.force_consistency_checks: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.report_bg_io_stats: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.ttl: 2592000 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.enable_blob_files: false Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.min_blob_size: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.blob_file_size: 268435456 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.blob_compression_type: NoCompression Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.enable_blob_garbage_collection: false Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.blob_file_starting_level: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 1, name: m-0) Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]: Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.merge_operator: None Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: 
Options.compaction_filter: None Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_filter_factory: None Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.sst_partitioner_factory: None Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.memtable_factory: SkipListFactory Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.table_factory: BlockBasedTable Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d1190eab00)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55d1180b62d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.write_buffer_size: 16777216 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_write_buffer_number: 64 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression: LZ4 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: 
Options.bottommost_compression: Disabled Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.prefix_extractor: nullptr Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.num_levels: 7 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.window_bits: -14 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.level: 32767 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.strategy: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: 
Options.compression_opts.zstd_max_train_bytes: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.enabled: false Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.target_file_size_base: 67108864 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.target_file_size_multiplier: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: 
Options.max_sequential_skip_in_iterations: 8 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.arena_block_size: 1048576 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.disable_auto_compactions: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: 
Options.inplace_update_support: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.memtable_huge_page_size: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bloom_locality: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_successive_merges: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.paranoid_file_checks: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.force_consistency_checks: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.report_bg_io_stats: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.ttl: 2592000 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.enable_blob_files: false Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.min_blob_size: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.blob_file_size: 268435456 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.blob_compression_type: NoCompression Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.enable_blob_garbage_collection: false Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.blob_compaction_readahead_size: 0 
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.blob_file_starting_level: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 2, name: m-1) Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]: Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.merge_operator: None Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_filter: None Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_filter_factory: None Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.sst_partitioner_factory: None Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.memtable_factory: SkipListFactory Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.table_factory: BlockBasedTable Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d1190eab00)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55d1180b62d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 
0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.write_buffer_size: 16777216 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_write_buffer_number: 64 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression: LZ4 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression: Disabled Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.prefix_extractor: nullptr Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.num_levels: 7 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 6 
03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.window_bits: -14 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.level: 32767 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.strategy: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.enabled: false Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.target_file_size_base: 67108864 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.target_file_size_multiplier: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: 
Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.arena_block_size: 1048576 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.disable_auto_compactions: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.compression_size_percent: 
-1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.inplace_update_support: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.memtable_huge_page_size: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bloom_locality: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_successive_merges: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.paranoid_file_checks: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.force_consistency_checks: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.report_bg_io_stats: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.ttl: 2592000 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.enable_blob_files: false Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: 
Options.min_blob_size: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.blob_file_size: 268435456 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.blob_compression_type: NoCompression Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.enable_blob_garbage_collection: false Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.blob_file_starting_level: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 3, name: m-2) Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]: Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.merge_operator: None Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_filter: None Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_filter_factory: None Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.sst_partitioner_factory: None Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.memtable_factory: SkipListFactory Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.table_factory: BlockBasedTable Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d1190eab00)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 
pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55d1180b62d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.write_buffer_size: 16777216 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_write_buffer_number: 64 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression: LZ4 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression: Disabled Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.prefix_extractor: nullptr Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.num_levels: 7 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 6 
03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.window_bits: -14 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.level: 32767 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.strategy: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.enabled: false Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 6 03:00:08 
localhost ceph-osd[31726]: rocksdb: Options.target_file_size_base: 67108864 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.target_file_size_multiplier: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.arena_block_size: 1048576 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.disable_auto_compactions: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 6 03:00:08 localhost ceph-osd[31726]: 
rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.inplace_update_support: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.memtable_huge_page_size: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bloom_locality: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_successive_merges: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.paranoid_file_checks: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: 
Options.force_consistency_checks: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.report_bg_io_stats: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.ttl: 2592000 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.enable_blob_files: false Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.min_blob_size: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.blob_file_size: 268435456 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.blob_compression_type: NoCompression Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.enable_blob_garbage_collection: false Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.blob_file_starting_level: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 4, name: p-0) Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]: Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.merge_operator: None Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_filter: None Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: 
Options.compaction_filter_factory: None Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.sst_partitioner_factory: None Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.memtable_factory: SkipListFactory Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.table_factory: BlockBasedTable Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d1190eab00)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55d1180b62d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.write_buffer_size: 16777216 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_write_buffer_number: 64 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression: LZ4 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression: Disabled Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: 
Options.prefix_extractor: nullptr Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.num_levels: 7 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.window_bits: -14 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.level: 32767 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.strategy: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: 
Options.compression_opts.use_zstd_dict_trainer: true Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.enabled: false Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.target_file_size_base: 67108864 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.target_file_size_multiplier: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_compaction_bytes: 1677721600 
Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.arena_block_size: 1048576 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.disable_auto_compactions: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.inplace_update_support: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 6 03:00:08 localhost 
ceph-osd[31726]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.memtable_huge_page_size: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.bloom_locality: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.max_successive_merges: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.paranoid_file_checks: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.force_consistency_checks: 1 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.report_bg_io_stats: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.ttl: 2592000 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.enable_blob_files: false Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.min_blob_size: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.blob_file_size: 268435456 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.blob_compression_type: NoCompression Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.enable_blob_garbage_collection: false Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 6 03:00:08 localhost podman[32070]: 2025-12-06 08:00:08.992337677 +0000 UTC m=+0.196991288 container start c060757ee533e0eeaee36da875f269e6a9be2f9d4f8f6f19a4c4c083c8503eb8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-4-activate-test, RELEASE=main, description=Red Hat Ceph Storage 7, 
url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, CEPH_POINT_RELEASE=, io.openshift.expose-services=, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, release=1763362218, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux , version=7, ceph=True, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc.) 
Dec 6 03:00:08 localhost podman[32070]: 2025-12-06 08:00:08.992493323 +0000 UTC m=+0.197146934 container attach c060757ee533e0eeaee36da875f269e6a9be2f9d4f8f6f19a4c4c083c8503eb8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-4-activate-test, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, RELEASE=main, GIT_CLEAN=True, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , io.buildah.version=1.41.4, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.blob_file_starting_level: 0 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 5, name: p-1) Dec 6 03:00:08 localhost ceph-osd[31726]: rocksdb: [db/column_family.cc:630] 
--------------- Options for column family [p-1]: Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.merge_operator: None Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compaction_filter: None Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compaction_filter_factory: None Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.sst_partitioner_factory: None Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.memtable_factory: SkipListFactory Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.table_factory: BlockBasedTable Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d1190eab00)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55d1180b62d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 6 03:00:09 localhost ceph-osd[31726]: 
rocksdb: Options.write_buffer_size: 16777216 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.max_write_buffer_number: 64 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compression: LZ4 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression: Disabled Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.prefix_extractor: nullptr Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.num_levels: 7 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.window_bits: -14 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.level: 32767 Dec 
6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.strategy: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.enabled: false Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.target_file_size_base: 67108864 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.target_file_size_multiplier: 1 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: 
Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.arena_block_size: 1048576 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.disable_auto_compactions: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: 
Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.inplace_update_support: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.memtable_huge_page_size: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.bloom_locality: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.max_successive_merges: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.paranoid_file_checks: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.force_consistency_checks: 1 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.report_bg_io_stats: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.ttl: 2592000 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.enable_blob_files: false Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.min_blob_size: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.blob_file_size: 268435456 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.blob_compression_type: NoCompression Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.enable_blob_garbage_collection: false Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 6 
03:00:09 localhost ceph-osd[31726]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.blob_file_starting_level: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 6, name: p-2) Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]: Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.merge_operator: None Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compaction_filter: None Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compaction_filter_factory: None Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.sst_partitioner_factory: None Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.memtable_factory: SkipListFactory Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.table_factory: BlockBasedTable Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d1190eab00)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55d1180b62d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: 
(nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.write_buffer_size: 16777216 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.max_write_buffer_number: 64 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compression: LZ4 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression: Disabled Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.prefix_extractor: nullptr Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.num_levels: 7 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 6 03:00:09 
localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.window_bits: -14 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.level: 32767 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.strategy: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.enabled: false Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.target_file_size_base: 67108864 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.target_file_size_multiplier: 1 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: 
Options.max_bytes_for_level_multiplier: 8.000000 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.arena_block_size: 1048576 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.disable_auto_compactions: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 6 03:00:09 localhost 
ceph-osd[31726]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.inplace_update_support: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.memtable_huge_page_size: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.bloom_locality: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.max_successive_merges: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.paranoid_file_checks: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.force_consistency_checks: 1 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.report_bg_io_stats: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.ttl: 2592000 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 6 03:00:09 localhost 
ceph-osd[31726]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.enable_blob_files: false Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.min_blob_size: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.blob_file_size: 268435456 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.blob_compression_type: NoCompression Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.enable_blob_garbage_collection: false Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.blob_file_starting_level: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 7, name: O-0) Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]: Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.merge_operator: None Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compaction_filter: None Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compaction_filter_factory: None Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.sst_partitioner_factory: None Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.memtable_factory: SkipListFactory Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.table_factory: BlockBasedTable Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: table_factory options: flush_block_policy_factory: 
FlushBlockBySizePolicyFactory (0x55d1190eb080)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55d1180b7610#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.write_buffer_size: 16777216 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.max_write_buffer_number: 64 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compression: LZ4 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression: Disabled Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.prefix_extractor: nullptr Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.num_levels: 7 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 6 03:00:09 
localhost ceph-osd[31726]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.window_bits: -14 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.level: 32767 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.strategy: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.enabled: false Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 6 
03:00:09 localhost ceph-osd[31726]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.target_file_size_base: 67108864 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.target_file_size_multiplier: 1 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.arena_block_size: 1048576 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 6 03:00:09 localhost 
ceph-osd[31726]: rocksdb: Options.disable_auto_compactions: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.inplace_update_support: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.memtable_huge_page_size: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.bloom_locality: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.max_successive_merges: 0 Dec 6 03:00:09 
localhost ceph-osd[31726]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.paranoid_file_checks: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.force_consistency_checks: 1 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.report_bg_io_stats: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.ttl: 2592000 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.enable_blob_files: false Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.min_blob_size: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.blob_file_size: 268435456 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.blob_compression_type: NoCompression Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.enable_blob_garbage_collection: false Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.blob_file_starting_level: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 8, name: O-1) Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]: Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 6 03:00:09 localhost 
ceph-osd[31726]: rocksdb: Options.merge_operator: None Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compaction_filter: None Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compaction_filter_factory: None Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.sst_partitioner_factory: None Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.memtable_factory: SkipListFactory Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.table_factory: BlockBasedTable Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d1190eb080)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55d1180b7610#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.write_buffer_size: 16777216 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.max_write_buffer_number: 64 Dec 6 03:00:09 localhost ceph-osd[31726]: 
rocksdb: Options.compression: LZ4 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression: Disabled Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.prefix_extractor: nullptr Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.num_levels: 7 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.window_bits: -14 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.level: 32767 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.strategy: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: 
Options.compression_opts.max_dict_bytes: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.enabled: false Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.target_file_size_base: 67108864 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.target_file_size_multiplier: 1 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: 
Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.arena_block_size: 1048576 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.disable_auto_compactions: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion 
trigger = 16384 Deletion ratio = 0); Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.inplace_update_support: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.memtable_huge_page_size: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.bloom_locality: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.max_successive_merges: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.paranoid_file_checks: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.force_consistency_checks: 1 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.report_bg_io_stats: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.ttl: 2592000 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.enable_blob_files: false Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.min_blob_size: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.blob_file_size: 268435456 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.blob_compression_type: NoCompression Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.enable_blob_garbage_collection: false Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 6 
03:00:09 localhost ceph-osd[31726]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.blob_file_starting_level: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 9, name: O-2) Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]: Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.merge_operator: None Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compaction_filter: None Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compaction_filter_factory: None Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.sst_partitioner_factory: None Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.memtable_factory: SkipListFactory Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.table_factory: BlockBasedTable Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d1190eb080)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55d1180b7610#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 
16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.write_buffer_size: 16777216 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.max_write_buffer_number: 64 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compression: LZ4 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression: Disabled Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.prefix_extractor: nullptr Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.num_levels: 7 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 6 03:00:09 
localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.window_bits: -14 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.level: 32767 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.strategy: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.enabled: false Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.target_file_size_base: 67108864 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.target_file_size_multiplier: 1 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: 
Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.arena_block_size: 1048576 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.disable_auto_compactions: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 6 
03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.inplace_update_support: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.memtable_huge_page_size: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.bloom_locality: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.max_successive_merges: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.paranoid_file_checks: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.force_consistency_checks: 1 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.report_bg_io_stats: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.ttl: 2592000 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: 
rocksdb: Options.enable_blob_files: false Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.min_blob_size: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.blob_file_size: 268435456 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.blob_compression_type: NoCompression Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.enable_blob_garbage_collection: false Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.blob_file_starting_level: 0 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 10, name: L) Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: [db/column_family.cc:635] #011(skipping printing options) Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 11, name: P) Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: [db/column_family.cc:635] #011(skipping printing options) Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: 
[db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: a8776748-4cbd-414c-ba30-4c8b866ee02f Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008009002171, "job": 1, "event": "recovery_started", "wal_files": [31]} Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008009007555, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1261, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, 
"index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765008009, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a8776748-4cbd-414c-ba30-4c8b866ee02f", "db_session_id": "94CK83XUDEHNZM6YXUMH", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}} Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008009011635, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1609, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": 
"nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765008009, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a8776748-4cbd-414c-ba30-4c8b866ee02f", "db_session_id": "94CK83XUDEHNZM6YXUMH", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}} Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008009015516, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1290, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765008009, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": 
"a8776748-4cbd-414c-ba30-4c8b866ee02f", "db_session_id": "94CK83XUDEHNZM6YXUMH", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}} Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: [db/db_impl/db_impl_open.cc:1432] Failed to truncate log #31: IO error: No such file or directory: While open a file for appending: db.wal/000031.log: No such file or directory Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008009019424, "job": 1, "event": "recovery_finished"} Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: [db/version_set.cc:5047] Creating manifest 40 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55d119120380 Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: DB pointer 0x55d118fe5a00 Dec 6 03:00:09 localhost ceph-osd[31726]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0 Dec 6 03:00:09 localhost ceph-osd[31726]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4 Dec 6 03:00:09 localhost ceph-osd[31726]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Dec 6 03:00:09 localhost ceph-osd[31726]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 
percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 2/0 2.61 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.005 0 0 0.0 0.0#012 Sum 2/0 2.61 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.005 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.005 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.2 0.01 0.00 1 0.005 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 
0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55d1180b62d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-0] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 
Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55d1180b62d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-1] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, 
garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55d1180b62d0#2 capacity: 460.80 MB usag Dec 6 03:00:09 localhost ceph-osd[31726]: /builddir/build/BUILD/ceph-18.2.1/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs Dec 6 03:00:09 localhost ceph-osd[31726]: /builddir/build/BUILD/ceph-18.2.1/src/cls/hello/cls_hello.cc:316: loading cls_hello Dec 6 03:00:09 localhost ceph-osd[31726]: _get_class not permitted to load lua Dec 6 03:00:09 localhost ceph-osd[31726]: _get_class not permitted to load sdk Dec 6 03:00:09 localhost ceph-osd[31726]: _get_class not permitted to load test_remote_reads Dec 6 03:00:09 localhost ceph-osd[31726]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients Dec 6 03:00:09 localhost ceph-osd[31726]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons Dec 6 03:00:09 localhost ceph-osd[31726]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds Dec 6 03:00:09 localhost ceph-osd[31726]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature Dec 6 03:00:09 localhost ceph-osd[31726]: osd.1 0 load_pgs Dec 6 03:00:09 localhost ceph-osd[31726]: osd.1 0 load_pgs opened 0 pgs 
Dec 6 03:00:09 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-1[31722]: 2025-12-06T08:00:09.054+0000 7fea544e1a80 -1 osd.1 0 log_to_monitors true Dec 6 03:00:09 localhost ceph-osd[31726]: osd.1 0 log_to_monitors true Dec 6 03:00:09 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-4-activate-test[32084]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID] Dec 6 03:00:09 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-4-activate-test[32084]: [--no-systemd] [--no-tmpfs] Dec 6 03:00:09 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-4-activate-test[32084]: ceph-volume activate: error: unrecognized arguments: --bad-option Dec 6 03:00:09 localhost systemd[1]: libpod-c060757ee533e0eeaee36da875f269e6a9be2f9d4f8f6f19a4c4c083c8503eb8.scope: Deactivated successfully. Dec 6 03:00:09 localhost podman[32070]: 2025-12-06 08:00:09.223530262 +0000 UTC m=+0.428183943 container died c060757ee533e0eeaee36da875f269e6a9be2f9d4f8f6f19a4c4c083c8503eb8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-4-activate-test, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, RELEASE=main, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, GIT_BRANCH=main, vcs-type=git, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , name=rhceph, description=Red Hat Ceph Storage 7, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, version=7, architecture=x86_64, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Dec 6 03:00:09 localhost podman[32304]: 2025-12-06 08:00:09.295409268 +0000 UTC m=+0.063550171 container remove c060757ee533e0eeaee36da875f269e6a9be2f9d4f8f6f19a4c4c083c8503eb8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-4-activate-test, io.openshift.expose-services=, release=1763362218, vcs-type=git, GIT_BRANCH=main, name=rhceph, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, version=7, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git) Dec 6 03:00:09 localhost systemd[1]: libpod-conmon-c060757ee533e0eeaee36da875f269e6a9be2f9d4f8f6f19a4c4c083c8503eb8.scope: Deactivated successfully. Dec 6 03:00:09 localhost systemd[1]: tmp-crun.qu9pSr.mount: Deactivated successfully. 
Dec 6 03:00:09 localhost systemd[1]: var-lib-containers-storage-overlay-c460c71d6d316c31195dbc0386d81e0bda44fb65a44f4b710533bcd0b6a13523-merged.mount: Deactivated successfully. Dec 6 03:00:09 localhost systemd[1]: Reloading. Dec 6 03:00:09 localhost systemd-rc-local-generator[32356]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 6 03:00:09 localhost systemd-sysv-generator[32361]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 6 03:00:09 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 6 03:00:09 localhost systemd[1]: Reloading. Dec 6 03:00:10 localhost systemd-sysv-generator[32403]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 6 03:00:10 localhost systemd-rc-local-generator[32398]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 6 03:00:10 localhost ceph-osd[31726]: log_channel(cluster) log [DBG] : purged_snaps scrub starts Dec 6 03:00:10 localhost ceph-osd[31726]: log_channel(cluster) log [DBG] : purged_snaps scrub ok Dec 6 03:00:10 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 6 03:00:10 localhost systemd[1]: Starting Ceph osd.4 for 1939e851-b10c-5c3b-9bb7-8e7f380233e8... 
Dec 6 03:00:10 localhost podman[32461]: Dec 6 03:00:10 localhost podman[32461]: 2025-12-06 08:00:10.50941114 +0000 UTC m=+0.073796801 container create 7928993af342d158682f3e2479011935f4d1c74cdb27150d01f8d3055cf76c26 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-4-activate, GIT_BRANCH=main, release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, com.redhat.component=rhceph-container, distribution-scope=public, RELEASE=main, vcs-type=git, io.buildah.version=1.41.4, GIT_CLEAN=True, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 6 03:00:10 localhost systemd[1]: Started libcrun container. 
Dec 6 03:00:10 localhost podman[32461]: 2025-12-06 08:00:10.479649965 +0000 UTC m=+0.044035626 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 6 03:00:10 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10d3e03b0082edccbb0473fc8c800342c53a718e7ccaf25d40e921884621a8eb/merged/rootfs supports timestamps until 2038 (0x7fffffff) Dec 6 03:00:10 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10d3e03b0082edccbb0473fc8c800342c53a718e7ccaf25d40e921884621a8eb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Dec 6 03:00:10 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10d3e03b0082edccbb0473fc8c800342c53a718e7ccaf25d40e921884621a8eb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Dec 6 03:00:10 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10d3e03b0082edccbb0473fc8c800342c53a718e7ccaf25d40e921884621a8eb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Dec 6 03:00:10 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10d3e03b0082edccbb0473fc8c800342c53a718e7ccaf25d40e921884621a8eb/merged/var/lib/ceph/osd/ceph-4 supports timestamps until 2038 (0x7fffffff) Dec 6 03:00:10 localhost podman[32461]: 2025-12-06 08:00:10.63552945 +0000 UTC m=+0.199915081 container init 7928993af342d158682f3e2479011935f4d1c74cdb27150d01f8d3055cf76c26 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-4-activate, architecture=x86_64, distribution-scope=public, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, 
GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, RELEASE=main, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, name=rhceph, io.openshift.expose-services=, release=1763362218, io.openshift.tags=rhceph ceph, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, version=7, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git)
Dec 6 03:00:10 localhost podman[32461]: 2025-12-06 08:00:10.644538913 +0000 UTC m=+0.208924544 container start 7928993af342d158682f3e2479011935f4d1c74cdb27150d01f8d3055cf76c26 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-4-activate, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, version=7, architecture=x86_64, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, GIT_CLEAN=True, RELEASE=main, maintainer=Guillaume Abrioux , io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, build-date=2025-11-26T19:44:28Z, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 6 03:00:10 localhost podman[32461]: 2025-12-06 08:00:10.644856076 +0000 UTC m=+0.209241757 container attach 7928993af342d158682f3e2479011935f4d1c74cdb27150d01f8d3055cf76c26 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-4-activate, com.redhat.component=rhceph-container, architecture=x86_64, RELEASE=main, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, vendor=Red Hat, Inc., GIT_CLEAN=True, maintainer=Guillaume Abrioux , vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, distribution-scope=public, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Dec 6 03:00:10 localhost ceph-osd[31726]: osd.1 0 done with init, starting boot process
Dec 6 03:00:10 localhost ceph-osd[31726]: osd.1 0 start_boot
Dec 6 03:00:10 localhost ceph-osd[31726]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Dec 6 03:00:10 localhost ceph-osd[31726]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Dec 6 03:00:10 localhost ceph-osd[31726]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Dec 6 03:00:10 localhost ceph-osd[31726]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Dec 6 03:00:10 localhost ceph-osd[31726]: osd.1 0 bench count 12288000 bsize 4 KiB
Dec 6 03:00:11 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-4-activate[32476]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4
Dec 6 03:00:11 localhost bash[32461]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4
Dec 6 03:00:11 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-4-activate[32476]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-4 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1
Dec 6 03:00:11 localhost bash[32461]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-4 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1
Dec 6 03:00:11 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-4-activate[32476]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1
Dec 6 03:00:11 localhost bash[32461]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1
Dec 6 03:00:11 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-4-activate[32476]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Dec 6 03:00:11 localhost bash[32461]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Dec 6 03:00:11 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-4-activate[32476]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-4/block
Dec 6 03:00:11 localhost bash[32461]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-4/block
Dec 6 03:00:11 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-4-activate[32476]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4
Dec 6 03:00:11 localhost bash[32461]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4
Dec 6 03:00:11 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-4-activate[32476]: --> ceph-volume raw activate successful for osd ID: 4
Dec 6 03:00:11 localhost bash[32461]: --> ceph-volume raw activate successful for osd ID: 4
Dec 6 03:00:11 localhost systemd[1]: libpod-7928993af342d158682f3e2479011935f4d1c74cdb27150d01f8d3055cf76c26.scope: Deactivated successfully.
Dec 6 03:00:11 localhost podman[32461]: 2025-12-06 08:00:11.268198972 +0000 UTC m=+0.832584603 container died 7928993af342d158682f3e2479011935f4d1c74cdb27150d01f8d3055cf76c26 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-4-activate, io.openshift.expose-services=, release=1763362218, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, ceph=True, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, name=rhceph, RELEASE=main, io.buildah.version=1.41.4, vendor=Red Hat, Inc., version=7)
Dec 6 03:00:11 localhost systemd[1]: tmp-crun.MW5epm.mount: Deactivated successfully.
Dec 6 03:00:11 localhost podman[32592]: 2025-12-06 08:00:11.390733052 +0000 UTC m=+0.112208776 container remove 7928993af342d158682f3e2479011935f4d1c74cdb27150d01f8d3055cf76c26 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-4-activate, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, version=7, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, ceph=True, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, GIT_BRANCH=main, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, release=1763362218, architecture=x86_64)
Dec 6 03:00:11 localhost systemd[1]: var-lib-containers-storage-overlay-10d3e03b0082edccbb0473fc8c800342c53a718e7ccaf25d40e921884621a8eb-merged.mount: Deactivated successfully.
Dec 6 03:00:11 localhost podman[32648]:
Dec 6 03:00:11 localhost podman[32648]: 2025-12-06 08:00:11.709609543 +0000 UTC m=+0.072306083 container create 26aaf6c09d77c2d4415791ab52c1b187900752ba32ce368e3441bd0a51c6a7d9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-4, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, version=7, name=rhceph, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , io.buildah.version=1.41.4, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, CEPH_POINT_RELEASE=, distribution-scope=public, GIT_BRANCH=main, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Dec 6 03:00:11 localhost podman[32648]: 2025-12-06 08:00:11.682694418 +0000 UTC m=+0.045390968 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 6 03:00:11 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d835c0e84d0d73d25a0e895cf870af98669736b533feb63c312a535935e7da2b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 6 03:00:11 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d835c0e84d0d73d25a0e895cf870af98669736b533feb63c312a535935e7da2b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 6 03:00:11 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d835c0e84d0d73d25a0e895cf870af98669736b533feb63c312a535935e7da2b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 6 03:00:11 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d835c0e84d0d73d25a0e895cf870af98669736b533feb63c312a535935e7da2b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 6 03:00:11 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d835c0e84d0d73d25a0e895cf870af98669736b533feb63c312a535935e7da2b/merged/var/lib/ceph/osd/ceph-4 supports timestamps until 2038 (0x7fffffff)
Dec 6 03:00:11 localhost podman[32648]: 2025-12-06 08:00:11.866305191 +0000 UTC m=+0.229001731 container init 26aaf6c09d77c2d4415791ab52c1b187900752ba32ce368e3441bd0a51c6a7d9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-4, vcs-type=git, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, release=1763362218, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, GIT_CLEAN=True, ceph=True, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, version=7, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Dec 6 03:00:11 localhost podman[32648]: 2025-12-06 08:00:11.899508301 +0000 UTC m=+0.262204841 container start 26aaf6c09d77c2d4415791ab52c1b187900752ba32ce368e3441bd0a51c6a7d9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-4, version=7, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, ceph=True, io.buildah.version=1.41.4, name=rhceph, vcs-type=git, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Dec 6 03:00:11 localhost bash[32648]: 26aaf6c09d77c2d4415791ab52c1b187900752ba32ce368e3441bd0a51c6a7d9
Dec 6 03:00:11 localhost systemd[1]: Started Ceph osd.4 for 1939e851-b10c-5c3b-9bb7-8e7f380233e8.
Dec 6 03:00:11 localhost ceph-osd[32665]: set uid:gid to 167:167 (ceph:ceph)
Dec 6 03:00:11 localhost ceph-osd[32665]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-osd, pid 2
Dec 6 03:00:11 localhost ceph-osd[32665]: pidfile_write: ignore empty --pid-file
Dec 6 03:00:11 localhost ceph-osd[32665]: bdev(0x55e146912e00 /var/lib/ceph/osd/ceph-4/block) open path /var/lib/ceph/osd/ceph-4/block
Dec 6 03:00:11 localhost ceph-osd[32665]: bdev(0x55e146912e00 /var/lib/ceph/osd/ceph-4/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-4/block failed: (22) Invalid argument
Dec 6 03:00:11 localhost ceph-osd[32665]: bdev(0x55e146912e00 /var/lib/ceph/osd/ceph-4/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 6 03:00:11 localhost ceph-osd[32665]: bluestore(/var/lib/ceph/osd/ceph-4) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06
Dec 6 03:00:11 localhost ceph-osd[32665]: bdev(0x55e146913180 /var/lib/ceph/osd/ceph-4/block) open path /var/lib/ceph/osd/ceph-4/block
Dec 6 03:00:11 localhost ceph-osd[32665]: bdev(0x55e146913180 /var/lib/ceph/osd/ceph-4/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-4/block failed: (22) Invalid argument
Dec 6 03:00:11 localhost ceph-osd[32665]: bdev(0x55e146913180 /var/lib/ceph/osd/ceph-4/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 6 03:00:11 localhost ceph-osd[32665]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-4/block size 7.0 GiB
Dec 6 03:00:11 localhost ceph-osd[32665]: bdev(0x55e146913180 /var/lib/ceph/osd/ceph-4/block) close
Dec 6 03:00:12 localhost ceph-osd[32665]: bdev(0x55e146912e00 /var/lib/ceph/osd/ceph-4/block) close
Dec 6 03:00:12 localhost ceph-osd[32665]: starting osd.4 osd_data /var/lib/ceph/osd/ceph-4 /var/lib/ceph/osd/ceph-4/journal
Dec 6 03:00:12 localhost ceph-osd[32665]: load: jerasure load: lrc
Dec 6 03:00:12 localhost ceph-osd[32665]: bdev(0x55e146912e00 /var/lib/ceph/osd/ceph-4/block) open path /var/lib/ceph/osd/ceph-4/block
Dec 6 03:00:12 localhost ceph-osd[32665]: bdev(0x55e146912e00 /var/lib/ceph/osd/ceph-4/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-4/block failed: (22) Invalid argument
Dec 6 03:00:12 localhost ceph-osd[32665]: bdev(0x55e146912e00 /var/lib/ceph/osd/ceph-4/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 6 03:00:12 localhost ceph-osd[32665]: bluestore(/var/lib/ceph/osd/ceph-4) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06
Dec 6 03:00:12 localhost ceph-osd[32665]: bdev(0x55e146912e00 /var/lib/ceph/osd/ceph-4/block) close
Dec 6 03:00:12 localhost ceph-osd[32665]: bdev(0x55e146912e00 /var/lib/ceph/osd/ceph-4/block) open path /var/lib/ceph/osd/ceph-4/block
Dec 6 03:00:12 localhost ceph-osd[32665]: bdev(0x55e146912e00 /var/lib/ceph/osd/ceph-4/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-4/block failed: (22) Invalid argument
Dec 6 03:00:12 localhost ceph-osd[32665]: bdev(0x55e146912e00 /var/lib/ceph/osd/ceph-4/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 6 03:00:12 localhost ceph-osd[32665]: bluestore(/var/lib/ceph/osd/ceph-4) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06
Dec 6 03:00:12 localhost ceph-osd[32665]: bdev(0x55e146912e00 /var/lib/ceph/osd/ceph-4/block) close
Dec 6 03:00:12 localhost ceph-osd[32665]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Dec 6 03:00:12 localhost ceph-osd[32665]: osd.4:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Dec 6 03:00:12 localhost ceph-osd[32665]: bdev(0x55e146912e00 /var/lib/ceph/osd/ceph-4/block) open path /var/lib/ceph/osd/ceph-4/block
Dec 6 03:00:12 localhost ceph-osd[32665]: bdev(0x55e146912e00 /var/lib/ceph/osd/ceph-4/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-4/block failed: (22) Invalid argument
Dec 6 03:00:12 localhost ceph-osd[32665]: bdev(0x55e146912e00 /var/lib/ceph/osd/ceph-4/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 6 03:00:12 localhost ceph-osd[32665]: bluestore(/var/lib/ceph/osd/ceph-4) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06
Dec 6 03:00:12 localhost ceph-osd[32665]: bdev(0x55e146913180 /var/lib/ceph/osd/ceph-4/block) open path /var/lib/ceph/osd/ceph-4/block
Dec 6 03:00:12 localhost ceph-osd[32665]: bdev(0x55e146913180 /var/lib/ceph/osd/ceph-4/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-4/block failed: (22) Invalid argument
Dec 6 03:00:12 localhost ceph-osd[32665]: bdev(0x55e146913180 /var/lib/ceph/osd/ceph-4/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 6 03:00:12 localhost ceph-osd[32665]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-4/block size 7.0 GiB
Dec 6 03:00:12 localhost ceph-osd[32665]: bluefs mount
Dec 6 03:00:12 localhost ceph-osd[32665]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Dec 6 03:00:12 localhost ceph-osd[32665]: bluefs mount shared_bdev_used = 0
Dec 6 03:00:12 localhost ceph-osd[32665]: bluestore(/var/lib/ceph/osd/ceph-4) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: RocksDB version: 7.9.2
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Git sha 0
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Compile date 2025-09-23 00:00:00
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: DB SUMMARY
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: DB Session ID: 3EP43754JMQKP9Z6PGMN
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: CURRENT file: CURRENT
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: IDENTITY file: IDENTITY
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: MANIFEST file: MANIFEST-000032 size: 1007 Bytes
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: SST files in db.slow dir, Total Num: 0, files:
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ;
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.error_if_exists: 0
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.create_if_missing: 0
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.paranoid_checks: 1
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.flush_verify_memtable_count: 1
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.track_and_verify_wals_in_manifest: 0
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.env: 0x55e146ba6cb0
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.fs: LegacyFileSystem
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.info_log: 0x55e1478a4b80
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_file_opening_threads: 16
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.statistics: (nil)
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.use_fsync: 0
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_log_file_size: 0
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_manifest_file_size: 1073741824
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.log_file_time_to_roll: 0
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.keep_log_file_num: 1000
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.recycle_log_file_num: 0
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.allow_fallocate: 1
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.allow_mmap_reads: 0
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.allow_mmap_writes: 0
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.use_direct_reads: 0
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.create_missing_column_families: 0
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.db_log_dir:
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.wal_dir: db.wal
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.table_cache_numshardbits: 6
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.WAL_ttl_seconds: 0
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.WAL_size_limit_MB: 0
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.manifest_preallocation_size: 4194304
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.is_fd_close_on_exec: 1
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.advise_random_on_open: 1
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.db_write_buffer_size: 0
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.write_buffer_manager: 0x55e1468fc140
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.access_hint_on_compaction_start: 1
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.random_access_max_buffer_size: 1048576
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.use_adaptive_mutex: 0
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.rate_limiter: (nil)
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.wal_recovery_mode: 2
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.enable_thread_tracking: 0
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.enable_pipelined_write: 0
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.unordered_write: 0
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.allow_concurrent_memtable_write: 1
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.enable_write_thread_adaptive_yield: 1
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.write_thread_max_yield_usec: 100
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.write_thread_slow_yield_usec: 3
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.row_cache: None
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.wal_filter: None
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.avoid_flush_during_recovery: 0
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.allow_ingest_behind: 0
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.two_write_queues: 0
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.manual_wal_flush: 0
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.wal_compression: 0
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.atomic_flush: 0
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.avoid_unnecessary_blocking_io: 0
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.persist_stats_to_disk: 0
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.write_dbid_to_manifest: 0
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.log_readahead_size: 0
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.file_checksum_gen_factory: Unknown
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.best_efforts_recovery: 0
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bgerror_resume_count: 2147483647
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bgerror_resume_retry_interval: 1000000
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.allow_data_in_errors: 0
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.db_host_id: __hostname__
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.enforce_single_del_contracts: true
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_background_jobs: 4
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_background_compactions: -1
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_subcompactions: 1
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.avoid_flush_during_shutdown: 0
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.writable_file_max_buffer_size: 0
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.delayed_write_rate : 16777216
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_total_wal_size: 1073741824
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.stats_dump_period_sec: 600
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.stats_persist_period_sec: 600
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.stats_history_buffer_size: 1048576
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_open_files: -1
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bytes_per_sync: 0
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.wal_bytes_per_sync: 0
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.strict_bytes_per_sync: 0
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_readahead_size: 2097152
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_background_flushes: -1
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Compression algorithms supported:
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: #011kZSTD supported: 0
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: #011kXpressCompression supported: 0
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: #011kBZip2Compression supported: 0
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: #011kLZ4Compression supported: 1
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: #011kZlibCompression supported: 1
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: #011kLZ4HCCompression supported: 1
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: #011kSnappyCompression supported: 1
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 0, name: default)
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_filter: None
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_filter_factory: None
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.sst_partitioner_factory: None
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.memtable_factory: SkipListFactory
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.table_factory: BlockBasedTable
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e1478a4d40)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55e1468ea850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.write_buffer_size: 16777216
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_write_buffer_number: 64
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression: LZ4
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression: Disabled
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.prefix_extractor: nullptr
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.num_levels: 7
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.level: 32767
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.enabled: false
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.window_bits: -14
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.level: 32767
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.strategy: 0
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.parallel_threads: 1
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.enabled: false
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.level0_stop_writes_trigger: 36
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.target_file_size_base: 67108864
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.target_file_size_multiplier: 1
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_compaction_bytes: 1677721600
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.arena_block_size: 1048576
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.disable_auto_compactions: 0
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.table_properties_collectors:
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.inplace_update_support: 0
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.inplace_update_num_locks: 10000
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.memtable_whole_key_filtering: 0
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.memtable_huge_page_size: 0
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bloom_locality: 0
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_successive_merges: 0
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.optimize_filters_for_hits: 0
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.paranoid_file_checks: 0
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.force_consistency_checks: 1
Dec
6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.report_bg_io_stats: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.ttl: 2592000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.enable_blob_files: false Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.min_blob_size: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_file_size: 268435456 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_compression_type: NoCompression Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.enable_blob_garbage_collection: false Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_file_starting_level: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 1, name: m-0) Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]: Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.merge_operator: None Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_filter: None Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_filter_factory: None Dec 6 03:00:12 
localhost ceph-osd[32665]: rocksdb: Options.sst_partitioner_factory: None Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.memtable_factory: SkipListFactory Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.table_factory: BlockBasedTable Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e1478a4d40)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55e1468ea850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.write_buffer_size: 16777216 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_write_buffer_number: 64 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression: LZ4 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression: Disabled Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.prefix_extractor: nullptr Dec 6 03:00:12 localhost 
ceph-osd[32665]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.num_levels: 7 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.window_bits: -14 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.level: 32767 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.strategy: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 6 03:00:12 localhost 
ceph-osd[32665]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.enabled: false Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.target_file_size_base: 67108864 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.target_file_size_multiplier: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: 
Options.ignore_max_compaction_bytes_for_input: true Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.arena_block_size: 1048576 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.disable_auto_compactions: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.inplace_update_support: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: 
Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.memtable_huge_page_size: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bloom_locality: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_successive_merges: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.paranoid_file_checks: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.force_consistency_checks: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.report_bg_io_stats: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.ttl: 2592000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.enable_blob_files: false Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.min_blob_size: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_file_size: 268435456 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_compression_type: NoCompression Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.enable_blob_garbage_collection: false Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_file_starting_level: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: 
Options.experimental_mempurge_threshold: 0.000000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 2, name: m-1) Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]: Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.merge_operator: None Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_filter: None Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_filter_factory: None Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.sst_partitioner_factory: None Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.memtable_factory: SkipListFactory Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.table_factory: BlockBasedTable Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e1478a4d40)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55e1468ea850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 
read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.write_buffer_size: 16777216 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_write_buffer_number: 64 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression: LZ4 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression: Disabled Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.prefix_extractor: nullptr Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.num_levels: 7 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 6 03:00:12 
localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.window_bits: -14 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.level: 32767 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.strategy: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.enabled: false Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.target_file_size_base: 67108864 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.target_file_size_multiplier: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: 
Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.arena_block_size: 1048576 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.disable_auto_compactions: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.stop_style: 
kCompactionStopStyleTotalSize Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.inplace_update_support: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.memtable_huge_page_size: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bloom_locality: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_successive_merges: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.paranoid_file_checks: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.force_consistency_checks: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.report_bg_io_stats: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.ttl: 2592000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.enable_blob_files: false Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.min_blob_size: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_file_size: 268435456 Dec 6 
03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_compression_type: NoCompression Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.enable_blob_garbage_collection: false Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_file_starting_level: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 3, name: m-2) Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]: Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.merge_operator: None Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_filter: None Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_filter_factory: None Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.sst_partitioner_factory: None Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.memtable_factory: SkipListFactory Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.table_factory: BlockBasedTable Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e1478a4d40)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 
data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55e1468ea850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.write_buffer_size: 16777216 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_write_buffer_number: 64 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression: LZ4 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression: Disabled Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.prefix_extractor: nullptr Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.num_levels: 7 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 6 03:00:12 
localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.window_bits: -14 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.level: 32767 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.strategy: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.enabled: false Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.target_file_size_base: 67108864 Dec 6 03:00:12 localhost 
ceph-osd[32665]: rocksdb: Options.target_file_size_multiplier: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.arena_block_size: 1048576 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.disable_auto_compactions: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: 
Options.compaction_options_universal.size_ratio: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.inplace_update_support: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.memtable_huge_page_size: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bloom_locality: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_successive_merges: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.paranoid_file_checks: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.force_consistency_checks: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: 
Options.report_bg_io_stats: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.ttl: 2592000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.enable_blob_files: false Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.min_blob_size: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_file_size: 268435456 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_compression_type: NoCompression Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.enable_blob_garbage_collection: false Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_file_starting_level: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 4, name: p-0) Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]: Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.merge_operator: None Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_filter: None Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_filter_factory: None Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: 
Options.sst_partitioner_factory: None Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.memtable_factory: SkipListFactory Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.table_factory: BlockBasedTable Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e1478a4d40)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55e1468ea850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.write_buffer_size: 16777216 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_write_buffer_number: 64 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression: LZ4 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression: Disabled Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.prefix_extractor: nullptr Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: 
Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.num_levels: 7 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.window_bits: -14 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.level: 32767 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.strategy: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: 
Options.compression_opts.parallel_threads: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.enabled: false Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.target_file_size_base: 67108864 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.target_file_size_multiplier: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true 
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.arena_block_size: 1048576 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.disable_auto_compactions: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.inplace_update_support: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 6 03:00:12 localhost 
ceph-osd[32665]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.memtable_huge_page_size: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bloom_locality: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_successive_merges: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.paranoid_file_checks: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.force_consistency_checks: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.report_bg_io_stats: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.ttl: 2592000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.enable_blob_files: false Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.min_blob_size: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_file_size: 268435456 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_compression_type: NoCompression Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.enable_blob_garbage_collection: false Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_file_starting_level: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: 
[db/column_family.cc:578] Failed to register data paths of column family (id: 5, name: p-1) Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]: Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.merge_operator: None Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_filter: None Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_filter_factory: None Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.sst_partitioner_factory: None Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.memtable_factory: SkipListFactory Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.table_factory: BlockBasedTable Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e1478a4d40)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55e1468ea850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 
max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.write_buffer_size: 16777216 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_write_buffer_number: 64 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression: LZ4 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression: Disabled Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.prefix_extractor: nullptr Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.num_levels: 7 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true 
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.window_bits: -14 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.level: 32767 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.strategy: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.enabled: false Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.target_file_size_base: 67108864 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.target_file_size_multiplier: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: 
Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.arena_block_size: 1048576 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.disable_auto_compactions: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: 
Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.inplace_update_support: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.memtable_huge_page_size: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bloom_locality: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_successive_merges: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.paranoid_file_checks: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.force_consistency_checks: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.report_bg_io_stats: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.ttl: 2592000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.enable_blob_files: false Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.min_blob_size: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_file_size: 268435456 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_compression_type: 
NoCompression Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.enable_blob_garbage_collection: false Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_file_starting_level: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 6, name: p-2) Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]: Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.merge_operator: None Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_filter: None Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_filter_factory: None Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.sst_partitioner_factory: None Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.memtable_factory: SkipListFactory Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.table_factory: BlockBasedTable Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e1478a4d40)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 
0x55e1468ea850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.write_buffer_size: 16777216 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_write_buffer_number: 64 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression: LZ4 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression: Disabled Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.prefix_extractor: nullptr Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.num_levels: 7 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 6 03:00:12 
localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.window_bits: -14 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.level: 32767 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.strategy: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.enabled: false Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.target_file_size_base: 67108864 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.target_file_size_multiplier: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: 
rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.arena_block_size: 1048576 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.disable_auto_compactions: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: 
Options.compaction_options_universal.min_merge_width: 2 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.inplace_update_support: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.memtable_huge_page_size: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bloom_locality: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_successive_merges: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.paranoid_file_checks: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.force_consistency_checks: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.report_bg_io_stats: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.ttl: 2592000 Dec 6 03:00:12 
localhost ceph-osd[32665]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.enable_blob_files: false Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.min_blob_size: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_file_size: 268435456 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_compression_type: NoCompression Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.enable_blob_garbage_collection: false Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_file_starting_level: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 7, name: O-0) Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]: Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.merge_operator: None Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_filter: None Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_filter_factory: None Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.sst_partitioner_factory: None Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.memtable_factory: SkipListFactory 
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.table_factory: BlockBasedTable Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e1478a4f60)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55e1468ea2d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.write_buffer_size: 16777216 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_write_buffer_number: 64 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression: LZ4 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression: Disabled Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.prefix_extractor: nullptr Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.num_levels: 7 Dec 6 03:00:12 
localhost ceph-osd[32665]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.window_bits: -14 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.level: 32767 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.strategy: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.enabled: false Dec 6 03:00:12 
localhost ceph-osd[32665]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.target_file_size_base: 67108864 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.target_file_size_multiplier: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.arena_block_size: 1048576 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: 
Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.disable_auto_compactions: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.inplace_update_support: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: 
Options.memtable_huge_page_size: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bloom_locality: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_successive_merges: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.paranoid_file_checks: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.force_consistency_checks: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.report_bg_io_stats: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.ttl: 2592000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.enable_blob_files: false Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.min_blob_size: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_file_size: 268435456 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_compression_type: NoCompression Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.enable_blob_garbage_collection: false Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_file_starting_level: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 8, name: O-1) Dec 6 03:00:12 localhost 
ceph-osd[32665]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]: Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.merge_operator: None Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_filter: None Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_filter_factory: None Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.sst_partitioner_factory: None Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.memtable_factory: SkipListFactory Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.table_factory: BlockBasedTable Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e1478a4f60)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55e1468ea2d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 
num_file_reads_for_auto_readahead: 2 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.write_buffer_size: 16777216 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_write_buffer_number: 64 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression: LZ4 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression: Disabled Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.prefix_extractor: nullptr Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.num_levels: 7 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.window_bits: -14 Dec 6 03:00:12 
localhost ceph-osd[32665]: rocksdb: Options.compression_opts.level: 32767 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.strategy: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.enabled: false Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.target_file_size_base: 67108864 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.target_file_size_multiplier: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: 
Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.arena_block_size: 1048576 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.disable_auto_compactions: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: 
Options.compaction_options_fifo.allow_compaction: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.inplace_update_support: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.memtable_huge_page_size: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bloom_locality: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_successive_merges: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.paranoid_file_checks: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.force_consistency_checks: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.report_bg_io_stats: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.ttl: 2592000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.enable_blob_files: false Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.min_blob_size: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_file_size: 268435456 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_compression_type: NoCompression Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.enable_blob_garbage_collection: false Dec 6 03:00:12 
localhost ceph-osd[32665]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_file_starting_level: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 9, name: O-2) Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]: Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.merge_operator: None Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_filter: None Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_filter_factory: None Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.sst_partitioner_factory: None Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.memtable_factory: SkipListFactory Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.table_factory: BlockBasedTable Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e1478a4f60)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55e1468ea2d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 
4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.write_buffer_size: 16777216 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_write_buffer_number: 64 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression: LZ4 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression: Disabled Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.prefix_extractor: nullptr Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.num_levels: 7 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 6 03:00:12 localhost 
ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.window_bits: -14 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.level: 32767 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.strategy: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.enabled: false Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.target_file_size_base: 67108864 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.target_file_size_multiplier: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: 
Options.level_compaction_dynamic_level_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.arena_block_size: 1048576 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.disable_auto_compactions: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: 
Options.compaction_options_universal.max_merge_width: 4294967295 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.inplace_update_support: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.memtable_huge_page_size: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bloom_locality: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_successive_merges: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.paranoid_file_checks: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.force_consistency_checks: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.report_bg_io_stats: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.ttl: 2592000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 6 03:00:12 localhost 
ceph-osd[32665]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.enable_blob_files: false Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.min_blob_size: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_file_size: 268435456 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_compression_type: NoCompression Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.enable_blob_garbage_collection: false Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_file_starting_level: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 10, name: L) Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: [db/column_family.cc:635] #011(skipping printing options) Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 11, name: P) Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: [db/column_family.cc:635] #011(skipping printing options) Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 
0), log number is 5 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 2be23737-9346-4332-89ed-d1c619a8d7ac Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008012523491, "job": 1, "event": "recovery_started", "wal_files": [31]} Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008012523642, "job": 1, "event": "recovery_finished"} Dec 6 03:00:12 localhost ceph-osd[32665]: bluestore(/var/lib/ceph/osd/ceph-4) _open_db opened rocksdb path db options 
compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0 Dec 6 03:00:12 localhost ceph-osd[32665]: bluestore(/var/lib/ceph/osd/ceph-4) _open_super_meta old nid_max 1025 Dec 6 03:00:12 localhost ceph-osd[32665]: bluestore(/var/lib/ceph/osd/ceph-4) _open_super_meta old blobid_max 10240 Dec 6 03:00:12 localhost ceph-osd[32665]: bluestore(/var/lib/ceph/osd/ceph-4) _open_super_meta ondisk_format 4 compat_ondisk_format 3 Dec 6 03:00:12 localhost ceph-osd[32665]: bluestore(/var/lib/ceph/osd/ceph-4) _open_super_meta min_alloc_size 0x1000 Dec 6 03:00:12 localhost ceph-osd[32665]: freelist init Dec 6 03:00:12 localhost ceph-osd[32665]: freelist _read_cfg Dec 6 03:00:12 localhost ceph-osd[32665]: bluestore(/var/lib/ceph/osd/ceph-4) _init_alloc loaded 7.0 GiB in 2 extents, allocator type hybrid, capacity 0x1bfc00000, block size 0x1000, free 0x1bfbfd000, fragmentation 5.5e-07 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete Dec 6 03:00:12 localhost ceph-osd[32665]: bluefs umount Dec 6 03:00:12 localhost ceph-osd[32665]: bdev(0x55e146913180 /var/lib/ceph/osd/ceph-4/block) close Dec 6 03:00:12 localhost podman[32951]: Dec 6 03:00:12 localhost podman[32951]: 2025-12-06 08:00:12.744274781 +0000 UTC m=+0.086159086 container create 062a7570aa00d3c5da84e3785e99b86cf5c20a4926571b10855a1c3817f75650 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elated_montalcini, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, 
url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, com.redhat.component=rhceph-container, name=rhceph, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, GIT_BRANCH=main, release=1763362218, maintainer=Guillaume Abrioux , architecture=x86_64, vcs-type=git, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Dec 6 03:00:12 localhost ceph-osd[32665]: bdev(0x55e146913180 /var/lib/ceph/osd/ceph-4/block) open path /var/lib/ceph/osd/ceph-4/block Dec 6 03:00:12 localhost ceph-osd[32665]: bdev(0x55e146913180 /var/lib/ceph/osd/ceph-4/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-4/block failed: (22) Invalid argument Dec 6 03:00:12 localhost ceph-osd[32665]: bdev(0x55e146913180 /var/lib/ceph/osd/ceph-4/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported Dec 6 03:00:12 localhost ceph-osd[32665]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-4/block size 7.0 GiB Dec 6 03:00:12 localhost ceph-osd[32665]: bluefs mount Dec 6 03:00:12 localhost ceph-osd[32665]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000 Dec 6 03:00:12 localhost ceph-osd[32665]: bluefs mount shared_bdev_used = 4718592 Dec 6 03:00:12 localhost ceph-osd[32665]: bluestore(/var/lib/ceph/osd/ceph-4) 
_prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540 Dec 6 03:00:12 localhost systemd[1]: Started libpod-conmon-062a7570aa00d3c5da84e3785e99b86cf5c20a4926571b10855a1c3817f75650.scope. Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: RocksDB version: 7.9.2 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Git sha 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Compile date 2025-09-23 00:00:00 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: DB SUMMARY Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: DB Session ID: 3EP43754JMQKP9Z6PGMM Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: CURRENT file: CURRENT Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: IDENTITY file: IDENTITY Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: MANIFEST file: MANIFEST-000032 size: 1007 Bytes Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: SST files in db.slow dir, Total Num: 0, files: Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.error_if_exists: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.create_if_missing: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.paranoid_checks: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.flush_verify_memtable_count: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.track_and_verify_wals_in_manifest: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.env: 0x55e146956310 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.fs: LegacyFileSystem Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.info_log: 0x55e1478a5c80 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_file_opening_threads: 16 Dec 6 03:00:12 
localhost ceph-osd[32665]: rocksdb: Options.statistics: (nil) Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.use_fsync: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_log_file_size: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_manifest_file_size: 1073741824 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.log_file_time_to_roll: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.keep_log_file_num: 1000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.recycle_log_file_num: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.allow_fallocate: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.allow_mmap_reads: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.allow_mmap_writes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.use_direct_reads: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.create_missing_column_families: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.db_log_dir: Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.wal_dir: db.wal Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.table_cache_numshardbits: 6 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.WAL_ttl_seconds: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.WAL_size_limit_MB: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.manifest_preallocation_size: 4194304 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.is_fd_close_on_exec: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.advise_random_on_open: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.db_write_buffer_size: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.write_buffer_manager: 
0x55e1468fd540 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.access_hint_on_compaction_start: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.random_access_max_buffer_size: 1048576 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.use_adaptive_mutex: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.rate_limiter: (nil) Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.wal_recovery_mode: 2 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.enable_thread_tracking: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.enable_pipelined_write: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.unordered_write: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.allow_concurrent_memtable_write: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.enable_write_thread_adaptive_yield: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.write_thread_max_yield_usec: 100 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.write_thread_slow_yield_usec: 3 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.row_cache: None Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.wal_filter: None Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.avoid_flush_during_recovery: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.allow_ingest_behind: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.two_write_queues: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.manual_wal_flush: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.wal_compression: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.atomic_flush: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.avoid_unnecessary_blocking_io: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.persist_stats_to_disk: 0 Dec 6 03:00:12 
localhost ceph-osd[32665]: rocksdb: Options.write_dbid_to_manifest: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.log_readahead_size: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.file_checksum_gen_factory: Unknown Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.best_efforts_recovery: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bgerror_resume_count: 2147483647 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bgerror_resume_retry_interval: 1000000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.allow_data_in_errors: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.db_host_id: __hostname__ Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.enforce_single_del_contracts: true Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_background_jobs: 4 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_background_compactions: -1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_subcompactions: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.avoid_flush_during_shutdown: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.writable_file_max_buffer_size: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.delayed_write_rate : 16777216 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_total_wal_size: 1073741824 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.stats_dump_period_sec: 600 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.stats_persist_period_sec: 600 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.stats_history_buffer_size: 1048576 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_open_files: -1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bytes_per_sync: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: 
Options.wal_bytes_per_sync: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.strict_bytes_per_sync: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_readahead_size: 2097152 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_background_flushes: -1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Compression algorithms supported: Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: #011kZSTD supported: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: #011kXpressCompression supported: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: #011kBZip2Compression supported: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: #011kZSTDNotFinalCompression supported: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: #011kLZ4Compression supported: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: #011kZlibCompression supported: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: #011kLZ4HCCompression supported: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: #011kSnappyCompression supported: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Fast CRC32 supported: Supported on x86 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: DMutex implementation: pthread_mutex_t Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 0, name: default) Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.merge_operator: .T:int64_array.b:bitwise_xor Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_filter: None Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: 
Options.compaction_filter_factory: None Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.sst_partitioner_factory: None Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.memtable_factory: SkipListFactory Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.table_factory: BlockBasedTable Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e1478e5600)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55e1468ea2d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.write_buffer_size: 16777216 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_write_buffer_number: 64 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression: LZ4 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression: Disabled Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: 
Options.prefix_extractor: nullptr Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.num_levels: 7 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.window_bits: -14 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.level: 32767 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.strategy: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: 
Options.compression_opts.use_zstd_dict_trainer: true Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.enabled: false Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.target_file_size_base: 67108864 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.target_file_size_multiplier: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_compaction_bytes: 1677721600 
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.arena_block_size: 1048576 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.disable_auto_compactions: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.table_properties_collectors: Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.inplace_update_support: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 6 03:00:12 localhost 
ceph-osd[32665]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.memtable_huge_page_size: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bloom_locality: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_successive_merges: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.paranoid_file_checks: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.force_consistency_checks: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.report_bg_io_stats: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.ttl: 2592000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.enable_blob_files: false Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.min_blob_size: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_file_size: 268435456 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_compression_type: NoCompression Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.enable_blob_garbage_collection: false Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_file_starting_level: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: 
[db/column_family.cc:578] Failed to register data paths of column family (id: 1, name: m-0) Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]: Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.merge_operator: None Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_filter: None Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_filter_factory: None Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.sst_partitioner_factory: None Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.memtable_factory: SkipListFactory Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.table_factory: BlockBasedTable Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e1478e5600)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55e1468ea2d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 
max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.write_buffer_size: 16777216 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_write_buffer_number: 64 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression: LZ4 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression: Disabled Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.prefix_extractor: nullptr Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.num_levels: 7 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true 
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.window_bits: -14 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.level: 32767 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.strategy: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.enabled: false Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.target_file_size_base: 67108864 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.target_file_size_multiplier: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: 
Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.arena_block_size: 1048576 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.disable_auto_compactions: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: 
Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.inplace_update_support: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.memtable_huge_page_size: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bloom_locality: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_successive_merges: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.paranoid_file_checks: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.force_consistency_checks: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.report_bg_io_stats: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.ttl: 2592000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.enable_blob_files: false Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.min_blob_size: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_file_size: 268435456 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_compression_type: 
NoCompression Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.enable_blob_garbage_collection: false Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_file_starting_level: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 2, name: m-1) Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]: Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.merge_operator: None Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_filter: None Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_filter_factory: None Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.sst_partitioner_factory: None Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.memtable_factory: SkipListFactory Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.table_factory: BlockBasedTable Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e1478e5600)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 
0x55e1468ea2d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.write_buffer_size: 16777216 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_write_buffer_number: 64 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression: LZ4 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression: Disabled Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.prefix_extractor: nullptr Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.num_levels: 7 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 6 03:00:12 
localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.window_bits: -14 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.level: 32767 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.strategy: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.enabled: false Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.target_file_size_base: 67108864 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.target_file_size_multiplier: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: 
rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.arena_block_size: 1048576 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.disable_auto_compactions: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: 
Options.compaction_options_universal.min_merge_width: 2 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.inplace_update_support: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.memtable_huge_page_size: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bloom_locality: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_successive_merges: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.paranoid_file_checks: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.force_consistency_checks: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.report_bg_io_stats: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.ttl: 2592000 Dec 6 03:00:12 
localhost ceph-osd[32665]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.enable_blob_files: false Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.min_blob_size: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_file_size: 268435456 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_compression_type: NoCompression Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.enable_blob_garbage_collection: false Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_file_starting_level: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 3, name: m-2) Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]: Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.merge_operator: None Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_filter: None Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_filter_factory: None Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.sst_partitioner_factory: None Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.memtable_factory: SkipListFactory 
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.table_factory: BlockBasedTable Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e1478e5600)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55e1468ea2d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.write_buffer_size: 16777216 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_write_buffer_number: 64 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression: LZ4 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression: Disabled Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.prefix_extractor: nullptr Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.num_levels: 7 Dec 6 03:00:12 
localhost ceph-osd[32665]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.window_bits: -14 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.level: 32767 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.strategy: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.enabled: false Dec 6 03:00:12 
localhost ceph-osd[32665]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.target_file_size_base: 67108864 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.target_file_size_multiplier: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.arena_block_size: 1048576 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: 
Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.disable_auto_compactions: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.inplace_update_support: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: 
Options.memtable_huge_page_size: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bloom_locality: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_successive_merges: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.paranoid_file_checks: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.force_consistency_checks: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.report_bg_io_stats: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.ttl: 2592000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.enable_blob_files: false Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.min_blob_size: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_file_size: 268435456 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_compression_type: NoCompression Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.enable_blob_garbage_collection: false Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_file_starting_level: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 4, name: p-0) Dec 6 03:00:12 localhost 
ceph-osd[32665]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]: Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.merge_operator: None Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_filter: None Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_filter_factory: None Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.sst_partitioner_factory: None Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.memtable_factory: SkipListFactory Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.table_factory: BlockBasedTable Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e1478e5600)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55e1468ea2d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 
num_file_reads_for_auto_readahead: 2 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.write_buffer_size: 16777216 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_write_buffer_number: 64 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression: LZ4 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression: Disabled Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.prefix_extractor: nullptr Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.num_levels: 7 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.window_bits: -14 Dec 6 03:00:12 
localhost ceph-osd[32665]: rocksdb: Options.compression_opts.level: 32767 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.strategy: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.enabled: false Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.target_file_size_base: 67108864 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.target_file_size_multiplier: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: 
Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.arena_block_size: 1048576 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.disable_auto_compactions: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: 
Options.compaction_options_fifo.allow_compaction: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.inplace_update_support: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.memtable_huge_page_size: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bloom_locality: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_successive_merges: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.paranoid_file_checks: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.force_consistency_checks: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.report_bg_io_stats: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.ttl: 2592000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.enable_blob_files: false Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.min_blob_size: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_file_size: 268435456 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_compression_type: NoCompression Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.enable_blob_garbage_collection: false Dec 6 03:00:12 
localhost ceph-osd[32665]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 6 03:00:12 localhost podman[32951]: 2025-12-06 08:00:12.700724635 +0000 UTC m=+0.042608940 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_file_starting_level: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 5, name: p-1) Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]: Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.merge_operator: None Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_filter: None Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_filter_factory: None Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.sst_partitioner_factory: None Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.memtable_factory: SkipListFactory Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.table_factory: BlockBasedTable Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e1478e5600)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 
no_block_cache: 0#012 block_cache: 0x55e1468ea2d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.write_buffer_size: 16777216 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_write_buffer_number: 64 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression: LZ4 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression: Disabled Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.prefix_extractor: nullptr Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.num_levels: 7 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: 
Options.bottommost_compression_opts.strategy: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.window_bits: -14 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.level: 32767 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.strategy: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.enabled: false Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.target_file_size_base: 67108864 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: 
Options.target_file_size_multiplier: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.arena_block_size: 1048576 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.disable_auto_compactions: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: 
Options.compaction_options_universal.size_ratio: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.inplace_update_support: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.memtable_huge_page_size: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bloom_locality: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_successive_merges: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.paranoid_file_checks: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.force_consistency_checks: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: 
Options.report_bg_io_stats: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.ttl: 2592000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.enable_blob_files: false Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.min_blob_size: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_file_size: 268435456 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_compression_type: NoCompression Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.enable_blob_garbage_collection: false Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_file_starting_level: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 6, name: p-2) Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]: Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.merge_operator: None Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_filter: None Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_filter_factory: None Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: 
Options.sst_partitioner_factory: None Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.memtable_factory: SkipListFactory Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.table_factory: BlockBasedTable Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e1478e5600)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55e1468ea2d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.write_buffer_size: 16777216 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_write_buffer_number: 64 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression: LZ4 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression: Disabled Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.prefix_extractor: nullptr Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: 
Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.num_levels: 7 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.window_bits: -14 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.level: 32767 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.strategy: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: 
Options.compression_opts.parallel_threads: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.enabled: false Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.target_file_size_base: 67108864 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.target_file_size_multiplier: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true 
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.arena_block_size: 1048576 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.disable_auto_compactions: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.inplace_update_support: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 6 03:00:12 localhost 
ceph-osd[32665]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.memtable_huge_page_size: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bloom_locality: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_successive_merges: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.paranoid_file_checks: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.force_consistency_checks: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.report_bg_io_stats: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.ttl: 2592000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.enable_blob_files: false Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.min_blob_size: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_file_size: 268435456 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_compression_type: NoCompression Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.enable_blob_garbage_collection: false Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_file_starting_level: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: 
[db/column_family.cc:578] Failed to register data paths of column family (id: 7, name: O-0) Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]: Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.merge_operator: None Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_filter: None Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_filter_factory: None Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.sst_partitioner_factory: None Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.memtable_factory: SkipListFactory Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.table_factory: BlockBasedTable Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e1478e53c0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55e1468eb610#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 
max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.write_buffer_size: 16777216 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_write_buffer_number: 64 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression: LZ4 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression: Disabled Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.prefix_extractor: nullptr Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.num_levels: 7 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true 
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.window_bits: -14 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.level: 32767 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.strategy: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.enabled: false Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.target_file_size_base: 67108864 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.target_file_size_multiplier: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: 
Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.arena_block_size: 1048576 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.disable_auto_compactions: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: 
Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.inplace_update_support: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.memtable_huge_page_size: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bloom_locality: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_successive_merges: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.paranoid_file_checks: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.force_consistency_checks: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.report_bg_io_stats: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.ttl: 2592000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.enable_blob_files: false Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.min_blob_size: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_file_size: 268435456 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_compression_type: 
NoCompression Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.enable_blob_garbage_collection: false Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_file_starting_level: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 8, name: O-1) Dec 6 03:00:12 localhost systemd[1]: Started libcrun container. Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]: Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.merge_operator: None Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_filter: None Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_filter_factory: None Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.sst_partitioner_factory: None Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.memtable_factory: SkipListFactory Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.table_factory: BlockBasedTable Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e1478e53c0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 
0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55e1468eb610#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.write_buffer_size: 16777216 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_write_buffer_number: 64 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression: LZ4 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression: Disabled Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.prefix_extractor: nullptr Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.num_levels: 7 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: 
Options.bottommost_compression_opts.strategy: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.window_bits: -14 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.level: 32767 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.strategy: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.enabled: false Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.target_file_size_base: 67108864 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: 
Options.target_file_size_multiplier: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.arena_block_size: 1048576 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.disable_auto_compactions: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: 
Options.compaction_options_universal.size_ratio: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.inplace_update_support: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.memtable_huge_page_size: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bloom_locality: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_successive_merges: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.paranoid_file_checks: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.force_consistency_checks: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: 
Options.report_bg_io_stats: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.ttl: 2592000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.enable_blob_files: false Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.min_blob_size: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_file_size: 268435456 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_compression_type: NoCompression Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.enable_blob_garbage_collection: false Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_file_starting_level: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 9, name: O-2) Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]: Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.merge_operator: None Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_filter: None Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_filter_factory: None Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: 
Options.sst_partitioner_factory: None Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.memtable_factory: SkipListFactory Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.table_factory: BlockBasedTable Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e1478e53c0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55e1468eb610#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.write_buffer_size: 16777216 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_write_buffer_number: 64 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression: LZ4 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression: Disabled Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.prefix_extractor: nullptr Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: 
Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.num_levels: 7 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.window_bits: -14 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.level: 32767 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.strategy: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: 
Options.compression_opts.parallel_threads: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.enabled: false Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.target_file_size_base: 67108864 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.target_file_size_multiplier: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true 
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.arena_block_size: 1048576 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.disable_auto_compactions: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.inplace_update_support: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 6 03:00:12 localhost 
ceph-osd[32665]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.memtable_huge_page_size: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.bloom_locality: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.max_successive_merges: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.paranoid_file_checks: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.force_consistency_checks: 1 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.report_bg_io_stats: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.ttl: 2592000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.enable_blob_files: false Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.min_blob_size: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_file_size: 268435456 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_compression_type: NoCompression Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.enable_blob_garbage_collection: false Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.blob_file_starting_level: 0 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: 
[db/column_family.cc:578] Failed to register data paths of column family (id: 10, name: L) Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: [db/column_family.cc:635] #011(skipping printing options) Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 11, name: P) Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: [db/column_family.cc:635] #011(skipping printing options) Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5 Dec 6 03:00:12 localhost ceph-osd[32665]: 
rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 2be23737-9346-4332-89ed-d1c619a8d7ac Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008012804943, "job": 1, "event": "recovery_started", "wal_files": [31]} Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008012811285, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1261, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765008012, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": 
"2be23737-9346-4332-89ed-d1c619a8d7ac", "db_session_id": "3EP43754JMQKP9Z6PGMM", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}} Dec 6 03:00:12 localhost podman[32951]: 2025-12-06 08:00:12.830700026 +0000 UTC m=+0.172584331 container init 062a7570aa00d3c5da84e3785e99b86cf5c20a4926571b10855a1c3817f75650 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elated_montalcini, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, vendor=Red Hat, Inc., io.openshift.expose-services=, GIT_CLEAN=True, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=, vcs-type=git, release=1763362218, distribution-scope=public, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, ceph=True, maintainer=Guillaume Abrioux , vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) 
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008012833966, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1609, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765008012, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2be23737-9346-4332-89ed-d1c619a8d7ac", "db_session_id": "3EP43754JMQKP9Z6PGMM", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}} Dec 6 03:00:12 localhost elated_montalcini[32983]: 167 167 Dec 6 03:00:12 localhost systemd[1]: libpod-062a7570aa00d3c5da84e3785e99b86cf5c20a4926571b10855a1c3817f75650.scope: Deactivated successfully. 
Dec 6 03:00:12 localhost podman[32951]: 2025-12-06 08:00:12.863347496 +0000 UTC m=+0.205231781 container start 062a7570aa00d3c5da84e3785e99b86cf5c20a4926571b10855a1c3817f75650 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elated_montalcini, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, distribution-scope=public, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, version=7, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, RELEASE=main) Dec 6 03:00:12 localhost podman[32951]: 2025-12-06 08:00:12.863555444 +0000 UTC m=+0.205439799 container attach 062a7570aa00d3c5da84e3785e99b86cf5c20a4926571b10855a1c3817f75650 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elated_montalcini, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, 
GIT_CLEAN=True, RELEASE=main, description=Red Hat Ceph Storage 7, name=rhceph, version=7, vcs-type=git, io.buildah.version=1.41.4, distribution-scope=public, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git) Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008012864151, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1290, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765008012, "oldest_key_time": 0, "file_creation_time": 0, 
"slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2be23737-9346-4332-89ed-d1c619a8d7ac", "db_session_id": "3EP43754JMQKP9Z6PGMM", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}} Dec 6 03:00:12 localhost podman[32951]: 2025-12-06 08:00:12.865073433 +0000 UTC m=+0.206957728 container died 062a7570aa00d3c5da84e3785e99b86cf5c20a4926571b10855a1c3817f75650 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elated_montalcini, name=rhceph, ceph=True, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, release=1763362218, distribution-scope=public, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, RELEASE=main, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7) Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: [db/db_impl/db_impl_open.cc:1432] Failed to truncate log #31: IO error: No such file or directory: While open a file for appending: db.wal/000031.log: No such file or directory Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008012869926, "job": 1, "event": "recovery_finished"} Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: 
[db/version_set.cc:5047] Creating manifest 40 Dec 6 03:00:12 localhost podman[33153]: 2025-12-06 08:00:12.951830031 +0000 UTC m=+0.100713396 container remove 062a7570aa00d3c5da84e3785e99b86cf5c20a4926571b10855a1c3817f75650 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elated_montalcini, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , vcs-type=git, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, RELEASE=main, description=Red Hat Ceph Storage 7, version=7, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, com.redhat.component=rhceph-container, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, architecture=x86_64) Dec 6 03:00:12 localhost systemd[1]: libpod-conmon-062a7570aa00d3c5da84e3785e99b86cf5c20a4926571b10855a1c3817f75650.scope: Deactivated successfully. 
Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55e14773e380 Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: DB pointer 0x55e147803a00 Dec 6 03:00:12 localhost ceph-osd[32665]: bluestore(/var/lib/ceph/osd/ceph-4) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0 Dec 6 03:00:12 localhost ceph-osd[32665]: bluestore(/var/lib/ceph/osd/ceph-4) _upgrade_super from 4, latest 4 Dec 6 03:00:12 localhost ceph-osd[32665]: bluestore(/var/lib/ceph/osd/ceph-4) _upgrade_super done Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Dec 6 03:00:12 localhost ceph-osd[32665]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.2 total, 0.2 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 2/0 2.61 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012 Sum 2/0 2.61 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.2 total, 0.2 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55e1468ea2d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 
3.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-0] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.2 total, 0.2 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for 
pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55e1468ea2d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-1] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.2 total, 0.2 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB 
write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55e1468ea2d0#2 capacity: 460.80 MB usag Dec 6 03:00:12 localhost ceph-osd[32665]: /builddir/build/BUILD/ceph-18.2.1/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs Dec 6 03:00:12 localhost ceph-osd[32665]: /builddir/build/BUILD/ceph-18.2.1/src/cls/hello/cls_hello.cc:316: loading cls_hello Dec 6 03:00:12 localhost ceph-osd[32665]: _get_class not permitted to load lua Dec 6 03:00:12 localhost ceph-osd[32665]: _get_class not permitted to load sdk Dec 6 03:00:12 localhost ceph-osd[32665]: _get_class not permitted to load test_remote_reads Dec 6 03:00:12 localhost ceph-osd[32665]: osd.4 0 crush map has features 288232575208783872, adjusting msgr requires for clients Dec 6 03:00:12 localhost ceph-osd[32665]: osd.4 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons Dec 6 03:00:12 localhost ceph-osd[32665]: osd.4 0 crush map has features 288232575208783872, adjusting msgr requires for osds Dec 6 03:00:12 localhost ceph-osd[32665]: osd.4 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature Dec 6 03:00:12 localhost ceph-osd[32665]: osd.4 0 load_pgs Dec 6 03:00:12 localhost ceph-osd[32665]: osd.4 0 load_pgs opened 0 pgs Dec 6 03:00:12 localhost ceph-osd[32665]: osd.4 0 log_to_monitors true Dec 6 03:00:12 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-4[32661]: 2025-12-06T08:00:12.968+0000 7f2ecde47a80 -1 osd.4 0 log_to_monitors true Dec 6 03:00:13 localhost podman[33208]: Dec 6 03:00:13 localhost podman[33208]: 2025-12-06 08:00:13.152122957 +0000 UTC m=+0.076124924 container create a61415243badf521a125f5418ef4676d450c16091564e5e0c063250cb26e33c7 
(image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=trusting_ishizaka, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, vendor=Red Hat, Inc., release=1763362218, version=7, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, RELEASE=main, distribution-scope=public, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Dec 6 03:00:13 localhost systemd[1]: Started libpod-conmon-a61415243badf521a125f5418ef4676d450c16091564e5e0c063250cb26e33c7.scope. Dec 6 03:00:13 localhost systemd[1]: Started libcrun container. 
Dec 6 03:00:13 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56c7bb213dc1e6a106017d35c011284484d6206a185226214dd6db3c880dc55d/merged/rootfs supports timestamps until 2038 (0x7fffffff) Dec 6 03:00:13 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56c7bb213dc1e6a106017d35c011284484d6206a185226214dd6db3c880dc55d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Dec 6 03:00:13 localhost podman[33208]: 2025-12-06 08:00:13.120209106 +0000 UTC m=+0.044211093 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 6 03:00:13 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56c7bb213dc1e6a106017d35c011284484d6206a185226214dd6db3c880dc55d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Dec 6 03:00:13 localhost podman[33208]: 2025-12-06 08:00:13.241672615 +0000 UTC m=+0.165674582 container init a61415243badf521a125f5418ef4676d450c16091564e5e0c063250cb26e33c7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=trusting_ishizaka, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, GIT_CLEAN=True, GIT_BRANCH=main, name=rhceph, ceph=True, RELEASE=main, 
summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z) Dec 6 03:00:13 localhost podman[33208]: 2025-12-06 08:00:13.247865957 +0000 UTC m=+0.171867894 container start a61415243badf521a125f5418ef4676d450c16091564e5e0c063250cb26e33c7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=trusting_ishizaka, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, architecture=x86_64, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, com.redhat.component=rhceph-container, distribution-scope=public, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, maintainer=Guillaume Abrioux , GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., ceph=True, io.openshift.expose-services=) Dec 6 03:00:13 localhost podman[33208]: 2025-12-06 08:00:13.248030374 +0000 UTC m=+0.172032381 container attach a61415243badf521a125f5418ef4676d450c16091564e5e0c063250cb26e33c7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=trusting_ishizaka, architecture=x86_64, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., 
description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, name=rhceph, release=1763362218, version=7, GIT_CLEAN=True, vcs-type=git, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, io.openshift.expose-services=, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 6 03:00:13 localhost systemd[1]: var-lib-containers-storage-overlay-be233b13925594996adf79477244c69cf3ae9854042743bf672fc2db1adf5227-merged.mount: Deactivated successfully. 
Dec 6 03:00:13 localhost trusting_ishizaka[33223]: { Dec 6 03:00:13 localhost trusting_ishizaka[33223]: "1f710487-3a3c-4f3d-8622-d6fac6224470": { Dec 6 03:00:13 localhost trusting_ishizaka[33223]: "ceph_fsid": "1939e851-b10c-5c3b-9bb7-8e7f380233e8", Dec 6 03:00:13 localhost trusting_ishizaka[33223]: "device": "/dev/mapper/ceph_vg0-ceph_lv0", Dec 6 03:00:13 localhost trusting_ishizaka[33223]: "osd_id": 1, Dec 6 03:00:13 localhost trusting_ishizaka[33223]: "osd_uuid": "1f710487-3a3c-4f3d-8622-d6fac6224470", Dec 6 03:00:13 localhost trusting_ishizaka[33223]: "type": "bluestore" Dec 6 03:00:13 localhost trusting_ishizaka[33223]: }, Dec 6 03:00:13 localhost trusting_ishizaka[33223]: "876fe068-f1aa-42bd-a56b-91d35874dd8e": { Dec 6 03:00:13 localhost trusting_ishizaka[33223]: "ceph_fsid": "1939e851-b10c-5c3b-9bb7-8e7f380233e8", Dec 6 03:00:13 localhost trusting_ishizaka[33223]: "device": "/dev/mapper/ceph_vg1-ceph_lv1", Dec 6 03:00:13 localhost trusting_ishizaka[33223]: "osd_id": 4, Dec 6 03:00:13 localhost trusting_ishizaka[33223]: "osd_uuid": "876fe068-f1aa-42bd-a56b-91d35874dd8e", Dec 6 03:00:13 localhost trusting_ishizaka[33223]: "type": "bluestore" Dec 6 03:00:13 localhost trusting_ishizaka[33223]: } Dec 6 03:00:13 localhost trusting_ishizaka[33223]: } Dec 6 03:00:13 localhost systemd[1]: libpod-a61415243badf521a125f5418ef4676d450c16091564e5e0c063250cb26e33c7.scope: Deactivated successfully. 
Dec 6 03:00:13 localhost podman[33208]: 2025-12-06 08:00:13.792419997 +0000 UTC m=+0.716421944 container died a61415243badf521a125f5418ef4676d450c16091564e5e0c063250cb26e33c7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=trusting_ishizaka, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, CEPH_POINT_RELEASE=, GIT_CLEAN=True, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, RELEASE=main, GIT_BRANCH=main, com.redhat.component=rhceph-container, vcs-type=git, io.openshift.tags=rhceph ceph, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, release=1763362218, description=Red Hat Ceph Storage 7) Dec 6 03:00:13 localhost systemd[1]: tmp-crun.K2VloQ.mount: Deactivated successfully. Dec 6 03:00:13 localhost systemd[1]: var-lib-containers-storage-overlay-56c7bb213dc1e6a106017d35c011284484d6206a185226214dd6db3c880dc55d-merged.mount: Deactivated successfully. 
Dec 6 03:00:13 localhost podman[33259]: 2025-12-06 08:00:13.874097866 +0000 UTC m=+0.074613603 container remove a61415243badf521a125f5418ef4676d450c16091564e5e0c063250cb26e33c7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=trusting_ishizaka, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, version=7, GIT_CLEAN=True, vendor=Red Hat, Inc., RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, io.openshift.expose-services=, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , architecture=x86_64, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers) Dec 6 03:00:13 localhost systemd[1]: libpod-conmon-a61415243badf521a125f5418ef4676d450c16091564e5e0c063250cb26e33c7.scope: Deactivated successfully. 
Dec 6 03:00:13 localhost ceph-osd[32665]: log_channel(cluster) log [DBG] : purged_snaps scrub starts Dec 6 03:00:13 localhost ceph-osd[32665]: log_channel(cluster) log [DBG] : purged_snaps scrub ok Dec 6 03:00:14 localhost ceph-osd[31726]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 24.918 iops: 6379.037 elapsed_sec: 0.470 Dec 6 03:00:14 localhost ceph-osd[31726]: log_channel(cluster) log [WRN] : OSD bench result of 6379.037428 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd]. Dec 6 03:00:14 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-1[31722]: 2025-12-06T08:00:14.279+0000 7fea50c75640 -1 osd.1 0 waiting for initial osdmap Dec 6 03:00:14 localhost ceph-osd[31726]: osd.1 0 waiting for initial osdmap Dec 6 03:00:14 localhost ceph-osd[31726]: osd.1 11 crush map has features 288514050185494528, adjusting msgr requires for clients Dec 6 03:00:14 localhost ceph-osd[31726]: osd.1 11 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons Dec 6 03:00:14 localhost ceph-osd[31726]: osd.1 11 crush map has features 3314932999778484224, adjusting msgr requires for osds Dec 6 03:00:14 localhost ceph-osd[31726]: osd.1 11 check_osdmap_features require_osd_release unknown -> reef Dec 6 03:00:14 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-1[31722]: 2025-12-06T08:00:14.295+0000 7fea4ba8a640 -1 osd.1 11 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory Dec 6 03:00:14 localhost ceph-osd[31726]: osd.1 11 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory Dec 6 03:00:14 localhost ceph-osd[31726]: osd.1 11 set_numa_affinity not setting numa 
affinity Dec 6 03:00:14 localhost ceph-osd[31726]: osd.1 11 _collect_metadata loop3: no unique device id for loop3: fallback method has no model nor serial Dec 6 03:00:14 localhost ceph-osd[32665]: osd.4 0 done with init, starting boot process Dec 6 03:00:14 localhost ceph-osd[32665]: osd.4 0 start_boot Dec 6 03:00:14 localhost ceph-osd[32665]: osd.4 0 maybe_override_options_for_qos osd_max_backfills set to 1 Dec 6 03:00:14 localhost ceph-osd[32665]: osd.4 0 maybe_override_options_for_qos osd_recovery_max_active set to 0 Dec 6 03:00:14 localhost ceph-osd[32665]: osd.4 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3 Dec 6 03:00:14 localhost ceph-osd[32665]: osd.4 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10 Dec 6 03:00:14 localhost ceph-osd[32665]: osd.4 0 bench count 12288000 bsize 4 KiB Dec 6 03:00:14 localhost ceph-osd[31726]: osd.1 12 state: booting -> active Dec 6 03:00:16 localhost podman[33387]: 2025-12-06 08:00:16.135811679 +0000 UTC m=+0.085257431 container exec ecefb99ff092b207f760d60e8fd97152c624df0e1016549024435ae641ee2e06 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548789, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, distribution-scope=public, ceph=True, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, 
org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Dec 6 03:00:16 localhost podman[33387]: 2025-12-06 08:00:16.26606844 +0000 UTC m=+0.215514182 container exec_died ecefb99ff092b207f760d60e8fd97152c624df0e1016549024435ae641ee2e06 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548789, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, io.openshift.expose-services=, GIT_BRANCH=main, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., version=7, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, ceph=True) Dec 6 03:00:16 localhost ceph-osd[31726]: osd.1 14 crush map has features 288514051259236352, adjusting msgr requires for clients Dec 6 03:00:16 localhost ceph-osd[31726]: osd.1 14 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons Dec 6 03:00:16 
localhost ceph-osd[31726]: osd.1 14 crush map has features 3314933000852226048, adjusting msgr requires for osds Dec 6 03:00:17 localhost podman[33582]: Dec 6 03:00:17 localhost podman[33582]: 2025-12-06 08:00:17.959261024 +0000 UTC m=+0.080463543 container create 2814111e8d4784651e220743759d702252829c7367776880be297c040ecb485e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=optimistic_raman, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., name=rhceph, io.openshift.tags=rhceph ceph, distribution-scope=public, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux , vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, ceph=True, CEPH_POINT_RELEASE=, version=7, architecture=x86_64, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, GIT_CLEAN=True, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z) Dec 6 03:00:18 localhost systemd[1]: Started libpod-conmon-2814111e8d4784651e220743759d702252829c7367776880be297c040ecb485e.scope. Dec 6 03:00:18 localhost systemd[1]: Started libcrun container. 
Dec 6 03:00:18 localhost podman[33582]: 2025-12-06 08:00:17.9244452 +0000 UTC m=+0.045647709 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 6 03:00:18 localhost podman[33582]: 2025-12-06 08:00:18.034391396 +0000 UTC m=+0.155593895 container init 2814111e8d4784651e220743759d702252829c7367776880be297c040ecb485e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=optimistic_raman, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, com.redhat.component=rhceph-container, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, version=7, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, release=1763362218, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc.) Dec 6 03:00:18 localhost systemd[1]: tmp-crun.agFO7Q.mount: Deactivated successfully. 
Dec 6 03:00:18 localhost podman[33582]: 2025-12-06 08:00:18.048813352 +0000 UTC m=+0.170015841 container start 2814111e8d4784651e220743759d702252829c7367776880be297c040ecb485e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=optimistic_raman, GIT_CLEAN=True, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, RELEASE=main, name=rhceph, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, architecture=x86_64) Dec 6 03:00:18 localhost podman[33582]: 2025-12-06 08:00:18.049220438 +0000 UTC m=+0.170422927 container attach 2814111e8d4784651e220743759d702252829c7367776880be297c040ecb485e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=optimistic_raman, RELEASE=main, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., release=1763362218, 
com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, name=rhceph, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, version=7, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, architecture=x86_64, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, CEPH_POINT_RELEASE=) Dec 6 03:00:18 localhost optimistic_raman[33597]: 167 167 Dec 6 03:00:18 localhost systemd[1]: libpod-2814111e8d4784651e220743759d702252829c7367776880be297c040ecb485e.scope: Deactivated successfully. Dec 6 03:00:18 localhost podman[33582]: 2025-12-06 08:00:18.055586987 +0000 UTC m=+0.176789536 container died 2814111e8d4784651e220743759d702252829c7367776880be297c040ecb485e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=optimistic_raman, version=7, maintainer=Guillaume Abrioux , build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, architecture=x86_64, RELEASE=main, 
url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Dec 6 03:00:18 localhost podman[33602]: 2025-12-06 08:00:18.146099002 +0000 UTC m=+0.076785929 container remove 2814111e8d4784651e220743759d702252829c7367776880be297c040ecb485e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=optimistic_raman, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, distribution-scope=public, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, release=1763362218, ceph=True, RELEASE=main, architecture=x86_64, version=7, build-date=2025-11-26T19:44:28Z, vcs-type=git, vendor=Red Hat, Inc., GIT_CLEAN=True, maintainer=Guillaume Abrioux ) Dec 6 03:00:18 localhost systemd[1]: libpod-conmon-2814111e8d4784651e220743759d702252829c7367776880be297c040ecb485e.scope: Deactivated successfully. 
Dec 6 03:00:18 localhost podman[33621]: Dec 6 03:00:18 localhost podman[33621]: 2025-12-06 08:00:18.344889289 +0000 UTC m=+0.079977744 container create cd4cef547ad957d0bca97edb5522192192af225d69c0eb731063f49be89ec836 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=trusting_pasteur, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, io.openshift.expose-services=, GIT_BRANCH=main, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, distribution-scope=public, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, RELEASE=main, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph) Dec 6 03:00:18 localhost systemd[1]: Started libpod-conmon-cd4cef547ad957d0bca97edb5522192192af225d69c0eb731063f49be89ec836.scope. Dec 6 03:00:18 localhost systemd[1]: Started libcrun container. 
Dec 6 03:00:18 localhost podman[33621]: 2025-12-06 08:00:18.310388067 +0000 UTC m=+0.045476502 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 6 03:00:18 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5336e63345ed72e68d84ec37fd1934d59ba978f80334196b9943f6eb0118b02c/merged/rootfs supports timestamps until 2038 (0x7fffffff) Dec 6 03:00:18 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5336e63345ed72e68d84ec37fd1934d59ba978f80334196b9943f6eb0118b02c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Dec 6 03:00:18 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5336e63345ed72e68d84ec37fd1934d59ba978f80334196b9943f6eb0118b02c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Dec 6 03:00:18 localhost podman[33621]: 2025-12-06 08:00:18.437280888 +0000 UTC m=+0.172369313 container init cd4cef547ad957d0bca97edb5522192192af225d69c0eb731063f49be89ec836 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=trusting_pasteur, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, version=7, com.redhat.component=rhceph-container, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., distribution-scope=public, name=rhceph, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, 
GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, GIT_CLEAN=True, maintainer=Guillaume Abrioux , RELEASE=main) Dec 6 03:00:18 localhost podman[33621]: 2025-12-06 08:00:18.447613123 +0000 UTC m=+0.182701568 container start cd4cef547ad957d0bca97edb5522192192af225d69c0eb731063f49be89ec836 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=trusting_pasteur, GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, description=Red Hat Ceph Storage 7, architecture=x86_64, GIT_BRANCH=main, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, ceph=True, vcs-type=git, distribution-scope=public, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, RELEASE=main) Dec 6 03:00:18 localhost podman[33621]: 2025-12-06 08:00:18.447841591 +0000 UTC m=+0.182930016 container attach cd4cef547ad957d0bca97edb5522192192af225d69c0eb731063f49be89ec836 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=trusting_pasteur, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, maintainer=Guillaume Abrioux , io.buildah.version=1.41.4, 
org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, vcs-type=git, io.openshift.expose-services=, version=7, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, RELEASE=main, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, ceph=True, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers) Dec 6 03:00:18 localhost ceph-osd[32665]: osd.4 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 23.704 iops: 6068.149 elapsed_sec: 0.494 Dec 6 03:00:18 localhost ceph-osd[32665]: log_channel(cluster) log [WRN] : OSD bench result of 6068.149371 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.4. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd]. 
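The OSD bench warning above recommends measuring the device's real IOPS capacity with an external benchmark (e.g. Fio) and then overriding osd_mclock_max_capacity_iops_[hdd|ssd]. A minimal sketch of that workflow follows; the device path /dev/vdb and the 6000 IOPS figure are placeholders for illustration, not values taken from this log:

```shell
# Benchmark the OSD's backing device with fio (destructive on raw devices --
# run only against a device that holds no data you need).
fio --name=osd-bench --filename=/dev/vdb --rw=randwrite --bs=4k \
    --iodepth=32 --runtime=60 --direct=1 --ioengine=libaio

# Override the mclock IOPS capacity for osd.4 with the measured value
# (use ..._iops_hdd for rotational devices).
ceph config set osd.4 osd_mclock_max_capacity_iops_ssd 6000
```

Setting the override per-OSD (osd.4) rather than globally keeps other OSDs on their own measured or default capacities.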
Dec 6 03:00:18 localhost ceph-osd[32665]: osd.4 0 waiting for initial osdmap
Dec 6 03:00:18 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-4[32661]: 2025-12-06T08:00:18.510+0000 7f2ec9dc6640 -1 osd.4 0 waiting for initial osdmap
Dec 6 03:00:18 localhost ceph-osd[32665]: osd.4 15 crush map has features 288514051259236352, adjusting msgr requires for clients
Dec 6 03:00:18 localhost ceph-osd[32665]: osd.4 15 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Dec 6 03:00:18 localhost ceph-osd[32665]: osd.4 15 crush map has features 3314933000852226048, adjusting msgr requires for osds
Dec 6 03:00:18 localhost ceph-osd[32665]: osd.4 15 check_osdmap_features require_osd_release unknown -> reef
Dec 6 03:00:18 localhost ceph-osd[32665]: osd.4 15 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec 6 03:00:18 localhost ceph-osd[32665]: osd.4 15 set_numa_affinity not setting numa affinity
Dec 6 03:00:18 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-4[32661]: 2025-12-06T08:00:18.538+0000 7f2ec53f0640 -1 osd.4 15 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec 6 03:00:18 localhost ceph-osd[32665]: osd.4 15 _collect_metadata loop4: no unique device id for loop4: fallback method has no model nor serial
Dec 6 03:00:18 localhost ceph-osd[32665]: osd.4 16 state: booting -> active
Dec 6 03:00:18 localhost systemd[1]: var-lib-containers-storage-overlay-f8be4287a234b7198115c856160ae4c831433b3f562b3214a42524bb2137b5ca-merged.mount: Deactivated successfully.
Dec 6 03:00:19 localhost trusting_pasteur[33636]: [
Dec 6 03:00:19 localhost trusting_pasteur[33636]: {
Dec 6 03:00:19 localhost trusting_pasteur[33636]: "available": false,
Dec 6 03:00:19 localhost trusting_pasteur[33636]: "ceph_device": false,
Dec 6 03:00:19 localhost trusting_pasteur[33636]: "device_id": "QEMU_DVD-ROM_QM00001",
Dec 6 03:00:19 localhost trusting_pasteur[33636]: "lsm_data": {},
Dec 6 03:00:19 localhost trusting_pasteur[33636]: "lvs": [],
Dec 6 03:00:19 localhost trusting_pasteur[33636]: "path": "/dev/sr0",
Dec 6 03:00:19 localhost trusting_pasteur[33636]: "rejected_reasons": [
Dec 6 03:00:19 localhost trusting_pasteur[33636]: "Has a FileSystem",
Dec 6 03:00:19 localhost trusting_pasteur[33636]: "Insufficient space (<5GB)"
Dec 6 03:00:19 localhost trusting_pasteur[33636]: ],
Dec 6 03:00:19 localhost trusting_pasteur[33636]: "sys_api": {
Dec 6 03:00:19 localhost trusting_pasteur[33636]: "actuators": null,
Dec 6 03:00:19 localhost trusting_pasteur[33636]: "device_nodes": "sr0",
Dec 6 03:00:19 localhost trusting_pasteur[33636]: "human_readable_size": "482.00 KB",
Dec 6 03:00:19 localhost trusting_pasteur[33636]: "id_bus": "ata",
Dec 6 03:00:19 localhost trusting_pasteur[33636]: "model": "QEMU DVD-ROM",
Dec 6 03:00:19 localhost trusting_pasteur[33636]: "nr_requests": "2",
Dec 6 03:00:19 localhost trusting_pasteur[33636]: "partitions": {},
Dec 6 03:00:19 localhost trusting_pasteur[33636]: "path": "/dev/sr0",
Dec 6 03:00:19 localhost trusting_pasteur[33636]: "removable": "1",
Dec 6 03:00:19 localhost trusting_pasteur[33636]: "rev": "2.5+",
Dec 6 03:00:19 localhost trusting_pasteur[33636]: "ro": "0",
Dec 6 03:00:19 localhost trusting_pasteur[33636]: "rotational": "1",
Dec 6 03:00:19 localhost trusting_pasteur[33636]: "sas_address": "",
Dec 6 03:00:19 localhost trusting_pasteur[33636]: "sas_device_handle": "",
Dec 6 03:00:19 localhost trusting_pasteur[33636]: "scheduler_mode": "mq-deadline",
Dec 6 03:00:19 localhost trusting_pasteur[33636]: "sectors": 0,
Dec 6 03:00:19 localhost trusting_pasteur[33636]: "sectorsize": "2048", Dec 6 03:00:19 localhost trusting_pasteur[33636]: "size": 493568.0, Dec 6 03:00:19 localhost trusting_pasteur[33636]: "support_discard": "0", Dec 6 03:00:19 localhost trusting_pasteur[33636]: "type": "disk", Dec 6 03:00:19 localhost trusting_pasteur[33636]: "vendor": "QEMU" Dec 6 03:00:19 localhost trusting_pasteur[33636]: } Dec 6 03:00:19 localhost trusting_pasteur[33636]: } Dec 6 03:00:19 localhost trusting_pasteur[33636]: ] Dec 6 03:00:19 localhost systemd[1]: libpod-cd4cef547ad957d0bca97edb5522192192af225d69c0eb731063f49be89ec836.scope: Deactivated successfully. Dec 6 03:00:19 localhost podman[33621]: 2025-12-06 08:00:19.234701733 +0000 UTC m=+0.969790178 container died cd4cef547ad957d0bca97edb5522192192af225d69c0eb731063f49be89ec836 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=trusting_pasteur, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, name=rhceph, vendor=Red Hat, Inc., vcs-type=git, version=7, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, ceph=True, GIT_CLEAN=True, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux ) Dec 6 03:00:19 
localhost systemd[1]: tmp-crun.zsDxso.mount: Deactivated successfully. Dec 6 03:00:19 localhost systemd[1]: var-lib-containers-storage-overlay-5336e63345ed72e68d84ec37fd1934d59ba978f80334196b9943f6eb0118b02c-merged.mount: Deactivated successfully. Dec 6 03:00:19 localhost podman[34855]: 2025-12-06 08:00:19.325535311 +0000 UTC m=+0.079943962 container remove cd4cef547ad957d0bca97edb5522192192af225d69c0eb731063f49be89ec836 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=trusting_pasteur, vendor=Red Hat, Inc., name=rhceph, release=1763362218, CEPH_POINT_RELEASE=, RELEASE=main, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, version=7, io.k8s.description=Red Hat Ceph Storage 7) Dec 6 03:00:19 localhost systemd[1]: libpod-conmon-cd4cef547ad957d0bca97edb5522192192af225d69c0eb731063f49be89ec836.scope: Deactivated successfully. 
Dec 6 03:00:19 localhost ceph-osd[32665]: osd.4 pg_epoch: 16 pg[1.0( empty local-lis/les=0/0 n=0 ec=14/14 lis/c=0/0 les/c/f=0/0/0 sis=16) [3,4,2] r=1 lpr=16 pi=[14,16)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 6 03:00:26 localhost sshd[34884]: main: sshd: ssh-rsa algorithm is disabled Dec 6 03:00:27 localhost sshd[34886]: main: sshd: ssh-rsa algorithm is disabled Dec 6 03:00:28 localhost systemd[26209]: Starting Mark boot as successful... Dec 6 03:00:28 localhost systemd[26209]: Finished Mark boot as successful. Dec 6 03:00:28 localhost podman[34985]: 2025-12-06 08:00:28.733521866 +0000 UTC m=+0.100884923 container exec ecefb99ff092b207f760d60e8fd97152c624df0e1016549024435ae641ee2e06 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548789, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, release=1763362218, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, name=rhceph, com.redhat.component=rhceph-container, distribution-scope=public, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7) Dec 6 03:00:28 localhost podman[34985]: 2025-12-06 08:00:28.84112271 
+0000 UTC m=+0.208485757 container exec_died ecefb99ff092b207f760d60e8fd97152c624df0e1016549024435ae641ee2e06 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548789, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, vcs-type=git, release=1763362218, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, name=rhceph, ceph=True, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , architecture=x86_64, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, com.redhat.component=rhceph-container) Dec 6 03:00:31 localhost sshd[35062]: main: sshd: ssh-rsa algorithm is disabled Dec 6 03:00:45 localhost sshd[35065]: main: sshd: ssh-rsa algorithm is disabled Dec 6 03:01:15 localhost sshd[35078]: main: sshd: ssh-rsa algorithm is disabled Dec 6 03:01:30 localhost podman[35182]: 2025-12-06 08:01:30.546054814 +0000 UTC m=+0.081783801 container exec ecefb99ff092b207f760d60e8fd97152c624df0e1016549024435ae641ee2e06 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548789, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, RELEASE=main, maintainer=Guillaume Abrioux , architecture=x86_64, 
io.openshift.expose-services=, version=7, io.buildah.version=1.41.4, GIT_CLEAN=True, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Dec 6 03:01:30 localhost podman[35182]: 2025-12-06 08:01:30.654631292 +0000 UTC m=+0.190360239 container exec_died ecefb99ff092b207f760d60e8fd97152c624df0e1016549024435ae641ee2e06 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548789, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, version=7, GIT_BRANCH=main, com.redhat.component=rhceph-container, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, io.openshift.expose-services=, maintainer=Guillaume Abrioux , GIT_CLEAN=True, 
vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 6 03:01:31 localhost sshd[35278]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 03:01:34 localhost systemd-logind[766]: Session 13 logged out. Waiting for processes to exit.
Dec 6 03:01:34 localhost systemd[1]: session-13.scope: Deactivated successfully.
Dec 6 03:01:34 localhost systemd[1]: session-13.scope: Consumed 20.640s CPU time.
Dec 6 03:01:34 localhost systemd-logind[766]: Removed session 13.
Dec 6 03:01:35 localhost sshd[35329]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 03:01:58 localhost sshd[35331]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 03:02:05 localhost sshd[35333]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 03:02:37 localhost sshd[35412]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 03:02:47 localhost sshd[35414]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 03:02:49 localhost sshd[35416]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 03:03:25 localhost sshd[35418]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 03:03:27 localhost sshd[35420]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 03:03:46 localhost sshd[35499]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 03:03:59 localhost sshd[35501]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 03:04:16 localhost systemd[26209]: Created slice User Background Tasks Slice.
Dec 6 03:04:16 localhost systemd[26209]: Starting Cleanup of User's Temporary Files and Directories...
Dec 6 03:04:16 localhost systemd[26209]: Finished Cleanup of User's Temporary Files and Directories.
Dec 6 03:04:32 localhost sshd[35504]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 03:04:51 localhost sshd[35584]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 03:04:57 localhost sshd[35586]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 03:05:02 localhost sshd[35588]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 03:05:12 localhost sshd[35590]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 03:05:14 localhost sshd[35592]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 03:05:14 localhost systemd-logind[766]: New session 27 of user zuul.
Dec 6 03:05:14 localhost systemd[1]: Started Session 27 of User zuul.
Dec 6 03:05:14 localhost python3[35640]: ansible-ansible.legacy.ping Invoked with data=pong
Dec 6 03:05:15 localhost python3[35685]: ansible-setup Invoked with gather_subset=['!facter', '!ohai'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 6 03:05:15 localhost python3[35705]: ansible-user Invoked with name=tripleo-admin generate_ssh_key=False state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005548789.localdomain update_password=always uid=None group=None groups=None comment=None home=None shell=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Dec 6 03:05:16 localhost python3[35761]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/tripleo-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 6 03:05:16 localhost python3[35804]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/tripleo-admin mode=288 owner=root group=root
src=/home/zuul/.ansible/tmp/ansible-tmp-1765008316.2960074-66339-160939292098300/source _original_basename=tmpqfh2mgc1 follow=False checksum=b3e7ecdcc699d217c6b083a91b07208207813d93 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 03:05:17 localhost python3[35834]: ansible-file Invoked with path=/home/tripleo-admin state=directory owner=tripleo-admin group=tripleo-admin mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 03:05:17 localhost python3[35850]: ansible-file Invoked with path=/home/tripleo-admin/.ssh state=directory owner=tripleo-admin group=tripleo-admin mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 03:05:18 localhost python3[35866]: ansible-file Invoked with path=/home/tripleo-admin/.ssh/authorized_keys state=touch owner=tripleo-admin group=tripleo-admin mode=384 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 03:05:18 localhost python3[35882]: ansible-lineinfile Invoked with path=/home/tripleo-admin/.ssh/authorized_keys line=ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDVgIoETU+ZMXzSQYJdf7tKLhQsLaB9easlDHbhHsBFXd1+Axjoyg338dVOvCx68r/a15lecdlSwbLqd4GXxUOdHnWLa1I9u6bd6azOwE0Dd6ZjnquN3BRq9dLJXMlKHhXMddL6WHNfxT/JOL+gKp0CM74naUBGqrzV05qlb19n7xZJtmxVohAGGeQdFwQJBVoQ6yZOjcJZ5CpbWCs4pFXZT/31fA0KIAJkrzAeUGRRkQEnzXY1riF0RHwvXaNJ0ZoAYfT7q263Pd5gnQEmpiBirUBH6CXJn4lIQyNMyVRbnKWemW9P1kyv2bjZUPg2b1xWBE7MBTs/wMt1RjdO9p+sxtwOd2IQMf1t3JLa2p3xqgxtGTMugpJUBr1TWwdLoHl+eAMuWZwAWofLWICHUlPzyTN8L8acu0im2eR60FEl9XdUjp8DYCBGxhhIVx+xZxj6nTnNc5T7GJpJlCTF+9YPlDVrLg8y/YXly0BoOqr7p+RaqMAJnoZymNDbuu9V3Vs= zuul-build-sshkey#012 regexp=Generated by TripleO state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 03:05:19 localhost python3[35896]: ansible-ping Invoked with data=pong Dec 6 03:05:27 localhost sshd[35897]: main: sshd: ssh-rsa algorithm is disabled Dec 6 03:05:30 localhost sshd[35900]: main: sshd: ssh-rsa algorithm is disabled Dec 6 03:05:30 localhost systemd[1]: Created slice User Slice of UID 1003. Dec 6 03:05:30 localhost systemd[1]: Starting User Runtime Directory /run/user/1003... Dec 6 03:05:30 localhost systemd-logind[766]: New session 28 of user tripleo-admin. Dec 6 03:05:30 localhost systemd[1]: Finished User Runtime Directory /run/user/1003. Dec 6 03:05:30 localhost systemd[1]: Starting User Manager for UID 1003... Dec 6 03:05:30 localhost systemd[35904]: Queued start job for default target Main User Target. Dec 6 03:05:30 localhost systemd[35904]: Created slice User Application Slice. Dec 6 03:05:30 localhost systemd[35904]: Started Mark boot as successful after the user session has run 2 minutes. Dec 6 03:05:30 localhost systemd[35904]: Started Daily Cleanup of User's Temporary Directories. Dec 6 03:05:30 localhost systemd[35904]: Reached target Paths. 
Dec 6 03:05:30 localhost systemd[35904]: Reached target Timers.
Dec 6 03:05:30 localhost systemd[35904]: Starting D-Bus User Message Bus Socket...
Dec 6 03:05:30 localhost systemd[35904]: Starting Create User's Volatile Files and Directories...
Dec 6 03:05:30 localhost systemd[35904]: Finished Create User's Volatile Files and Directories.
Dec 6 03:05:30 localhost systemd[35904]: Listening on D-Bus User Message Bus Socket.
Dec 6 03:05:30 localhost systemd[35904]: Reached target Sockets.
Dec 6 03:05:30 localhost systemd[35904]: Reached target Basic System.
Dec 6 03:05:30 localhost systemd[35904]: Reached target Main User Target.
Dec 6 03:05:30 localhost systemd[35904]: Startup finished in 126ms.
Dec 6 03:05:30 localhost systemd[1]: Started User Manager for UID 1003.
Dec 6 03:05:30 localhost systemd[1]: Started Session 28 of User tripleo-admin.
Dec 6 03:05:31 localhost python3[35966]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all', 'min'] gather_timeout=45 filter=[] fact_path=/etc/ansible/facts.d
Dec 6 03:05:36 localhost python3[35986]: ansible-selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config
Dec 6 03:05:37 localhost python3[36031]: ansible-tempfile Invoked with state=file suffix=tmphosts prefix=ansible. path=None
Dec 6 03:05:37 localhost python3[36097]: ansible-ansible.legacy.copy Invoked with remote_src=True src=/etc/hosts dest=/tmp/ansible.zk0iba52tmphosts mode=preserve backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 03:05:38 localhost python3[36141]: ansible-blockinfile Invoked with state=absent path=/tmp/ansible.zk0iba52tmphosts block= marker=# {mark} marker_begin=HEAT_HOSTS_START - Do not edit manually within this section!
marker_end=HEAT_HOSTS_END create=False backup=False unsafe_writes=False insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 03:05:38 localhost sshd[36157]: main: sshd: ssh-rsa algorithm is disabled Dec 6 03:05:39 localhost python3[36173]: ansible-blockinfile Invoked with create=True path=/tmp/ansible.zk0iba52tmphosts insertbefore=BOF block=172.17.0.106 np0005548788.localdomain np0005548788#012172.18.0.106 np0005548788.storage.localdomain np0005548788.storage#012172.20.0.106 np0005548788.storagemgmt.localdomain np0005548788.storagemgmt#012172.17.0.106 np0005548788.internalapi.localdomain np0005548788.internalapi#012172.19.0.106 np0005548788.tenant.localdomain np0005548788.tenant#012192.168.122.106 np0005548788.ctlplane.localdomain np0005548788.ctlplane#012172.17.0.107 np0005548789.localdomain np0005548789#012172.18.0.107 np0005548789.storage.localdomain np0005548789.storage#012172.20.0.107 np0005548789.storagemgmt.localdomain np0005548789.storagemgmt#012172.17.0.107 np0005548789.internalapi.localdomain np0005548789.internalapi#012172.19.0.107 np0005548789.tenant.localdomain np0005548789.tenant#012192.168.122.107 np0005548789.ctlplane.localdomain np0005548789.ctlplane#012172.17.0.108 np0005548790.localdomain np0005548790#012172.18.0.108 np0005548790.storage.localdomain np0005548790.storage#012172.20.0.108 np0005548790.storagemgmt.localdomain np0005548790.storagemgmt#012172.17.0.108 np0005548790.internalapi.localdomain np0005548790.internalapi#012172.19.0.108 np0005548790.tenant.localdomain np0005548790.tenant#012192.168.122.108 np0005548790.ctlplane.localdomain np0005548790.ctlplane#012172.17.0.103 np0005548785.localdomain np0005548785#012172.18.0.103 np0005548785.storage.localdomain np0005548785.storage#012172.20.0.103 np0005548785.storagemgmt.localdomain np0005548785.storagemgmt#012172.17.0.103 np0005548785.internalapi.localdomain 
np0005548785.internalapi#012172.19.0.103 np0005548785.tenant.localdomain np0005548785.tenant#012192.168.122.103 np0005548785.ctlplane.localdomain np0005548785.ctlplane#012172.17.0.104 np0005548786.localdomain np0005548786#012172.18.0.104 np0005548786.storage.localdomain np0005548786.storage#012172.20.0.104 np0005548786.storagemgmt.localdomain np0005548786.storagemgmt#012172.17.0.104 np0005548786.internalapi.localdomain np0005548786.internalapi#012172.19.0.104 np0005548786.tenant.localdomain np0005548786.tenant#012192.168.122.104 np0005548786.ctlplane.localdomain np0005548786.ctlplane#012172.17.0.105 np0005548787.localdomain np0005548787#012172.18.0.105 np0005548787.storage.localdomain np0005548787.storage#012172.20.0.105 np0005548787.storagemgmt.localdomain np0005548787.storagemgmt#012172.17.0.105 np0005548787.internalapi.localdomain np0005548787.internalapi#012172.19.0.105 np0005548787.tenant.localdomain np0005548787.tenant#012192.168.122.105 np0005548787.ctlplane.localdomain np0005548787.ctlplane#012#012192.168.122.100 undercloud.ctlplane.localdomain undercloud.ctlplane#012192.168.122.99 overcloud.ctlplane.localdomain#012172.18.0.250 overcloud.storage.localdomain#012172.20.0.140 overcloud.storagemgmt.localdomain#012172.17.0.168 overcloud.internalapi.localdomain#012172.21.0.196 overcloud.localdomain#012 marker=# {mark} marker_begin=START_HOST_ENTRIES_FOR_STACK: overcloud marker_end=END_HOST_ENTRIES_FOR_STACK: overcloud state=present backup=False unsafe_writes=False insertafter=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 03:05:39 localhost python3[36190]: ansible-ansible.legacy.command Invoked with _raw_params=cp "/tmp/ansible.zk0iba52tmphosts" "/etc/hosts" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 03:05:40 localhost python3[36207]: ansible-file Invoked with 
path=/tmp/ansible.zk0iba52tmphosts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 03:05:41 localhost python3[36223]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q --whatprovides rhosp-release _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 03:05:42 localhost python3[36240]: ansible-ansible.legacy.dnf Invoked with name=['rhosp-release'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None Dec 6 03:05:42 localhost sshd[36242]: main: sshd: ssh-rsa algorithm is disabled Dec 6 03:05:44 localhost sshd[36245]: main: sshd: ssh-rsa algorithm is disabled Dec 6 03:05:47 localhost sshd[36248]: main: sshd: ssh-rsa algorithm is disabled Dec 6 03:05:47 localhost python3[36264]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q --whatprovides driverctl lvm2 jq nftables openvswitch openstack-heat-agents openstack-selinux os-net-config python3-libselinux python3-pyyaml puppet-tripleo rsync tmpwatch sysstat iproute-tc _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 03:05:48 localhost python3[36282]: ansible-ansible.legacy.dnf Invoked with 
name=['driverctl', 'lvm2', 'jq', 'nftables', 'openvswitch', 'openstack-heat-agents', 'openstack-selinux', 'os-net-config', 'python3-libselinux', 'python3-pyyaml', 'puppet-tripleo', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None Dec 6 03:05:50 localhost sshd[36284]: main: sshd: ssh-rsa algorithm is disabled Dec 6 03:06:07 localhost sshd[36470]: main: sshd: ssh-rsa algorithm is disabled Dec 6 03:06:09 localhost sshd[36472]: main: sshd: ssh-rsa algorithm is disabled Dec 6 03:06:14 localhost sshd[36752]: main: sshd: ssh-rsa algorithm is disabled Dec 6 03:06:24 localhost sshd[36811]: main: sshd: ssh-rsa algorithm is disabled Dec 6 03:06:34 localhost sshd[36867]: main: sshd: ssh-rsa algorithm is disabled Dec 6 03:06:59 localhost kernel: SELinux: Converting 2699 SID table entries... 
Dec 6 03:06:59 localhost kernel: SELinux: policy capability network_peer_controls=1
Dec 6 03:06:59 localhost kernel: SELinux: policy capability open_perms=1
Dec 6 03:06:59 localhost kernel: SELinux: policy capability extended_socket_class=1
Dec 6 03:06:59 localhost kernel: SELinux: policy capability always_check_network=0
Dec 6 03:06:59 localhost kernel: SELinux: policy capability cgroup_seclabel=1
Dec 6 03:06:59 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 6 03:06:59 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1
Dec 6 03:06:59 localhost dbus-broker-launch[756]: avc: op=load_policy lsm=selinux seqno=6 res=1
Dec 6 03:06:59 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 6 03:06:59 localhost systemd[1]: Starting man-db-cache-update.service...
Dec 6 03:06:59 localhost systemd[1]: Reloading.
Dec 6 03:06:59 localhost systemd-rc-local-generator[37337]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 6 03:06:59 localhost systemd-sysv-generator[37341]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 6 03:06:59 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 6 03:06:59 localhost systemd[1]: Queuing reload/restart jobs for marked units…
Dec 6 03:06:59 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 6 03:06:59 localhost systemd[1]: Finished man-db-cache-update.service.
Dec 6 03:06:59 localhost systemd[1]: run-rf5d9828cd0b34aefadac40a870236621.service: Deactivated successfully.
Dec 6 03:07:01 localhost python3[37737]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 jq nftables openvswitch openstack-heat-agents openstack-selinux os-net-config python3-libselinux python3-pyyaml puppet-tripleo rsync tmpwatch sysstat iproute-tc _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 6 03:07:03 localhost python3[37876]: ansible-ansible.legacy.systemd Invoked with name=openvswitch enabled=True state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 6 03:07:04 localhost systemd[1]: Reloading.
Dec 6 03:07:04 localhost systemd-rc-local-generator[37901]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 6 03:07:04 localhost systemd-sysv-generator[37907]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 6 03:07:04 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 6 03:07:05 localhost python3[37930]: ansible-file Invoked with path=/var/lib/heat-config/tripleo-config-download state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 03:07:05 localhost python3[37946]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q --whatprovides openstack-network-scripts _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 6 03:07:06 localhost python3[37963]: ansible-systemd Invoked with name=NetworkManager enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec 6 03:07:07 localhost python3[37981]: ansible-ini_file Invoked with path=/etc/NetworkManager/NetworkManager.conf state=present no_extra_spaces=True section=main option=dns value=none backup=True exclusive=True allow_no_value=False create=True unsafe_writes=False values=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 03:07:07 localhost python3[37999]: ansible-ini_file Invoked with path=/etc/NetworkManager/NetworkManager.conf state=present no_extra_spaces=True section=main option=rc-manager value=unmanaged backup=True exclusive=True allow_no_value=False create=True unsafe_writes=False values=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 03:07:08 localhost python3[38017]: ansible-ansible.legacy.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 6 03:07:08 localhost systemd[1]: Reloading Network Manager...
Dec 6 03:07:08 localhost NetworkManager[5973]: [1765008428.1407] audit: op="reload" arg="0" pid=38020 uid=0 result="success"
Dec 6 03:07:08 localhost NetworkManager[5973]: [1765008428.1416] config: signal: SIGHUP,config-files,values,values-user,no-auto-default,dns-mode,rc-manager (/etc/NetworkManager/NetworkManager.conf (lib: 00-server.conf) (run: 15-carrier-timeout.conf))
Dec 6 03:07:08 localhost NetworkManager[5973]: [1765008428.1417] dns-mgr: init: dns=none,systemd-resolved rc-manager=unmanaged
Dec 6 03:07:08 localhost systemd[1]: Reloaded Network Manager.
Dec 6 03:07:08 localhost python3[38036]: ansible-ansible.legacy.command Invoked with _raw_params=ln -f -s /usr/share/openstack-puppet/modules/* /etc/puppet/modules/ _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 6 03:07:09 localhost python3[38053]: ansible-stat Invoked with path=/usr/bin/ansible-playbook follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 6 03:07:09 localhost python3[38071]: ansible-stat Invoked with path=/usr/bin/ansible-playbook-3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 6 03:07:09 localhost python3[38087]: ansible-file Invoked with state=link src=/usr/bin/ansible-playbook path=/usr/bin/ansible-playbook-3 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 03:07:10 localhost python3[38103]: ansible-tempfile Invoked with state=file prefix=ansible. suffix= path=None
Dec 6 03:07:11 localhost python3[38119]: ansible-stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 6 03:07:11 localhost python3[38135]: ansible-blockinfile Invoked with path=/tmp/ansible.sop2m8_l block=[192.168.122.106]*,[np0005548788.ctlplane.localdomain]*,[172.17.0.106]*,[np0005548788.internalapi.localdomain]*,[172.18.0.106]*,[np0005548788.storage.localdomain]*,[172.20.0.106]*,[np0005548788.storagemgmt.localdomain]*,[172.19.0.106]*,[np0005548788.tenant.localdomain]*,[np0005548788.localdomain]*,[np0005548788]* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCxIoAQH9YZnGrAxYR5prFQwo6HY5mwdDjndb+bp2pwvtVLM4ABIdCi+K1wpbhOpoO7BsYOf/tdBqemvSDleNo/ZLh3v3MmoVtoTtQZqLWsAQWFgJCjcGUGB+H3CHhtbp706coVQMlGD+UQqpCBy8WamMB/Ldy+hSHbLHwzuMzj8tO90vUbEyuKgOuu/X3ZFa+Yjo/asQ+PTrVfirh1QvRQ9aK22xH89KbThA/1an4OjnNGLCP752auSQ894B21QLKfqaMGPlpbjU8Wr6MP4zKV9lUzpQiFr6IU6cd4CeIsJDj7FnAZuBSmi8ewgm/r4ZWkmCSlqw8OpMC5soJnm8Q4PJTIFvT9eyyFCh9xmQkMhzE8P332LtYjZ+vXhYFU14e04mOQx5UrtHN8uWJVbOAwtLNAcenHyRtCQGkAZ6f9q0OvSuYr+o3FhHhN5ABu32AKAD8YpkjLypi+PbaiKNQW8XzPAHHbV8CGZ4B09ZWeQY49VA0bPxIYBXd1mEBlXSE=#012[192.168.122.107]*,[np0005548789.ctlplane.localdomain]*,[172.17.0.107]*,[np0005548789.internalapi.localdomain]*,[172.18.0.107]*,[np0005548789.storage.localdomain]*,[172.20.0.107]*,[np0005548789.storagemgmt.localdomain]*,[172.19.0.107]*,[np0005548789.tenant.localdomain]*,[np0005548789.localdomain]*,[np0005548789]* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCwH3rhRTvOINLmLdbeRXeXOiMzz+IXEuW2cXYAe50Wcc3ikH2RVGirWQrwLc8hAoA7UFCXADqEMxPg6/fLsQkbP7kLOpUtam8nuXvgt8VHM4RFl5wh9EOgZ7DWgjA7s3r2eQMcBhv82CjVMLY/YjnLuRNXCsJAqeG32qcKedKH/huEFvkb49U/UnNlxi5BfNrMlY9n5UQXE2rd6EKwP58aP/qQ1ie3p8nwHc36/MJcfEIABlLaoHK/LxnadOFTh93OkqVi7A0VQsKSmKD64nABiN7ML0NReoyRIQI5r3Dawe8v2K9jCBh5jY88TVsYUJqgwoZSSU73sYGHX4uF+PY8wL7qwn6mCzA17GGYeB8Dy0N8qwDqah6kUjpcLwGp7YaKf0FIZPBKcLVMrX6Tnwxer1j3kOIt3tgLZoz3mMfstWfCyvt9t+GEW5MCE+MBkY4Eree3uK7pI+wJ3vFQS9XVP00hjNiLWYmoaaW6rl8xtw7QtGhzmjcWbOxaZvHWE5E=#012[192.168.122.108]*,[np0005548790.ctlplane.localdomain]*,[172.17.0.108]*,[np0005548790.internalapi.localdomain]*,[172.18.0.108]*,[np0005548790.storage.localdomain]*,[172.20.0.108]*,[np0005548790.storagemgmt.localdomain]*,[172.19.0.108]*,[np0005548790.tenant.localdomain]*,[np0005548790.localdomain]*,[np0005548790]* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDmdMCy44p73Ui+o09YQitqR9FILqoJ6AGYYutFVH6wn5m1j6oEoI4XgVFPR3UpG3SXdoiG7m0DRxC/WZZMpZbaQ3ZHbJJioRh1hV5uQtK5k2gtmS8uePng5UprbLncMXf+HIxNRvirU3r6zdgNGAroK0rN0nWESi/FNb2flu9Aw9JAsgIAAouW4IUoeyMGZ1AflhRhsWsQMstM9UEeGU+iTqV7al1URVCSq1finY99m+QC+Pftpd2C/+agboOIiVa63+D/RqqfYqh4C/PYfDbssYjcZzk3P90+HQ6uMKexX3HRnFbyje4eLSBHC0pjr/4pNfk/eSpdHeyMAPsP+QlBztdcPj9OnjcmT9ymeJRKF7GwNIWg3Pn9L2yY50d8l9Zu6rNIDW786XNcbm88yHdCHA5FE1A8XTWQRQ3eUSUsmsvf03pExAouRM4Fj8dvCu6wzG2SuyWqmdT5yCNrUG0e1CeE6PcfTLBeS5CJAwn5HM8aUndQQldWmaUbMPL5Jis=#012[192.168.122.103]*,[np0005548785.ctlplane.localdomain]*,[172.17.0.103]*,[np0005548785.internalapi.localdomain]*,[172.18.0.103]*,[np0005548785.storage.localdomain]*,[172.20.0.103]*,[np0005548785.storagemgmt.localdomain]*,[172.19.0.103]*,[np0005548785.tenant.localdomain]*,[np0005548785.localdomain]*,[np0005548785]* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC89JzJHuRLDUgmU66VPdPVwYLrvslBwa5i2QfiUzrnpt1lKz8ayq6QMRy5y5GgfjQQhX/YZiAjUSoogVsYDkoDaImXdtfQHFlFMLTlJPiYcA/cGAwMAE/vifpWoztBHUXkJ5YWUojkXzGoR8d7ESx/tTLG/9QrQDsW6JcV18mcFCQZdeWYWGWdLn6ynmQOZ0N4U6mYK1FqE+GKgP6L9PEjkC1ePo81AnYcdQ5Z1IETdcCcJytdvvxH/Zie1PiAaMAgMYhsqu7+DZRRTvg+cEMw3mRVuodIyQEbpZs8MjR3itViRfZ+UqYi6uKDnz1viLL0aACaYhOLzrE7bQ6Sl4j1MnMrWncUOv3Sq2fus+Y6oYmed84E6HUNljte7vVP9jwPclbCAmj5WuC/Av9dSqqHEpPRbKJ4tAuBrO2LBKS7J62FjRYiY807V1viyxUgjK5FmsQyfVr3/YOirluSx54e4XwxxDrAjtrd0x68H7/Mt6HP/79cWKaVbC7XUckYRmE=#012[192.168.122.104]*,[np0005548786.ctlplane.localdomain]*,[172.17.0.104]*,[np0005548786.internalapi.localdomain]*,[172.18.0.104]*,[np0005548786.storage.localdomain]*,[172.20.0.104]*,[np0005548786.storagemgmt.localdomain]*,[172.19.0.104]*,[np0005548786.tenant.localdomain]*,[np0005548786.localdomain]*,[np0005548786]* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDURzBA/aIGrwPgaIApy0UCTi4wdQhfDEx0QfkSAIn0ZptZcOkaR8BWtl9GijRPEp++Ep4qU04JcwHO1ZULd2UnCdDeg1Imwnf7x9HQBjAr0mH+tE0t4MBLtBbrk8Ep5ggyKATK1CvEl3NuGIS4gSSUWxzkR74Iju/GtrEMuVnMSsOw+auBofiv1ne4zyXqQWZORiK32DSolw1KyXGLyqG+JOpl3Kza5o79S1KUghfRzskZMm/AxFYciPmg4EQK/jL9Izj7qq3v8MaL8baeyqNlPaaRKCh+pkZlYtoPzDhe+vn/jwnDmQgqC1Bh+dkNiKEVlWz3mxoiMoeLY3jP/tMF2M4M8puGakPc2sqJxk1++Tv/lFRO3zBS+V2kECKI5DtQI6XThfLYXxIQl5SHr4yGEoxhMNt6YNQPLp6lg30kHO24YyNNA7LPFYYoOGUCaq5ZVUCF9lagMxcgkN0Bs+ZZqeni+53RqxoutiRZ0m9pIiqxGjrJjbNFXmofgfDBcUE=#012[192.168.122.105]*,[np0005548787.ctlplane.localdomain]*,[172.17.0.105]*,[np0005548787.internalapi.localdomain]*,[172.18.0.105]*,[np0005548787.storage.localdomain]*,[172.20.0.105]*,[np0005548787.storagemgmt.localdomain]*,[172.19.0.105]*,[np0005548787.tenant.localdomain]*,[np0005548787.localdomain]*,[np0005548787]* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDXe0UZ2kJKcvYaHSnjIOf3QqkGhArLo32nvDm8Pl8ZVNWfdRV8R+e17etAicDq//fxWC+U9jiHp4qI6/0Jm64rPocmJKaA+r79sNpv+598NlGtVUfTYQ34Ze9bgaPkjAwKfPNrzjSDChyfkys4Hm0J7ttog5rvMcuRelxkFmoonOcuzBC+9ufI6qld7br5w4WDookwamkefbMCiwAZxrw2bSjoTu7/TEFbt7SM0lUIdqP5WvxpWK52OkjnakQ0BL4QHdRYz1kBx/vS0TFxXb2pMO291dfkxDl3H2oXXZZYK/LWy3nZyJEX+mD5J6WOEs5HC5GQQ+CNEV0wa2e/gJA7KBsyL5T6RBtH8id22sBHZkzcaDhUz1ZABGAiOx4rdrr4YFFFy/u00nX3ZCuRBPXYh37Pafl7GXcSKyhTmkCZI0591RdNmb1duh9ZIObRmPVp2+WIheAFvS7EU4B0+ZjAEbDJgiSa9VlUrlRFX0ajcFHR8FnwNRcoERO3A3h4/Tc=#012 create=True state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 03:07:12 localhost python3[38151]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.sop2m8_l' > /etc/ssh/ssh_known_hosts _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 6 03:07:12 localhost python3[38169]: ansible-file Invoked with path=/tmp/ansible.sop2m8_l state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 03:07:13 localhost python3[38185]: ansible-file Invoked with path=/var/log/journal state=directory mode=0750 owner=root group=root setype=var_log_t recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 6 03:07:13 localhost python3[38201]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-active cloud-init.service || systemctl is-enabled cloud-init.service _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 6 03:07:14 localhost python3[38219]: ansible-ansible.legacy.command Invoked with _raw_params=cat /proc/cmdline | grep -q cloud-init=disabled _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 6 03:07:14 localhost python3[38238]: ansible-community.general.cloud_init_data_facts Invoked with filter=status
Dec 6 03:07:17 localhost python3[38375]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q --whatprovides tuned tuned-profiles-cpu-partitioning _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 6 03:07:17 localhost python3[38392]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 6 03:07:20 localhost dbus-broker-launch[752]: Noticed file-system modification, trigger reload.
Dec 6 03:07:20 localhost dbus-broker-launch[752]: Noticed file-system modification, trigger reload.
Dec 6 03:07:21 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 6 03:07:21 localhost systemd[1]: Starting man-db-cache-update.service...
Dec 6 03:07:21 localhost systemd[1]: Reloading.
Dec 6 03:07:21 localhost systemd-sysv-generator[38465]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 6 03:07:21 localhost systemd-rc-local-generator[38457]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 6 03:07:21 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 6 03:07:21 localhost systemd[1]: Queuing reload/restart jobs for marked units…
Dec 6 03:07:21 localhost systemd[1]: Stopping Dynamic System Tuning Daemon...
Dec 6 03:07:21 localhost systemd[1]: tuned.service: Deactivated successfully.
Dec 6 03:07:21 localhost systemd[1]: Stopped Dynamic System Tuning Daemon.
Dec 6 03:07:21 localhost systemd[1]: tuned.service: Consumed 1.780s CPU time.
Dec 6 03:07:21 localhost systemd[1]: Starting Dynamic System Tuning Daemon...
Dec 6 03:07:21 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 6 03:07:21 localhost systemd[1]: Finished man-db-cache-update.service.
Dec 6 03:07:21 localhost systemd[1]: run-raabb5943b14e4279a74ef3a8ef70db1b.service: Deactivated successfully.
Dec 6 03:07:22 localhost systemd[1]: Started Dynamic System Tuning Daemon.
Dec 6 03:07:22 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 6 03:07:22 localhost systemd[1]: Starting man-db-cache-update.service...
Dec 6 03:07:23 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 6 03:07:23 localhost systemd[1]: Finished man-db-cache-update.service.
Dec 6 03:07:23 localhost systemd[1]: run-r674fb6b9d2bb4d4f87c7edf8ecbfa6ec.service: Deactivated successfully.
Dec 6 03:07:24 localhost python3[38829]: ansible-systemd Invoked with name=tuned state=restarted enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 6 03:07:24 localhost systemd[1]: Stopping Dynamic System Tuning Daemon...
Dec 6 03:07:24 localhost systemd[1]: tuned.service: Deactivated successfully.
Dec 6 03:07:24 localhost systemd[1]: Stopped Dynamic System Tuning Daemon.
Dec 6 03:07:24 localhost systemd[1]: Starting Dynamic System Tuning Daemon...
Dec 6 03:07:25 localhost systemd[1]: Started Dynamic System Tuning Daemon.
Dec 6 03:07:26 localhost python3[39024]: ansible-ansible.legacy.command Invoked with _raw_params=which tuned-adm _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 6 03:07:26 localhost python3[39041]: ansible-slurp Invoked with src=/etc/tuned/active_profile
Dec 6 03:07:27 localhost python3[39057]: ansible-stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 6 03:07:27 localhost python3[39073]: ansible-ansible.legacy.command Invoked with _raw_params=tuned-adm profile throughput-performance _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 6 03:07:29 localhost python3[39093]: ansible-ansible.legacy.command Invoked with _raw_params=cat /proc/cmdline _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 6 03:07:30 localhost python3[39110]: ansible-stat Invoked with path=/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 6 03:07:32 localhost python3[39126]: ansible-replace Invoked with regexp=TRIPLEO_HEAT_TEMPLATE_KERNEL_ARGS dest=/etc/default/grub replace= path=/etc/default/grub backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 03:07:37 localhost python3[39142]: ansible-file Invoked with path=/etc/puppet/hieradata state=directory mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 03:07:37 localhost sshd[39143]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 03:07:38 localhost python3[39192]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hiera.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 6 03:07:38 localhost python3[39237]: ansible-ansible.legacy.copy Invoked with mode=384 dest=/etc/puppet/hiera.yaml src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765008457.875438-71089-216537416560304/source _original_basename=tmpr3ua795s follow=False checksum=aaf3699defba931d532f4955ae152f505046749a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 03:07:38 localhost python3[39267]: ansible-file Invoked with src=/etc/puppet/hiera.yaml dest=/etc/hiera.yaml state=link force=True path=/etc/hiera.yaml recurse=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 03:07:39 localhost python3[39315]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/all_nodes.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 6 03:07:39 localhost python3[39358]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765008459.3138149-71179-167077286353394/source dest=/etc/puppet/hieradata/all_nodes.json _original_basename=overcloud.json follow=False checksum=c7cc1670a1e268d7901b4353362279cc1f651214 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 03:07:40 localhost python3[39465]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/bootstrap_node.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 6 03:07:40 localhost python3[39525]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765008460.2215257-71240-53134192445290/source dest=/etc/puppet/hieradata/bootstrap_node.json mode=None follow=False _original_basename=bootstrap_node.j2 checksum=8c98a1379d65c02b867387467a21d26fe82a1c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 03:07:41 localhost python3[39602]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/vip_data.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 6 03:07:41 localhost python3[39645]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765008461.0989947-71240-191536469027122/source dest=/etc/puppet/hieradata/vip_data.json mode=None follow=False _original_basename=vip_data.j2 checksum=2906872dac8eb33feea0b6fc0243b65109687e47 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 03:07:42 localhost python3[39707]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/net_ip_map.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 6 03:07:42 localhost python3[39750]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765008462.029945-71240-177597842225641/source dest=/etc/puppet/hieradata/net_ip_map.json mode=None follow=False _original_basename=net_ip_map.j2 checksum=175c760950d63a47f443f25b58088dba962f090b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 03:07:43 localhost python3[39812]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/cloud_domain.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 6 03:07:43 localhost python3[39855]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765008462.997107-71240-160589447451761/source dest=/etc/puppet/hieradata/cloud_domain.json mode=None follow=False _original_basename=cloud_domain.j2 checksum=5dd835a63e6a03d74797c2e2eadf4bea1cecd9d9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 03:07:44 localhost python3[39917]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/fqdn.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 6 03:07:44 localhost python3[39960]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765008463.8333857-71240-118440307322738/source dest=/etc/puppet/hieradata/fqdn.json mode=None follow=False _original_basename=fqdn.j2 checksum=51d477f907146168895fd1905f3827c38c3a4658 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 03:07:44 localhost python3[40022]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/service_names.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 6 03:07:45 localhost python3[40065]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765008464.6760578-71240-156811703805453/source dest=/etc/puppet/hieradata/service_names.json mode=None follow=False _original_basename=service_names.j2 checksum=ff586b96402d8ae133745cf06f17e772b2f22d52 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 03:07:45 localhost python3[40127]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/service_configs.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 6 03:07:46 localhost sshd[40170]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 03:07:46 localhost python3[40172]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765008465.5660124-71240-40301974843610/source dest=/etc/puppet/hieradata/service_configs.json mode=None follow=False _original_basename=service_configs.j2 checksum=955531133cc86a259eb018c78aadbdeb821782e0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 03:07:46 localhost python3[40234]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/extraconfig.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 6 03:07:47 localhost python3[40277]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765008466.441579-71240-198437280903914/source dest=/etc/puppet/hieradata/extraconfig.json mode=None follow=False _original_basename=extraconfig.j2 checksum=5f36b2ea290645ee34d943220a14b54ee5ea5be5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 03:07:47 localhost python3[40339]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/role_extraconfig.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 6 03:07:47 localhost python3[40382]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765008467.2952452-71240-196122356471459/source dest=/etc/puppet/hieradata/role_extraconfig.json mode=None follow=False _original_basename=role_extraconfig.j2 checksum=34875968bf996542162e620523f9dcfb3deac331 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None
group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 03:07:48 localhost python3[40444]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/ovn_chassis_mac_map.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 6 03:07:48 localhost python3[40487]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765008468.1584904-71240-275681376168253/source dest=/etc/puppet/hieradata/ovn_chassis_mac_map.json mode=None follow=False _original_basename=ovn_chassis_mac_map.j2 checksum=e22fb087c209a147f48a5b0777daca8567166409 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 03:07:49 localhost python3[40517]: ansible-stat Invoked with path={'src': '/etc/puppet/hieradata/ansible_managed.json'} follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 6 03:07:50 localhost python3[40565]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/ansible_managed.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 6 03:07:50 localhost python3[40608]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/ansible_managed.json owner=root group=root mode=0644 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765008470.0010912-71883-178525614025385/source _original_basename=tmpav3x0jxx follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 03:07:55 localhost python3[40638]: ansible-setup Invoked with 
gather_subset=['!all', '!min', 'network'] filter=['ansible_default_ipv4'] gather_timeout=10 fact_path=/etc/ansible/facts.d Dec 6 03:07:55 localhost python3[40699]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -c 5 38.102.83.1 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 03:08:00 localhost python3[40716]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -c 5 192.168.122.10 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 03:08:05 localhost python3[40733]: ansible-ansible.legacy.command Invoked with _raw_params=INT=$(ip ro get 192.168.122.106 | head -1 | sed -nr "s/.* dev (\w+) .*/\1/p")#012MTU=$(cat /sys/class/net/${INT}/mtu 2>/dev/null || echo "0")#012echo "$INT $MTU"#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 03:08:05 localhost python3[40756]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -c 5 192.168.122.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 03:08:05 localhost sshd[40758]: main: sshd: ssh-rsa algorithm is disabled Dec 6 03:08:10 localhost python3[40775]: ansible-ansible.legacy.command Invoked with _raw_params=INT=$(ip ro get 172.18.0.106 | head -1 | sed -nr "s/.* dev (\w+) .*/\1/p")#012MTU=$(cat /sys/class/net/${INT}/mtu 2>/dev/null || echo "0")#012echo "$INT $MTU"#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 03:08:10 localhost python3[40798]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -c 5 172.18.0.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None 
chdir=None executable=None creates=None removes=None stdin=None Dec 6 03:08:15 localhost python3[40815]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -s 1472 -c 5 172.18.0.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 03:08:16 localhost systemd[35904]: Starting Mark boot as successful... Dec 6 03:08:16 localhost systemd[35904]: Finished Mark boot as successful. Dec 6 03:08:19 localhost python3[40833]: ansible-ansible.legacy.command Invoked with _raw_params=INT=$(ip ro get 172.20.0.106 | head -1 | sed -nr "s/.* dev (\w+) .*/\1/p")#012MTU=$(cat /sys/class/net/${INT}/mtu 2>/dev/null || echo "0")#012echo "$INT $MTU"#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 03:08:20 localhost python3[40856]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -c 5 172.20.0.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 03:08:24 localhost python3[40873]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -s 1472 -c 5 172.20.0.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 03:08:29 localhost python3[40890]: ansible-ansible.legacy.command Invoked with _raw_params=INT=$(ip ro get 172.17.0.106 | head -1 | sed -nr "s/.* dev (\w+) .*/\1/p")#012MTU=$(cat /sys/class/net/${INT}/mtu 2>/dev/null || echo "0")#012echo "$INT $MTU"#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 03:08:29 localhost python3[40913]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -c 5 172.17.0.106 _uses_shell=False stdin_add_newline=True 
strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 03:08:34 localhost python3[40930]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -s 1472 -c 5 172.17.0.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 03:08:38 localhost python3[40947]: ansible-ansible.legacy.command Invoked with _raw_params=INT=$(ip ro get 172.19.0.106 | head -1 | sed -nr "s/.* dev (\w+) .*/\1/p")#012MTU=$(cat /sys/class/net/${INT}/mtu 2>/dev/null || echo "0")#012echo "$INT $MTU"#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 03:08:39 localhost python3[40970]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -c 5 172.19.0.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 03:08:43 localhost python3[41048]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -s 1472 -c 5 172.19.0.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 03:08:50 localhost python3[41080]: ansible-file Invoked with path=/etc/puppet/hieradata state=directory mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 03:08:50 localhost python3[41128]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hiera.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 6 03:08:51 localhost python3[41146]: 
ansible-ansible.legacy.file Invoked with mode=384 dest=/etc/puppet/hiera.yaml _original_basename=tmp_fefs1cu recurse=False state=file path=/etc/puppet/hiera.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 03:08:51 localhost python3[41176]: ansible-file Invoked with src=/etc/puppet/hiera.yaml dest=/etc/hiera.yaml state=link force=True path=/etc/hiera.yaml recurse=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 03:08:52 localhost python3[41224]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/all_nodes.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 6 03:08:52 localhost python3[41242]: ansible-ansible.legacy.file Invoked with dest=/etc/puppet/hieradata/all_nodes.json _original_basename=overcloud.json recurse=False state=file path=/etc/puppet/hieradata/all_nodes.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 03:08:53 localhost python3[41304]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/bootstrap_node.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 6 03:08:53 localhost python3[41322]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/bootstrap_node.json 
_original_basename=bootstrap_node.j2 recurse=False state=file path=/etc/puppet/hieradata/bootstrap_node.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 03:08:53 localhost python3[41384]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/vip_data.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 6 03:08:54 localhost python3[41402]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/vip_data.json _original_basename=vip_data.j2 recurse=False state=file path=/etc/puppet/hieradata/vip_data.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 03:08:54 localhost python3[41464]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/net_ip_map.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 6 03:08:55 localhost python3[41482]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/net_ip_map.json _original_basename=net_ip_map.j2 recurse=False state=file path=/etc/puppet/hieradata/net_ip_map.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 03:08:55 localhost python3[41544]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/cloud_domain.json follow=False get_checksum=True 
checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 6 03:08:55 localhost python3[41562]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/cloud_domain.json _original_basename=cloud_domain.j2 recurse=False state=file path=/etc/puppet/hieradata/cloud_domain.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 03:08:56 localhost python3[41624]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/fqdn.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 6 03:08:56 localhost python3[41642]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/fqdn.json _original_basename=fqdn.j2 recurse=False state=file path=/etc/puppet/hieradata/fqdn.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 03:08:57 localhost python3[41704]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/service_names.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 6 03:08:57 localhost python3[41722]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/service_names.json _original_basename=service_names.j2 recurse=False state=file path=/etc/puppet/hieradata/service_names.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None 
selevel=None setype=None attributes=None Dec 6 03:08:57 localhost python3[41784]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/service_configs.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 6 03:08:58 localhost python3[41802]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/service_configs.json _original_basename=service_configs.j2 recurse=False state=file path=/etc/puppet/hieradata/service_configs.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 03:08:58 localhost python3[41864]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/extraconfig.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 6 03:08:58 localhost python3[41882]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/extraconfig.json _original_basename=extraconfig.j2 recurse=False state=file path=/etc/puppet/hieradata/extraconfig.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 03:08:59 localhost python3[41944]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/role_extraconfig.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 6 03:08:59 localhost python3[41962]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/role_extraconfig.json _original_basename=role_extraconfig.j2 recurse=False state=file 
path=/etc/puppet/hieradata/role_extraconfig.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 03:09:00 localhost python3[42024]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/ovn_chassis_mac_map.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 6 03:09:00 localhost python3[42042]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/ovn_chassis_mac_map.json _original_basename=ovn_chassis_mac_map.j2 recurse=False state=file path=/etc/puppet/hieradata/ovn_chassis_mac_map.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 03:09:01 localhost python3[42072]: ansible-stat Invoked with path={'src': '/etc/puppet/hieradata/ansible_managed.json'} follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 6 03:09:01 localhost sshd[42121]: main: sshd: ssh-rsa algorithm is disabled Dec 6 03:09:01 localhost python3[42120]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/ansible_managed.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 6 03:09:02 localhost python3[42140]: ansible-ansible.legacy.file Invoked with owner=root group=root mode=0644 dest=/etc/puppet/hieradata/ansible_managed.json _original_basename=tmp29c5bsqq recurse=False state=file path=/etc/puppet/hieradata/ansible_managed.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S 
unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 03:09:04 localhost python3[42170]: ansible-dnf Invoked with name=['firewalld'] state=absent allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None Dec 6 03:09:09 localhost python3[42187]: ansible-ansible.builtin.systemd Invoked with name=iptables.service state=stopped enabled=False daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 6 03:09:10 localhost python3[42205]: ansible-ansible.builtin.systemd Invoked with name=ip6tables.service state=stopped enabled=False daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 6 03:09:10 localhost python3[42223]: ansible-ansible.builtin.systemd Invoked with name=nftables state=started enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 6 03:09:10 localhost systemd[1]: Reloading. Dec 6 03:09:10 localhost systemd-rc-local-generator[42248]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 6 03:09:11 localhost systemd-sysv-generator[42253]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Dec 6 03:09:11 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 6 03:09:11 localhost systemd[1]: Starting Netfilter Tables... Dec 6 03:09:11 localhost systemd[1]: Finished Netfilter Tables. Dec 6 03:09:11 localhost python3[42313]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 6 03:09:12 localhost python3[42356]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765008551.556916-74833-47967323570456/source _original_basename=iptables.nft follow=False checksum=ede9860c99075946a7bc827210247aac639bc84a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 03:09:12 localhost python3[42386]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 03:09:13 localhost python3[42404]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 03:09:13 localhost python3[42453]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/tripleo-jumps.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 6 03:09:14 localhost python3[42496]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/tripleo-jumps.nft 
src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765008553.2878768-74950-124411490590927/source mode=None follow=False _original_basename=jump-chain.j2 checksum=eec306c3276262a27663d76bd0ea526457445afa backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 03:09:14 localhost python3[42558]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/tripleo-update-jumps.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 6 03:09:15 localhost python3[42601]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/tripleo-update-jumps.nft src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765008554.224436-75009-207700804968311/source mode=None follow=False _original_basename=jump-chain.j2 checksum=eec306c3276262a27663d76bd0ea526457445afa backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 03:09:15 localhost python3[42663]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/tripleo-flushes.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 6 03:09:15 localhost python3[42706]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/tripleo-flushes.nft src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765008555.2181327-75072-117887151202228/source mode=None follow=False _original_basename=flush-chain.j2 checksum=e8e7b8db0d61a7fe393441cc91613f470eb34a6e backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 
6 03:09:16 localhost python3[42768]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/tripleo-chains.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 6 03:09:16 localhost python3[42811]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/tripleo-chains.nft src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765008556.092601-75130-5901537059776/source mode=None follow=False _original_basename=chains.j2 checksum=e60ee651f5014e83924f4e901ecc8e25b1906610 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 03:09:17 localhost python3[42873]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/tripleo-rules.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 6 03:09:18 localhost python3[42916]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/tripleo-rules.nft src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765008557.0158777-75214-124050043087569/source mode=None follow=False _original_basename=ruleset.j2 checksum=0444e4206083f91e2fb2aabfa2928244c2db35ed backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 03:09:18 localhost python3[42946]: ansible-ansible.legacy.command Invoked with _raw_params=cat /etc/nftables/tripleo-chains.nft /etc/nftables/tripleo-flushes.nft /etc/nftables/tripleo-rules.nft /etc/nftables/tripleo-update-jumps.nft /etc/nftables/tripleo-jumps.nft | nft -c -f - _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 03:09:19 localhost python3[43011]: 
ansible-ansible.builtin.blockinfile Invoked with path=/etc/sysconfig/nftables.conf backup=False validate=nft -c -f %s block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/tripleo-chains.nft"#012include "/etc/nftables/tripleo-rules.nft"#012include "/etc/nftables/tripleo-jumps.nft"#012 state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 03:09:19 localhost python3[43028]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/tripleo-chains.nft _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 6 03:09:20 localhost python3[43045]: ansible-ansible.legacy.command Invoked with _raw_params=cat /etc/nftables/tripleo-flushes.nft /etc/nftables/tripleo-rules.nft /etc/nftables/tripleo-update-jumps.nft | nft -f - _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 6 03:09:20 localhost python3[43064]: ansible-file Invoked with mode=0750 path=/var/log/containers/collectd setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 6 03:09:20 localhost python3[43080]: ansible-file Invoked with mode=0755 path=/var/lib/container-user-scripts/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 6 03:09:21 localhost python3[43096]: ansible-file Invoked with mode=0750 path=/var/log/containers/ceilometer setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 6 03:09:21 localhost python3[43112]: ansible-seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Dec 6 03:09:22 localhost dbus-broker-launch[756]: avc: op=load_policy lsm=selinux seqno=7 res=1
Dec 6 03:09:22 localhost python3[43133]: ansible-community.general.sefcontext Invoked with setype=container_file_t state=present target=/etc/iscsi(/.*)? ignore_selinux_state=False ftype=a reload=True seuser=None selevel=None
Dec 6 03:09:23 localhost kernel: SELinux: Converting 2703 SID table entries...
Dec 6 03:09:23 localhost kernel: SELinux: policy capability network_peer_controls=1
Dec 6 03:09:23 localhost kernel: SELinux: policy capability open_perms=1
Dec 6 03:09:23 localhost kernel: SELinux: policy capability extended_socket_class=1
Dec 6 03:09:23 localhost kernel: SELinux: policy capability always_check_network=0
Dec 6 03:09:23 localhost kernel: SELinux: policy capability cgroup_seclabel=1
Dec 6 03:09:23 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 6 03:09:23 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1
Dec 6 03:09:23 localhost dbus-broker-launch[756]: avc: op=load_policy lsm=selinux seqno=8 res=1
Dec 6 03:09:24 localhost python3[43158]: ansible-community.general.sefcontext Invoked with setype=container_file_t state=present target=/etc/target(/.*)? ignore_selinux_state=False ftype=a reload=True seuser=None selevel=None
Dec 6 03:09:24 localhost sshd[43159]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 03:09:24 localhost kernel: SELinux: Converting 2703 SID table entries...
Dec 6 03:09:25 localhost kernel: SELinux: policy capability network_peer_controls=1
Dec 6 03:09:25 localhost kernel: SELinux: policy capability open_perms=1
Dec 6 03:09:25 localhost kernel: SELinux: policy capability extended_socket_class=1
Dec 6 03:09:25 localhost kernel: SELinux: policy capability always_check_network=0
Dec 6 03:09:25 localhost kernel: SELinux: policy capability cgroup_seclabel=1
Dec 6 03:09:25 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 6 03:09:25 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1
Dec 6 03:09:25 localhost dbus-broker-launch[756]: avc: op=load_policy lsm=selinux seqno=9 res=1
Dec 6 03:09:25 localhost python3[43181]: ansible-community.general.sefcontext Invoked with setype=container_file_t state=present target=/var/lib/iscsi(/.*)? ignore_selinux_state=False ftype=a reload=True seuser=None selevel=None
Dec 6 03:09:26 localhost kernel: SELinux: Converting 2703 SID table entries...
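The `python3[…]` records above all share the shape `ansible-<module> Invoked with key=value …`. When digging through a journal like this one, a rough extractor for module name and arguments is handy. This is a best-effort sketch, not a real parser: the whitespace split below breaks on argument values that contain spaces, and the sample record is copied from the log above.

```python
import re

def parse_invocation(record: str):
    """Extract the Ansible module name and a best-effort dict of its
    key=value arguments from a journal record such as
    'ansible-community.general.sefcontext Invoked with setype=... state=...'.
    Returns None if the record is not an Ansible invocation line."""
    m = re.search(r"ansible-([\w.]+) Invoked with (.*)", record)
    if not m:
        return None
    module, rest = m.group(1), m.group(2)
    args = {}
    for token in rest.split():          # naive: values with spaces are lost
        if "=" in token:
            key, _, value = token.partition("=")
            args[key] = value
    return module, args

rec = ("Dec 6 03:09:25 localhost python3[43181]: "
       "ansible-community.general.sefcontext Invoked with "
       "setype=container_file_t state=present target=/var/lib/iscsi(/.*)?")
module, args = parse_invocation(rec)
```

Filtering the journal through this before diffing two deployment runs makes the parameter noise (`seuser=None`, `attributes=None`, …) easy to drop.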
Dec 6 03:09:26 localhost kernel: SELinux: policy capability network_peer_controls=1
Dec 6 03:09:26 localhost kernel: SELinux: policy capability open_perms=1
Dec 6 03:09:26 localhost kernel: SELinux: policy capability extended_socket_class=1
Dec 6 03:09:26 localhost kernel: SELinux: policy capability always_check_network=0
Dec 6 03:09:26 localhost kernel: SELinux: policy capability cgroup_seclabel=1
Dec 6 03:09:26 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 6 03:09:26 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1
Dec 6 03:09:26 localhost dbus-broker-launch[756]: avc: op=load_policy lsm=selinux seqno=10 res=1
Dec 6 03:09:26 localhost python3[43259]: ansible-file Invoked with path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 6 03:09:27 localhost python3[43275]: ansible-file Invoked with path=/etc/target setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 6 03:09:27 localhost python3[43291]: ansible-file Invoked with path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 6 03:09:27 localhost python3[43307]: ansible-stat Invoked with path=/lib/systemd/system/iscsid.socket follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 6 03:09:28 localhost python3[43323]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-enabled --quiet iscsi.service _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 6 03:09:29 localhost python3[43340]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 6 03:09:32 localhost python3[43357]: ansible-file Invoked with path=/etc/modules-load.d state=directory mode=493 owner=root group=root setype=etc_t recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 6 03:09:33 localhost python3[43405]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-tripleo.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 6 03:09:33 localhost python3[43448]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765008573.0895329-75984-238479677627678/source dest=/etc/modules-load.d/99-tripleo.conf mode=420 owner=root group=root setype=etc_t follow=False _original_basename=tripleo-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 6 03:09:34 localhost python3[43478]: ansible-systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 6 03:09:34 localhost systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 6 03:09:34 localhost systemd[1]: Stopped Load Kernel Modules.
Dec 6 03:09:34 localhost systemd[1]: Stopping Load Kernel Modules...
Dec 6 03:09:34 localhost systemd[1]: Starting Load Kernel Modules...
Dec 6 03:09:34 localhost kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 6 03:09:34 localhost kernel: Bridge firewalling registered
Dec 6 03:09:34 localhost systemd-modules-load[43481]: Inserted module 'br_netfilter'
Dec 6 03:09:34 localhost systemd-modules-load[43481]: Module 'msr' is built in
Dec 6 03:09:34 localhost systemd[1]: Finished Load Kernel Modules.
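The records above show the module-persistence pattern: a rendered template is copied to /etc/modules-load.d/99-tripleo.conf and systemd-modules-load.service is restarted, which inserts loadable modules (`br_netfilter`) and skips built-ins (`msr`). A minimal sketch of rendering such a modules-load.d(5) file; the module names come from the log, the comment line is purely illustrative:

```python
def render_modules_load_conf(modules):
    """Render a modules-load.d(5) style file: one kernel module name per
    line; lines starting with '#' are comments."""
    lines = ["# rendered for illustration; the real file comes from tripleo-modprobe.conf.j2"]
    lines.extend(modules)
    return "\n".join(lines) + "\n"

conf = render_modules_load_conf(["br_netfilter", "msr"])
```

systemd-modules-load reads every file in the directory at boot, so restarting the unit (as the playbook does) is only needed to apply a change without rebooting.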
Dec 6 03:09:35 localhost sshd[43485]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 03:09:35 localhost python3[43534]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-tripleo.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 6 03:09:36 localhost python3[43577]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765008575.3885028-76055-153341894485097/source dest=/etc/sysctl.d/99-tripleo.conf mode=420 owner=root group=root setype=etc_t follow=False _original_basename=tripleo-sysctl.conf.j2 checksum=cddb9401fdafaaf28a4a94b98448f98ae93c94c9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 6 03:09:36 localhost python3[43607]: ansible-sysctl Invoked with name=fs.aio-max-nr value=1048576 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 6 03:09:36 localhost python3[43624]: ansible-sysctl Invoked with name=fs.inotify.max_user_instances value=1024 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 6 03:09:37 localhost python3[43642]: ansible-sysctl Invoked with name=kernel.pid_max value=1048576 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 6 03:09:37 localhost python3[43660]: ansible-sysctl Invoked with name=net.bridge.bridge-nf-call-arptables value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 6 03:09:37 localhost sshd[43678]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 03:09:37 localhost python3[43677]: ansible-sysctl Invoked with name=net.bridge.bridge-nf-call-ip6tables value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 6 03:09:38 localhost python3[43695]: ansible-sysctl Invoked with name=net.bridge.bridge-nf-call-iptables value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 6 03:09:38 localhost python3[43713]: ansible-sysctl Invoked with name=net.ipv4.conf.all.rp_filter value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 6 03:09:38 localhost python3[43731]: ansible-sysctl Invoked with name=net.ipv4.ip_forward value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 6 03:09:38 localhost sshd[43734]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 03:09:39 localhost python3[43751]: ansible-sysctl Invoked with name=net.ipv4.ip_local_reserved_ports value=35357,49000-49001 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 6 03:09:39 localhost python3[43769]: ansible-sysctl Invoked with name=net.ipv4.ip_nonlocal_bind value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 6 03:09:39 localhost python3[43787]: ansible-sysctl Invoked with name=net.ipv4.neigh.default.gc_thresh1 value=1024 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 6 03:09:39 localhost sshd[43806]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 03:09:39 localhost python3[43805]: ansible-sysctl Invoked with name=net.ipv4.neigh.default.gc_thresh2 value=2048 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 6 03:09:40 localhost python3[43824]: ansible-sysctl Invoked with name=net.ipv4.neigh.default.gc_thresh3 value=4096 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 6 03:09:40 localhost python3[43843]: ansible-sysctl Invoked with name=net.ipv6.conf.all.disable_ipv6 value=0 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 6 03:09:40 localhost python3[43860]: ansible-sysctl Invoked with name=net.ipv6.conf.all.forwarding value=0 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 6 03:09:41 localhost python3[43877]: ansible-sysctl Invoked with name=net.ipv6.conf.default.disable_ipv6 value=0 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 6 03:09:41 localhost python3[43894]: ansible-sysctl Invoked with name=net.ipv6.conf.lo.disable_ipv6 value=0 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 6 03:09:41 localhost python3[43911]: ansible-sysctl Invoked with name=net.ipv6.ip_nonlocal_bind value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 6 03:09:42 localhost sshd[43930]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 03:09:42 localhost python3[43929]: ansible-systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 6 03:09:42 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 6 03:09:42 localhost systemd[1]: Stopped Apply Kernel Variables.
Dec 6 03:09:42 localhost systemd[1]: Stopping Apply Kernel Variables...
Dec 6 03:09:42 localhost systemd[1]: Starting Apply Kernel Variables...
Dec 6 03:09:42 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 6 03:09:42 localhost systemd[1]: Finished Apply Kernel Variables.
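Each ansible-sysctl record above writes one key into /etc/sysctl.d/99-tripleo.conf with reload=False; the single systemd-sysctl.service restart at the end then applies the whole set at once. As a sketch, the resulting file contents can be reconstructed from the journal itself. Two records are reproduced from the log above; the regex is a best-effort simplification that assumes `name=` and `value=` appear in that order, as they do in these records.

```python
import re

LOG = """\
Dec 6 03:09:36 localhost python3[43607]: ansible-sysctl Invoked with name=fs.aio-max-nr value=1048576 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 6 03:09:38 localhost python3[43731]: ansible-sysctl Invoked with name=net.ipv4.ip_forward value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
"""

def sysctl_conf_from_log(log: str) -> str:
    """Collect name=/value= pairs from ansible-sysctl journal records and
    render them in sysctl.d(5) format, one 'key = value' per line."""
    pairs = re.findall(r"ansible-sysctl Invoked with name=(\S+) value=(\S+)", log)
    return "\n".join(f"{key} = {value}" for key, value in pairs)

conf = sysctl_conf_from_log(LOG)
```

This is useful for auditing a deployed node: diff the reconstructed file against the on-disk 99-tripleo.conf to spot drift.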
Dec 6 03:09:42 localhost python3[43951]: ansible-file Invoked with mode=0750 path=/var/log/containers/metrics_qdr setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 6 03:09:42 localhost python3[43967]: ansible-file Invoked with path=/var/lib/metrics_qdr setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 6 03:09:43 localhost python3[43983]: ansible-file Invoked with mode=0750 path=/var/log/containers/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 6 03:09:43 localhost python3[43999]: ansible-stat Invoked with path=/var/lib/nova/instances follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 6 03:09:43 localhost python3[44015]: ansible-file Invoked with path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 6 03:09:44 localhost sshd[44030]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 03:09:44 localhost python3[44032]: ansible-file Invoked with path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 6 03:09:44 localhost python3[44049]: ansible-file Invoked with path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 6 03:09:44 localhost python3[44065]: ansible-file Invoked with path=/var/lib/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 6 03:09:45 localhost python3[44111]: ansible-file Invoked with path=/etc/tmpfiles.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 03:09:45 localhost python3[44211]: ansible-ansible.legacy.stat Invoked with path=/etc/tmpfiles.d/run-nova.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 6 03:09:46 localhost python3[44271]: ansible-ansible.legacy.copy Invoked with dest=/etc/tmpfiles.d/run-nova.conf src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765008585.3822806-76540-74399181822859/source _original_basename=tmpj09ev3nq follow=False checksum=f834349098718ec09c7562bcb470b717a83ff411 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 03:09:46 localhost python3[44315]: ansible-ansible.legacy.command Invoked with _raw_params=systemd-tmpfiles --create _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 6 03:09:47 localhost python3[44346]: ansible-file Invoked with path=/var/lib/tripleo-config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 03:09:48 localhost python3[44395]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/delay-nova-compute follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 6 03:09:48 localhost python3[44438]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/nova/delay-nova-compute mode=493 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765008587.8587005-76702-98835160614507/source _original_basename=tmprnp7vuv0 follow=False checksum=f07ad3e8cf3766b3b3b07ae8278826a0ef3bb5e3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 03:09:49 localhost python3[44468]: ansible-file Invoked with mode=0750 path=/var/log/containers/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 6 03:09:49 localhost python3[44484]: ansible-file Invoked with path=/etc/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 6 03:09:49 localhost python3[44500]: ansible-file Invoked with path=/etc/libvirt/secrets setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 6 03:09:50 localhost python3[44516]: ansible-file Invoked with path=/etc/libvirt/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 6 03:09:50 localhost python3[44532]: ansible-file Invoked with path=/var/lib/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 6 03:09:50 localhost python3[44548]: ansible-file Invoked with path=/var/cache/libvirt state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 03:09:50 localhost python3[44564]: ansible-file Invoked with path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 6 03:09:51 localhost python3[44580]: ansible-file Invoked with path=/run/libvirt state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 03:09:51 localhost python3[44596]: ansible-file Invoked with mode=0770 path=/var/log/containers/libvirt/swtpm setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 6 03:09:52 localhost python3[44612]: ansible-group Invoked with gid=107 name=qemu state=present system=False local=False non_unique=False
Dec 6 03:09:52 localhost python3[44634]: ansible-user Invoked with comment=qemu user group=qemu name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005548789.localdomain update_password=always groups=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Dec 6 03:09:53 localhost python3[44658]: ansible-file Invoked with group=qemu owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None serole=None selevel=None attributes=None
Dec 6 03:09:53 localhost python3[44674]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/rpm -q libvirt-daemon _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 6 03:09:53 localhost python3[44723]: ansible-ansible.legacy.stat Invoked with path=/etc/tmpfiles.d/run-libvirt.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 6 03:09:54 localhost python3[44766]: ansible-ansible.legacy.copy Invoked with dest=/etc/tmpfiles.d/run-libvirt.conf src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765008593.6286278-76979-185023022115093/source _original_basename=tmpuh2rpkk0 follow=False checksum=57f3ff94c666c6aae69ae22e23feb750cf9e8b13 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 03:09:54 localhost python3[44796]: ansible-seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Dec 6 03:09:55 localhost dbus-broker-launch[756]: avc: op=load_policy lsm=selinux seqno=11 res=1
Dec 6 03:09:55 localhost python3[44816]: ansible-file Invoked with path=/etc/crypto-policies/local.d/gnutls-qemu.config state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 03:09:56 localhost python3[44832]: ansible-file Invoked with path=/run/libvirt setype=virt_var_run_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 6 03:09:56 localhost python3[44848]: ansible-seboolean Invoked with name=logrotate_read_inside_containers persistent=True state=True ignore_selinux_state=False
Dec 6 03:09:57 localhost dbus-broker-launch[756]: avc: op=load_policy lsm=selinux seqno=12 res=1
Dec 6 03:09:57 localhost python3[44868]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 6 03:10:00 localhost python3[44885]: ansible-setup Invoked with gather_subset=['!all', '!min', 'network'] filter=['ansible_interfaces'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 6 03:10:01 localhost python3[44946]: ansible-file Invoked with path=/etc/containers/networks state=directory recurse=True mode=493 owner=root group=root force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 03:10:01 localhost python3[44962]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 6 03:10:02 localhost python3[45022]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 6 03:10:02 localhost python3[45065]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765008602.1679893-77340-184528284703072/source dest=/etc/containers/networks/podman.json mode=0644 owner=root group=root follow=False _original_basename=podman_network_config.j2 checksum=ac5a4647d8c1518748e8118ddace0562c0bf12f6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 03:10:03 localhost python3[45127]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 6 03:10:04 localhost python3[45172]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765008603.209031-77384-88706933372472/source dest=/etc/containers/registries.conf owner=root group=root setype=etc_t mode=0644 follow=False _original_basename=registries.conf.j2 checksum=710a00cfb11a4c3eba9c028ef1984a9fea9ba83a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 6 03:10:04 localhost python3[45202]: ansible-ini_file Invoked with path=/etc/containers/containers.conf owner=root group=root setype=etc_t mode=0644 create=True section=containers option=pids_limit value=4096 backup=False state=present exclusive=True no_extra_spaces=False allow_no_value=False unsafe_writes=False values=None seuser=None serole=None selevel=None attributes=None
Dec 6 03:10:04 localhost python3[45218]: ansible-ini_file Invoked with path=/etc/containers/containers.conf owner=root group=root setype=etc_t mode=0644 create=True section=engine option=events_logger value="journald" backup=False state=present exclusive=True no_extra_spaces=False allow_no_value=False unsafe_writes=False values=None seuser=None serole=None selevel=None attributes=None
Dec 6 03:10:05 localhost python3[45234]: ansible-ini_file Invoked with path=/etc/containers/containers.conf owner=root group=root setype=etc_t mode=0644 create=True section=engine option=runtime value="crun" backup=False state=present exclusive=True no_extra_spaces=False allow_no_value=False unsafe_writes=False values=None seuser=None serole=None selevel=None attributes=None
Dec 6 03:10:05 localhost python3[45250]: ansible-ini_file Invoked with path=/etc/containers/containers.conf owner=root group=root setype=etc_t mode=0644 create=True section=network option=network_backend value="netavark" backup=False state=present exclusive=True no_extra_spaces=False allow_no_value=False unsafe_writes=False values=None seuser=None serole=None selevel=None attributes=None
Dec 6 03:10:06 localhost python3[45298]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 6 03:10:06 localhost python3[45341]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765008605.8756714-77514-2091158185547/source _original_basename=tmpvbmkp_9c follow=False checksum=0bfbc70e9a4740c9004b9947da681f723d529c83 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 03:10:06 localhost python3[45371]: ansible-file Invoked with mode=0750 path=/var/log/containers/rsyslog setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 6 03:10:07 localhost python3[45387]: ansible-file Invoked with path=/var/lib/rsyslog.container setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 6 03:10:07 localhost python3[45403]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 6 03:10:09 localhost ceph-osd[31726]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 6 03:10:09 localhost ceph-osd[31726]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval#012Cumulative writes: 3259 writes, 16K keys, 3259 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.02 MB/s#012Cumulative WAL: 3259 writes, 145 syncs, 22.48 writes per sync, written: 0.01 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 3259 writes, 16K keys, 3259 commit groups, 1.0 writes per commit group, ingest: 14.68 MB, 0.02 MB/s#012Interval WAL: 3259 writes, 145 syncs, 22.48 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 2/0 2.61 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.005 0 0 0.0 0.0#012 Sum 2/0 2.61 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.005 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.2 0.01 0.00 1 0.005 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55d1180b62d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB)
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-0] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55d1180b62d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) 
Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-1] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memt Dec 6 03:10:11 localhost python3[45452]: ansible-ansible.legacy.stat Invoked with 
path=/etc/ssh/sshd_config follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 6 03:10:11 localhost python3[45497]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765008611.103243-77971-262711034243748/source validate=/usr/sbin/sshd -T -f %s mode=None follow=False _original_basename=sshd_config_block.j2 checksum=913c99ed7d5c33615bfb07a6792a4ef143dcfd2b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 03:10:12 localhost python3[45528]: ansible-systemd Invoked with name=sshd state=restarted enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 6 03:10:12 localhost systemd[1]: Stopping OpenSSH server daemon... Dec 6 03:10:12 localhost systemd[1]: sshd.service: Deactivated successfully. Dec 6 03:10:12 localhost systemd[1]: Stopped OpenSSH server daemon. Dec 6 03:10:12 localhost systemd[1]: sshd.service: Consumed 7.675s CPU time, read 2.1M from disk, written 52.0K to disk. Dec 6 03:10:12 localhost systemd[1]: Stopped target sshd-keygen.target. Dec 6 03:10:12 localhost systemd[1]: Stopping sshd-keygen.target... Dec 6 03:10:12 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). Dec 6 03:10:12 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). 
Dec 6 03:10:12 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). Dec 6 03:10:12 localhost systemd[1]: Reached target sshd-keygen.target. Dec 6 03:10:12 localhost systemd[1]: Starting OpenSSH server daemon... Dec 6 03:10:12 localhost sshd[45532]: main: sshd: ssh-rsa algorithm is disabled Dec 6 03:10:12 localhost systemd[1]: Started OpenSSH server daemon. Dec 6 03:10:12 localhost python3[45548]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-active ntpd.service || systemctl is-enabled ntpd.service _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 03:10:12 localhost ceph-osd[32665]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Dec 6 03:10:12 localhost ceph-osd[32665]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.2 total, 600.0 interval#012Cumulative writes: 3388 writes, 16K keys, 3388 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.03 MB/s#012Cumulative WAL: 3388 writes, 198 syncs, 17.11 writes per sync, written: 0.01 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 3388 writes, 16K keys, 3388 commit groups, 1.0 writes per commit group, ingest: 15.28 MB, 0.03 MB/s#012Interval WAL: 3388 writes, 198 syncs, 17.11 writes per sync, written: 0.01 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 2/0 2.61 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012 Sum 2/0 2.61 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55e1468ea2d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 
5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-0] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for 
pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55e1468ea2d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-1] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 
0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable Dec 6 03:10:13 localhost python3[45566]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-active ntpd.service || systemctl is-enabled ntpd.service _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 03:10:14 localhost python3[45584]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None Dec 6 03:10:17 localhost python3[45633]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 6 03:10:18 localhost python3[45651]: ansible-ansible.legacy.file Invoked with owner=root group=root mode=420 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 03:10:18 localhost python3[45681]: 
ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 6 03:10:19 localhost python3[45731]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/chrony-online.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 6 03:10:19 localhost python3[45749]: ansible-ansible.legacy.file Invoked with dest=/etc/systemd/system/chrony-online.service _original_basename=chrony-online.service recurse=False state=file path=/etc/systemd/system/chrony-online.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 03:10:20 localhost python3[45779]: ansible-systemd Invoked with state=started name=chrony-online.service enabled=True daemon-reload=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Dec 6 03:10:20 localhost systemd[1]: Reloading. Dec 6 03:10:20 localhost systemd-sysv-generator[45808]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 6 03:10:20 localhost systemd-rc-local-generator[45803]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 6 03:10:20 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 6 03:10:20 localhost systemd[1]: Starting chronyd online sources service... 
Dec 6 03:10:20 localhost chronyc[45819]: 200 OK Dec 6 03:10:20 localhost systemd[1]: chrony-online.service: Deactivated successfully. Dec 6 03:10:20 localhost systemd[1]: Finished chronyd online sources service. Dec 6 03:10:21 localhost python3[45835]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc makestep _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 03:10:21 localhost chronyd[25988]: System clock was stepped by 0.000002 seconds Dec 6 03:10:21 localhost python3[45852]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc waitsync 30 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 03:10:22 localhost python3[45869]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc makestep _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 03:10:22 localhost chronyd[25988]: System clock was stepped by 0.000017 seconds Dec 6 03:10:22 localhost python3[45886]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc waitsync 30 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 03:10:22 localhost python3[45903]: ansible-timezone Invoked with name=UTC hwclock=None Dec 6 03:10:22 localhost systemd[1]: Starting Time & Date Service... Dec 6 03:10:23 localhost systemd[1]: Started Time & Date Service. 
Dec 6 03:10:24 localhost python3[45923]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q --whatprovides tuned tuned-profiles-cpu-partitioning _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 03:10:24 localhost python3[45940]: ansible-ansible.legacy.command Invoked with _raw_params=which tuned-adm _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 03:10:25 localhost python3[45957]: ansible-slurp Invoked with src=/etc/tuned/active_profile Dec 6 03:10:25 localhost python3[45973]: ansible-stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 6 03:10:26 localhost python3[45989]: ansible-file Invoked with mode=0750 path=/var/log/containers/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 6 03:10:26 localhost python3[46005]: ansible-file Invoked with path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 6 03:10:26 localhost python3[46053]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/neutron-cleanup follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 6 03:10:27 localhost 
python3[46096]: ansible-ansible.legacy.copy Invoked with dest=/usr/libexec/neutron-cleanup force=True mode=0755 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765008626.6419804-78802-199585271786632/source _original_basename=tmpalzo4kk7 follow=False checksum=f9cc7d1e91fbae49caa7e35eb2253bba146a73b4 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 03:10:27 localhost python3[46158]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/neutron-cleanup.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 6 03:10:28 localhost python3[46201]: ansible-ansible.legacy.copy Invoked with dest=/usr/lib/systemd/system/neutron-cleanup.service force=True src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765008627.4954515-78852-90190994906431/source _original_basename=tmpvy089fds follow=False checksum=6b6cd9f074903a28d054eb530a10c7235d0c39fc backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 03:10:28 localhost python3[46231]: ansible-ansible.legacy.systemd Invoked with enabled=True name=neutron-cleanup daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None Dec 6 03:10:28 localhost sshd[46233]: main: sshd: ssh-rsa algorithm is disabled Dec 6 03:10:28 localhost systemd[1]: Reloading. Dec 6 03:10:28 localhost systemd-rc-local-generator[46259]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 6 03:10:28 localhost systemd-sysv-generator[46262]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. 
Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 6 03:10:28 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 6 03:10:29 localhost python3[46286]: ansible-file Invoked with mode=0750 path=/var/log/containers/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 6 03:10:29 localhost python3[46302]: ansible-ansible.legacy.command Invoked with _raw_params=ip netns add ns_temp _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 03:10:30 localhost python3[46319]: ansible-ansible.legacy.command Invoked with _raw_params=ip netns delete ns_temp _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 03:10:30 localhost systemd[1]: run-netns-ns_temp.mount: Deactivated successfully. 
Dec 6 03:10:30 localhost python3[46336]: ansible-file Invoked with path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 6 03:10:30 localhost python3[46352]: ansible-file Invoked with path=/var/lib/neutron/kill_scripts state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 03:10:31 localhost python3[46400]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 6 03:10:31 localhost python3[46443]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=493 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765008630.9105864-79105-215058964916488/source _original_basename=tmp8gldg326 follow=False checksum=2f369fbe8f83639cdfd4efc53e7feb4ee77d1ed7 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 03:10:53 localhost systemd[1]: systemd-timedated.service: Deactivated successfully. 
Dec 6 03:10:57 localhost python3[46552]: ansible-file Invoked with path=/var/log/containers state=directory setype=container_file_t selevel=s0 mode=488 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None Dec 6 03:10:58 localhost python3[46568]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory selevel=s0 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None setype=None attributes=None Dec 6 03:10:58 localhost python3[46584]: ansible-file Invoked with path=/var/lib/tripleo-config state=directory setype=container_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Dec 6 03:10:58 localhost python3[46600]: ansible-file Invoked with path=/var/lib/container-startup-configs.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 03:10:59 localhost python3[46616]: ansible-file Invoked with path=/var/lib/docker-container-startup-configs.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S 
unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 03:10:59 localhost python3[46632]: ansible-community.general.sefcontext Invoked with target=/var/lib/container-config-scripts(/.*)? setype=container_file_t state=present ignore_selinux_state=False ftype=a reload=True seuser=None selevel=None Dec 6 03:11:00 localhost kernel: SELinux: Converting 2706 SID table entries... Dec 6 03:11:00 localhost kernel: SELinux: policy capability network_peer_controls=1 Dec 6 03:11:00 localhost kernel: SELinux: policy capability open_perms=1 Dec 6 03:11:00 localhost kernel: SELinux: policy capability extended_socket_class=1 Dec 6 03:11:00 localhost kernel: SELinux: policy capability always_check_network=0 Dec 6 03:11:00 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Dec 6 03:11:00 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 6 03:11:00 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Dec 6 03:11:00 localhost dbus-broker-launch[756]: avc: op=load_policy lsm=selinux seqno=13 res=1 Dec 6 03:11:00 localhost python3[46653]: ansible-file Invoked with path=/var/lib/container-config-scripts state=directory setype=container_file_t recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 6 03:11:02 localhost sshd[46775]: main: sshd: ssh-rsa algorithm is disabled Dec 6 03:11:03 localhost python3[46792]: ansible-container_startup_config Invoked with config_base_dir=/var/lib/tripleo-config/container-startup-config config_data={'step_1': {'metrics_qdr': {'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': 
{'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, 'metrics_qdr_init_logs': {'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}}, 'step_2': {'create_haproxy_wrapper': {'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, 'create_virtlogd_wrapper': {'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1765008053'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, 'nova_compute_init_log': {'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1765008053'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, 'nova_virtqemud_init_logs': {'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1765008053'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}}, 'step_3': {'ceilometer_init_log': {'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 'collectd': {'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, 'iscsid': {'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, 'nova_statedir_owner': {'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1765008053', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, 'nova_virtlogd_wrapper': {'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': [ Dec 6 03:11:03 localhost rsyslogd[760]: message 
too long (31243) with configured size 8096, begin of message is: ansible-container_startup_config Invoked with config_base_dir=/var/lib/tripleo-c [v8.2102.0-111.el9 try https://www.rsyslog.com/e/2445 ] Dec 6 03:11:03 localhost python3[46808]: ansible-file Invoked with path=/var/lib/kolla/config_files state=directory setype=container_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Dec 6 03:11:04 localhost python3[46824]: ansible-file Invoked with path=/var/lib/config-data mode=493 state=directory setype=container_file_t selevel=s0 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None Dec 6 03:11:04 localhost python3[46840]: ansible-tripleo_container_configs Invoked with config_data={'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json': {'command': '/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /var/log/ceilometer/ipmi.log', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}]}, '/var/lib/kolla/config_files/ceilometer_agent_compute.json': {'command': '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /var/log/ceilometer/compute.log', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}]}, '/var/lib/kolla/config_files/collectd.json': {'command': '/usr/sbin/collectd -f', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/', 
'merge': False, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/etc/collectd.d'}], 'permissions': [{'owner': 'collectd:collectd', 'path': '/var/log/collectd', 'recurse': True}, {'owner': 'collectd:collectd', 'path': '/scripts', 'recurse': True}, {'owner': 'collectd:collectd', 'path': '/config-scripts', 'recurse': True}]}, '/var/lib/kolla/config_files/iscsid.json': {'command': '/usr/sbin/iscsid -f', 'config_files': [{'dest': '/etc/iscsi/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-iscsid/'}]}, '/var/lib/kolla/config_files/logrotate-crond.json': {'command': '/usr/sbin/crond -s -n', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}]}, '/var/lib/kolla/config_files/metrics_qdr.json': {'command': '/usr/sbin/qdrouterd -c /etc/qpid-dispatch/qdrouterd.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/', 'merge': True, 'optional': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-tls/*'}], 'permissions': [{'owner': 'qdrouterd:qdrouterd', 'path': '/var/lib/qdrouterd', 'recurse': True}, {'optional': True, 'owner': 'qdrouterd:qdrouterd', 'path': '/etc/pki/tls/certs/metrics_qdr.crt'}, {'optional': True, 'owner': 'qdrouterd:qdrouterd', 'path': '/etc/pki/tls/private/metrics_qdr.key'}]}, '/var/lib/kolla/config_files/nova-migration-target.json': {'command': 'dumb-init --single-child -- /usr/sbin/sshd -D -p 2022', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ssh/', 'owner': 'root', 'perm': '0600', 'source': '/host-ssh/ssh_host_*_key'}]}, '/var/lib/kolla/config_files/nova_compute.json': {'command': '/var/lib/nova/delay-nova-compute --delay 180 --nova-binary /usr/bin/nova-compute ', 'config_files': [{'dest': '/', 'merge': True, 
'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/iscsi/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-iscsid/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/var/log/nova', 'recurse': True}, {'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json': {'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_wait_for_compute_service.py', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}], 'permissions': [{'owner': 'nova:nova', 'path': '/var/log/nova', 'recurse': True}]}, '/var/lib/kolla/config_files/nova_virtlogd.json': {'command': '/usr/local/bin/virtlogd_wrapper', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_virtnodedevd.json': {'command': '/usr/sbin/virtnodedevd --config /etc/libvirt/virtnodedevd.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_virtproxyd.json': {'command': '/usr/sbin/virtproxyd --config /etc/libvirt/virtproxyd.conf', 'config_files': [{'dest': '/', 'merge': True, 
'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_virtqemud.json': {'command': '/usr/sbin/virtqemud --config /etc/libvirt/virtqemud.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_virtsecretd.json': {'command': '/usr/sbin/virtsecretd --config /etc/libvirt/virtsecretd.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_virtstoraged.json': {'command': '/usr/sbin/virtstoraged --config /etc/libvirt/virtstoraged.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/ovn_controller.json': {'command': '/usr/bin/ovn-controller --pidfile --log-file unix:/run/openvswitch/db.sock ', 'permissions': [{'owner': 'root:root', 'path': '/var/log/openvswitch', 'recurse': True}, {'owner': 'root:root', 
'path': '/var/log/ovn', 'recurse': True}]}, '/var/lib/kolla/config_files/ovn_metadata_agent.json': {'command': '/usr/bin/networking-ovn-metadata-agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/networking-ovn/networking-ovn-metadata-agent.ini --log-file=/var/log/neutron/ovn-metadata-agent.log', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}], 'permissions': [{'owner': 'neutron:neutron', 'path': '/var/log/neutron', 'recurse': True}, {'owner': 'neutron:neutron', 'path': '/var/lib/neutron', 'recurse': True}, {'optional': True, 'owner': 'neutron:neutron', 'path': '/etc/pki/tls/certs/ovn_metadata.crt', 'perm': '0644'}, {'optional': True, 'owner': 'neutron:neutron', 'path': '/etc/pki/tls/private/ovn_metadata.key', 'perm': '0644'}]}, '/var/lib/kolla/config_files/rsyslog.json': {'command': '/usr/sbin/rsyslogd -n', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}], 'permissions': [{'owner': 'root:root', 'path': '/var/lib/rsyslog', 'recurse': True}, {'owner': 'root:root', 'path': '/var/log/rsyslog', 'recurse': True}]}} Dec 6 03:11:09 localhost python3[46888]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/config_step.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 6 03:11:10 localhost python3[46931]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/config_step.json force=True mode=0600 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765008669.4534256-80758-148523531882988/source _original_basename=tmptet8qcjo follow=False checksum=dfdcc7695edd230e7a2c06fc7b739bfa56506d8f backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 
03:11:10 localhost python3[46961]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_1 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 6 03:11:12 localhost python3[47084]: ansible-file Invoked with path=/var/lib/container-puppet state=directory setype=container_file_t selevel=s0 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Dec 6 03:11:13 localhost sshd[47159]: main: sshd: ssh-rsa algorithm is disabled Dec 6 03:11:14 localhost python3[47207]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6 Dec 6 03:11:16 localhost systemd[35904]: Created slice User Background Tasks Slice. Dec 6 03:11:16 localhost systemd[35904]: Starting Cleanup of User's Temporary Files and Directories... Dec 6 03:11:16 localhost systemd[35904]: Finished Cleanup of User's Temporary Files and Directories. 
Dec 6 03:11:17 localhost python3[47224]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q lvm2 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 03:11:18 localhost python3[47241]: ansible-ansible.legacy.dnf Invoked with name=['systemd-container'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None Dec 6 03:11:22 localhost dbus-broker-launch[752]: Noticed file-system modification, trigger reload. Dec 6 03:11:22 localhost dbus-broker-launch[18452]: Noticed file-system modification, trigger reload. Dec 6 03:11:22 localhost dbus-broker-launch[18452]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored Dec 6 03:11:22 localhost dbus-broker-launch[18452]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored Dec 6 03:11:22 localhost dbus-broker-launch[752]: Noticed file-system modification, trigger reload. Dec 6 03:11:22 localhost systemd[1]: Reexecuting. Dec 6 03:11:22 localhost systemd[1]: systemd 252-14.el9_2.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 6 03:11:22 localhost systemd[1]: Detected virtualization kvm. 
Dec 6 03:11:22 localhost systemd[1]: Detected architecture x86-64. Dec 6 03:11:22 localhost systemd-rc-local-generator[47294]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 6 03:11:22 localhost systemd-sysv-generator[47300]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 6 03:11:22 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 6 03:11:31 localhost kernel: SELinux: Converting 2706 SID table entries... Dec 6 03:11:31 localhost kernel: SELinux: policy capability network_peer_controls=1 Dec 6 03:11:31 localhost kernel: SELinux: policy capability open_perms=1 Dec 6 03:11:31 localhost kernel: SELinux: policy capability extended_socket_class=1 Dec 6 03:11:31 localhost kernel: SELinux: policy capability always_check_network=0 Dec 6 03:11:31 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Dec 6 03:11:31 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 6 03:11:31 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Dec 6 03:11:31 localhost dbus-broker-launch[752]: Noticed file-system modification, trigger reload. Dec 6 03:11:31 localhost dbus-broker-launch[756]: avc: op=load_policy lsm=selinux seqno=14 res=1 Dec 6 03:11:31 localhost dbus-broker-launch[752]: Noticed file-system modification, trigger reload. Dec 6 03:11:32 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Dec 6 03:11:32 localhost systemd[1]: Starting man-db-cache-update.service... Dec 6 03:11:32 localhost systemd[1]: Reloading. Dec 6 03:11:32 localhost systemd-rc-local-generator[47439]: /etc/rc.d/rc.local is not marked executable, skipping. 
Dec 6 03:11:32 localhost systemd-sysv-generator[47443]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 6 03:11:32 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 6 03:11:32 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Dec 6 03:11:32 localhost systemd[1]: Queuing reload/restart jobs for marked units… Dec 6 03:11:32 localhost systemd-journald[619]: Journal stopped Dec 6 03:11:32 localhost systemd-journald[619]: Received SIGTERM from PID 1 (systemd). Dec 6 03:11:32 localhost systemd[1]: Stopping Journal Service... Dec 6 03:11:32 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files... Dec 6 03:11:32 localhost systemd[1]: systemd-journald.service: Deactivated successfully. Dec 6 03:11:32 localhost systemd[1]: Stopped Journal Service. Dec 6 03:11:32 localhost systemd[1]: systemd-journald.service: Consumed 1.903s CPU time. Dec 6 03:11:32 localhost systemd[1]: Starting Journal Service... Dec 6 03:11:32 localhost systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 6 03:11:32 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files. Dec 6 03:11:32 localhost systemd[1]: systemd-udevd.service: Consumed 3.044s CPU time. Dec 6 03:11:32 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files... Dec 6 03:11:32 localhost systemd-journald[47810]: Journal started Dec 6 03:11:32 localhost systemd-journald[47810]: Runtime Journal (/run/log/journal/4b30904fc4748c16d0c72dbebcabab49) is 12.4M, max 314.7M, 302.3M free. Dec 6 03:11:32 localhost systemd[1]: Started Journal Service. 
Dec 6 03:11:32 localhost systemd-journald[47810]: Field hash table of /run/log/journal/4b30904fc4748c16d0c72dbebcabab49/system.journal has a fill level at 75.4 (251 of 333 items), suggesting rotation. Dec 6 03:11:32 localhost systemd-journald[47810]: /run/log/journal/4b30904fc4748c16d0c72dbebcabab49/system.journal: Journal header limits reached or header out-of-date, rotating. Dec 6 03:11:32 localhost rsyslogd[760]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Dec 6 03:11:32 localhost systemd-udevd[47819]: Using default interface naming scheme 'rhel-9.0'. Dec 6 03:11:32 localhost systemd[1]: Started Rule-based Manager for Device Events and Files. Dec 6 03:11:32 localhost rsyslogd[760]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Dec 6 03:11:32 localhost systemd[1]: Reloading. Dec 6 03:11:32 localhost systemd-rc-local-generator[48356]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 6 03:11:32 localhost systemd-sysv-generator[48359]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 6 03:11:33 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 6 03:11:33 localhost systemd[1]: Queuing reload/restart jobs for marked units… Dec 6 03:11:33 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully. Dec 6 03:11:33 localhost systemd[1]: Finished man-db-cache-update.service. Dec 6 03:11:33 localhost systemd[1]: man-db-cache-update.service: Consumed 1.288s CPU time. Dec 6 03:11:33 localhost systemd[1]: run-rc341a2c620854cb2a706901147bc79b1.service: Deactivated successfully. 
Dec 6 03:11:33 localhost systemd[1]: run-r691c2e9d44df4f228f6875fd4b471bbc.service: Deactivated successfully. Dec 6 03:11:35 localhost python3[48734]: ansible-sysctl Invoked with name=vm.unprivileged_userfaultfd reload=True state=present sysctl_file=/etc/sysctl.d/99-tripleo-postcopy.conf sysctl_set=True value=1 ignoreerrors=False Dec 6 03:11:35 localhost python3[48753]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-active ksm.service || systemctl is-enabled ksm.service _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 03:11:36 localhost python3[48771]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None Dec 6 03:11:36 localhost python3[48771]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 --format json Dec 6 03:11:36 localhost python3[48771]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 -q --tls-verify=false Dec 6 03:11:43 localhost podman[48783]: 2025-12-06 08:11:36.723103586 +0000 UTC m=+0.045978759 image pull registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 Dec 6 03:11:43 localhost python3[48771]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect bac901955dcf7a32a493c6ef724c092009bbc18467858aa8c55e916b8c2b2b8f --format json Dec 6 03:11:44 localhost 
python3[48884]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None Dec 6 03:11:44 localhost python3[48884]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 --format json Dec 6 03:11:44 localhost python3[48884]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 -q --tls-verify=false Dec 6 03:11:49 localhost sshd[48959]: main: sshd: ssh-rsa algorithm is disabled Dec 6 03:11:51 localhost podman[48896]: 2025-12-06 08:11:44.178582322 +0000 UTC m=+0.034199437 image pull registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 Dec 6 03:11:51 localhost python3[48884]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 44feaf8d87c1d40487578230316b622680576d805efdb45dfeea6aad464b41f1 --format json Dec 6 03:11:51 localhost podman[49099]: 2025-12-06 08:11:51.496384675 +0000 UTC m=+0.090054217 container exec ecefb99ff092b207f760d60e8fd97152c624df0e1016549024435ae641ee2e06 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548789, vendor=Red Hat, Inc., io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, 
io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, name=rhceph, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , RELEASE=main, distribution-scope=public, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, vcs-type=git, version=7, ceph=True, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, GIT_BRANCH=main)
Dec 6 03:11:51 localhost python3[49097]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Dec 6 03:11:51 localhost python3[49097]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 --format json
Dec 6 03:11:51 localhost python3[49097]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 -q --tls-verify=false
Dec 6 03:11:51 localhost podman[49099]: 2025-12-06 08:11:51.607282757 +0000 UTC m=+0.200952349 container exec_died ecefb99ff092b207f760d60e8fd97152c624df0e1016549024435ae641ee2e06
(image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548789, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, vendor=Red Hat, Inc., RELEASE=main, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, distribution-scope=public, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, release=1763362218, vcs-type=git, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph)
Dec 6 03:12:07 localhost podman[49129]: 2025-12-06 08:11:51.642183605 +0000 UTC m=+0.047915470 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Dec 6 03:12:07 localhost python3[49097]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 3a088c12511c977065fdc5f1594cba7b1a79f163578a6ffd0ac4a475b8e67938 --format json
Dec 6 03:12:08 localhost python3[49651]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None,
'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Dec 6 03:12:08 localhost python3[49651]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 --format json
Dec 6 03:12:08 localhost python3[49651]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 -q --tls-verify=false
Dec 6 03:12:21 localhost podman[49664]: 2025-12-06 08:12:08.424226229 +0000 UTC m=+0.042978435 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1
Dec 6 03:12:21 localhost python3[49651]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 514d439186251360cf734cbc6d4a44c834664891872edf3798a653dfaacf10c0 --format json
Dec 6 03:12:22 localhost python3[49767]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Dec 6 03:12:22 localhost python3[49767]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 --format json
Dec 6 03:12:22 localhost python3[49767]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 -q --tls-verify=false
Dec 6 03:12:29 localhost podman[49780]: 2025-12-06 08:12:22.399588726 +0000 UTC
m=+0.044637355 image pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1
Dec 6 03:12:29 localhost python3[49767]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect a9dd7a2ac6f35cb086249f87f74e2f8e74e7e2ad5141ce2228263be6faedce26 --format json
Dec 6 03:12:30 localhost python3[49917]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Dec 6 03:12:30 localhost python3[49917]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1 --format json
Dec 6 03:12:30 localhost python3[49917]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1 -q --tls-verify=false
Dec 6 03:12:34 localhost podman[49931]: 2025-12-06 08:12:30.324909617 +0000 UTC m=+0.035824519 image pull registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1
Dec 6 03:12:34 localhost python3[49917]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 24976907b2c2553304119aba5731a800204d664feed24ca9eb7f2b4c7d81016b --format json
Dec 6 03:12:35 localhost python3[50009]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None}
push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Dec 6 03:12:35 localhost python3[50009]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 --format json
Dec 6 03:12:35 localhost python3[50009]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 -q --tls-verify=false
Dec 6 03:12:37 localhost podman[50022]: 2025-12-06 08:12:35.328120073 +0000 UTC m=+0.043086986 image pull registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1
Dec 6 03:12:37 localhost python3[50009]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 57163a7b21fdbb804a27897cb6e6052a5e5c7a339c45d663e80b52375a760dcf --format json
Dec 6 03:12:38 localhost python3[50098]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Dec 6 03:12:38 localhost python3[50098]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1 --format json
Dec 6 03:12:38 localhost python3[50098]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1 -q --tls-verify=false
Dec 6 03:12:40 localhost podman[50111]: 2025-12-06 08:12:38.13721956 +0000 UTC
m=+0.030071113 image pull registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1
Dec 6 03:12:40 localhost python3[50098]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 076d82a27d63c8328729ed27ceb4291585ae18d017befe6fe353df7aa11715ae --format json
Dec 6 03:12:40 localhost python3[50190]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Dec 6 03:12:40 localhost python3[50190]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1 --format json
Dec 6 03:12:40 localhost python3[50190]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1 -q --tls-verify=false
Dec 6 03:12:41 localhost sshd[50212]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 03:12:43 localhost podman[50202]: 2025-12-06 08:12:41.019283458 +0000 UTC m=+0.042646793 image pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1
Dec 6 03:12:43 localhost python3[50190]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect d0dbcb95546840a8d088df044347a7877ad5ea45a2ddba0578e9bb5de4ab0da5 --format json
Dec 6 03:12:44 localhost python3[50280]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman
build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Dec 6 03:12:44 localhost python3[50280]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 --format json
Dec 6 03:12:44 localhost python3[50280]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 -q --tls-verify=false
Dec 6 03:12:44 localhost sshd[50306]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 03:12:47 localhost podman[50293]: 2025-12-06 08:12:44.111993586 +0000 UTC m=+0.045438840 image pull registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1
Dec 6 03:12:47 localhost python3[50280]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect e6e981540e553415b2d6eda490d7683db07164af2e7a0af8245623900338a4d6 --format json
Dec 6 03:12:48 localhost python3[50382]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Dec 6 03:12:48 localhost python3[50382]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 --format json
Dec 6 03:12:48 localhost python3[50382]:
ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 -q --tls-verify=false
Dec 6 03:12:50 localhost podman[50396]: 2025-12-06 08:12:48.327865689 +0000 UTC m=+0.041551709 image pull registry.redhat.io/rhosp-rhel9/openstack-cron:17.1
Dec 6 03:12:50 localhost python3[50382]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 87ee88cbf01fb42e0b22747072843bcca6130a90eda4de6e74b3ccd847bb4040 --format json
Dec 6 03:12:51 localhost python3[50472]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_1 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 6 03:12:52 localhost ansible-async_wrapper.py[50644]: Invoked with 305873006874 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1765008772.1605537-83491-237608108090073/AnsiballZ_command.py _
Dec 6 03:12:52 localhost ansible-async_wrapper.py[50647]: Starting module and watcher
Dec 6 03:12:52 localhost ansible-async_wrapper.py[50647]: Start watching 50648 (3600)
Dec 6 03:12:52 localhost ansible-async_wrapper.py[50648]: Start module (50648)
Dec 6 03:12:52 localhost ansible-async_wrapper.py[50644]: Return async_wrapper task started.
Dec 6 03:12:53 localhost python3[50668]: ansible-ansible.legacy.async_status Invoked with jid=305873006874.50644 mode=status _async_dir=/tmp/.ansible_async
Dec 6 03:12:56 localhost puppet-user[50652]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Dec 6 03:12:56 localhost puppet-user[50652]: (file: /etc/puppet/hiera.yaml)
Dec 6 03:12:56 localhost puppet-user[50652]: Warning: Undefined variable '::deploy_config_name';
Dec 6 03:12:56 localhost puppet-user[50652]: (file & line not available)
Dec 6 03:12:56 localhost puppet-user[50652]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'.
See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Dec 6 03:12:56 localhost puppet-user[50652]: (file & line not available)
Dec 6 03:12:56 localhost puppet-user[50652]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, line: 89, column: 8)
Dec 6 03:12:56 localhost puppet-user[50652]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/packages.pp, line: 39, column: 69)
Dec 6 03:12:56 localhost puppet-user[50652]: Notice: Compiled catalog for np0005548789.localdomain in environment production in 0.12 seconds
Dec 6 03:12:56 localhost puppet-user[50652]: Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Exec[directory-create-etc-my.cnf.d]/returns: executed successfully
Dec 6 03:12:56 localhost puppet-user[50652]: Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/File[/etc/my.cnf.d/tripleo.cnf]/ensure: created
Dec 6 03:12:56 localhost puppet-user[50652]: Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Augeas[tripleo-mysql-client-conf]/returns: executed successfully
Dec 6 03:12:56 localhost puppet-user[50652]: Notice: Applied catalog in 0.06 seconds
Dec 6 03:12:56 localhost puppet-user[50652]: Application:
Dec 6 03:12:56 localhost puppet-user[50652]: Initial environment: production
Dec 6 03:12:56 localhost puppet-user[50652]: Converged environment: production
Dec 6 03:12:56 localhost puppet-user[50652]: Run mode: user
Dec 6 03:12:56 localhost puppet-user[50652]: Changes:
Dec 6 03:12:56 localhost puppet-user[50652]: Total: 3
Dec 6 03:12:56 localhost puppet-user[50652]: Events:
Dec 6 03:12:56 localhost puppet-user[50652]: Success: 3
Dec 6 03:12:56 localhost puppet-user[50652]: Total: 3
Dec 6 03:12:56 localhost puppet-user[50652]: Resources:
Dec 6 03:12:56 localhost puppet-user[50652]: Changed: 3
Dec 6 03:12:56 localhost puppet-user[50652]: Out of sync: 3
Dec 6 03:12:56 localhost
puppet-user[50652]: Total: 10
Dec 6 03:12:56 localhost puppet-user[50652]: Time:
Dec 6 03:12:56 localhost puppet-user[50652]: Schedule: 0.00
Dec 6 03:12:56 localhost puppet-user[50652]: Filebucket: 0.00
Dec 6 03:12:56 localhost puppet-user[50652]: File: 0.00
Dec 6 03:12:56 localhost puppet-user[50652]: Exec: 0.01
Dec 6 03:12:56 localhost puppet-user[50652]: Augeas: 0.02
Dec 6 03:12:56 localhost puppet-user[50652]: Transaction evaluation: 0.05
Dec 6 03:12:56 localhost puppet-user[50652]: Catalog application: 0.06
Dec 6 03:12:56 localhost puppet-user[50652]: Config retrieval: 0.16
Dec 6 03:12:56 localhost puppet-user[50652]: Last run: 1765008776
Dec 6 03:12:56 localhost puppet-user[50652]: Total: 0.06
Dec 6 03:12:56 localhost puppet-user[50652]: Version:
Dec 6 03:12:56 localhost puppet-user[50652]: Config: 1765008776
Dec 6 03:12:56 localhost puppet-user[50652]: Puppet: 7.10.0
Dec 6 03:12:57 localhost ansible-async_wrapper.py[50648]: Module complete (50648)
Dec 6 03:12:57 localhost ansible-async_wrapper.py[50647]: Done in kid B.
Dec 6 03:13:03 localhost python3[50871]: ansible-ansible.legacy.async_status Invoked with jid=305873006874.50644 mode=status _async_dir=/tmp/.ansible_async
Dec 6 03:13:04 localhost python3[50887]: ansible-file Invoked with path=/var/lib/container-puppet/puppetlabs state=directory setype=svirt_sandbox_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Dec 6 03:13:04 localhost python3[50903]: ansible-stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 6 03:13:05 localhost python3[50951]: ansible-ansible.legacy.stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 6 03:13:05 localhost python3[50994]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/container-puppet/puppetlabs/facter.conf setype=svirt_sandbox_file_t selevel=s0 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765008784.9808478-83700-189720324973468/source _original_basename=tmpq8fh5mc5 follow=False checksum=53908622cb869db5e2e2a68e737aa2ab1a872111 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None attributes=None
Dec 6 03:13:06 localhost python3[51024]: ansible-file Invoked with path=/opt/puppetlabs/facter state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None
owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 03:13:07 localhost python3[51127]: ansible-ansible.posix.synchronize Invoked with src=/opt/puppetlabs/ dest=/var/lib/container-puppet/puppetlabs/ _local_rsync_path=rsync _local_rsync_password=NOT_LOGGING_PARAMETER rsync_path=None delete=False _substitute_controller=False archive=True checksum=False compress=True existing_only=False dirs=False copy_links=False set_remote_user=True rsync_timeout=0 rsync_opts=[] ssh_connection_multiplexing=False partial=False verify_host=False mode=push dest_port=None private_key=None recursive=None links=None perms=None times=None owner=None group=None ssh_args=None link_dest=None
Dec 6 03:13:08 localhost python3[51146]: ansible-file Invoked with path=/var/lib/tripleo-config/container-puppet-config mode=448 recurse=True setype=container_file_t force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 6 03:13:08 localhost python3[51162]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=False puppet_config=/var/lib/container-puppet/container-puppet.json short_hostname=np0005548789 step=1 update_config_hash_only=False
Dec 6 03:13:09 localhost python3[51178]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 03:13:10 localhost python3[51194]: ansible-container_config_data Invoked with
config_path=/var/lib/tripleo-config/container-puppet-config/step_1 config_pattern=container-puppet-*.json config_overrides={} debug=True
Dec 6 03:13:10 localhost python3[51210]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec 6 03:13:11 localhost python3[51252]: ansible-tripleo_container_manage Invoked with config_id=tripleo_puppet_step1 config_dir=/var/lib/tripleo-config/container-puppet-config/step_1 config_patterns=container-puppet-*.json config_overrides={} concurrency=6 log_base_path=/var/log/containers/stdouts debug=False
Dec 6 03:13:12 localhost podman[51402]: 2025-12-06 08:13:12.231674147 +0000 UTC m=+0.085827322 container create f735a830357edaf91b07470ca762854575514a50cfbb9cf4aaacf994a5cb9ec5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548789', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro',
'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, config_id=tripleo_puppet_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.openshift.expose-services=, build-date=2025-11-18T22:49:46Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, version=17.1.12, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=container-puppet-metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, url=https://www.redhat.com, name=rhosp17/openstack-qdrouterd, distribution-scope=public, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container)
Dec 6 03:13:12 localhost podman[51447]: 2025-12-06 08:13:12.247333582 +0000 UTC m=+0.055533512 container create 9506016e406df32ac4299ea465041d6a128e0f3dc9ea98102586360564954bfa (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, io.buildah.version=1.41.4, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-cron, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'security_opt':
['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548789', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, version=17.1.12, distribution-scope=public, architecture=x86_64, url=https://www.redhat.com, config_id=tripleo_puppet_step1, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, container_name=container-puppet-crond, io.openshift.expose-services=, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, 
batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team)
Dec 6 03:13:12 localhost systemd[1]: Started libpod-conmon-f735a830357edaf91b07470ca762854575514a50cfbb9cf4aaacf994a5cb9ec5.scope.
Dec 6 03:13:12 localhost podman[51436]: 2025-12-06 08:13:12.267363832 +0000 UTC m=+0.092504828 container create 4704c7e23d4793bb8ec51f342b561857983350f0042af9af009532473579e109 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548789', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro',
'/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:44:13Z, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, release=1761123044, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_id=tripleo_puppet_step1, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, version=17.1.12, batch=17.1_20251118.1, io.buildah.version=1.41.4, name=rhosp17/openstack-iscsid, container_name=container-puppet-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git)
Dec 6 03:13:12 localhost systemd[1]: Started libpod-conmon-9506016e406df32ac4299ea465041d6a128e0f3dc9ea98102586360564954bfa.scope.
Dec 6 03:13:12 localhost systemd[1]: Started libcrun container.
Dec 6 03:13:12 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93af1cca3c5bc2fd6a62da9c5b24ae5188daa60c95e07f180cd96e6bf14ae456/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff)
Dec 6 03:13:12 localhost systemd[1]: Started libcrun container.
Dec 6 03:13:12 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62c1e5bed2cbe019aeb004cf1ae8b59e235260475c84680568f7ac998ec16abe/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff) Dec 6 03:13:12 localhost podman[51402]: 2025-12-06 08:13:12.194273757 +0000 UTC m=+0.048426952 image pull registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 Dec 6 03:13:12 localhost podman[51402]: 2025-12-06 08:13:12.296895578 +0000 UTC m=+0.151048753 container init f735a830357edaf91b07470ca762854575514a50cfbb9cf4aaacf994a5cb9ec5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, io.buildah.version=1.41.4, distribution-scope=public, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, container_name=container-puppet-metrics_qdr, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548789', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_puppet_step1, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, release=1761123044, description=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 6 03:13:12 localhost systemd[1]: Started libpod-conmon-4704c7e23d4793bb8ec51f342b561857983350f0042af9af009532473579e109.scope. 
Dec 6 03:13:12 localhost podman[51402]: 2025-12-06 08:13:12.304868976 +0000 UTC m=+0.159022151 container start f735a830357edaf91b07470ca762854575514a50cfbb9cf4aaacf994a5cb9ec5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, build-date=2025-11-18T22:49:46Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, container_name=container-puppet-metrics_qdr, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.41.4, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548789', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', 
'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, architecture=x86_64, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_puppet_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., vcs-type=git, release=1761123044) Dec 6 03:13:12 localhost podman[51402]: 2025-12-06 08:13:12.305182115 +0000 UTC m=+0.159335310 container attach f735a830357edaf91b07470ca762854575514a50cfbb9cf4aaacf994a5cb9ec5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, managed_by=tripleo_ansible, release=1761123044, vendor=Red Hat, Inc., config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548789', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, build-date=2025-11-18T22:49:46Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_puppet_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=container-puppet-metrics_qdr, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.openshift.expose-services=, distribution-scope=public, version=17.1.12) Dec 6 03:13:12 localhost systemd[1]: Started libcrun container. 
Dec 6 03:13:12 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/926f9ae59ae4620dbf9c3ec16ecc12b1f2dc289864a1e6ab337d5dcdccd2a7fe/merged/tmp/iscsi.host supports timestamps until 2038 (0x7fffffff) Dec 6 03:13:12 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/926f9ae59ae4620dbf9c3ec16ecc12b1f2dc289864a1e6ab337d5dcdccd2a7fe/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff) Dec 6 03:13:12 localhost podman[51436]: 2025-12-06 08:13:12.321295685 +0000 UTC m=+0.146436681 container init 4704c7e23d4793bb8ec51f342b561857983350f0042af9af009532473579e109 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548789', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', 
'/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, config_id=tripleo_puppet_step1, batch=17.1_20251118.1, vendor=Red Hat, Inc., container_name=container-puppet-iscsid, name=rhosp17/openstack-iscsid, tcib_managed=true, io.openshift.expose-services=, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, build-date=2025-11-18T23:44:13Z, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, managed_by=tripleo_ansible, vcs-type=git, version=17.1.12, architecture=x86_64, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d) Dec 6 03:13:12 localhost podman[51436]: 2025-12-06 08:13:12.222830263 +0000 UTC m=+0.047971269 image pull registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 Dec 6 03:13:12 localhost podman[51447]: 2025-12-06 08:13:12.223223015 +0000 UTC m=+0.031422965 image pull registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 Dec 6 03:13:12 localhost podman[51463]: 2025-12-06 08:13:12.2414888 +0000 UTC m=+0.032214949 image pull registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1 Dec 6 03:13:12 localhost podman[51452]: 2025-12-06 08:13:12.238015583 +0000 UTC m=+0.037190624 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Dec 6 03:13:13 localhost podman[51436]: 2025-12-06 
08:13:13.345595715 +0000 UTC m=+1.170736701 container start 4704c7e23d4793bb8ec51f342b561857983350f0042af9af009532473579e109 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, architecture=x86_64, config_id=tripleo_puppet_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:44:13Z, distribution-scope=public, container_name=container-puppet-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, release=1761123044, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548789', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, managed_by=tripleo_ansible, vendor=Red Hat, Inc., version=17.1.12, batch=17.1_20251118.1, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d) Dec 6 03:13:13 localhost podman[51436]: 2025-12-06 08:13:13.346999049 +0000 UTC m=+1.172140125 container attach 4704c7e23d4793bb8ec51f342b561857983350f0042af9af009532473579e109 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, release=1761123044, config_id=tripleo_puppet_step1, architecture=x86_64, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548789', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, build-date=2025-11-18T23:44:13Z, container_name=container-puppet-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-iscsid-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.12, io.buildah.version=1.41.4, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.openshift.expose-services=) Dec 6 03:13:13 localhost podman[51452]: 2025-12-06 08:13:13.389022752 +0000 UTC m=+1.188197803 container create 329949042ceb45cabbb88c012af20b9f8b51f1de61f81364a852037377f8c8c1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_puppet_step1, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548789', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, container_name=container-puppet-nova_libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1761123044, io.openshift.expose-services=, batch=17.1_20251118.1, build-date=2025-11-19T00:35:22Z, vcs-type=git) Dec 6 03:13:13 localhost podman[51463]: 2025-12-06 08:13:13.452801509 +0000 UTC m=+1.243527648 container create b567bdc6731749e2c2df0cfc659cd5d0e02971a859c8934938817051a1ec8b8c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=container-puppet-collectd, batch=17.1_20251118.1, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, vendor=Red Hat, Inc., release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-collectd, distribution-scope=public, tcib_managed=true, name=rhosp17/openstack-collectd, io.openshift.expose-services=, version=17.1.12, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, container_name=container-puppet-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:51:28Z, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, config_id=tripleo_puppet_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548789', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 6 
03:13:13 localhost systemd[1]: Started libpod-conmon-329949042ceb45cabbb88c012af20b9f8b51f1de61f81364a852037377f8c8c1.scope. Dec 6 03:13:13 localhost systemd[1]: Started libcrun container. Dec 6 03:13:13 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7114dad9ca5bd35640cc71d1bc50a0dfc385dbab46b008e450dae4492651614e/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff) Dec 6 03:13:13 localhost podman[51447]: 2025-12-06 08:13:13.483397238 +0000 UTC m=+1.291597178 container init 9506016e406df32ac4299ea465041d6a128e0f3dc9ea98102586360564954bfa (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, container_name=container-puppet-crond, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, url=https://www.redhat.com, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548789', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, architecture=x86_64, config_id=tripleo_puppet_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:32Z, io.buildah.version=1.41.4, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12) Dec 6 03:13:13 localhost podman[51452]: 2025-12-06 08:13:13.520349923 +0000 UTC m=+1.319524954 container init 329949042ceb45cabbb88c012af20b9f8b51f1de61f81364a852037377f8c8c1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=container-puppet-nova_libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, architecture=x86_64, build-date=2025-11-19T00:35:22Z, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, distribution-scope=public, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, com.redhat.component=openstack-nova-libvirt-container, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548789', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', 
'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, batch=17.1_20251118.1, release=1761123044, io.openshift.expose-services=, name=rhosp17/openstack-nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_puppet_step1, summary=Red Hat OpenStack Platform 17.1 nova-libvirt) Dec 6 03:13:13 localhost podman[51452]: 2025-12-06 08:13:13.598490835 +0000 UTC m=+1.397665906 container start 329949042ceb45cabbb88c012af20b9f8b51f1de61f81364a852037377f8c8c1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, name=rhosp17/openstack-nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_puppet_step1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548789', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how 
to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, build-date=2025-11-19T00:35:22Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=container-puppet-nova_libvirt, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, tcib_managed=true, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20251118.1, vcs-type=git, io.buildah.version=1.41.4) Dec 6 03:13:13 localhost podman[51452]: 2025-12-06 08:13:13.598955409 +0000 UTC m=+1.398130460 container attach 329949042ceb45cabbb88c012af20b9f8b51f1de61f81364a852037377f8c8c1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, name=rhosp17/openstack-nova-libvirt, tcib_managed=true, build-date=2025-11-19T00:35:22Z, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, batch=17.1_20251118.1, config_id=tripleo_puppet_step1, container_name=container-puppet-nova_libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548789', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 
'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, 
konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, release=1761123044) Dec 6 03:13:13 localhost systemd[1]: Started libpod-conmon-b567bdc6731749e2c2df0cfc659cd5d0e02971a859c8934938817051a1ec8b8c.scope. Dec 6 03:13:13 localhost podman[51447]: 2025-12-06 08:13:13.667858035 +0000 UTC m=+1.476057975 container start 9506016e406df32ac4299ea465041d6a128e0f3dc9ea98102586360564954bfa (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, release=1761123044, config_id=tripleo_puppet_step1, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, container_name=container-puppet-crond, managed_by=tripleo_ansible, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, name=rhosp17/openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548789', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', 
'/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, build-date=2025-11-18T22:49:32Z, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, version=17.1.12, url=https://www.redhat.com) Dec 6 03:13:13 localhost podman[51447]: 2025-12-06 08:13:13.668066592 +0000 UTC m=+1.476266532 container attach 9506016e406df32ac4299ea465041d6a128e0f3dc9ea98102586360564954bfa (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, managed_by=tripleo_ansible, release=1761123044, maintainer=OpenStack TripleO Team, tcib_managed=true, url=https://www.redhat.com, io.buildah.version=1.41.4, io.openshift.expose-services=, config_id=tripleo_puppet_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548789', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include 
::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, container_name=container-puppet-crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, version=17.1.12, build-date=2025-11-18T22:49:32Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, vcs-type=git) Dec 6 03:13:13 localhost systemd[1]: Started libcrun container. 
Dec 6 03:13:13 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d82191509656bbf6f64f1f50570f9d09f17aadb036e941dc9fdbfc1b9557da8/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff) Dec 6 03:13:13 localhost podman[51463]: 2025-12-06 08:13:13.693272692 +0000 UTC m=+1.483998861 container init b567bdc6731749e2c2df0cfc659cd5d0e02971a859c8934938817051a1ec8b8c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=container-puppet-collectd, build-date=2025-11-18T22:51:28Z, url=https://www.redhat.com, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=container-puppet-collectd, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp17/openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548789', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, architecture=x86_64, managed_by=tripleo_ansible, config_id=tripleo_puppet_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, vendor=Red Hat, Inc., version=17.1.12, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 6 03:13:13 localhost podman[51463]: 2025-12-06 08:13:13.705245014 +0000 UTC m=+1.495971143 container start b567bdc6731749e2c2df0cfc659cd5d0e02971a859c8934938817051a1ec8b8c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=container-puppet-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, com.redhat.component=openstack-collectd-container, architecture=x86_64, container_name=container-puppet-collectd, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, config_id=tripleo_puppet_step1, managed_by=tripleo_ansible, io.buildah.version=1.41.4, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
build-date=2025-11-18T22:51:28Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, name=rhosp17/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548789', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true) Dec 6 03:13:13 localhost podman[51463]: 
2025-12-06 08:13:13.705685108 +0000 UTC m=+1.496411327 container attach b567bdc6731749e2c2df0cfc659cd5d0e02971a859c8934938817051a1ec8b8c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=container-puppet-collectd, release=1761123044, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, architecture=x86_64, build-date=2025-11-18T22:51:28Z, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548789', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-collectd-container, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, vcs-type=git, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, distribution-scope=public, config_id=tripleo_puppet_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, version=17.1.12, maintainer=OpenStack TripleO Team, container_name=container-puppet-collectd, name=rhosp17/openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4) Dec 6 03:13:14 localhost podman[51327]: 2025-12-06 08:13:12.105943549 +0000 UTC m=+0.037577816 image pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1 Dec 6 03:13:14 localhost podman[51872]: 2025-12-06 08:13:14.495894533 +0000 UTC m=+0.066505633 container create e6a13c3a9cefd2fa1b521b34f74f31f76fdd0b457acb3e515d649fdb95bdeb0c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, name=rhosp17/openstack-ceilometer-central, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central, config_id=tripleo_puppet_step1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-central, com.redhat.component=openstack-ceilometer-central-container, vendor=Red Hat, Inc., tcib_managed=true, container_name=container-puppet-ceilometer, summary=Red Hat OpenStack Platform 17.1 ceilometer-central, architecture=x86_64, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-central, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-central, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548789', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, build-date=2025-11-19T00:11:59Z, managed_by=tripleo_ansible, release=1761123044, vcs-type=git, batch=17.1_20251118.1, version=17.1.12, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676) Dec 6 
03:13:14 localhost systemd[1]: Started libpod-conmon-e6a13c3a9cefd2fa1b521b34f74f31f76fdd0b457acb3e515d649fdb95bdeb0c.scope. Dec 6 03:13:14 localhost systemd[1]: Started libcrun container. Dec 6 03:13:14 localhost podman[51872]: 2025-12-06 08:13:14.462516648 +0000 UTC m=+0.033127768 image pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1 Dec 6 03:13:14 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c8811abddbaacb6f32387bcfbf857078338be762178d55f30b77fc76cc9dc19/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff) Dec 6 03:13:14 localhost podman[51872]: 2025-12-06 08:13:14.578463891 +0000 UTC m=+0.149075021 container init e6a13c3a9cefd2fa1b521b34f74f31f76fdd0b457acb3e515d649fdb95bdeb0c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, io.openshift.expose-services=, managed_by=tripleo_ansible, config_id=tripleo_puppet_step1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, container_name=container-puppet-ceilometer, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548789', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-central, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-central-container, distribution-scope=public, vendor=Red Hat, Inc., release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-central, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-central, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central, build-date=2025-11-19T00:11:59Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-central, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-central, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 6 03:13:14 localhost podman[51872]: 2025-12-06 08:13:14.598440132 +0000 UTC m=+0.169051232 container start e6a13c3a9cefd2fa1b521b34f74f31f76fdd0b457acb3e515d649fdb95bdeb0c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, batch=17.1_20251118.1, distribution-scope=public, managed_by=tripleo_ansible, 
config_id=tripleo_puppet_step1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548789', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, tcib_managed=true, architecture=x86_64, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-central, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:11:59Z, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-central, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-central, io.buildah.version=1.41.4, container_name=container-puppet-ceilometer, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-central-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central, description=Red Hat OpenStack Platform 17.1 ceilometer-central, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-central) Dec 6 03:13:14 localhost podman[51872]: 2025-12-06 08:13:14.598639868 +0000 UTC m=+0.169250978 container attach e6a13c3a9cefd2fa1b521b34f74f31f76fdd0b457acb3e515d649fdb95bdeb0c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-central, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548789', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, build-date=2025-11-19T00:11:59Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-central, container_name=container-puppet-ceilometer, config_id=tripleo_puppet_step1, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-central-container, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-central, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-central, summary=Red Hat OpenStack Platform 17.1 ceilometer-central, vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, architecture=x86_64, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.41.4, release=1761123044) Dec 6 03:13:15 localhost puppet-user[51785]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. 
It should be converted to version 5 Dec 6 03:13:15 localhost puppet-user[51785]: (file: /etc/puppet/hiera.yaml) Dec 6 03:13:15 localhost puppet-user[51785]: Warning: Undefined variable '::deploy_config_name'; Dec 6 03:13:15 localhost puppet-user[51785]: (file & line not available) Dec 6 03:13:15 localhost puppet-user[51785]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Dec 6 03:13:15 localhost puppet-user[51785]: (file & line not available) Dec 6 03:13:15 localhost puppet-user[51785]: Notice: Accepting previously invalid value for target type 'Integer' Dec 6 03:13:15 localhost systemd[1]: tmp-crun.HANlUp.mount: Deactivated successfully. Dec 6 03:13:15 localhost puppet-user[51786]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5 Dec 6 03:13:15 localhost puppet-user[51786]: (file: /etc/puppet/hiera.yaml) Dec 6 03:13:15 localhost puppet-user[51786]: Warning: Undefined variable '::deploy_config_name'; Dec 6 03:13:15 localhost puppet-user[51786]: (file & line not available) Dec 6 03:13:15 localhost puppet-user[51785]: Notice: Compiled catalog for np0005548789.localdomain in environment production in 0.14 seconds Dec 6 03:13:15 localhost puppet-user[51786]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. 
See https://puppet.com/docs/puppet/7.10/deprecated_language.html Dec 6 03:13:15 localhost puppet-user[51786]: (file & line not available) Dec 6 03:13:15 localhost puppet-user[51785]: Notice: /Stage[main]/Qdr::Config/File[/var/lib/qdrouterd]/owner: owner changed 'qdrouterd' to 'root' Dec 6 03:13:15 localhost puppet-user[51785]: Notice: /Stage[main]/Qdr::Config/File[/var/lib/qdrouterd]/group: group changed 'qdrouterd' to 'root' Dec 6 03:13:15 localhost puppet-user[51785]: Notice: /Stage[main]/Qdr::Config/File[/var/lib/qdrouterd]/mode: mode changed '0700' to '0755' Dec 6 03:13:15 localhost puppet-user[51785]: Notice: /Stage[main]/Qdr::Config/File[/etc/qpid-dispatch/ssl]/ensure: created Dec 6 03:13:15 localhost puppet-user[51785]: Notice: /Stage[main]/Qdr::Config/File[qdrouterd.conf]/content: content changed '{sha256}89e10d8896247f992c5f0baf027c25a8ca5d0441be46d8859d9db2067ea74cd3' to '{sha256}34a9d8752d7fd9af34f20c785b68df439604d3fa295519168b060d10c3f23b42' Dec 6 03:13:15 localhost puppet-user[51785]: Notice: /Stage[main]/Qdr::Config/File[/var/log/qdrouterd]/ensure: created Dec 6 03:13:15 localhost puppet-user[51785]: Notice: /Stage[main]/Qdr::Config/File[/var/log/qdrouterd/metrics_qdr.log]/ensure: created Dec 6 03:13:15 localhost puppet-user[51785]: Notice: Applied catalog in 0.03 seconds Dec 6 03:13:15 localhost puppet-user[51785]: Application: Dec 6 03:13:15 localhost puppet-user[51785]: Initial environment: production Dec 6 03:13:15 localhost puppet-user[51785]: Converged environment: production Dec 6 03:13:15 localhost puppet-user[51785]: Run mode: user Dec 6 03:13:15 localhost puppet-user[51785]: Changes: Dec 6 03:13:15 localhost puppet-user[51785]: Total: 7 Dec 6 03:13:15 localhost puppet-user[51814]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. 
It should be converted to version 5 Dec 6 03:13:15 localhost puppet-user[51814]: (file: /etc/puppet/hiera.yaml) Dec 6 03:13:15 localhost puppet-user[51814]: Warning: Undefined variable '::deploy_config_name'; Dec 6 03:13:15 localhost puppet-user[51814]: (file & line not available) Dec 6 03:13:15 localhost puppet-user[51785]: Events: Dec 6 03:13:15 localhost puppet-user[51785]: Success: 7 Dec 6 03:13:15 localhost puppet-user[51785]: Total: 7 Dec 6 03:13:15 localhost puppet-user[51785]: Resources: Dec 6 03:13:15 localhost puppet-user[51785]: Skipped: 13 Dec 6 03:13:15 localhost puppet-user[51785]: Changed: 5 Dec 6 03:13:15 localhost puppet-user[51785]: Out of sync: 5 Dec 6 03:13:15 localhost puppet-user[51785]: Total: 20 Dec 6 03:13:15 localhost puppet-user[51785]: Time: Dec 6 03:13:15 localhost puppet-user[51785]: File: 0.01 Dec 6 03:13:15 localhost puppet-user[51785]: Transaction evaluation: 0.02 Dec 6 03:13:15 localhost puppet-user[51785]: Catalog application: 0.03 Dec 6 03:13:15 localhost puppet-user[51785]: Config retrieval: 0.17 Dec 6 03:13:15 localhost puppet-user[51785]: Last run: 1765008795 Dec 6 03:13:15 localhost puppet-user[51785]: Total: 0.03 Dec 6 03:13:15 localhost puppet-user[51785]: Version: Dec 6 03:13:15 localhost puppet-user[51785]: Config: 1765008795 Dec 6 03:13:15 localhost puppet-user[51785]: Puppet: 7.10.0 Dec 6 03:13:15 localhost puppet-user[51786]: Notice: Compiled catalog for np0005548789.localdomain in environment production in 0.10 seconds Dec 6 03:13:15 localhost puppet-user[51814]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. 
See https://puppet.com/docs/puppet/7.10/deprecated_language.html Dec 6 03:13:15 localhost puppet-user[51814]: (file & line not available) Dec 6 03:13:15 localhost ovs-vsctl[52105]: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory) Dec 6 03:13:15 localhost puppet-user[51814]: Notice: Compiled catalog for np0005548789.localdomain in environment production in 0.08 seconds Dec 6 03:13:15 localhost puppet-user[51786]: Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/Exec[reset-iscsi-initiator-name]/returns: executed successfully Dec 6 03:13:15 localhost puppet-user[51786]: Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/File[/etc/iscsi/.initiator_reset]/ensure: created Dec 6 03:13:15 localhost puppet-user[51814]: Notice: /Stage[main]/Tripleo::Profile::Base::Logging::Logrotate/File[/etc/logrotate-crond.conf]/ensure: defined content as '{sha256}1c3202f58bd2ae16cb31badcbb7f0d4e6697157b987d1887736ad96bb73d70b0' Dec 6 03:13:15 localhost puppet-user[51814]: Notice: /Stage[main]/Tripleo::Profile::Base::Logging::Logrotate/Cron[logrotate-crond]/ensure: created Dec 6 03:13:15 localhost puppet-user[51786]: Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/Exec[sync-iqn-to-host]/returns: executed successfully Dec 6 03:13:15 localhost puppet-user[51814]: Notice: Applied catalog in 0.06 seconds Dec 6 03:13:15 localhost puppet-user[51814]: Application: Dec 6 03:13:15 localhost puppet-user[51814]: Initial environment: production Dec 6 03:13:15 localhost puppet-user[51814]: Converged environment: production Dec 6 03:13:15 localhost puppet-user[51814]: Run mode: user Dec 6 03:13:15 localhost puppet-user[51814]: Changes: Dec 6 03:13:15 localhost puppet-user[51814]: Total: 2 Dec 6 03:13:15 localhost puppet-user[51814]: Events: Dec 6 03:13:15 localhost puppet-user[51814]: Success: 2 Dec 6 03:13:15 localhost puppet-user[51814]: Total: 2 Dec 6 03:13:15 localhost puppet-user[51814]: Resources: Dec 6 03:13:15 localhost 
puppet-user[51814]: Changed: 2 Dec 6 03:13:15 localhost puppet-user[51814]: Out of sync: 2 Dec 6 03:13:15 localhost puppet-user[51814]: Skipped: 7 Dec 6 03:13:15 localhost puppet-user[51814]: Total: 9 Dec 6 03:13:15 localhost puppet-user[51814]: Time: Dec 6 03:13:15 localhost puppet-user[51814]: File: 0.01 Dec 6 03:13:15 localhost puppet-user[51814]: Cron: 0.02 Dec 6 03:13:15 localhost puppet-user[51814]: Transaction evaluation: 0.05 Dec 6 03:13:15 localhost puppet-user[51814]: Catalog application: 0.06 Dec 6 03:13:15 localhost puppet-user[51814]: Config retrieval: 0.10 Dec 6 03:13:15 localhost puppet-user[51814]: Last run: 1765008795 Dec 6 03:13:15 localhost puppet-user[51814]: Total: 0.06 Dec 6 03:13:15 localhost puppet-user[51814]: Version: Dec 6 03:13:15 localhost puppet-user[51814]: Config: 1765008795 Dec 6 03:13:15 localhost puppet-user[51814]: Puppet: 7.10.0 Dec 6 03:13:15 localhost puppet-user[51838]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5 Dec 6 03:13:15 localhost puppet-user[51838]: (file: /etc/puppet/hiera.yaml) Dec 6 03:13:15 localhost puppet-user[51838]: Warning: Undefined variable '::deploy_config_name'; Dec 6 03:13:15 localhost puppet-user[51838]: (file & line not available) Dec 6 03:13:15 localhost puppet-user[51819]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5 Dec 6 03:13:15 localhost puppet-user[51819]: (file: /etc/puppet/hiera.yaml) Dec 6 03:13:15 localhost puppet-user[51819]: Warning: Undefined variable '::deploy_config_name'; Dec 6 03:13:15 localhost puppet-user[51819]: (file & line not available) Dec 6 03:13:15 localhost puppet-user[51838]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. 
See https://puppet.com/docs/puppet/7.10/deprecated_language.html Dec 6 03:13:15 localhost puppet-user[51838]: (file & line not available) Dec 6 03:13:15 localhost puppet-user[51819]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Dec 6 03:13:15 localhost puppet-user[51819]: (file & line not available) Dec 6 03:13:15 localhost systemd[1]: libpod-f735a830357edaf91b07470ca762854575514a50cfbb9cf4aaacf994a5cb9ec5.scope: Deactivated successfully. Dec 6 03:13:15 localhost systemd[1]: libpod-f735a830357edaf91b07470ca762854575514a50cfbb9cf4aaacf994a5cb9ec5.scope: Consumed 2.134s CPU time. Dec 6 03:13:15 localhost podman[52272]: 2025-12-06 08:13:15.649145461 +0000 UTC m=+0.040583409 container died f735a830357edaf91b07470ca762854575514a50cfbb9cf4aaacf994a5cb9ec5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, config_id=tripleo_puppet_step1, tcib_managed=true, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., release=1761123044, batch=17.1_20251118.1, container_name=container-puppet-metrics_qdr, io.buildah.version=1.41.4, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, version=17.1.12, 
com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548789', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vcs-type=git) Dec 6 03:13:15 localhost systemd[1]: tmp-crun.K53d5d.mount: Deactivated successfully. Dec 6 03:13:15 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f735a830357edaf91b07470ca762854575514a50cfbb9cf4aaacf994a5cb9ec5-userdata-shm.mount: Deactivated successfully. 
Dec 6 03:13:15 localhost podman[52272]: 2025-12-06 08:13:15.710544824 +0000 UTC m=+0.101982742 container cleanup f735a830357edaf91b07470ca762854575514a50cfbb9cf4aaacf994a5cb9ec5 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, architecture=x86_64, container_name=container-puppet-metrics_qdr, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., batch=17.1_20251118.1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548789', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', 
'/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_puppet_step1) Dec 6 03:13:15 localhost systemd[1]: libpod-conmon-f735a830357edaf91b07470ca762854575514a50cfbb9cf4aaacf994a5cb9ec5.scope: Deactivated successfully. Dec 6 03:13:15 localhost systemd[1]: libpod-9506016e406df32ac4299ea465041d6a128e0f3dc9ea98102586360564954bfa.scope: Deactivated successfully. Dec 6 03:13:15 localhost systemd[1]: libpod-9506016e406df32ac4299ea465041d6a128e0f3dc9ea98102586360564954bfa.scope: Consumed 2.041s CPU time. 
Dec 6 03:13:15 localhost python3[51252]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-metrics_qdr --conmon-pidfile /run/container-puppet-metrics_qdr.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005548789 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron --env NAME=metrics_qdr --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::metrics::qdr#012 --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-metrics_qdr --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548789', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file 
--log-opt path=/var/log/containers/stdouts/container-puppet-metrics_qdr.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 Dec 6 03:13:15 localhost podman[51447]: 2025-12-06 08:13:15.717432168 +0000 UTC m=+3.525632108 container died 9506016e406df32ac4299ea465041d6a128e0f3dc9ea98102586360564954bfa (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, architecture=x86_64, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_puppet_step1, url=https://www.redhat.com, container_name=container-puppet-crond, version=17.1.12, tcib_managed=true, io.buildah.version=1.41.4, 
config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548789', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, build-date=2025-11-18T22:49:32Z, vendor=Red Hat, Inc., distribution-scope=public, name=rhosp17/openstack-cron, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 6 03:13:15 localhost puppet-user[51819]: Warning: Scope(Class[Nova]): The os_region_name parameter is deprecated and will be removed \ Dec 6 03:13:15 localhost puppet-user[51819]: in a future release. 
Use nova::cinder::os_region_name instead Dec 6 03:13:15 localhost puppet-user[51819]: Warning: Scope(Class[Nova]): The catalog_info parameter is deprecated and will be removed \ Dec 6 03:13:15 localhost puppet-user[51819]: in a future release. Use nova::cinder::catalog_info instead Dec 6 03:13:15 localhost podman[52309]: 2025-12-06 08:13:15.789695818 +0000 UTC m=+0.065834812 container cleanup 9506016e406df32ac4299ea465041d6a128e0f3dc9ea98102586360564954bfa (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:32Z, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, vendor=Red Hat, Inc., config_id=tripleo_puppet_step1, vcs-type=git, release=1761123044, io.openshift.expose-services=, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, distribution-scope=public, architecture=x86_64, name=rhosp17/openstack-cron, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548789', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude 
tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, container_name=container-puppet-crond) Dec 6 03:13:15 localhost systemd[1]: libpod-conmon-9506016e406df32ac4299ea465041d6a128e0f3dc9ea98102586360564954bfa.scope: Deactivated successfully. 
Dec 6 03:13:15 localhost python3[51252]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-crond --conmon-pidfile /run/container-puppet-crond.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005548789 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron --env NAME=crond --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::logging::logrotate --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-crond --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548789', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt 
path=/var/log/containers/stdouts/container-puppet-crond.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 Dec 6 03:13:15 localhost puppet-user[51786]: Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/Augeas[chap_algs in /etc/iscsi/iscsid.conf]/returns: executed successfully Dec 6 03:13:15 localhost puppet-user[51786]: Notice: Applied catalog in 0.45 seconds Dec 6 03:13:15 localhost puppet-user[51786]: Application: Dec 6 03:13:15 localhost puppet-user[51786]: Initial environment: production Dec 6 03:13:15 localhost puppet-user[51786]: Converged environment: production Dec 6 03:13:15 localhost puppet-user[51786]: Run mode: user Dec 6 03:13:15 localhost puppet-user[51786]: Changes: Dec 6 03:13:15 localhost puppet-user[51786]: Total: 4 Dec 6 03:13:15 localhost puppet-user[51786]: Events: Dec 6 03:13:15 localhost puppet-user[51786]: Success: 4 Dec 6 03:13:15 localhost puppet-user[51786]: Total: 4 Dec 6 03:13:15 localhost puppet-user[51786]: Resources: Dec 6 03:13:15 localhost puppet-user[51786]: Changed: 4 Dec 6 03:13:15 localhost puppet-user[51786]: Out of sync: 4 Dec 6 03:13:15 
localhost puppet-user[51786]: Skipped: 8 Dec 6 03:13:15 localhost puppet-user[51786]: Total: 13 Dec 6 03:13:15 localhost puppet-user[51786]: Time: Dec 6 03:13:15 localhost puppet-user[51786]: File: 0.00 Dec 6 03:13:15 localhost puppet-user[51786]: Exec: 0.05 Dec 6 03:13:15 localhost puppet-user[51786]: Config retrieval: 0.13 Dec 6 03:13:15 localhost puppet-user[51786]: Augeas: 0.38 Dec 6 03:13:15 localhost puppet-user[51786]: Transaction evaluation: 0.44 Dec 6 03:13:15 localhost puppet-user[51786]: Catalog application: 0.45 Dec 6 03:13:15 localhost puppet-user[51786]: Last run: 1765008795 Dec 6 03:13:15 localhost puppet-user[51786]: Total: 0.45 Dec 6 03:13:15 localhost puppet-user[51786]: Version: Dec 6 03:13:15 localhost puppet-user[51786]: Config: 1765008795 Dec 6 03:13:15 localhost puppet-user[51786]: Puppet: 7.10.0 Dec 6 03:13:15 localhost puppet-user[51838]: Notice: Compiled catalog for np0005548789.localdomain in environment production in 0.35 seconds Dec 6 03:13:15 localhost puppet-user[51819]: Warning: Unknown variable: '::nova::compute::verify_glance_signatures'. (file: /etc/puppet/modules/nova/manifests/glance.pp, line: 62, column: 41) Dec 6 03:13:15 localhost puppet-user[51819]: Warning: Unknown variable: '::nova::compute::libvirt::remove_unused_base_images'. (file: /etc/puppet/modules/nova/manifests/compute/image_cache.pp, line: 44, column: 5) Dec 6 03:13:15 localhost puppet-user[51819]: Warning: Unknown variable: '::nova::compute::libvirt::remove_unused_original_minimum_age_seconds'. (file: /etc/puppet/modules/nova/manifests/compute/image_cache.pp, line: 48, column: 5) Dec 6 03:13:15 localhost puppet-user[51819]: Warning: Unknown variable: '::nova::compute::libvirt::remove_unused_resized_minimum_age_seconds'. 
(file: /etc/puppet/modules/nova/manifests/compute/image_cache.pp, line: 52, column: 5) Dec 6 03:13:16 localhost puppet-user[51838]: Notice: /Stage[main]/Collectd::Config/File[collectd.conf]/content: content changed '{sha256}aea388a73ebafc7e07a81ddb930a91099211f660eee55fbf92c13007a77501e5' to '{sha256}2523d01ee9c3022c0e9f61d896b1474a168e18472aee141cc278e69fe13f41c1' Dec 6 03:13:16 localhost puppet-user[51838]: Notice: /Stage[main]/Collectd::Config/File[collectd.conf]/owner: owner changed 'collectd' to 'root' Dec 6 03:13:16 localhost puppet-user[51838]: Notice: /Stage[main]/Collectd::Config/File[collectd.conf]/group: group changed 'collectd' to 'root' Dec 6 03:13:16 localhost puppet-user[51819]: Warning: Scope(Class[Tripleo::Profile::Base::Nova::Compute]): The keymgr_backend parameter has been deprecated Dec 6 03:13:16 localhost puppet-user[51838]: Notice: /Stage[main]/Collectd::Config/File[collectd.conf]/mode: mode changed '0644' to '0640' Dec 6 03:13:16 localhost puppet-user[51819]: Warning: Scope(Class[Nova::Compute]): vcpu_pin_set is deprecated, instead use cpu_dedicated_set or cpu_shared_set. Dec 6 03:13:16 localhost puppet-user[51819]: Warning: Scope(Class[Nova::Compute]): verify_glance_signatures is deprecated. 
Use the same parameter in nova::glance Dec 6 03:13:16 localhost puppet-user[51838]: Notice: /Stage[main]/Collectd::Config/File[collectd.d]/owner: owner changed 'collectd' to 'root' Dec 6 03:13:16 localhost puppet-user[51838]: Notice: /Stage[main]/Collectd::Config/File[collectd.d]/group: group changed 'collectd' to 'root' Dec 6 03:13:16 localhost puppet-user[51838]: Notice: /Stage[main]/Collectd::Config/File[collectd.d]/mode: mode changed '0755' to '0750' Dec 6 03:13:16 localhost puppet-user[51838]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/90-default-plugins-cpu.conf]/ensure: removed Dec 6 03:13:16 localhost puppet-user[51838]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/90-default-plugins-interface.conf]/ensure: removed Dec 6 03:13:16 localhost puppet-user[51838]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/90-default-plugins-load.conf]/ensure: removed Dec 6 03:13:16 localhost puppet-user[51838]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/90-default-plugins-memory.conf]/ensure: removed Dec 6 03:13:16 localhost puppet-user[51838]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/90-default-plugins-syslog.conf]/ensure: removed Dec 6 03:13:16 localhost puppet-user[51838]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/apache.conf]/ensure: removed Dec 6 03:13:16 localhost puppet-user[51838]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/dns.conf]/ensure: removed Dec 6 03:13:16 localhost puppet-user[51838]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/ipmi.conf]/ensure: removed Dec 6 03:13:16 localhost puppet-user[51838]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/mcelog.conf]/ensure: removed Dec 6 03:13:16 localhost puppet-user[51838]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/mysql.conf]/ensure: removed Dec 6 03:13:16 localhost puppet-user[51838]: Notice: 
/Stage[main]/Collectd::Config/File[/etc/collectd.d/ovs-events.conf]/ensure: removed Dec 6 03:13:16 localhost puppet-user[51838]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/ovs-stats.conf]/ensure: removed Dec 6 03:13:16 localhost puppet-user[51838]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/ping.conf]/ensure: removed Dec 6 03:13:16 localhost puppet-user[51838]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/pmu.conf]/ensure: removed Dec 6 03:13:16 localhost puppet-user[51838]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/rdt.conf]/ensure: removed Dec 6 03:13:16 localhost puppet-user[51838]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/sensors.conf]/ensure: removed Dec 6 03:13:16 localhost puppet-user[51838]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/snmp.conf]/ensure: removed Dec 6 03:13:16 localhost puppet-user[51838]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/write_prometheus.conf]/ensure: removed Dec 6 03:13:16 localhost puppet-user[51838]: Notice: /Stage[main]/Collectd::Plugin::Python/File[/usr/lib/python3.9/site-packages]/mode: mode changed '0755' to '0750' Dec 6 03:13:16 localhost puppet-user[51838]: Notice: /Stage[main]/Collectd::Plugin::Python/Collectd::Plugin[python]/File[python.load]/ensure: defined content as '{sha256}0163924a0099dd43fe39cb85e836df147fd2cfee8197dc6866d3c384539eb6ee' Dec 6 03:13:16 localhost puppet-user[51838]: Notice: /Stage[main]/Collectd::Plugin::Python/Concat[/etc/collectd.d/python-config.conf]/File[/etc/collectd.d/python-config.conf]/ensure: defined content as '{sha256}2e5fb20e60b30f84687fc456a37fc62451000d2d85f5bbc1b3fca3a5eac9deeb' Dec 6 03:13:16 localhost podman[52430]: 2025-12-06 08:13:16.07171081 +0000 UTC m=+0.058508095 container create ae8b38ba4b5ab4a99a42be92184928140343d5747d40ab542a9394bab47afa5c (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, 
container_name=container-puppet-rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, batch=17.1_20251118.1, config_id=tripleo_puppet_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548789', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vendor=Red Hat, Inc., 
com.redhat.component=openstack-rsyslog-container, io.buildah.version=1.41.4, name=rhosp17/openstack-rsyslog, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 rsyslog, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, architecture=x86_64, tcib_managed=true, io.openshift.expose-services=, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:49Z) Dec 6 03:13:16 localhost puppet-user[51838]: Notice: /Stage[main]/Collectd::Plugin::Logfile/Collectd::Plugin[logfile]/File[logfile.load]/ensure: defined content as '{sha256}07bbda08ef9b824089500bdc6ac5a86e7d1ef2ae3ed4ed423c0559fe6361e5af' Dec 6 03:13:16 localhost puppet-user[51838]: Notice: /Stage[main]/Collectd::Plugin::Amqp1/Collectd::Plugin[amqp1]/File[amqp1.load]/ensure: defined content as '{sha256}0d4e701b7b2398bbf396579a0713d46d3c496c79edc52f2e260456f359c9a46c' Dec 6 03:13:16 localhost puppet-user[51838]: Notice: /Stage[main]/Collectd::Plugin::Ceph/Collectd::Plugin[ceph]/File[ceph.load]/ensure: defined content as '{sha256}c796abffda2e860875295b4fc11cc95c6032b4e13fa8fb128e839a305aa1676c' Dec 6 03:13:16 localhost puppet-user[51838]: Notice: /Stage[main]/Collectd::Plugin::Cpu/Collectd::Plugin[cpu]/File[cpu.load]/ensure: defined content as '{sha256}67d4c8bf6bf5785f4cb6b596712204d9eacbcebbf16fe289907195d4d3cb0e34' Dec 6 03:13:16 localhost puppet-user[51838]: Notice: /Stage[main]/Collectd::Plugin::Df/Collectd::Plugin[df]/File[df.load]/ensure: defined content as '{sha256}edeb4716d96fc9dca2c6adfe07bae70ba08c6af3944a3900581cba0f08f3c4ba' Dec 6 03:13:16 localhost systemd[1]: Started libpod-conmon-ae8b38ba4b5ab4a99a42be92184928140343d5747d40ab542a9394bab47afa5c.scope. 
Dec 6 03:13:16 localhost puppet-user[51838]: Notice: /Stage[main]/Collectd::Plugin::Disk/Collectd::Plugin[disk]/File[disk.load]/ensure: defined content as '{sha256}1d0cb838278f3226fcd381f0fc2e0e1abaf0d590f4ba7bcb2fc6ec113d3ebde7' Dec 6 03:13:16 localhost systemd[1]: Started libcrun container. Dec 6 03:13:16 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b1be86c3700ae830e3281535879d3e256788ac5e6b2a708dc79f8a857d5536b/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff) Dec 6 03:13:16 localhost puppet-user[51838]: Notice: /Stage[main]/Collectd::Plugin::Hugepages/Collectd::Plugin[hugepages]/File[hugepages.load]/ensure: defined content as '{sha256}9b9f35b65a73da8d4037e4355a23b678f2cf61997ccf7a5e1adf2a7ce6415827' Dec 6 03:13:16 localhost puppet-user[51838]: Notice: /Stage[main]/Collectd::Plugin::Hugepages/Collectd::Plugin[hugepages]/File[older_hugepages.load]/ensure: removed Dec 6 03:13:16 localhost podman[52430]: 2025-12-06 08:13:16.11820038 +0000 UTC m=+0.104997675 container init ae8b38ba4b5ab4a99a42be92184928140343d5747d40ab542a9394bab47afa5c (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, batch=17.1_20251118.1, release=1761123044, maintainer=OpenStack TripleO Team, config_id=tripleo_puppet_step1, container_name=container-puppet-rsyslog, io.openshift.expose-services=, url=https://www.redhat.com, com.redhat.component=openstack-rsyslog-container, version=17.1.12, description=Red Hat OpenStack Platform 17.1 rsyslog, build-date=2025-11-18T22:49:49Z, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, vcs-type=git, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548789', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, architecture=x86_64, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 6 03:13:16 localhost puppet-user[51838]: Notice: 
/Stage[main]/Collectd::Plugin::Interface/Collectd::Plugin[interface]/File[interface.load]/ensure: defined content as '{sha256}b76b315dc312e398940fe029c6dbc5c18d2b974ff7527469fc7d3617b5222046' Dec 6 03:13:16 localhost systemd[1]: libpod-4704c7e23d4793bb8ec51f342b561857983350f0042af9af009532473579e109.scope: Deactivated successfully. Dec 6 03:13:16 localhost systemd[1]: libpod-4704c7e23d4793bb8ec51f342b561857983350f0042af9af009532473579e109.scope: Consumed 2.604s CPU time. Dec 6 03:13:16 localhost puppet-user[51838]: Notice: /Stage[main]/Collectd::Plugin::Load/Collectd::Plugin[load]/File[load.load]/ensure: defined content as '{sha256}af2403f76aebd2f10202d66d2d55e1a8d987eed09ced5a3e3873a4093585dc31' Dec 6 03:13:16 localhost podman[52430]: 2025-12-06 08:13:16.133219246 +0000 UTC m=+0.120016531 container start ae8b38ba4b5ab4a99a42be92184928140343d5747d40ab542a9394bab47afa5c (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, io.openshift.expose-services=, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-rsyslog-container, description=Red Hat OpenStack Platform 17.1 rsyslog, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548789', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 
'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=container-puppet-rsyslog, distribution-scope=public, architecture=x86_64, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, name=rhosp17/openstack-rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, config_id=tripleo_puppet_step1, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:49Z, batch=17.1_20251118.1) Dec 6 03:13:16 localhost podman[51436]: 2025-12-06 08:13:16.133780694 +0000 UTC m=+3.958921700 container died 4704c7e23d4793bb8ec51f342b561857983350f0042af9af009532473579e109 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, architecture=x86_64, 
release=1761123044, maintainer=OpenStack TripleO Team, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548789', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=container-puppet-iscsid, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, name=rhosp17/openstack-iscsid, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., config_id=tripleo_puppet_step1, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, vcs-type=git) Dec 6 03:13:16 localhost podman[52430]: 2025-12-06 08:13:16.133404032 +0000 UTC m=+0.120201327 container attach ae8b38ba4b5ab4a99a42be92184928140343d5747d40ab542a9394bab47afa5c (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548789', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', 
'/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, summary=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=container-puppet-rsyslog, config_id=tripleo_puppet_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:49Z, url=https://www.redhat.com, tcib_managed=true, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, com.redhat.component=openstack-rsyslog-container, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, description=Red Hat OpenStack Platform 17.1 rsyslog, name=rhosp17/openstack-rsyslog, io.openshift.expose-services=, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, architecture=x86_64, distribution-scope=public) Dec 6 03:13:16 localhost puppet-user[51838]: Notice: /Stage[main]/Collectd::Plugin::Memory/Collectd::Plugin[memory]/File[memory.load]/ensure: defined content as '{sha256}0f270425ee6b05fc9440ee32b9afd1010dcbddd9b04ca78ff693858f7ecb9d0e' Dec 6 03:13:16 localhost puppet-user[51838]: Notice: /Stage[main]/Collectd::Plugin::Unixsock/Collectd::Plugin[unixsock]/File[unixsock.load]/ensure: defined content as '{sha256}9d1ec1c51ba386baa6f62d2e019dbd6998ad924bf868b3edc2d24d3dc3c63885' Dec 6 03:13:16 localhost podman[52430]: 2025-12-06 08:13:16.044428574 +0000 UTC m=+0.031225899 image pull registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1 Dec 6 03:13:16 localhost puppet-user[51838]: Notice: 
/Stage[main]/Collectd::Plugin::Uptime/Collectd::Plugin[uptime]/File[uptime.load]/ensure: defined content as '{sha256}f7a26c6369f904d0ca1af59627ebea15f5e72160bcacdf08d217af282b42e5c0' Dec 6 03:13:16 localhost puppet-user[51838]: Notice: /Stage[main]/Collectd::Plugin::Virt/Collectd::Plugin[virt]/File[virt.load]/ensure: defined content as '{sha256}9a2bcf913f6bf8a962a0ff351a9faea51ae863cc80af97b77f63f8ab68941c62' Dec 6 03:13:16 localhost puppet-user[51838]: Notice: /Stage[main]/Collectd::Plugin::Virt/Collectd::Plugin[virt]/File[older_virt.load]/ensure: removed Dec 6 03:13:16 localhost puppet-user[51819]: Warning: Scope(Class[Nova::Compute::Libvirt]): nova::compute::libvirt::images_type will be required if rbd ephemeral storage is used. Dec 6 03:13:16 localhost puppet-user[51838]: Notice: Applied catalog in 0.23 seconds Dec 6 03:13:16 localhost puppet-user[51838]: Application: Dec 6 03:13:16 localhost puppet-user[51838]: Initial environment: production Dec 6 03:13:16 localhost puppet-user[51838]: Converged environment: production Dec 6 03:13:16 localhost puppet-user[51838]: Run mode: user Dec 6 03:13:16 localhost puppet-user[51838]: Changes: Dec 6 03:13:16 localhost puppet-user[51838]: Total: 43 Dec 6 03:13:16 localhost puppet-user[51838]: Events: Dec 6 03:13:16 localhost puppet-user[51838]: Success: 43 Dec 6 03:13:16 localhost puppet-user[51838]: Total: 43 Dec 6 03:13:16 localhost puppet-user[51838]: Resources: Dec 6 03:13:16 localhost puppet-user[51838]: Skipped: 14 Dec 6 03:13:16 localhost puppet-user[51838]: Changed: 38 Dec 6 03:13:16 localhost puppet-user[51838]: Out of sync: 38 Dec 6 03:13:16 localhost puppet-user[51838]: Total: 82 Dec 6 03:13:16 localhost puppet-user[51838]: Time: Dec 6 03:13:16 localhost puppet-user[51838]: Concat fragment: 0.00 Dec 6 03:13:16 localhost puppet-user[51838]: Concat file: 0.00 Dec 6 03:13:16 localhost puppet-user[51838]: File: 0.12 Dec 6 03:13:16 localhost puppet-user[51838]: Transaction evaluation: 0.22 Dec 6 03:13:16 localhost 
puppet-user[51838]: Catalog application: 0.23 Dec 6 03:13:16 localhost puppet-user[51838]: Config retrieval: 0.48 Dec 6 03:13:16 localhost puppet-user[51838]: Last run: 1765008796 Dec 6 03:13:16 localhost puppet-user[51838]: Total: 0.23 Dec 6 03:13:16 localhost puppet-user[51838]: Version: Dec 6 03:13:16 localhost puppet-user[51838]: Config: 1765008795 Dec 6 03:13:16 localhost puppet-user[51838]: Puppet: 7.10.0 Dec 6 03:13:16 localhost podman[52500]: 2025-12-06 08:13:16.199466329 +0000 UTC m=+0.056370838 container cleanup 4704c7e23d4793bb8ec51f342b561857983350f0042af9af009532473579e109 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, version=17.1.12, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_puppet_step1, release=1761123044, url=https://www.redhat.com, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, build-date=2025-11-18T23:44:13Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=container-puppet-iscsid, io.buildah.version=1.41.4, tcib_managed=true, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': 
'/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548789', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, architecture=x86_64, managed_by=tripleo_ansible) Dec 6 03:13:16 localhost systemd[1]: libpod-conmon-4704c7e23d4793bb8ec51f342b561857983350f0042af9af009532473579e109.scope: Deactivated successfully. 
Dec 6 03:13:16 localhost python3[51252]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-iscsid --conmon-pidfile /run/container-puppet-iscsid.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005548789 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,iscsid_config --env NAME=iscsid --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::iscsid#012 --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-iscsid --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548789', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} 
--log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-iscsid.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/iscsi:/tmp/iscsi.host:z --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 Dec 6 03:13:16 localhost systemd[1]: var-lib-containers-storage-overlay-926f9ae59ae4620dbf9c3ec16ecc12b1f2dc289864a1e6ab337d5dcdccd2a7fe-merged.mount: Deactivated successfully. Dec 6 03:13:16 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4704c7e23d4793bb8ec51f342b561857983350f0042af9af009532473579e109-userdata-shm.mount: Deactivated successfully. Dec 6 03:13:16 localhost systemd[1]: var-lib-containers-storage-overlay-62c1e5bed2cbe019aeb004cf1ae8b59e235260475c84680568f7ac998ec16abe-merged.mount: Deactivated successfully. Dec 6 03:13:16 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9506016e406df32ac4299ea465041d6a128e0f3dc9ea98102586360564954bfa-userdata-shm.mount: Deactivated successfully. 
Dec 6 03:13:16 localhost systemd[1]: var-lib-containers-storage-overlay-93af1cca3c5bc2fd6a62da9c5b24ae5188daa60c95e07f180cd96e6bf14ae456-merged.mount: Deactivated successfully. Dec 6 03:13:16 localhost podman[52477]: 2025-12-06 08:13:16.271894605 +0000 UTC m=+0.172935222 container create 973e97e54c593ac5d052f8871b0acf7de04e674b931c55cda2af5dfc37a43e47 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, name=rhosp17/openstack-ovn-controller, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=container-puppet-ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548789', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, tcib_managed=true, release=1761123044, distribution-scope=public, io.buildah.version=1.41.4, config_id=tripleo_puppet_step1, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., build-date=2025-11-18T23:34:05Z, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container) Dec 6 03:13:16 localhost systemd[1]: Started libpod-conmon-973e97e54c593ac5d052f8871b0acf7de04e674b931c55cda2af5dfc37a43e47.scope. Dec 6 03:13:16 localhost podman[52477]: 2025-12-06 08:13:16.232284477 +0000 UTC m=+0.133325134 image pull registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 Dec 6 03:13:16 localhost systemd[1]: Started libcrun container. 
Dec 6 03:13:16 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad3810d21826b8b406da9b30fe54843a6684913af1956a8eb23d071f707914a6/merged/etc/sysconfig/modules supports timestamps until 2038 (0x7fffffff) Dec 6 03:13:16 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad3810d21826b8b406da9b30fe54843a6684913af1956a8eb23d071f707914a6/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff) Dec 6 03:13:16 localhost podman[52477]: 2025-12-06 08:13:16.353562636 +0000 UTC m=+0.254603253 container init 973e97e54c593ac5d052f8871b0acf7de04e674b931c55cda2af5dfc37a43e47 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, config_id=tripleo_puppet_step1, architecture=x86_64, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548789', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', 
'/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, build-date=2025-11-18T23:34:05Z, batch=17.1_20251118.1, io.openshift.expose-services=, vcs-type=git, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=container-puppet-ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller) Dec 6 03:13:16 localhost podman[52477]: 2025-12-06 08:13:16.364027171 +0000 UTC m=+0.265067818 container start 973e97e54c593ac5d052f8871b0acf7de04e674b931c55cda2af5dfc37a43e47 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red 
Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:34:05Z, vcs-type=git, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vendor=Red Hat, Inc., url=https://www.redhat.com, io.openshift.expose-services=, architecture=x86_64, io.buildah.version=1.41.4, version=17.1.12, config_id=tripleo_puppet_step1, release=1761123044, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548789', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, batch=17.1_20251118.1, tcib_managed=true, container_name=container-puppet-ovn_controller, name=rhosp17/openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272) Dec 6 03:13:16 localhost podman[52477]: 2025-12-06 08:13:16.364370681 +0000 UTC m=+0.265411318 container attach 973e97e54c593ac5d052f8871b0acf7de04e674b931c55cda2af5dfc37a43e47 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, build-date=2025-11-18T23:34:05Z, tcib_managed=true, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, distribution-scope=public, io.buildah.version=1.41.4, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548789', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, vendor=Red Hat, Inc., config_id=tripleo_puppet_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, batch=17.1_20251118.1, io.openshift.expose-services=, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=container-puppet-ovn_controller, com.redhat.component=openstack-ovn-controller-container) Dec 6 03:13:16 localhost systemd[1]: libpod-b567bdc6731749e2c2df0cfc659cd5d0e02971a859c8934938817051a1ec8b8c.scope: Deactivated successfully. Dec 6 03:13:16 localhost systemd[1]: libpod-b567bdc6731749e2c2df0cfc659cd5d0e02971a859c8934938817051a1ec8b8c.scope: Consumed 2.606s CPU time. 
Dec 6 03:13:16 localhost podman[51463]: 2025-12-06 08:13:16.602794822 +0000 UTC m=+4.393520961 container died b567bdc6731749e2c2df0cfc659cd5d0e02971a859c8934938817051a1ec8b8c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=container-puppet-collectd, release=1761123044, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, tcib_managed=true, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, managed_by=tripleo_ansible, url=https://www.redhat.com, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=container-puppet-collectd, config_id=tripleo_puppet_step1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548789', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 6 03:13:16 localhost puppet-user[51916]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5 Dec 6 03:13:16 localhost puppet-user[51916]: (file: /etc/puppet/hiera.yaml) Dec 6 03:13:16 localhost puppet-user[51916]: Warning: Undefined variable '::deploy_config_name'; Dec 6 03:13:16 localhost puppet-user[51916]: (file & line not available) Dec 6 03:13:16 localhost puppet-user[51916]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. 
See https://puppet.com/docs/puppet/7.10/deprecated_language.html Dec 6 03:13:16 localhost puppet-user[51916]: (file & line not available) Dec 6 03:13:16 localhost podman[52687]: 2025-12-06 08:13:16.690466419 +0000 UTC m=+0.077684908 container cleanup b567bdc6731749e2c2df0cfc659cd5d0e02971a859c8934938817051a1ec8b8c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=container-puppet-collectd, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548789', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, url=https://www.redhat.com, release=1761123044, batch=17.1_20251118.1, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, vcs-type=git, build-date=2025-11-18T22:51:28Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, vendor=Red Hat, Inc., io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_puppet_step1, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.component=openstack-collectd-container, distribution-scope=public, container_name=container-puppet-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 6 03:13:16 localhost systemd[1]: libpod-conmon-b567bdc6731749e2c2df0cfc659cd5d0e02971a859c8934938817051a1ec8b8c.scope: Deactivated successfully. 
Dec 6 03:13:16 localhost python3[51252]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-collectd --conmon-pidfile /run/container-puppet-collectd.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005548789 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,collectd_client_config,exec --env NAME=collectd --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::metrics::collectd --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-collectd --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548789', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', 
'/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-collectd.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1 Dec 6 03:13:16 localhost puppet-user[51819]: Notice: Compiled catalog for np0005548789.localdomain in environment production in 1.24 seconds Dec 6 03:13:16 localhost puppet-user[51916]: Warning: Unknown variable: '::ceilometer::cache_backend'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 145, column: 39) Dec 6 03:13:16 localhost puppet-user[51916]: Warning: Unknown variable: '::ceilometer::memcache_servers'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 146, column: 39) Dec 6 03:13:16 localhost puppet-user[51916]: Warning: Unknown variable: '::ceilometer::cache_tls_enabled'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 147, column: 39) Dec 6 03:13:16 localhost puppet-user[51916]: Warning: Unknown variable: '::ceilometer::cache_tls_cafile'. 
(file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 148, column: 39) Dec 6 03:13:16 localhost puppet-user[51916]: Warning: Unknown variable: '::ceilometer::cache_tls_certfile'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 149, column: 39) Dec 6 03:13:16 localhost puppet-user[51916]: Warning: Unknown variable: '::ceilometer::cache_tls_keyfile'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 150, column: 39) Dec 6 03:13:16 localhost puppet-user[51916]: Warning: Unknown variable: '::ceilometer::cache_tls_allowed_ciphers'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 151, column: 39) Dec 6 03:13:16 localhost puppet-user[51916]: Warning: Unknown variable: '::ceilometer::manage_backend_package'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 152, column: 39) Dec 6 03:13:16 localhost puppet-user[51916]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_password'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 63, column: 25) Dec 6 03:13:16 localhost puppet-user[51916]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_url'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 68, column: 25) Dec 6 03:13:16 localhost puppet-user[51916]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_region'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 69, column: 28) Dec 6 03:13:16 localhost puppet-user[51916]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_user'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 70, column: 25) Dec 6 03:13:16 localhost puppet-user[51916]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_tenant_name'. 
(file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 71, column: 29) Dec 6 03:13:16 localhost puppet-user[51916]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_cacert'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 72, column: 23) Dec 6 03:13:16 localhost puppet-user[51916]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_endpoint_type'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 73, column: 26) Dec 6 03:13:16 localhost puppet-user[51916]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_user_domain_name'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 74, column: 33) Dec 6 03:13:16 localhost puppet-user[51916]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_project_domain_name'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 75, column: 36) Dec 6 03:13:16 localhost puppet-user[51916]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_type'. 
(file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 76, column: 26) Dec 6 03:13:16 localhost puppet-user[51916]: Notice: Compiled catalog for np0005548789.localdomain in environment production in 0.38 seconds Dec 6 03:13:17 localhost puppet-user[51916]: Notice: /Stage[main]/Ceilometer/Ceilometer_config[DEFAULT/http_timeout]/ensure: created Dec 6 03:13:17 localhost puppet-user[51916]: Notice: /Stage[main]/Ceilometer/Ceilometer_config[DEFAULT/host]/ensure: created Dec 6 03:13:17 localhost puppet-user[51916]: Notice: /Stage[main]/Ceilometer/Ceilometer_config[publisher/telemetry_secret]/ensure: created Dec 6 03:13:17 localhost puppet-user[51916]: Notice: /Stage[main]/Ceilometer/Ceilometer_config[hardware/readonly_user_name]/ensure: created Dec 6 03:13:17 localhost puppet-user[51916]: Notice: /Stage[main]/Ceilometer/Ceilometer_config[hardware/readonly_user_password]/ensure: created Dec 6 03:13:17 localhost puppet-user[51916]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/auth_url]/ensure: created Dec 6 03:13:17 localhost puppet-user[51916]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/region_name]/ensure: created Dec 6 03:13:17 localhost puppet-user[51916]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/username]/ensure: created Dec 6 03:13:17 localhost puppet-user[51819]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Migration::Client/File[/etc/nova/migration/identity]/content: content changed '{sha256}86610d84e745a3992358ae0b747297805d075492e5114c666fa08f8aecce7da0' to '{sha256}b5992c61c5e6c0fa60ac7720677a0efdfb73ceba695978e2f56794a0d035436f' Dec 6 03:13:17 localhost puppet-user[51916]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/password]/ensure: created Dec 6 03:13:17 localhost puppet-user[51819]: Notice: 
/Stage[main]/Tripleo::Profile::Base::Nova::Migration::Client/File_line[nova_ssh_port]/ensure: created Dec 6 03:13:17 localhost puppet-user[51916]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/project_name]/ensure: created Dec 6 03:13:17 localhost puppet-user[51819]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Libvirt/File[/etc/sasl2/libvirt.conf]/content: content changed '{sha256}78510a0d6f14b269ddeb9f9638dfdfba9f976d370ee2ec04ba25352a8af6df35' to '{sha256}6d7bcae773217a30c0772f75d0d1b6d21f5d64e72853f5e3d91bb47799dbb7fe' Dec 6 03:13:17 localhost puppet-user[51916]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/interface]/ensure: created Dec 6 03:13:17 localhost puppet-user[51819]: Warning: Empty environment setting 'TLS_PASSWORD' Dec 6 03:13:17 localhost puppet-user[51819]: (file: /etc/puppet/modules/tripleo/manifests/profile/base/nova/libvirt.pp, line: 182) Dec 6 03:13:17 localhost puppet-user[51916]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/user_domain_name]/ensure: created Dec 6 03:13:17 localhost puppet-user[51916]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/project_domain_name]/ensure: created Dec 6 03:13:17 localhost puppet-user[51916]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/auth_type]/ensure: created Dec 6 03:13:17 localhost puppet-user[51819]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Libvirt/Exec[set libvirt sasl credentials]/returns: executed successfully Dec 6 03:13:17 localhost puppet-user[51916]: Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[compute/instance_discovery_method]/ensure: created Dec 6 03:13:17 localhost puppet-user[51916]: Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[DEFAULT/polling_namespaces]/ensure: created 
Dec 6 03:13:17 localhost puppet-user[51916]: Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[polling/tenant_name_discovery]/ensure: created Dec 6 03:13:17 localhost puppet-user[51819]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Migration::Target/File[/etc/nova/migration/authorized_keys]/content: content changed '{sha256}0d05a8832f36c0517b84e9c3ad11069d531c7d2be5297661e5552fd29e3a5e47' to '{sha256}8f9f91b7bc846aa12da1e2df7356fc45f862596082e133d7976104ee8d1893c1' Dec 6 03:13:17 localhost puppet-user[51819]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Migration::Target/File_line[nova_migration_logindefs]/ensure: created Dec 6 03:13:17 localhost puppet-user[51916]: Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[coordination/backend_url]/ensure: created Dec 6 03:13:17 localhost puppet-user[51916]: Notice: /Stage[main]/Ceilometer::Cache/Oslo::Cache[ceilometer_config]/Ceilometer_config[cache/backend]/ensure: created Dec 6 03:13:17 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Workarounds/Nova_config[workarounds/never_download_image_if_on_rbd]/ensure: created Dec 6 03:13:17 localhost puppet-user[51916]: Notice: /Stage[main]/Ceilometer::Cache/Oslo::Cache[ceilometer_config]/Ceilometer_config[cache/enabled]/ensure: created Dec 6 03:13:17 localhost puppet-user[51916]: Notice: /Stage[main]/Ceilometer::Cache/Oslo::Cache[ceilometer_config]/Ceilometer_config[cache/memcache_servers]/ensure: created Dec 6 03:13:17 localhost puppet-user[51916]: Notice: /Stage[main]/Ceilometer::Cache/Oslo::Cache[ceilometer_config]/Ceilometer_config[cache/tls_enabled]/ensure: created Dec 6 03:13:17 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Workarounds/Nova_config[workarounds/disable_compute_service_check_for_ffu]/ensure: created Dec 6 03:13:17 localhost puppet-user[51916]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Rabbit[ceilometer_config]/Ceilometer_config[oslo_messaging_rabbit/heartbeat_in_pthread]/ensure: created 
Dec 6 03:13:17 localhost puppet-user[51819]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/ssl_only]/ensure: created Dec 6 03:13:17 localhost systemd[1]: tmp-crun.maWaK9.mount: Deactivated successfully. Dec 6 03:13:17 localhost systemd[1]: var-lib-containers-storage-overlay-5d82191509656bbf6f64f1f50570f9d09f17aadb036e941dc9fdbfc1b9557da8-merged.mount: Deactivated successfully. Dec 6 03:13:17 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b567bdc6731749e2c2df0cfc659cd5d0e02971a859c8934938817051a1ec8b8c-userdata-shm.mount: Deactivated successfully. Dec 6 03:13:17 localhost puppet-user[51819]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/my_ip]/ensure: created Dec 6 03:13:17 localhost puppet-user[51916]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Amqp[ceilometer_config]/Ceilometer_config[oslo_messaging_amqp/rpc_address_prefix]/ensure: created Dec 6 03:13:17 localhost puppet-user[51916]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Amqp[ceilometer_config]/Ceilometer_config[oslo_messaging_amqp/notify_address_prefix]/ensure: created Dec 6 03:13:17 localhost puppet-user[51819]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/host]/ensure: created Dec 6 03:13:17 localhost puppet-user[51819]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/cpu_allocation_ratio]/ensure: created Dec 6 03:13:17 localhost puppet-user[51819]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/ram_allocation_ratio]/ensure: created Dec 6 03:13:17 localhost puppet-user[51819]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/disk_allocation_ratio]/ensure: created Dec 6 03:13:17 localhost puppet-user[51819]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/dhcp_domain]/ensure: created Dec 6 03:13:17 localhost puppet-user[51916]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/driver]/ensure: created Dec 6 03:13:17 localhost puppet-user[51819]: Notice: 
/Stage[main]/Nova/Nova_config[vif_plug_ovs/ovsdb_connection]/ensure: created
Dec 6 03:13:17 localhost puppet-user[51916]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/transport_url]/ensure: created
Dec 6 03:13:17 localhost puppet-user[51916]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/topics]/ensure: created
Dec 6 03:13:17 localhost puppet-user[51819]: Notice: /Stage[main]/Nova/Nova_config[notifications/notification_format]/ensure: created
Dec 6 03:13:17 localhost puppet-user[51916]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Default[ceilometer_config]/Ceilometer_config[DEFAULT/transport_url]/ensure: created
Dec 6 03:13:17 localhost puppet-user[51819]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/state_path]/ensure: created
Dec 6 03:13:17 localhost puppet-user[51916]: Notice: /Stage[main]/Ceilometer::Logging/Oslo::Log[ceilometer_config]/Ceilometer_config[DEFAULT/debug]/ensure: created
Dec 6 03:13:17 localhost puppet-user[51819]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/service_down_time]/ensure: created
Dec 6 03:13:17 localhost puppet-user[51916]: Notice: /Stage[main]/Ceilometer::Logging/Oslo::Log[ceilometer_config]/Ceilometer_config[DEFAULT/log_dir]/ensure: created
Dec 6 03:13:17 localhost puppet-user[51819]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/rootwrap_config]/ensure: created
Dec 6 03:13:17 localhost puppet-user[51819]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/report_interval]/ensure: created
Dec 6 03:13:17 localhost puppet-user[51916]: Notice: Applied catalog in 0.43 seconds
Dec 6 03:13:17 localhost puppet-user[51916]: Application:
Dec 6 03:13:17 localhost puppet-user[51916]: Initial environment: production
Dec 6 03:13:17 localhost puppet-user[51916]: Converged environment: production
Dec 6 03:13:17 localhost puppet-user[51916]: Run mode: user
Dec 6 03:13:17 localhost puppet-user[51916]: Changes:
Dec 6 03:13:17 localhost puppet-user[51916]: Total: 31
Dec 6 03:13:17 localhost puppet-user[51916]: Events:
Dec 6 03:13:17 localhost puppet-user[51916]: Success: 31
Dec 6 03:13:17 localhost puppet-user[51916]: Total: 31
Dec 6 03:13:17 localhost puppet-user[51916]: Resources:
Dec 6 03:13:17 localhost puppet-user[51916]: Skipped: 22
Dec 6 03:13:17 localhost puppet-user[51916]: Changed: 31
Dec 6 03:13:17 localhost puppet-user[51916]: Out of sync: 31
Dec 6 03:13:17 localhost puppet-user[51916]: Total: 151
Dec 6 03:13:17 localhost puppet-user[51916]: Time:
Dec 6 03:13:17 localhost puppet-user[51916]: Package: 0.01
Dec 6 03:13:17 localhost puppet-user[51916]: Ceilometer config: 0.35
Dec 6 03:13:17 localhost puppet-user[51916]: Transaction evaluation: 0.42
Dec 6 03:13:17 localhost puppet-user[51916]: Catalog application: 0.43
Dec 6 03:13:17 localhost puppet-user[51916]: Config retrieval: 0.45
Dec 6 03:13:17 localhost puppet-user[51916]: Last run: 1765008797
Dec 6 03:13:17 localhost puppet-user[51916]: Resources: 0.00
Dec 6 03:13:17 localhost puppet-user[51916]: Total: 0.43
Dec 6 03:13:17 localhost puppet-user[51916]: Version:
Dec 6 03:13:17 localhost puppet-user[51916]: Config: 1765008796
Dec 6 03:13:17 localhost puppet-user[51916]: Puppet: 7.10.0
Dec 6 03:13:17 localhost puppet-user[51819]: Notice: /Stage[main]/Nova/Nova_config[notifications/notify_on_state_change]/ensure: created
Dec 6 03:13:17 localhost puppet-user[51819]: Notice: /Stage[main]/Nova/Nova_config[cinder/cross_az_attach]/ensure: created
Dec 6 03:13:17 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Glance/Nova_config[glance/valid_interfaces]/ensure: created
Dec 6 03:13:17 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/auth_type]/ensure: created
Dec 6 03:13:17 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/auth_url]/ensure: created
Dec 6 03:13:17 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/password]/ensure: created
Dec 6 03:13:17 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/project_domain_name]/ensure: created
Dec 6 03:13:17 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/project_name]/ensure: created
Dec 6 03:13:17 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/user_domain_name]/ensure: created
Dec 6 03:13:17 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/username]/ensure: created
Dec 6 03:13:17 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/region_name]/ensure: created
Dec 6 03:13:17 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/valid_interfaces]/ensure: created
Dec 6 03:13:17 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/password]/ensure: created
Dec 6 03:13:17 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/auth_type]/ensure: created
Dec 6 03:13:17 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/auth_url]/ensure: created
Dec 6 03:13:17 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/region_name]/ensure: created
Dec 6 03:13:17 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/project_name]/ensure: created
Dec 6 03:13:17 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/project_domain_name]/ensure: created
Dec 6 03:13:17 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/username]/ensure: created
Dec 6 03:13:17 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/user_domain_name]/ensure: created
Dec 6 03:13:17 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/os_region_name]/ensure: created
Dec 6 03:13:17 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/catalog_info]/ensure: created
Dec 6 03:13:17 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Compute::Image_cache/Nova_config[image_cache/manager_interval]/ensure: created
Dec 6 03:13:17 localhost systemd[1]: libpod-e6a13c3a9cefd2fa1b521b34f74f31f76fdd0b457acb3e515d649fdb95bdeb0c.scope: Deactivated successfully.
Dec 6 03:13:17 localhost systemd[1]: libpod-e6a13c3a9cefd2fa1b521b34f74f31f76fdd0b457acb3e515d649fdb95bdeb0c.scope: Consumed 2.959s CPU time.
Dec 6 03:13:17 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Compute::Image_cache/Nova_config[image_cache/remove_unused_base_images]/ensure: created
Dec 6 03:13:17 localhost puppet-user[52519]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Dec 6 03:13:17 localhost puppet-user[52519]: (file: /etc/puppet/hiera.yaml)
Dec 6 03:13:17 localhost puppet-user[52519]: Warning: Undefined variable '::deploy_config_name';
Dec 6 03:13:17 localhost puppet-user[52519]: (file & line not available)
Dec 6 03:13:17 localhost podman[51872]: 2025-12-06 08:13:17.883722448 +0000 UTC m=+3.454333598 container died e6a13c3a9cefd2fa1b521b34f74f31f76fdd0b457acb3e515d649fdb95bdeb0c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-central, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-central, com.redhat.component=openstack-ceilometer-central-container, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-central, managed_by=tripleo_ansible, batch=17.1_20251118.1, architecture=x86_64, url=https://www.redhat.com, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548789', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vcs-type=git, build-date=2025-11-19T00:11:59Z, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central, summary=Red Hat OpenStack Platform 17.1 ceilometer-central, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-central, version=17.1.12, config_id=tripleo_puppet_step1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, container_name=container-puppet-ceilometer, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 6 03:13:17 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Compute::Image_cache/Nova_config[image_cache/remove_unused_original_minimum_age_seconds]/ensure: created
Dec 6 03:13:17 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Compute::Image_cache/Nova_config[image_cache/remove_unused_resized_minimum_age_seconds]/ensure: created
Dec 6 03:13:17 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Compute::Image_cache/Nova_config[image_cache/precache_concurrency]/ensure: created
Dec 6 03:13:17 localhost puppet-user[52519]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Dec 6 03:13:17 localhost puppet-user[52519]: (file & line not available)
Dec 6 03:13:17 localhost systemd[1]: tmp-crun.GBdEz1.mount: Deactivated successfully.
Dec 6 03:13:17 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e6a13c3a9cefd2fa1b521b34f74f31f76fdd0b457acb3e515d649fdb95bdeb0c-userdata-shm.mount: Deactivated successfully.
Dec 6 03:13:17 localhost systemd[1]: var-lib-containers-storage-overlay-2c8811abddbaacb6f32387bcfbf857078338be762178d55f30b77fc76cc9dc19-merged.mount: Deactivated successfully.
Dec 6 03:13:17 localhost podman[52857]: 2025-12-06 08:13:17.979639622 +0000 UTC m=+0.086053929 container cleanup e6a13c3a9cefd2fa1b521b34f74f31f76fdd0b457acb3e515d649fdb95bdeb0c (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, maintainer=OpenStack TripleO Team, vcs-type=git, url=https://www.redhat.com, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548789', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-central, tcib_managed=true, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-central, description=Red Hat OpenStack Platform 17.1 ceilometer-central, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central, batch=17.1_20251118.1, managed_by=tripleo_ansible, release=1761123044, config_id=tripleo_puppet_step1, name=rhosp17/openstack-ceilometer-central, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-central, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, com.redhat.component=openstack-ceilometer-central-container, build-date=2025-11-19T00:11:59Z, container_name=container-puppet-ceilometer)
Dec 6 03:13:17 localhost systemd[1]: libpod-conmon-e6a13c3a9cefd2fa1b521b34f74f31f76fdd0b457acb3e515d649fdb95bdeb0c.scope: Deactivated successfully.
Dec 6 03:13:17 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Vendordata/Nova_config[vendordata_dynamic_auth/project_domain_name]/ensure: created
Dec 6 03:13:17 localhost python3[51252]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-ceilometer --conmon-pidfile /run/container-puppet-ceilometer.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005548789 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config --env NAME=ceilometer --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::ceilometer::agent::polling#012include tripleo::profile::base::ceilometer::agent::polling#012 --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-ceilometer --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548789', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-ceilometer.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1
Dec 6 03:13:18 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Vendordata/Nova_config[vendordata_dynamic_auth/user_domain_name]/ensure: created
Dec 6 03:13:18 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Compute::Provider/Nova_config[compute/provider_config_location]/ensure: created
Dec 6 03:13:18 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Compute::Provider/File[/etc/nova/provider_config]/ensure: created
Dec 6 03:13:18 localhost puppet-user[52519]: Notice: Compiled catalog for np0005548789.localdomain in environment production in 0.28 seconds
Dec 6 03:13:18 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/use_cow_images]/ensure: created
Dec 6 03:13:18 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/mkisofs_cmd]/ensure: created
Dec 6 03:13:18 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/force_raw_images]/ensure: created
Dec 6 03:13:18 localhost puppet-user[52585]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Dec 6 03:13:18 localhost puppet-user[52585]: (file: /etc/puppet/hiera.yaml)
Dec 6 03:13:18 localhost puppet-user[52585]: Warning: Undefined variable '::deploy_config_name';
Dec 6 03:13:18 localhost puppet-user[52585]: (file & line not available)
Dec 6 03:13:18 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/reserved_host_memory_mb]/ensure: created
Dec 6 03:13:18 localhost puppet-user[52519]: Notice: /Stage[main]/Rsyslog::Base/File[/etc/rsyslog.conf]/content: content changed '{sha256}d6f679f6a4eb6f33f9fc20c846cb30bef93811e1c86bc4da1946dc3100b826c3' to '{sha256}7963bd801fadd49a17561f4d3f80738c3f504b413b11c443432d8303138041f2'
Dec 6 03:13:18 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/reserved_huge_pages]/ensure: created
Dec 6 03:13:18 localhost puppet-user[52519]: Notice: /Stage[main]/Rsyslog::Config::Global/Rsyslog::Component::Global_config[MaxMessageSize]/Rsyslog::Generate_concat[rsyslog::concat::global_config::MaxMessageSize]/Concat[/etc/rsyslog.d/00_rsyslog.conf]/File[/etc/rsyslog.d/00_rsyslog.conf]/ensure: defined content as '{sha256}a291d5cc6d5884a978161f4c7b5831d43edd07797cc590bae366e7f150b8643b'
Dec 6 03:13:18 localhost puppet-user[52585]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Dec 6 03:13:18 localhost puppet-user[52585]: (file & line not available)
Dec 6 03:13:18 localhost puppet-user[52519]: Notice: /Stage[main]/Rsyslog::Config::Templates/Rsyslog::Component::Template[rsyslog-node-index]/Rsyslog::Generate_concat[rsyslog::concat::template::rsyslog-node-index]/Concat[/etc/rsyslog.d/50_openstack_logs.conf]/File[/etc/rsyslog.d/50_openstack_logs.conf]/ensure: defined content as '{sha256}c8a076a5cc4f95986ab769a4c95ebfeb53a6814b6c917c7233a963f16ed74f11'
Dec 6 03:13:18 localhost puppet-user[52519]: Notice: Applied catalog in 0.10 seconds
Dec 6 03:13:18 localhost puppet-user[52519]: Application:
Dec 6 03:13:18 localhost puppet-user[52519]: Initial environment: production
Dec 6 03:13:18 localhost puppet-user[52519]: Converged environment: production
Dec 6 03:13:18 localhost puppet-user[52519]: Run mode: user
Dec 6 03:13:18 localhost puppet-user[52519]: Changes:
Dec 6 03:13:18 localhost puppet-user[52519]: Total: 3
Dec 6 03:13:18 localhost puppet-user[52519]: Events:
Dec 6 03:13:18 localhost puppet-user[52519]: Success: 3
Dec 6 03:13:18 localhost puppet-user[52519]: Total: 3
Dec 6 03:13:18 localhost puppet-user[52519]: Resources:
Dec 6 03:13:18 localhost puppet-user[52519]: Skipped: 11
Dec 6 03:13:18 localhost puppet-user[52519]: Changed: 3
Dec 6 03:13:18 localhost puppet-user[52519]: Out of sync: 3
Dec 6 03:13:18 localhost puppet-user[52519]: Total: 25
Dec 6 03:13:18 localhost puppet-user[52519]: Time:
Dec 6 03:13:18 localhost puppet-user[52519]: Concat file: 0.00
Dec 6 03:13:18 localhost puppet-user[52519]: Concat fragment: 0.00
Dec 6 03:13:18 localhost puppet-user[52519]: File: 0.01
Dec 6 03:13:18 localhost puppet-user[52519]: Transaction evaluation: 0.09
Dec 6 03:13:18 localhost puppet-user[52519]: Catalog application: 0.10
Dec 6 03:13:18 localhost puppet-user[52519]: Config retrieval: 0.32
Dec 6 03:13:18 localhost puppet-user[52519]: Last run: 1765008798
Dec 6 03:13:18 localhost puppet-user[52519]: Total: 0.10
Dec 6 03:13:18 localhost puppet-user[52519]: Version:
Dec 6 03:13:18 localhost puppet-user[52519]: Config: 1765008797
Dec 6 03:13:18 localhost puppet-user[52519]: Puppet: 7.10.0
Dec 6 03:13:18 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/resume_guests_state_on_host_boot]/ensure: created
Dec 6 03:13:18 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Compute/Nova_config[key_manager/backend]/ensure: created
Dec 6 03:13:18 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/sync_power_state_interval]/ensure: created
Dec 6 03:13:18 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Compute/Nova_config[compute/consecutive_build_service_disable_threshold]/ensure: created
Dec 6 03:13:18 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Compute/Nova_config[compute/live_migration_wait_for_vif_plug]/ensure: created
Dec 6 03:13:18 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Compute/Nova_config[compute/max_disk_devices_to_attach]/ensure: created
Dec 6 03:13:18 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Vncproxy::Common/Nova_config[vnc/novncproxy_base_url]/ensure: created
Dec 6 03:13:18 localhost puppet-user[52585]: Notice: Compiled catalog for np0005548789.localdomain in environment production in 0.27 seconds
Dec 6 03:13:18 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Compute/Nova_config[vnc/server_proxyclient_address]/ensure: created
Dec 6 03:13:18 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Compute/Nova_config[vnc/enabled]/ensure: created
Dec 6 03:13:18 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Compute/Nova_config[spice/enabled]/ensure: created
Dec 6 03:13:18 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/instance_usage_audit]/ensure: created
Dec 6 03:13:18 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/instance_usage_audit_period]/ensure: created
Dec 6 03:13:18 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/vif_plugging_is_fatal]/ensure: created
Dec 6 03:13:18 localhost ovs-vsctl[52978]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-remote=tcp:172.17.0.103:6642,tcp:172.17.0.104:6642,tcp:172.17.0.105:6642
Dec 6 03:13:18 localhost puppet-user[52585]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-remote]/ensure: created
Dec 6 03:13:18 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/vif_plugging_timeout]/ensure: created
Dec 6 03:13:18 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/default_floating_pool]/ensure: created
Dec 6 03:13:18 localhost ovs-vsctl[52983]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-encap-type=geneve
Dec 6 03:13:18 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/timeout]/ensure: created
Dec 6 03:13:18 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/project_name]/ensure: created
Dec 6 03:13:18 localhost puppet-user[52585]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-encap-type]/ensure: created
Dec 6 03:13:18 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/project_domain_name]/ensure: created
Dec 6 03:13:18 localhost systemd[1]: libpod-ae8b38ba4b5ab4a99a42be92184928140343d5747d40ab542a9394bab47afa5c.scope: Deactivated successfully.
Dec 6 03:13:18 localhost systemd[1]: libpod-ae8b38ba4b5ab4a99a42be92184928140343d5747d40ab542a9394bab47afa5c.scope: Consumed 2.319s CPU time.
Dec 6 03:13:18 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/region_name]/ensure: created
Dec 6 03:13:18 localhost ovs-vsctl[52993]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-encap-ip=172.19.0.107
Dec 6 03:13:18 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/username]/ensure: created
Dec 6 03:13:18 localhost podman[52430]: 2025-12-06 08:13:18.638977119 +0000 UTC m=+2.625774424 container died ae8b38ba4b5ab4a99a42be92184928140343d5747d40ab542a9394bab47afa5c (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-rsyslog, io.buildah.version=1.41.4, url=https://www.redhat.com, distribution-scope=public, vendor=Red Hat, Inc., batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548789', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, io.openshift.expose-services=, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, container_name=container-puppet-rsyslog, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_puppet_step1, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-rsyslog-container, summary=Red Hat OpenStack Platform 17.1 rsyslog, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 rsyslog, build-date=2025-11-18T22:49:49Z, managed_by=tripleo_ansible)
Dec 6 03:13:18 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/user_domain_name]/ensure: created
Dec 6 03:13:18 localhost puppet-user[52585]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-encap-ip]/ensure: created
Dec 6 03:13:18 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/password]/ensure: created
Dec 6 03:13:18 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/auth_url]/ensure: created
Dec 6 03:13:18 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/valid_interfaces]/ensure: created
Dec 6 03:13:18 localhost ovs-vsctl[53006]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:hostname=np0005548789.localdomain
Dec 6 03:13:18 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/ovs_bridge]/ensure: created
Dec 6 03:13:18 localhost puppet-user[52585]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:hostname]/value: value changed 'np0005548789.novalocal' to 'np0005548789.localdomain'
Dec 6 03:13:18 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/extension_sync_interval]/ensure: created
Dec 6 03:13:18 localhost systemd[1]: tmp-crun.3SSv0L.mount: Deactivated successfully.
Dec 6 03:13:18 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/auth_type]/ensure: created
Dec 6 03:13:18 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_uri]/ensure: created
Dec 6 03:13:18 localhost ovs-vsctl[53009]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-bridge=br-int
Dec 6 03:13:18 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ae8b38ba4b5ab4a99a42be92184928140343d5747d40ab542a9394bab47afa5c-userdata-shm.mount: Deactivated successfully.
Dec 6 03:13:18 localhost systemd[1]: var-lib-containers-storage-overlay-2b1be86c3700ae830e3281535879d3e256788ac5e6b2a708dc79f8a857d5536b-merged.mount: Deactivated successfully.
Dec 6 03:13:18 localhost puppet-user[52585]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-bridge]/ensure: created
Dec 6 03:13:18 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_tunnelled]/ensure: created
Dec 6 03:13:18 localhost ovs-vsctl[53011]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-remote-probe-interval=60000
Dec 6 03:13:18 localhost puppet-user[52585]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-remote-probe-interval]/ensure: created
Dec 6 03:13:18 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_inbound_addr]/ensure: created
Dec 6 03:13:18 localhost ovs-vsctl[53013]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-openflow-probe-interval=60
Dec 6 03:13:18 localhost puppet-user[52585]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-openflow-probe-interval]/ensure: created
Dec 6 03:13:18 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_permit_post_copy]/ensure: created
Dec 6 03:13:18 localhost podman[52995]: 2025-12-06 08:13:18.802061514 +0000 UTC m=+0.150380162 container cleanup ae8b38ba4b5ab4a99a42be92184928140343d5747d40ab542a9394bab47afa5c (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548789', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=container-puppet-rsyslog, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_puppet_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, managed_by=tripleo_ansible, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 rsyslog, name=rhosp17/openstack-rsyslog, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, architecture=x86_64, version=17.1.12, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, build-date=2025-11-18T22:49:49Z, release=1761123044, summary=Red Hat OpenStack Platform 17.1 rsyslog, distribution-scope=public, com.redhat.component=openstack-rsyslog-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog)
Dec 6 03:13:18 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_permit_auto_converge]/ensure: created
Dec 6 03:13:18 localhost systemd[1]: libpod-conmon-ae8b38ba4b5ab4a99a42be92184928140343d5747d40ab542a9394bab47afa5c.scope: Deactivated successfully.
Dec 6 03:13:18 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Migration::Libvirt/Virtproxyd_config[listen_tls]/ensure: created
Dec 6 03:13:18 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Migration::Libvirt/Virtproxyd_config[listen_tcp]/ensure: created
Dec 6 03:13:18 localhost python3[51252]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-rsyslog --conmon-pidfile /run/container-puppet-rsyslog.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005548789 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment --env NAME=rsyslog --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::logging::rsyslog --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-rsyslog --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548789', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-rsyslog.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1
Dec 6 03:13:18 localhost ovs-vsctl[53015]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch .
external_ids:ovn-monitor-all=true Dec 6 03:13:18 localhost puppet-user[52585]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-monitor-all]/ensure: created Dec 6 03:13:18 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/rbd_user]/ensure: created Dec 6 03:13:18 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/rbd_secret_uuid]/ensure: created Dec 6 03:13:18 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Compute::Rbd/File[/etc/nova/secret.xml]/ensure: defined content as '{sha256}9f797f9d49cf12085061840a6e15e35ef08aaf3c80bbe03bcf23d28dd55767ae' Dec 6 03:13:18 localhost ovs-vsctl[53029]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-ofctrl-wait-before-clear=8000 Dec 6 03:13:18 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_type]/ensure: created Dec 6 03:13:18 localhost puppet-user[52585]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-ofctrl-wait-before-clear]/ensure: created Dec 6 03:13:18 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_pool]/ensure: created Dec 6 03:13:18 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_ceph_conf]/ensure: created Dec 6 03:13:18 localhost ovs-vsctl[53042]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . 
external_ids:ovn-encap-tos=0 Dec 6 03:13:18 localhost puppet-user[52585]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-encap-tos]/ensure: created Dec 6 03:13:18 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_glance_store_name]/ensure: created Dec 6 03:13:18 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_glance_copy_poll_interval]/ensure: created Dec 6 03:13:18 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_glance_copy_timeout]/ensure: created Dec 6 03:13:18 localhost ovs-vsctl[53044]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-chassis-mac-mappings=datacentre:fa:16:3e:0b:71:f7 Dec 6 03:13:18 localhost puppet-user[52585]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-chassis-mac-mappings]/ensure: created Dec 6 03:13:18 localhost ovs-vsctl[53046]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-bridge-mappings=datacentre:br-ex Dec 6 03:13:18 localhost puppet-user[52585]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-bridge-mappings]/ensure: created Dec 6 03:13:18 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[DEFAULT/compute_driver]/ensure: created Dec 6 03:13:18 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[DEFAULT/preallocate_images]/ensure: created Dec 6 03:13:18 localhost ovs-vsctl[53048]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . 
external_ids:ovn-match-northd-version=false Dec 6 03:13:18 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[vnc/server_listen]/ensure: created Dec 6 03:13:18 localhost puppet-user[52585]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-match-northd-version]/ensure: created Dec 6 03:13:18 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/virt_type]/ensure: created Dec 6 03:13:18 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/cpu_mode]/ensure: created Dec 6 03:13:19 localhost ovs-vsctl[53050]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:garp-max-timeout-sec=0 Dec 6 03:13:19 localhost puppet-user[52585]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:garp-max-timeout-sec]/ensure: created Dec 6 03:13:19 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/inject_password]/ensure: created Dec 6 03:13:19 localhost puppet-user[52585]: Notice: Applied catalog in 0.56 seconds Dec 6 03:13:19 localhost puppet-user[52585]: Application: Dec 6 03:13:19 localhost puppet-user[52585]: Initial environment: production Dec 6 03:13:19 localhost puppet-user[52585]: Converged environment: production Dec 6 03:13:19 localhost puppet-user[52585]: Run mode: user Dec 6 03:13:19 localhost puppet-user[52585]: Changes: Dec 6 03:13:19 localhost puppet-user[52585]: Total: 14 Dec 6 03:13:19 localhost puppet-user[52585]: Events: Dec 6 03:13:19 localhost puppet-user[52585]: Success: 14 Dec 6 03:13:19 localhost puppet-user[52585]: Total: 14 Dec 6 03:13:19 localhost puppet-user[52585]: Resources: Dec 6 03:13:19 localhost puppet-user[52585]: Skipped: 12 Dec 6 03:13:19 localhost puppet-user[52585]: Changed: 14 Dec 6 03:13:19 localhost puppet-user[52585]: Out of sync: 14 Dec 6 03:13:19 localhost puppet-user[52585]: Total: 29 Dec 6 03:13:19 localhost puppet-user[52585]: 
Time: Dec 6 03:13:19 localhost puppet-user[52585]: Exec: 0.02 Dec 6 03:13:19 localhost puppet-user[52585]: Config retrieval: 0.30 Dec 6 03:13:19 localhost puppet-user[52585]: Vs config: 0.48 Dec 6 03:13:19 localhost puppet-user[52585]: Transaction evaluation: 0.55 Dec 6 03:13:19 localhost puppet-user[52585]: Catalog application: 0.56 Dec 6 03:13:19 localhost puppet-user[52585]: Last run: 1765008799 Dec 6 03:13:19 localhost puppet-user[52585]: Total: 0.56 Dec 6 03:13:19 localhost puppet-user[52585]: Version: Dec 6 03:13:19 localhost puppet-user[52585]: Config: 1765008798 Dec 6 03:13:19 localhost puppet-user[52585]: Puppet: 7.10.0 Dec 6 03:13:19 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/inject_key]/ensure: created Dec 6 03:13:19 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/inject_partition]/ensure: created Dec 6 03:13:19 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/hw_disk_discard]/ensure: created Dec 6 03:13:19 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/hw_machine_type]/ensure: created Dec 6 03:13:19 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/enabled_perf_events]/ensure: created Dec 6 03:13:19 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/rx_queue_size]/ensure: created Dec 6 03:13:19 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/tx_queue_size]/ensure: created Dec 6 03:13:19 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/file_backed_memory]/ensure: created Dec 6 03:13:19 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/volume_use_multipath]/ensure: created Dec 6 03:13:19 localhost puppet-user[51819]: Notice: 
/Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/num_pcie_ports]/ensure: created Dec 6 03:13:19 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/mem_stats_period_seconds]/ensure: created Dec 6 03:13:19 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/pmem_namespaces]/ensure: created Dec 6 03:13:19 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/swtpm_enabled]/ensure: created Dec 6 03:13:19 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/cpu_model_extra_flags]/ensure: created Dec 6 03:13:19 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/disk_cachemodes]/ensure: created Dec 6 03:13:19 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtlogd/Virtlogd_config[log_filters]/ensure: created Dec 6 03:13:19 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtlogd/Virtlogd_config[log_outputs]/ensure: created Dec 6 03:13:19 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtproxyd/Virtproxyd_config[log_filters]/ensure: created Dec 6 03:13:19 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtproxyd/Virtproxyd_config[log_outputs]/ensure: created Dec 6 03:13:19 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtqemud/Virtqemud_config[log_filters]/ensure: created Dec 6 03:13:19 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtqemud/Virtqemud_config[log_outputs]/ensure: created Dec 6 03:13:19 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtnodedevd/Virtnodedevd_config[log_filters]/ensure: created Dec 6 03:13:19 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtnodedevd/Virtnodedevd_config[log_outputs]/ensure: created Dec 6 
03:13:19 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtstoraged/Virtstoraged_config[log_filters]/ensure: created Dec 6 03:13:19 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtstoraged/Virtstoraged_config[log_outputs]/ensure: created Dec 6 03:13:19 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtsecretd/Virtsecretd_config[log_filters]/ensure: created Dec 6 03:13:19 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtsecretd/Virtsecretd_config[log_outputs]/ensure: created Dec 6 03:13:19 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtnodedevd_config[unix_sock_group]/ensure: created Dec 6 03:13:19 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtnodedevd_config[auth_unix_ro]/ensure: created Dec 6 03:13:19 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtnodedevd_config[auth_unix_rw]/ensure: created Dec 6 03:13:19 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtnodedevd_config[unix_sock_ro_perms]/ensure: created Dec 6 03:13:19 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtnodedevd_config[unix_sock_rw_perms]/ensure: created Dec 6 03:13:19 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtproxyd_config[unix_sock_group]/ensure: created Dec 6 03:13:19 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtproxyd_config[auth_unix_ro]/ensure: created Dec 6 03:13:19 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtproxyd_config[auth_unix_rw]/ensure: created Dec 6 03:13:19 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtproxyd_config[unix_sock_ro_perms]/ensure: created Dec 6 03:13:19 localhost puppet-user[51819]: Notice: 
/Stage[main]/Nova::Compute::Libvirt::Config/Virtproxyd_config[unix_sock_rw_perms]/ensure: created Dec 6 03:13:19 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtqemud_config[unix_sock_group]/ensure: created Dec 6 03:13:19 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtqemud_config[auth_unix_ro]/ensure: created Dec 6 03:13:19 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtqemud_config[auth_unix_rw]/ensure: created Dec 6 03:13:19 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtqemud_config[unix_sock_ro_perms]/ensure: created Dec 6 03:13:19 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtqemud_config[unix_sock_rw_perms]/ensure: created Dec 6 03:13:19 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtsecretd_config[unix_sock_group]/ensure: created Dec 6 03:13:19 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtsecretd_config[auth_unix_ro]/ensure: created Dec 6 03:13:19 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtsecretd_config[auth_unix_rw]/ensure: created Dec 6 03:13:19 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtsecretd_config[unix_sock_ro_perms]/ensure: created Dec 6 03:13:19 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtsecretd_config[unix_sock_rw_perms]/ensure: created Dec 6 03:13:19 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtstoraged_config[unix_sock_group]/ensure: created Dec 6 03:13:19 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtstoraged_config[auth_unix_ro]/ensure: created Dec 6 03:13:19 localhost puppet-user[51819]: Notice: 
/Stage[main]/Nova::Compute::Libvirt::Config/Virtstoraged_config[auth_unix_rw]/ensure: created Dec 6 03:13:19 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtstoraged_config[unix_sock_ro_perms]/ensure: created Dec 6 03:13:19 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtstoraged_config[unix_sock_rw_perms]/ensure: created Dec 6 03:13:19 localhost systemd[1]: libpod-973e97e54c593ac5d052f8871b0acf7de04e674b931c55cda2af5dfc37a43e47.scope: Deactivated successfully. Dec 6 03:13:19 localhost systemd[1]: libpod-973e97e54c593ac5d052f8871b0acf7de04e674b931c55cda2af5dfc37a43e47.scope: Consumed 2.969s CPU time. Dec 6 03:13:19 localhost podman[52477]: 2025-12-06 08:13:19.501155415 +0000 UTC m=+3.402196042 container died 973e97e54c593ac5d052f8871b0acf7de04e674b931c55cda2af5dfc37a43e47 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, config_id=tripleo_puppet_step1, distribution-scope=public, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
com.redhat.component=openstack-ovn-controller-container, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548789', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, release=1761123044, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, build-date=2025-11-18T23:34:05Z, version=17.1.12, container_name=container-puppet-ovn_controller) Dec 6 03:13:19 localhost podman[52580]: 2025-12-06 08:13:16.445507836 +0000 UTC m=+0.040974861 image pull registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1 Dec 6 03:13:19 localhost systemd[1]: 
var-lib-containers-storage-overlay-ad3810d21826b8b406da9b30fe54843a6684913af1956a8eb23d071f707914a6-merged.mount: Deactivated successfully. Dec 6 03:13:19 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-973e97e54c593ac5d052f8871b0acf7de04e674b931c55cda2af5dfc37a43e47-userdata-shm.mount: Deactivated successfully. Dec 6 03:13:20 localhost podman[53097]: 2025-12-06 08:13:20.010857376 +0000 UTC m=+0.499010010 container cleanup 973e97e54c593ac5d052f8871b0acf7de04e674b931c55cda2af5dfc37a43e47 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, io.openshift.expose-services=, distribution-scope=public, config_id=tripleo_puppet_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, batch=17.1_20251118.1, tcib_managed=true, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=container-puppet-ovn_controller, maintainer=OpenStack TripleO Team, vcs-type=git, release=1761123044, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., build-date=2025-11-18T23:34:05Z, version=17.1.12, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548789', 
'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64) Dec 6 03:13:20 localhost python3[51252]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-ovn_controller --conmon-pidfile /run/container-puppet-ovn_controller.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005548789 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,vs_config,exec --env NAME=ovn_controller --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::neutron::agents::ovn#012 --label config_id=tripleo_puppet_step1 --label 
container_name=container-puppet-ovn_controller --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548789', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-ovn_controller.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume 
/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /etc/sysconfig/modules:/etc/sysconfig/modules --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 Dec 6 03:13:20 localhost systemd[1]: libpod-conmon-973e97e54c593ac5d052f8871b0acf7de04e674b931c55cda2af5dfc37a43e47.scope: Deactivated successfully. Dec 6 03:13:20 localhost podman[53136]: 2025-12-06 08:13:19.742248488 +0000 UTC m=+0.050075633 image pull registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1 Dec 6 03:13:20 localhost podman[53136]: 2025-12-06 08:13:20.045687475 +0000 UTC m=+0.353514660 container create 17558ff0747fce48f7968377192af9e4833071686c98fabf2053cfd3378b2915 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, version=17.1.12, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:23:27Z, distribution-scope=public, name=rhosp17/openstack-neutron-server, container_name=container-puppet-neutron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, vendor=Red Hat, Inc., tcib_managed=true, io.openshift.expose-services=, architecture=x86_64, 
com.redhat.component=openstack-neutron-server-container, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, url=https://www.redhat.com, managed_by=tripleo_ansible, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548789', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-server, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-server, release=1761123044, 
konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_puppet_step1, summary=Red Hat OpenStack Platform 17.1 neutron-server) Dec 6 03:13:20 localhost systemd[1]: Started libpod-conmon-17558ff0747fce48f7968377192af9e4833071686c98fabf2053cfd3378b2915.scope. Dec 6 03:13:20 localhost systemd[1]: Started libcrun container. Dec 6 03:13:20 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Compute::Libvirt::Qemu/Augeas[qemu-conf-limits]/returns: executed successfully Dec 6 03:13:20 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2444c943287df738a2d4fa7e71e9d9b0754a541b609353f29be07295aed4c411/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff) Dec 6 03:13:20 localhost podman[53136]: 2025-12-06 08:13:20.121969819 +0000 UTC m=+0.429796974 container init 17558ff0747fce48f7968377192af9e4833071686c98fabf2053cfd3378b2915 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, release=1761123044, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-server, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_puppet_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, com.redhat.component=openstack-neutron-server-container, distribution-scope=public, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548789', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include 
::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vendor=Red Hat, Inc., tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-server, architecture=x86_64, version=17.1.12, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, container_name=container-puppet-neutron, io.buildah.version=1.41.4, vcs-type=git, build-date=2025-11-19T00:23:27Z, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-server, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 neutron-server) Dec 6 03:13:20 localhost podman[53136]: 2025-12-06 08:13:20.127696806 +0000 UTC m=+0.435523961 container start 17558ff0747fce48f7968377192af9e4833071686c98fabf2053cfd3378b2915 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, 
name=container-puppet-neutron, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548789', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-server, name=rhosp17/openstack-neutron-server, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-server, vcs-type=git, release=1761123044, container_name=container-puppet-neutron, build-date=2025-11-19T00:23:27Z, 
tcib_managed=true, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_puppet_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, batch=17.1_20251118.1, com.redhat.component=openstack-neutron-server-container, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 neutron-server, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c) Dec 6 03:13:20 localhost podman[53136]: 2025-12-06 08:13:20.127928143 +0000 UTC m=+0.435755298 container attach 17558ff0747fce48f7968377192af9e4833071686c98fabf2053cfd3378b2915 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, release=1761123044, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-server, io.openshift.expose-services=, batch=17.1_20251118.1, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, tcib_managed=true, container_name=container-puppet-neutron, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, build-date=2025-11-19T00:23:27Z, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-server, vcs-type=git, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548789', 
'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-server, config_id=tripleo_puppet_step1, com.redhat.component=openstack-neutron-server-container, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 6 03:13:20 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Migration::Qemu/Augeas[qemu-conf-migration-ports]/returns: executed successfully Dec 6 03:13:20 localhost puppet-user[51819]: Notice: 
/Stage[main]/Nova::Logging/Oslo::Log[nova_config]/Nova_config[DEFAULT/debug]/ensure: created Dec 6 03:13:20 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Logging/Oslo::Log[nova_config]/Nova_config[DEFAULT/log_dir]/ensure: created Dec 6 03:13:20 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/backend]/ensure: created Dec 6 03:13:20 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/enabled]/ensure: created Dec 6 03:13:20 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/memcache_servers]/ensure: created Dec 6 03:13:20 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/tls_enabled]/ensure: created Dec 6 03:13:20 localhost puppet-user[51819]: Notice: /Stage[main]/Nova/Oslo::Messaging::Rabbit[nova_config]/Nova_config[oslo_messaging_rabbit/heartbeat_in_pthread]/ensure: created Dec 6 03:13:20 localhost puppet-user[51819]: Notice: /Stage[main]/Nova/Oslo::Messaging::Rabbit[nova_config]/Nova_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created Dec 6 03:13:21 localhost puppet-user[51819]: Notice: /Stage[main]/Nova/Oslo::Messaging::Rabbit[nova_config]/Nova_config[oslo_messaging_rabbit/ssl]/ensure: created Dec 6 03:13:21 localhost puppet-user[51819]: Notice: /Stage[main]/Nova/Oslo::Messaging::Default[nova_config]/Nova_config[DEFAULT/transport_url]/ensure: created Dec 6 03:13:21 localhost puppet-user[51819]: Notice: /Stage[main]/Nova/Oslo::Messaging::Notifications[nova_config]/Nova_config[oslo_messaging_notifications/driver]/ensure: created Dec 6 03:13:21 localhost puppet-user[51819]: Notice: /Stage[main]/Nova/Oslo::Messaging::Notifications[nova_config]/Nova_config[oslo_messaging_notifications/transport_url]/ensure: created Dec 6 03:13:21 localhost puppet-user[51819]: Notice: 
/Stage[main]/Nova/Oslo::Concurrency[nova_config]/Nova_config[oslo_concurrency/lock_path]/ensure: created Dec 6 03:13:21 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/auth_type]/ensure: created Dec 6 03:13:21 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/region_name]/ensure: created Dec 6 03:13:21 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/auth_url]/ensure: created Dec 6 03:13:21 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/username]/ensure: created Dec 6 03:13:21 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/password]/ensure: created Dec 6 03:13:21 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/user_domain_name]/ensure: created Dec 6 03:13:21 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/project_name]/ensure: created Dec 6 03:13:21 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/project_domain_name]/ensure: created Dec 6 03:13:21 localhost puppet-user[51819]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/send_service_user_token]/ensure: created Dec 6 03:13:21 localhost puppet-user[51819]: Notice: 
/Stage[main]/Ssh::Server::Config/Concat[/etc/ssh/sshd_config]/File[/etc/ssh/sshd_config]/ensure: defined content as '{sha256}3a12438802493a75725c4f7704f2af6db1ef72af396369e5de28f6f4d6a7ed98' Dec 6 03:13:21 localhost puppet-user[51819]: Notice: Applied catalog in 4.73 seconds Dec 6 03:13:21 localhost puppet-user[51819]: Application: Dec 6 03:13:21 localhost puppet-user[51819]: Initial environment: production Dec 6 03:13:21 localhost puppet-user[51819]: Converged environment: production Dec 6 03:13:21 localhost puppet-user[51819]: Run mode: user Dec 6 03:13:21 localhost puppet-user[51819]: Changes: Dec 6 03:13:21 localhost puppet-user[51819]: Total: 183 Dec 6 03:13:21 localhost puppet-user[51819]: Events: Dec 6 03:13:21 localhost puppet-user[51819]: Success: 183 Dec 6 03:13:21 localhost puppet-user[51819]: Total: 183 Dec 6 03:13:21 localhost puppet-user[51819]: Resources: Dec 6 03:13:21 localhost puppet-user[51819]: Changed: 183 Dec 6 03:13:21 localhost puppet-user[51819]: Out of sync: 183 Dec 6 03:13:21 localhost puppet-user[51819]: Skipped: 57 Dec 6 03:13:21 localhost puppet-user[51819]: Total: 487 Dec 6 03:13:21 localhost puppet-user[51819]: Time: Dec 6 03:13:21 localhost puppet-user[51819]: Concat fragment: 0.00 Dec 6 03:13:21 localhost puppet-user[51819]: Anchor: 0.00 Dec 6 03:13:21 localhost puppet-user[51819]: File line: 0.00 Dec 6 03:13:21 localhost puppet-user[51819]: Virtlogd config: 0.00 Dec 6 03:13:21 localhost puppet-user[51819]: Exec: 0.01 Dec 6 03:13:21 localhost puppet-user[51819]: Virtsecretd config: 0.01 Dec 6 03:13:21 localhost puppet-user[51819]: Virtqemud config: 0.01 Dec 6 03:13:21 localhost puppet-user[51819]: Package: 0.01 Dec 6 03:13:21 localhost puppet-user[51819]: Virtstoraged config: 0.01 Dec 6 03:13:21 localhost puppet-user[51819]: Virtnodedevd config: 0.02 Dec 6 03:13:21 localhost puppet-user[51819]: Virtproxyd config: 0.03 Dec 6 03:13:21 localhost puppet-user[51819]: File: 0.06 Dec 6 03:13:21 localhost puppet-user[51819]: Augeas: 1.17 
Dec 6 03:13:21 localhost puppet-user[51819]: Config retrieval: 1.50 Dec 6 03:13:21 localhost puppet-user[51819]: Last run: 1765008801 Dec 6 03:13:21 localhost puppet-user[51819]: Nova config: 3.12 Dec 6 03:13:21 localhost puppet-user[51819]: Transaction evaluation: 4.66 Dec 6 03:13:21 localhost puppet-user[51819]: Catalog application: 4.73 Dec 6 03:13:21 localhost puppet-user[51819]: Resources: 0.00 Dec 6 03:13:21 localhost puppet-user[51819]: Concat file: 0.00 Dec 6 03:13:21 localhost puppet-user[51819]: Total: 4.73 Dec 6 03:13:21 localhost puppet-user[51819]: Version: Dec 6 03:13:21 localhost puppet-user[51819]: Config: 1765008795 Dec 6 03:13:21 localhost puppet-user[51819]: Puppet: 7.10.0 Dec 6 03:13:21 localhost puppet-user[53191]: Error: Facter: error while resolving custom fact "haproxy_version": undefined method `strip' for nil:NilClass Dec 6 03:13:22 localhost puppet-user[53191]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5 Dec 6 03:13:22 localhost puppet-user[53191]: (file: /etc/puppet/hiera.yaml) Dec 6 03:13:22 localhost puppet-user[53191]: Warning: Undefined variable '::deploy_config_name'; Dec 6 03:13:22 localhost puppet-user[53191]: (file & line not available) Dec 6 03:13:22 localhost puppet-user[53191]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Dec 6 03:13:22 localhost puppet-user[53191]: (file & line not available) Dec 6 03:13:22 localhost puppet-user[53191]: Warning: Unknown variable: 'dhcp_agents_per_net'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/neutron.pp, line: 154, column: 37) Dec 6 03:13:22 localhost puppet-user[53191]: Notice: Compiled catalog for np0005548789.localdomain in environment production in 0.60 seconds Dec 6 03:13:22 localhost systemd[1]: libpod-329949042ceb45cabbb88c012af20b9f8b51f1de61f81364a852037377f8c8c1.scope: Deactivated successfully. 
Dec 6 03:13:22 localhost systemd[1]: libpod-329949042ceb45cabbb88c012af20b9f8b51f1de61f81364a852037377f8c8c1.scope: Consumed 8.552s CPU time. Dec 6 03:13:22 localhost puppet-user[53191]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/auth_strategy]/ensure: created Dec 6 03:13:22 localhost puppet-user[53191]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/core_plugin]/ensure: created Dec 6 03:13:22 localhost podman[51452]: 2025-12-06 08:13:22.74381788 +0000 UTC m=+10.542992941 container died 329949042ceb45cabbb88c012af20b9f8b51f1de61f81364a852037377f8c8c1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, build-date=2025-11-19T00:35:22Z, vendor=Red Hat, Inc., batch=17.1_20251118.1, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548789', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude 
tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, tcib_managed=true, managed_by=tripleo_ansible, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_puppet_step1, distribution-scope=public, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-libvirt, container_name=container-puppet-nova_libvirt, com.redhat.component=openstack-nova-libvirt-container, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.4) Dec 6 03:13:22 localhost puppet-user[53191]: Notice: 
/Stage[main]/Neutron/Neutron_config[DEFAULT/host]/ensure: created Dec 6 03:13:22 localhost puppet-user[53191]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/dns_domain]/ensure: created Dec 6 03:13:22 localhost puppet-user[53191]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/dhcp_agent_notification]/ensure: created Dec 6 03:13:22 localhost puppet-user[53191]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/allow_overlapping_ips]/ensure: created Dec 6 03:13:22 localhost puppet-user[53191]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/global_physnet_mtu]/ensure: created Dec 6 03:13:22 localhost puppet-user[53191]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/vlan_transparent]/ensure: created Dec 6 03:13:22 localhost puppet-user[53191]: Notice: /Stage[main]/Neutron/Neutron_config[agent/root_helper]/ensure: created Dec 6 03:13:22 localhost systemd[1]: tmp-crun.6xrvmn.mount: Deactivated successfully. Dec 6 03:13:22 localhost puppet-user[53191]: Notice: /Stage[main]/Neutron/Neutron_config[agent/report_interval]/ensure: created Dec 6 03:13:22 localhost puppet-user[53191]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/service_plugins]/ensure: created Dec 6 03:13:22 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-329949042ceb45cabbb88c012af20b9f8b51f1de61f81364a852037377f8c8c1-userdata-shm.mount: Deactivated successfully. Dec 6 03:13:22 localhost systemd[1]: var-lib-containers-storage-overlay-7114dad9ca5bd35640cc71d1bc50a0dfc385dbab46b008e450dae4492651614e-merged.mount: Deactivated successfully. 
Dec 6 03:13:22 localhost puppet-user[53191]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/debug]/ensure: created Dec 6 03:13:22 localhost puppet-user[53191]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/nova_metadata_host]/ensure: created Dec 6 03:13:22 localhost puppet-user[53191]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/nova_metadata_protocol]/ensure: created Dec 6 03:13:22 localhost puppet-user[53191]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/metadata_proxy_shared_secret]/ensure: created Dec 6 03:13:22 localhost puppet-user[53191]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/metadata_workers]/ensure: created Dec 6 03:13:22 localhost puppet-user[53191]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/state_path]/ensure: created Dec 6 03:13:22 localhost puppet-user[53191]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/hwol_qos_enabled]/ensure: created Dec 6 03:13:22 localhost puppet-user[53191]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[agent/root_helper]/ensure: created Dec 6 03:13:22 localhost puppet-user[53191]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[ovs/ovsdb_connection]/ensure: created Dec 6 03:13:22 localhost puppet-user[53191]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[ovs/ovsdb_connection_timeout]/ensure: created Dec 6 03:13:22 localhost podman[53303]: 2025-12-06 08:13:22.889268479 +0000 UTC m=+0.137628937 container cleanup 329949042ceb45cabbb88c012af20b9f8b51f1de61f81364a852037377f8c8c1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, release=1761123044, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, build-date=2025-11-19T00:35:22Z, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548789', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=container-puppet-nova_libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-libvirt-container, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_puppet_step1, name=rhosp17/openstack-nova-libvirt, maintainer=OpenStack TripleO Team, tcib_managed=true, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, io.openshift.expose-services=) Dec 6 03:13:22 localhost puppet-user[53191]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[ovn/ovsdb_probe_interval]/ensure: created Dec 6 03:13:22 localhost puppet-user[53191]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[ovn/ovn_nb_connection]/ensure: created Dec 6 03:13:22 localhost systemd[1]: libpod-conmon-329949042ceb45cabbb88c012af20b9f8b51f1de61f81364a852037377f8c8c1.scope: Deactivated successfully. 
Dec 6 03:13:22 localhost puppet-user[53191]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[ovn/ovn_sb_connection]/ensure: created Dec 6 03:13:22 localhost python3[51252]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-nova_libvirt --conmon-pidfile /run/container-puppet-nova_libvirt.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005548789 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password --env NAME=nova_libvirt --env STEP_CONFIG=include ::tripleo::packages#012# TODO(emilien): figure how to deal with libvirt profile.#012# We'll probably treat it like we do with Neutron plugins.#012# Until then, just include it in the default nova-compute role.#012include tripleo::profile::base::nova::compute::libvirt#012#012include tripleo::profile::base::nova::libvirt#012#012include tripleo::profile::base::nova::compute::libvirt_guests#012#012include tripleo::profile::base::sshd#012include tripleo::profile::base::nova::migration::target --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-nova_libvirt --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548789', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 
'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-nova_libvirt.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume 
/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Dec 6 03:13:22 localhost puppet-user[53191]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Default[neutron_config]/Neutron_config[DEFAULT/transport_url]/ensure: created
Dec 6 03:13:22 localhost puppet-user[53191]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Default[neutron_config]/Neutron_config[DEFAULT/control_exchange]/ensure: created
Dec 6 03:13:22 localhost puppet-user[53191]: Notice: /Stage[main]/Neutron/Oslo::Concurrency[neutron_config]/Neutron_config[oslo_concurrency/lock_path]/ensure: created
Dec 6 03:13:22 localhost puppet-user[53191]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Notifications[neutron_config]/Neutron_config[oslo_messaging_notifications/driver]/ensure: created
Dec 6 03:13:22 localhost puppet-user[53191]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Notifications[neutron_config]/Neutron_config[oslo_messaging_notifications/transport_url]/ensure: created
Dec 6 03:13:22 localhost puppet-user[53191]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/heartbeat_in_pthread]/ensure: created
Dec 6 03:13:22 localhost puppet-user[53191]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created
Dec 6 03:13:23 localhost puppet-user[53191]: Notice: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/debug]/ensure: created
Dec 6 03:13:23 localhost puppet-user[53191]: Notice: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/log_dir]/ensure: created
Dec 6 03:13:23 localhost puppet-user[53191]: Notice: Applied catalog in 0.55 seconds
Dec 6 03:13:23 localhost puppet-user[53191]: Application:
Dec 6 03:13:23 localhost puppet-user[53191]: Initial environment: production
Dec 6 03:13:23 localhost puppet-user[53191]: Converged environment: production
Dec 6 03:13:23 localhost puppet-user[53191]: Run mode: user
Dec 6 03:13:23 localhost puppet-user[53191]: Changes:
Dec 6 03:13:23 localhost puppet-user[53191]: Total: 33
Dec 6 03:13:23 localhost puppet-user[53191]: Events:
Dec 6 03:13:23 localhost puppet-user[53191]: Success: 33
Dec 6 03:13:23 localhost puppet-user[53191]: Total: 33
Dec 6 03:13:23 localhost puppet-user[53191]: Resources:
Dec 6 03:13:23 localhost puppet-user[53191]: Skipped: 21
Dec 6 03:13:23 localhost puppet-user[53191]: Changed: 33
Dec 6 03:13:23 localhost puppet-user[53191]: Out of sync: 33
Dec 6 03:13:23 localhost puppet-user[53191]: Total: 155
Dec 6 03:13:23 localhost puppet-user[53191]: Time:
Dec 6 03:13:23 localhost puppet-user[53191]: Resources: 0.00
Dec 6 03:13:23 localhost puppet-user[53191]: Ovn metadata agent config: 0.08
Dec 6 03:13:23 localhost puppet-user[53191]: Neutron config: 0.41
Dec 6 03:13:23 localhost puppet-user[53191]: Transaction evaluation: 0.54
Dec 6 03:13:23 localhost puppet-user[53191]: Catalog application: 0.55
Dec 6 03:13:23 localhost puppet-user[53191]: Config retrieval: 0.67
Dec 6 03:13:23 localhost puppet-user[53191]: Last run: 1765008803
Dec 6 03:13:23 localhost puppet-user[53191]: Total: 0.55
Dec 6 03:13:23 localhost puppet-user[53191]: Version:
Dec 6 03:13:23 localhost puppet-user[53191]: Config: 1765008802
Dec 6 03:13:23 localhost puppet-user[53191]: Puppet: 7.10.0
Dec 6 03:13:23 localhost systemd[1]: libpod-17558ff0747fce48f7968377192af9e4833071686c98fabf2053cfd3378b2915.scope: Deactivated successfully.
Dec 6 03:13:23 localhost systemd[1]: libpod-17558ff0747fce48f7968377192af9e4833071686c98fabf2053cfd3378b2915.scope: Consumed 3.496s CPU time.
Dec 6 03:13:23 localhost podman[53136]: 2025-12-06 08:13:23.752160307 +0000 UTC m=+4.059987462 container died 17558ff0747fce48f7968377192af9e4833071686c98fabf2053cfd3378b2915 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-server, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-server, io.buildah.version=1.41.4, version=17.1.12, maintainer=OpenStack TripleO Team, tcib_managed=true, distribution-scope=public, config_id=tripleo_puppet_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 neutron-server, batch=17.1_20251118.1, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=container-puppet-neutron, architecture=x86_64, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548789', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vendor=Red Hat, Inc., url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, com.redhat.component=openstack-neutron-server-container, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-server, build-date=2025-11-19T00:23:27Z, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 6 03:13:23 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-17558ff0747fce48f7968377192af9e4833071686c98fabf2053cfd3378b2915-userdata-shm.mount: Deactivated successfully.
Dec 6 03:13:23 localhost systemd[1]: var-lib-containers-storage-overlay-2444c943287df738a2d4fa7e71e9d9b0754a541b609353f29be07295aed4c411-merged.mount: Deactivated successfully.
Dec 6 03:13:23 localhost podman[53377]: 2025-12-06 08:13:23.86968371 +0000 UTC m=+0.110123234 container cleanup 17558ff0747fce48f7968377192af9e4833071686c98fabf2053cfd3378b2915 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, release=1761123044, summary=Red Hat OpenStack Platform 17.1 neutron-server, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, config_id=tripleo_puppet_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548789', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, architecture=x86_64, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-server-container, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-neutron-server, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-server, build-date=2025-11-19T00:23:27Z, container_name=container-puppet-neutron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vcs-type=git, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-server, version=17.1.12, io.openshift.expose-services=, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4)
Dec 6 03:13:23 localhost systemd[1]: libpod-conmon-17558ff0747fce48f7968377192af9e4833071686c98fabf2053cfd3378b2915.scope: Deactivated successfully.
Dec 6 03:13:23 localhost python3[51252]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-neutron --conmon-pidfile /run/container-puppet-neutron.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005548789 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config --env NAME=neutron --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::neutron::ovn_metadata#012 --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-neutron --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548789', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-neutron.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1
Dec 6 03:13:24 localhost python3[53430]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 03:13:25 localhost python3[53462]: ansible-stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 6 03:13:26 localhost python3[53512]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-container-shutdown follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 6 03:13:26 localhost python3[53555]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765008805.9646614-84320-120609074200236/source dest=/usr/libexec/tripleo-container-shutdown mode=0700 owner=root group=root _original_basename=tripleo-container-shutdown follow=False checksum=7d67b1986212f5548057505748cd74cfcf9c0d35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 03:13:27 localhost python3[53617]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-start-podman-container follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 6 03:13:27 localhost python3[53660]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765008806.766931-84320-138114916142107/source dest=/usr/libexec/tripleo-start-podman-container mode=0700 owner=root group=root _original_basename=tripleo-start-podman-container follow=False checksum=536965633b8d3b1ce794269ffb07be0105a560a0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 03:13:28 localhost python3[53722]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/tripleo-container-shutdown.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 6 03:13:28 localhost python3[53765]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765008807.7425575-84395-17108290442330/source dest=/usr/lib/systemd/system/tripleo-container-shutdown.service mode=0644 owner=root group=root _original_basename=tripleo-container-shutdown-service follow=False checksum=66c1d41406ba8714feb9ed0a35259a7a57ef9707 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 03:13:29 localhost python3[53827]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 6 03:13:29 localhost python3[53870]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765008808.7203999-84416-27283574029558/source dest=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset mode=0644 owner=root group=root _original_basename=91-tripleo-container-shutdown-preset follow=False checksum=bccb1207dcbcfaa5ca05f83c8f36ce4c2460f081 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 03:13:29 localhost python3[53900]: ansible-systemd Invoked with name=tripleo-container-shutdown state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 6 03:13:29 localhost systemd[1]: Reloading.
Dec 6 03:13:30 localhost systemd-rc-local-generator[53922]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 6 03:13:30 localhost systemd-sysv-generator[53926]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 6 03:13:30 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 6 03:13:30 localhost systemd[1]: Starting dnf makecache...
Dec 6 03:13:30 localhost systemd[1]: Reloading.
Dec 6 03:13:30 localhost systemd-sysv-generator[53968]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 6 03:13:30 localhost systemd-rc-local-generator[53963]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 6 03:13:30 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 6 03:13:30 localhost dnf[53938]: Updating Subscription Management repositories.
Dec 6 03:13:30 localhost systemd[1]: Starting TripleO Container Shutdown...
Dec 6 03:13:30 localhost systemd[1]: Finished TripleO Container Shutdown.
Dec 6 03:13:30 localhost python3[54024]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/netns-placeholder.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 6 03:13:31 localhost python3[54067]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765008810.6624763-84495-27298766745223/source dest=/usr/lib/systemd/system/netns-placeholder.service mode=0644 owner=root group=root _original_basename=netns-placeholder-service follow=False checksum=8e9c6d5ce3a6e7f71c18780ec899f32f23de4c71 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 03:13:31 localhost python3[54129]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 6 03:13:32 localhost dnf[53938]: Failed determining last makecache time.
Dec 6 03:13:32 localhost python3[54172]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765008811.5923584-84532-23126471426552/source dest=/usr/lib/systemd/system-preset/91-netns-placeholder.preset mode=0644 owner=root group=root _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 03:13:32 localhost dnf[53938]: Red Hat Enterprise Linux 9 for x86_64 - BaseOS 32 kB/s | 4.1 kB 00:00
Dec 6 03:13:32 localhost dnf[53938]: Red Hat Enterprise Linux 9 for x86_64 - BaseOS 47 kB/s | 4.1 kB 00:00
Dec 6 03:13:32 localhost dnf[53938]: Red Hat OpenStack Platform 17.1 for RHEL 9 x86_ 48 kB/s | 4.0 kB 00:00
Dec 6 03:13:32 localhost python3[54204]: ansible-systemd Invoked with name=netns-placeholder state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 6 03:13:32 localhost dnf[53938]: Red Hat Enterprise Linux 9 for x86_64 - AppStre 62 kB/s | 4.5 kB 00:00
Dec 6 03:13:32 localhost systemd[1]: Reloading.
Dec 6 03:13:32 localhost systemd-sysv-generator[54239]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 6 03:13:32 localhost systemd-rc-local-generator[54233]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 6 03:13:32 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 6 03:13:32 localhost dnf[53938]: Red Hat Enterprise Linux 9 for x86_64 - AppStre 48 kB/s | 4.5 kB 00:00
Dec 6 03:13:33 localhost systemd[1]: Reloading.
Dec 6 03:13:33 localhost dnf[53938]: Fast Datapath for RHEL 9 x86_64 (RPMs) 47 kB/s | 4.0 kB 00:00
Dec 6 03:13:33 localhost systemd-sysv-generator[54278]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 6 03:13:33 localhost systemd-rc-local-generator[54272]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 6 03:13:33 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 6 03:13:33 localhost dnf[53938]: Red Hat Enterprise Linux 9 for x86_64 - High Av 48 kB/s | 4.0 kB 00:00
Dec 6 03:13:33 localhost systemd[1]: Starting Create netns directory...
Dec 6 03:13:33 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 6 03:13:33 localhost systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 6 03:13:33 localhost systemd[1]: Finished Create netns directory.
Dec 6 03:13:33 localhost dnf[53938]: Metadata cache created.
Dec 6 03:13:33 localhost systemd[1]: dnf-makecache.service: Deactivated successfully.
Dec 6 03:13:33 localhost systemd[1]: Finished dnf makecache.
Dec 6 03:13:33 localhost systemd[1]: dnf-makecache.service: Consumed 2.736s CPU time.
Dec 6 03:13:33 localhost python3[54303]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6
Dec 6 03:13:33 localhost python3[54303]: ansible-container_puppet_config [WARNING] Config change detected for metrics_qdr, new hash: e8f60832f8f2382eeceefcaaff307d45
Dec 6 03:13:33 localhost python3[54303]: ansible-container_puppet_config [WARNING] Config change detected for collectd, new hash: 4767aaabc3de112d8791c290aa2b669d
Dec 6 03:13:33 localhost python3[54303]: ansible-container_puppet_config [WARNING] Config change detected for iscsid, new hash: 18576754feb36b85b5c8742ad9b5643d
Dec 6 03:13:33 localhost python3[54303]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtlogd_wrapper, new hash: 179caa3982511c1fd3314b961771f96c
Dec 6 03:13:33 localhost python3[54303]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtnodedevd, new hash: 179caa3982511c1fd3314b961771f96c
Dec 6 03:13:33 localhost python3[54303]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtproxyd, new hash: 179caa3982511c1fd3314b961771f96c
Dec 6 03:13:33 localhost python3[54303]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtqemud, new hash: 179caa3982511c1fd3314b961771f96c
Dec 6 03:13:33 localhost python3[54303]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtsecretd, new hash: 179caa3982511c1fd3314b961771f96c
Dec 6 03:13:33 localhost python3[54303]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtstoraged, new hash: 179caa3982511c1fd3314b961771f96c
Dec 6 03:13:33 localhost python3[54303]: ansible-container_puppet_config [WARNING] Config change detected for rsyslog, new hash: 7a657a42c3cbd75086c59cf211d6fafe
Dec 6 03:13:33 localhost python3[54303]: ansible-container_puppet_config [WARNING] Config change detected for ceilometer_agent_compute, new hash: 728090aef247cfdd273031dadf6d1125
Dec 6 03:13:33 localhost python3[54303]: ansible-container_puppet_config [WARNING] Config change detected for ceilometer_agent_ipmi, new hash: 728090aef247cfdd273031dadf6d1125
Dec 6 03:13:33 localhost python3[54303]: ansible-container_puppet_config [WARNING] Config change detected for logrotate_crond, new hash: 53ed83bb0cae779ff95edb2002262c6f
Dec 6 03:13:33 localhost python3[54303]: ansible-container_puppet_config [WARNING] Config change detected for nova_libvirt_init_secret, new hash: 179caa3982511c1fd3314b961771f96c
Dec 6 03:13:33 localhost python3[54303]: ansible-container_puppet_config [WARNING] Config change detected for nova_migration_target, new hash: 179caa3982511c1fd3314b961771f96c
Dec 6 03:13:33 localhost python3[54303]: ansible-container_puppet_config [WARNING] Config change detected for ovn_metadata_agent, new hash: 270cf6e6b67cba1ef197c7fa89d5bb20
Dec 6 03:13:33 localhost python3[54303]: ansible-container_puppet_config [WARNING] Config change detected for nova_compute, new hash: 18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c
Dec 6 03:13:33 localhost python3[54303]: ansible-container_puppet_config [WARNING] Config change detected for nova_wait_for_compute_service, new hash: 179caa3982511c1fd3314b961771f96c
Dec 6 03:13:35 localhost python3[54361]: ansible-tripleo_container_manage Invoked with config_id=tripleo_step1 config_dir=/var/lib/tripleo-config/container-startup-config/step_1 config_patterns=*.json config_overrides={} concurrency=5 log_base_path=/var/log/containers/stdouts debug=False
Dec 6 03:13:35 localhost podman[54400]: 2025-12-06 08:13:35.579806028 +0000 UTC m=+0.087750040 container create 977f365293d6490c82213a07a5e48ea92b9bd55fc3f3bbe6f0c9ec1022b39471 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, vcs-type=git, container_name=metrics_qdr_init_logs, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, managed_by=tripleo_ansible, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, name=rhosp17/openstack-qdrouterd, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.4)
Dec 6 03:13:35 localhost systemd[1]: Started libpod-conmon-977f365293d6490c82213a07a5e48ea92b9bd55fc3f3bbe6f0c9ec1022b39471.scope.
Dec 6 03:13:35 localhost systemd[1]: Started libcrun container.
Dec 6 03:13:35 localhost podman[54400]: 2025-12-06 08:13:35.537041863 +0000 UTC m=+0.044985845 image pull registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1
Dec 6 03:13:35 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4cfdd88bd7d8f375c1733109d5a26e8b3ffb2befe75a3de2b4653848e69b6e4/merged/var/log/qdrouterd supports timestamps until 2038 (0x7fffffff)
Dec 6 03:13:35 localhost podman[54400]: 2025-12-06 08:13:35.65184499 +0000 UTC m=+0.159788972 container init 977f365293d6490c82213a07a5e48ea92b9bd55fc3f3bbe6f0c9ec1022b39471 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr_init_logs, config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, build-date=2025-11-18T22:49:46Z, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, config_id=tripleo_step1, tcib_managed=true, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, maintainer=OpenStack TripleO Team)
Dec 6 03:13:35 localhost podman[54400]: 2025-12-06 08:13:35.666257727 +0000 UTC m=+0.174201719 container start 977f365293d6490c82213a07a5e48ea92b9bd55fc3f3bbe6f0c9ec1022b39471 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, container_name=metrics_qdr_init_logs, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, distribution-scope=public, batch=17.1_20251118.1, version=17.1.12, vcs-type=git, config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, architecture=x86_64, build-date=2025-11-18T22:49:46Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 6 03:13:35 localhost podman[54400]: 2025-12-06 08:13:35.666656169 +0000 UTC m=+0.174600211 container attach 977f365293d6490c82213a07a5e48ea92b9bd55fc3f3bbe6f0c9ec1022b39471 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, build-date=2025-11-18T22:49:46Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, version=17.1.12, config_id=tripleo_step1, release=1761123044, config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, vcs-type=git, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr_init_logs, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20251118.1, architecture=x86_64, io.buildah.version=1.41.4, vendor=Red Hat, Inc., url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container)
Dec 6 03:13:35 localhost systemd[1]: libpod-977f365293d6490c82213a07a5e48ea92b9bd55fc3f3bbe6f0c9ec1022b39471.scope: Deactivated successfully.
Dec 6 03:13:35 localhost podman[54400]: 2025-12-06 08:13:35.671136828 +0000 UTC m=+0.179080830 container died 977f365293d6490c82213a07a5e48ea92b9bd55fc3f3bbe6f0c9ec1022b39471 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.4, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., version=17.1.12, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step1, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr_init_logs, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044)
Dec 6 03:13:35 localhost podman[54420]: 2025-12-06 08:13:35.771149247
+0000 UTC m=+0.086995957 container cleanup 977f365293d6490c82213a07a5e48ea92b9bd55fc3f3bbe6f0c9ec1022b39471 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, container_name=metrics_qdr_init_logs, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20251118.1, release=1761123044, config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:46Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, io.buildah.version=1.41.4, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, distribution-scope=public, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 6 03:13:35 localhost systemd[1]: libpod-conmon-977f365293d6490c82213a07a5e48ea92b9bd55fc3f3bbe6f0c9ec1022b39471.scope: Deactivated successfully. 
Dec 6 03:13:35 localhost python3[54361]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name metrics_qdr_init_logs --conmon-pidfile /run/metrics_qdr_init_logs.pid --detach=False --label config_id=tripleo_step1 --label container_name=metrics_qdr_init_logs --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/metrics_qdr_init_logs.log --network none --privileged=False --user root --volume /var/log/containers/metrics_qdr:/var/log/qdrouterd:z registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 /bin/bash -c chown -R qdrouterd:qdrouterd /var/log/qdrouterd Dec 6 03:13:36 localhost podman[54499]: 2025-12-06 08:13:36.228330274 +0000 UTC m=+0.083507499 container create 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, tcib_managed=true, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:46Z, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., version=17.1.12, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, url=https://www.redhat.com, io.openshift.tags=rhosp 
osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, config_id=tripleo_step1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 6 03:13:36 localhost systemd[1]: Started libpod-conmon-203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.scope. Dec 6 03:13:36 localhost systemd[1]: Started libcrun container. 
Dec 6 03:13:36 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/beaf327340ccd7215a759765519263eed11e8999b460cf785f7dbab3207ce8ee/merged/var/lib/qdrouterd supports timestamps until 2038 (0x7fffffff) Dec 6 03:13:36 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/beaf327340ccd7215a759765519263eed11e8999b460cf785f7dbab3207ce8ee/merged/var/log/qdrouterd supports timestamps until 2038 (0x7fffffff) Dec 6 03:13:36 localhost podman[54499]: 2025-12-06 08:13:36.188096587 +0000 UTC m=+0.043273892 image pull registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 Dec 6 03:13:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. Dec 6 03:13:36 localhost podman[54499]: 2025-12-06 08:13:36.30469197 +0000 UTC m=+0.159869275 container init 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, managed_by=tripleo_ansible, version=17.1.12, vcs-type=git, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 
'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, config_id=tripleo_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, architecture=x86_64) Dec 6 03:13:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. 
Dec 6 03:13:36 localhost podman[54499]: 2025-12-06 08:13:36.339475868 +0000 UTC m=+0.194653103 container start 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, batch=17.1_20251118.1, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack 
Platform 17.1 qdrouterd, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:46Z, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, release=1761123044, container_name=metrics_qdr) Dec 6 03:13:36 localhost python3[54361]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name metrics_qdr --conmon-pidfile /run/metrics_qdr.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=e8f60832f8f2382eeceefcaaff307d45 --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step1 --label container_name=metrics_qdr --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/metrics_qdr.log --network host --privileged=False --user qdrouterd --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro --volume /var/lib/metrics_qdr:/var/lib/qdrouterd:z --volume /var/log/containers/metrics_qdr:/var/log/qdrouterd:z registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 Dec 6 03:13:36 localhost podman[54521]: 2025-12-06 08:13:36.483315295 +0000 UTC m=+0.139164123 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=starting, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.12, tcib_managed=true, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': 
False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:46Z, release=1761123044, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, architecture=x86_64) Dec 6 03:13:36 localhost systemd[1]: tmp-crun.n5AVBM.mount: Deactivated successfully. 
Dec 6 03:13:36 localhost systemd[1]: var-lib-containers-storage-overlay-f4cfdd88bd7d8f375c1733109d5a26e8b3ffb2befe75a3de2b4653848e69b6e4-merged.mount: Deactivated successfully. Dec 6 03:13:36 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-977f365293d6490c82213a07a5e48ea92b9bd55fc3f3bbe6f0c9ec1022b39471-userdata-shm.mount: Deactivated successfully. Dec 6 03:13:36 localhost podman[54521]: 2025-12-06 08:13:36.727238484 +0000 UTC m=+0.383087312 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, distribution-scope=public, vcs-type=git, 
io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, version=17.1.12, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, vendor=Red Hat, Inc., io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp17/openstack-qdrouterd) Dec 6 03:13:36 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. 
Dec 6 03:13:36 localhost python3[54597]: ansible-file Invoked with path=/etc/systemd/system/tripleo_metrics_qdr.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 03:13:37 localhost python3[54613]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_metrics_qdr_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 6 03:13:37 localhost python3[54674]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765008817.2148407-84696-207396134986591/source dest=/etc/systemd/system/tripleo_metrics_qdr.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 03:13:38 localhost python3[54690]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Dec 6 03:13:38 localhost systemd[1]: Reloading. Dec 6 03:13:38 localhost systemd-rc-local-generator[54714]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 6 03:13:38 localhost systemd-sysv-generator[54719]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 6 03:13:38 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. 
Support for MemoryLimit= will be removed soon. Dec 6 03:13:39 localhost python3[54742]: ansible-systemd Invoked with state=restarted name=tripleo_metrics_qdr.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 6 03:13:39 localhost systemd[1]: Reloading. Dec 6 03:13:39 localhost systemd-sysv-generator[54773]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 6 03:13:39 localhost systemd-rc-local-generator[54768]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 6 03:13:39 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 6 03:13:39 localhost systemd[1]: Starting metrics_qdr container... Dec 6 03:13:39 localhost systemd[1]: Started metrics_qdr container. 
Dec 6 03:13:39 localhost python3[54822]: ansible-file Invoked with path=/var/lib/container-puppet/container-puppet-tasks1.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 03:13:41 localhost python3[54943]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=True puppet_config=/var/lib/container-puppet/container-puppet-tasks1.json short_hostname=np0005548789 step=1 update_config_hash_only=False Dec 6 03:13:41 localhost python3[54959]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 03:13:42 localhost python3[54975]: ansible-container_config_data Invoked with config_path=/var/lib/tripleo-config/container-puppet-config/step_1 config_pattern=container-puppet-*.json config_overrides={} debug=True Dec 6 03:14:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. 
Dec 6 03:14:06 localhost podman[55054]: 2025-12-06 08:14:06.916306151 +0000 UTC m=+0.078039050 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, distribution-scope=public, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., architecture=x86_64, tcib_managed=true, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-qdrouterd, maintainer=OpenStack TripleO Team, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, name=rhosp17/openstack-qdrouterd, vcs-type=git, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 6 03:14:07 localhost podman[55054]: 2025-12-06 08:14:07.098013051 +0000 UTC m=+0.259745930 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20251118.1, config_id=tripleo_step1, tcib_managed=true, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, version=17.1.12, io.buildah.version=1.41.4, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, container_name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 6 03:14:07 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 03:14:11 localhost sshd[55083]: main: sshd: ssh-rsa algorithm is disabled Dec 6 03:14:15 localhost sshd[55085]: main: sshd: ssh-rsa algorithm is disabled Dec 6 03:14:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. 
Dec 6 03:14:37 localhost podman[55088]: 2025-12-06 08:14:37.922845189 +0000 UTC m=+0.083528398 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, tcib_managed=true, distribution-scope=public, config_id=tripleo_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, managed_by=tripleo_ansible) Dec 6 03:14:38 localhost podman[55088]: 2025-12-06 08:14:38.130730692 +0000 UTC m=+0.291413861 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, name=rhosp17/openstack-qdrouterd, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, release=1761123044, tcib_managed=true, architecture=x86_64, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, url=https://www.redhat.com, vendor=Red Hat, Inc.) Dec 6 03:14:38 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 03:15:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. 
Dec 6 03:15:08 localhost podman[55193]: 2025-12-06 08:15:08.921945215 +0000 UTC m=+0.079983321 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, release=1761123044, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 6 03:15:09 localhost podman[55193]: 2025-12-06 08:15:09.117359568 +0000 UTC m=+0.275397624 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, io.openshift.expose-services=, build-date=2025-11-18T22:49:46Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.buildah.version=1.41.4, config_id=tripleo_step1, release=1761123044, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc.) Dec 6 03:15:09 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 03:15:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. Dec 6 03:15:39 localhost systemd[1]: tmp-crun.9ArRx9.mount: Deactivated successfully. 
Dec 6 03:15:39 localhost podman[55222]: 2025-12-06 08:15:39.928296018 +0000 UTC m=+0.087339686 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, container_name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.openshift.expose-services=, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, url=https://www.redhat.com, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, config_id=tripleo_step1, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20251118.1) Dec 6 03:15:40 localhost podman[55222]: 2025-12-06 08:15:40.121204784 +0000 UTC m=+0.280248432 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red 
Hat, Inc., release=1761123044, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, distribution-scope=public, architecture=x86_64, batch=17.1_20251118.1, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12) Dec 6 03:15:40 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 03:15:40 localhost sshd[55252]: main: sshd: ssh-rsa algorithm is disabled Dec 6 03:15:49 localhost sshd[55254]: main: sshd: ssh-rsa algorithm is disabled Dec 6 03:16:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. 
Dec 6 03:16:10 localhost podman[55332]: 2025-12-06 08:16:10.922885059 +0000 UTC m=+0.081911905 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, url=https://www.redhat.com, build-date=2025-11-18T22:49:46Z, version=17.1.12, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, vendor=Red Hat, Inc., container_name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 6 03:16:11 localhost podman[55332]: 2025-12-06 08:16:11.109966475 +0000 UTC m=+0.268993351 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vendor=Red Hat, Inc., container_name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, managed_by=tripleo_ansible, architecture=x86_64, url=https://www.redhat.com, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, distribution-scope=public, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:46Z) Dec 6 03:16:11 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 03:16:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. Dec 6 03:16:41 localhost systemd[1]: tmp-crun.5zskWq.mount: Deactivated successfully. 
Dec 6 03:16:41 localhost podman[55363]: 2025-12-06 08:16:41.93435171 +0000 UTC m=+0.091101149 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, version=17.1.12, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, release=1761123044, managed_by=tripleo_ansible, config_id=tripleo_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-qdrouterd, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, build-date=2025-11-18T22:49:46Z, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, batch=17.1_20251118.1, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd) Dec 6 03:16:42 localhost podman[55363]: 2025-12-06 08:16:42.173448236 +0000 UTC m=+0.330197605 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, tcib_managed=true, build-date=2025-11-18T22:49:46Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, url=https://www.redhat.com, vcs-type=git, architecture=x86_64, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, version=17.1.12, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container) Dec 6 03:16:42 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 03:17:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. Dec 6 03:17:12 localhost systemd[1]: tmp-crun.QC7XRG.mount: Deactivated successfully. 
Dec 6 03:17:12 localhost podman[55469]: 2025-12-06 08:17:12.921031341 +0000 UTC m=+0.083810382 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.41.4, distribution-scope=public, release=1761123044, managed_by=tripleo_ansible, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, tcib_managed=true, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=, vcs-type=git, config_id=tripleo_step1) Dec 6 03:17:13 localhost podman[55469]: 2025-12-06 08:17:13.160446928 +0000 UTC m=+0.323225899 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 
'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-11-18T22:49:46Z, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, release=1761123044, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, version=17.1.12, tcib_managed=true) Dec 6 03:17:13 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 03:17:29 localhost sshd[55498]: main: sshd: ssh-rsa algorithm is disabled Dec 6 03:17:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. 
Dec 6 03:17:43 localhost podman[55500]: 2025-12-06 08:17:43.907013594 +0000 UTC m=+0.073025030 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, config_id=tripleo_step1, distribution-scope=public, vcs-type=git, name=rhosp17/openstack-qdrouterd) Dec 6 03:17:44 localhost podman[55500]: 2025-12-06 08:17:44.128209206 +0000 UTC m=+0.294220612 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.4, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, container_name=metrics_qdr, vcs-type=git, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, 
batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, vendor=Red Hat, Inc., release=1761123044, tcib_managed=true, version=17.1.12) Dec 6 03:17:44 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 03:18:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. 
Dec 6 03:18:14 localhost podman[55607]: 2025-12-06 08:18:14.924109176 +0000 UTC m=+0.080923944 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-type=git, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, 
config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible, version=17.1.12, architecture=x86_64, release=1761123044, vendor=Red Hat, Inc., io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 6 03:18:15 localhost podman[55607]: 2025-12-06 08:18:15.116090717 +0000 UTC m=+0.272905445 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-type=git, container_name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:46Z, vendor=Red Hat, Inc., distribution-scope=public, config_id=tripleo_step1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 6 03:18:15 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. 
Dec 6 03:18:21 localhost sshd[55637]: main: sshd: ssh-rsa algorithm is disabled Dec 6 03:18:23 localhost sshd[55639]: main: sshd: ssh-rsa algorithm is disabled Dec 6 03:18:26 localhost sshd[55641]: main: sshd: ssh-rsa algorithm is disabled Dec 6 03:18:27 localhost sshd[55643]: main: sshd: ssh-rsa algorithm is disabled Dec 6 03:18:29 localhost ceph-osd[31726]: osd.1 pg_epoch: 20 pg[2.0( empty local-lis/les=0/0 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [3,1,5] r=1 lpr=20 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 6 03:18:29 localhost sshd[55645]: main: sshd: ssh-rsa algorithm is disabled Dec 6 03:18:30 localhost ceph-osd[31726]: osd.1 pg_epoch: 22 pg[3.0( empty local-lis/les=0/0 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [1,3,2] r=0 lpr=22 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 6 03:18:31 localhost ceph-osd[31726]: osd.1 pg_epoch: 23 pg[3.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [1,3,2] r=0 lpr=22 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 6 03:18:33 localhost ceph-osd[32665]: osd.4 pg_epoch: 25 pg[5.0( empty local-lis/les=0/0 n=0 ec=25/25 lis/c=0/0 les/c/f=0/0/0 sis=25) [4,5,3] r=0 lpr=25 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 6 03:18:33 localhost ceph-osd[31726]: osd.1 pg_epoch: 24 pg[4.0( empty local-lis/les=0/0 n=0 ec=24/24 lis/c=0/0 les/c/f=0/0/0 sis=24) [5,0,1] r=2 lpr=24 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 6 03:18:34 localhost ceph-osd[32665]: osd.4 pg_epoch: 26 pg[5.0( empty local-lis/les=25/26 n=0 ec=25/25 lis/c=0/0 les/c/f=0/0/0 sis=25) [4,5,3] r=0 lpr=25 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 6 03:18:37 localhost ceph-osd[31726]: osd.1 pg_epoch: 29 pg[2.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.217467308s) [3,1,5] r=1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 
active pruub 1124.114501953s@ mbc={}] start_peering_interval up [3,1,5] -> [3,1,5], acting [3,1,5] -> [3,1,5], acting_primary 3 -> 3, up_primary 3 -> 3, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:37 localhost ceph-osd[31726]: osd.1 pg_epoch: 29 pg[2.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.214870453s) [3,1,5] r=1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1124.114501953s@ mbc={}] state: transitioning to Stray Dec 6 03:18:38 localhost ceph-osd[31726]: osd.1 pg_epoch: 30 pg[2.1e( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [3,1,5] r=1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 6 03:18:38 localhost ceph-osd[31726]: osd.1 pg_epoch: 30 pg[2.1f( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [3,1,5] r=1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 6 03:18:38 localhost ceph-osd[31726]: osd.1 pg_epoch: 30 pg[2.1b( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [3,1,5] r=1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 6 03:18:38 localhost ceph-osd[31726]: osd.1 pg_epoch: 30 pg[2.1c( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [3,1,5] r=1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 6 03:18:38 localhost ceph-osd[31726]: osd.1 pg_epoch: 30 pg[2.a( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [3,1,5] r=1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 6 03:18:38 localhost ceph-osd[31726]: osd.1 pg_epoch: 30 pg[2.9( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [3,1,5] r=1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning 
to Stray Dec 6 03:18:38 localhost ceph-osd[31726]: osd.1 pg_epoch: 30 pg[2.8( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [3,1,5] r=1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 6 03:18:38 localhost ceph-osd[31726]: osd.1 pg_epoch: 30 pg[2.7( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [3,1,5] r=1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 6 03:18:38 localhost ceph-osd[31726]: osd.1 pg_epoch: 30 pg[2.5( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [3,1,5] r=1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 6 03:18:38 localhost ceph-osd[31726]: osd.1 pg_epoch: 30 pg[2.4( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [3,1,5] r=1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 6 03:18:38 localhost ceph-osd[31726]: osd.1 pg_epoch: 30 pg[2.3( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [3,1,5] r=1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 6 03:18:38 localhost ceph-osd[31726]: osd.1 pg_epoch: 30 pg[2.6( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [3,1,5] r=1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 6 03:18:38 localhost ceph-osd[31726]: osd.1 pg_epoch: 30 pg[2.2( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [3,1,5] r=1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 6 03:18:38 localhost ceph-osd[31726]: osd.1 pg_epoch: 30 pg[2.1( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [3,1,5] r=1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 6 03:18:38 
localhost ceph-osd[31726]: osd.1 pg_epoch: 30 pg[2.b( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [3,1,5] r=1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 6 03:18:38 localhost ceph-osd[31726]: osd.1 pg_epoch: 30 pg[2.c( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [3,1,5] r=1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 6 03:18:38 localhost ceph-osd[31726]: osd.1 pg_epoch: 30 pg[2.e( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [3,1,5] r=1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 6 03:18:38 localhost ceph-osd[31726]: osd.1 pg_epoch: 30 pg[2.f( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [3,1,5] r=1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 6 03:18:38 localhost ceph-osd[31726]: osd.1 pg_epoch: 30 pg[2.10( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [3,1,5] r=1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 6 03:18:38 localhost ceph-osd[31726]: osd.1 pg_epoch: 30 pg[2.11( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [3,1,5] r=1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 6 03:18:38 localhost ceph-osd[31726]: osd.1 pg_epoch: 30 pg[2.d( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [3,1,5] r=1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 6 03:18:38 localhost ceph-osd[31726]: osd.1 pg_epoch: 30 pg[2.13( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [3,1,5] r=1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 6 03:18:38 localhost 
ceph-osd[31726]: osd.1 pg_epoch: 30 pg[2.12( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [3,1,5] r=1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 6 03:18:38 localhost ceph-osd[31726]: osd.1 pg_epoch: 30 pg[2.14( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [3,1,5] r=1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 6 03:18:38 localhost ceph-osd[31726]: osd.1 pg_epoch: 30 pg[2.15( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [3,1,5] r=1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 6 03:18:38 localhost ceph-osd[31726]: osd.1 pg_epoch: 30 pg[2.17( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [3,1,5] r=1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 6 03:18:38 localhost ceph-osd[31726]: osd.1 pg_epoch: 30 pg[2.16( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [3,1,5] r=1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 6 03:18:38 localhost ceph-osd[31726]: osd.1 pg_epoch: 30 pg[2.18( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [3,1,5] r=1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 6 03:18:38 localhost ceph-osd[31726]: osd.1 pg_epoch: 30 pg[2.19( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [3,1,5] r=1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 6 03:18:38 localhost ceph-osd[31726]: osd.1 pg_epoch: 30 pg[2.1a( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [3,1,5] r=1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 6 03:18:38 localhost 
ceph-osd[31726]: osd.1 pg_epoch: 30 pg[2.1d( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [3,1,5] r=1 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 6 03:18:39 localhost ceph-osd[31726]: osd.1 pg_epoch: 31 pg[3.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=31 pruub=15.685779572s) [1,3,2] r=0 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 active pruub 1126.021484375s@ mbc={}] start_peering_interval up [1,3,2] -> [1,3,2], acting [1,3,2] -> [1,3,2], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:39 localhost ceph-osd[31726]: osd.1 pg_epoch: 31 pg[4.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=31 pruub=10.205893517s) [5,0,1] r=2 lpr=31 pi=[24,31)/1 crt=0'0 mlcod 0'0 active pruub 1120.541748047s@ mbc={}] start_peering_interval up [5,0,1] -> [5,0,1], acting [5,0,1] -> [5,0,1], acting_primary 5 -> 5, up_primary 5 -> 5, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:39 localhost ceph-osd[31726]: osd.1 pg_epoch: 31 pg[3.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=31 pruub=15.685779572s) [1,3,2] r=0 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 unknown pruub 1126.021484375s@ mbc={}] state: transitioning to Primary Dec 6 03:18:39 localhost ceph-osd[31726]: osd.1 pg_epoch: 31 pg[4.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=31 pruub=10.202860832s) [5,0,1] r=2 lpr=31 pi=[24,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1120.541748047s@ mbc={}] state: transitioning to Stray Dec 6 03:18:40 localhost ceph-osd[31726]: osd.1 pg_epoch: 32 pg[4.19( empty local-lis/les=24/25 n=0 ec=31/24 lis/c=24/24 les/c/f=25/25/0 sis=31) [5,0,1] r=2 lpr=31 pi=[24,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 6 03:18:40 localhost ceph-osd[31726]: osd.1 pg_epoch: 32 
pg[4.18( empty local-lis/les=24/25 n=0 ec=31/24 lis/c=24/24 les/c/f=25/25/0 sis=31) [5,0,1] r=2 lpr=31 pi=[24,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 6 03:18:40 localhost ceph-osd[31726]: osd.1 pg_epoch: 32 pg[4.1a( empty local-lis/les=24/25 n=0 ec=31/24 lis/c=24/24 les/c/f=25/25/0 sis=31) [5,0,1] r=2 lpr=31 pi=[24,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 6 03:18:40 localhost ceph-osd[31726]: osd.1 pg_epoch: 32 pg[4.f( empty local-lis/les=24/25 n=0 ec=31/24 lis/c=24/24 les/c/f=25/25/0 sis=31) [5,0,1] r=2 lpr=31 pi=[24,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 6 03:18:40 localhost ceph-osd[31726]: osd.1 pg_epoch: 32 pg[4.e( empty local-lis/les=24/25 n=0 ec=31/24 lis/c=24/24 les/c/f=25/25/0 sis=31) [5,0,1] r=2 lpr=31 pi=[24,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 6 03:18:40 localhost ceph-osd[31726]: osd.1 pg_epoch: 32 pg[4.c( empty local-lis/les=24/25 n=0 ec=31/24 lis/c=24/24 les/c/f=25/25/0 sis=31) [5,0,1] r=2 lpr=31 pi=[24,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 6 03:18:40 localhost ceph-osd[31726]: osd.1 pg_epoch: 32 pg[4.1d( empty local-lis/les=24/25 n=0 ec=31/24 lis/c=24/24 les/c/f=25/25/0 sis=31) [5,0,1] r=2 lpr=31 pi=[24,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 6 03:18:40 localhost ceph-osd[31726]: osd.1 pg_epoch: 32 pg[4.1( empty local-lis/les=24/25 n=0 ec=31/24 lis/c=24/24 les/c/f=25/25/0 sis=31) [5,0,1] r=2 lpr=31 pi=[24,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 6 03:18:40 localhost ceph-osd[31726]: osd.1 pg_epoch: 32 pg[4.3( empty local-lis/les=24/25 n=0 ec=31/24 lis/c=24/24 les/c/f=25/25/0 sis=31) [5,0,1] r=2 lpr=31 pi=[24,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 6 03:18:40 localhost ceph-osd[31726]: osd.1 pg_epoch: 32 pg[4.2( empty 
local-lis/les=24/25 n=0 ec=31/24 lis/c=24/24 les/c/f=25/25/0 sis=31) [5,0,1] r=2 lpr=31 pi=[24,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 6 03:18:40 localhost ceph-osd[31726]: osd.1 pg_epoch: 32 pg[4.5( empty local-lis/les=24/25 n=0 ec=31/24 lis/c=24/24 les/c/f=25/25/0 sis=31) [5,0,1] r=2 lpr=31 pi=[24,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 6 03:18:40 localhost ceph-osd[31726]: osd.1 pg_epoch: 32 pg[4.4( empty local-lis/les=24/25 n=0 ec=31/24 lis/c=24/24 les/c/f=25/25/0 sis=31) [5,0,1] r=2 lpr=31 pi=[24,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 6 03:18:40 localhost ceph-osd[31726]: osd.1 pg_epoch: 32 pg[4.7( empty local-lis/les=24/25 n=0 ec=31/24 lis/c=24/24 les/c/f=25/25/0 sis=31) [5,0,1] r=2 lpr=31 pi=[24,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 6 03:18:40 localhost ceph-osd[31726]: osd.1 pg_epoch: 32 pg[4.6( empty local-lis/les=24/25 n=0 ec=31/24 lis/c=24/24 les/c/f=25/25/0 sis=31) [5,0,1] r=2 lpr=31 pi=[24,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 6 03:18:40 localhost ceph-osd[31726]: osd.1 pg_epoch: 32 pg[4.d( empty local-lis/les=24/25 n=0 ec=31/24 lis/c=24/24 les/c/f=25/25/0 sis=31) [5,0,1] r=2 lpr=31 pi=[24,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 6 03:18:40 localhost ceph-osd[31726]: osd.1 pg_epoch: 32 pg[4.a( empty local-lis/les=24/25 n=0 ec=31/24 lis/c=24/24 les/c/f=25/25/0 sis=31) [5,0,1] r=2 lpr=31 pi=[24,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 6 03:18:40 localhost ceph-osd[31726]: osd.1 pg_epoch: 32 pg[4.b( empty local-lis/les=24/25 n=0 ec=31/24 lis/c=24/24 les/c/f=25/25/0 sis=31) [5,0,1] r=2 lpr=31 pi=[24,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 6 03:18:40 localhost ceph-osd[31726]: osd.1 pg_epoch: 32 pg[4.8( empty local-lis/les=24/25 n=0 
ec=31/24 lis/c=24/24 les/c/f=25/25/0 sis=31) [5,0,1] r=2 lpr=31 pi=[24,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Dec 6 03:18:40 localhost ceph-osd[31726]: osd.1 pg_epoch: 32 pg[4.1b( empty local-lis/les=24/25 n=0 ec=31/24 lis/c=24/24 les/c/f=25/25/0 sis=31) [5,0,1] r=2 lpr=31 pi=[24,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Dec 6 03:18:40 localhost ceph-osd[31726]: osd.1 pg_epoch: 32 pg[4.9( empty local-lis/les=24/25 n=0 ec=31/24 lis/c=24/24 les/c/f=25/25/0 sis=31) [5,0,1] r=2 lpr=31 pi=[24,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Dec 6 03:18:40 localhost ceph-osd[31726]: osd.1 pg_epoch: 32 pg[4.16( empty local-lis/les=24/25 n=0 ec=31/24 lis/c=24/24 les/c/f=25/25/0 sis=31) [5,0,1] r=2 lpr=31 pi=[24,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Dec 6 03:18:40 localhost ceph-osd[31726]: osd.1 pg_epoch: 32 pg[4.14( empty local-lis/les=24/25 n=0 ec=31/24 lis/c=24/24 les/c/f=25/25/0 sis=31) [5,0,1] r=2 lpr=31 pi=[24,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Dec 6 03:18:40 localhost ceph-osd[31726]: osd.1 pg_epoch: 32 pg[4.12( empty local-lis/les=24/25 n=0 ec=31/24 lis/c=24/24 les/c/f=25/25/0 sis=31) [5,0,1] r=2 lpr=31 pi=[24,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Dec 6 03:18:40 localhost ceph-osd[31726]: osd.1 pg_epoch: 32 pg[4.15( empty local-lis/les=24/25 n=0 ec=31/24 lis/c=24/24 les/c/f=25/25/0 sis=31) [5,0,1] r=2 lpr=31 pi=[24,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Dec 6 03:18:40 localhost ceph-osd[31726]: osd.1 pg_epoch: 32 pg[4.13( empty local-lis/les=24/25 n=0 ec=31/24 lis/c=24/24 les/c/f=25/25/0 sis=31) [5,0,1] r=2 lpr=31 pi=[24,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Dec 6 03:18:40 localhost ceph-osd[31726]: osd.1 pg_epoch: 32 pg[4.17( empty local-lis/les=24/25 n=0 ec=31/24 lis/c=24/24 les/c/f=25/25/0 sis=31) [5,0,1] r=2 lpr=31 pi=[24,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Dec 6 03:18:40 localhost ceph-osd[31726]: osd.1 pg_epoch: 32 pg[4.10( empty local-lis/les=24/25 n=0 ec=31/24 lis/c=24/24 les/c/f=25/25/0 sis=31) [5,0,1] r=2 lpr=31 pi=[24,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Dec 6 03:18:40 localhost ceph-osd[31726]: osd.1 pg_epoch: 32 pg[4.11( empty local-lis/les=24/25 n=0 ec=31/24 lis/c=24/24 les/c/f=25/25/0 sis=31) [5,0,1] r=2 lpr=31 pi=[24,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Dec 6 03:18:40 localhost ceph-osd[31726]: osd.1 pg_epoch: 32 pg[4.1f( empty local-lis/les=24/25 n=0 ec=31/24 lis/c=24/24 les/c/f=25/25/0 sis=31) [5,0,1] r=2 lpr=31 pi=[24,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Dec 6 03:18:40 localhost ceph-osd[31726]: osd.1 pg_epoch: 32 pg[4.1e( empty local-lis/les=24/25 n=0 ec=31/24 lis/c=24/24 les/c/f=25/25/0 sis=31) [5,0,1] r=2 lpr=31 pi=[24,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Dec 6 03:18:40 localhost ceph-osd[31726]: osd.1 pg_epoch: 32 pg[4.1c( empty local-lis/les=24/25 n=0 ec=31/24 lis/c=24/24 les/c/f=25/25/0 sis=31) [5,0,1] r=2 lpr=31 pi=[24,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Dec 6 03:18:40 localhost ceph-osd[31726]: osd.1 pg_epoch: 32 pg[3.1b( empty local-lis/les=22/23 n=0 ec=31/22 lis/c=22/22 les/c/f=23/23/0 sis=31) [1,3,2] r=0 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Dec 6 03:18:40 localhost ceph-osd[31726]: osd.1 pg_epoch: 32 pg[3.18( empty local-lis/les=22/23 n=0 ec=31/22 lis/c=22/22 les/c/f=23/23/0 sis=31) [1,3,2] r=0 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Dec 6 03:18:40 localhost ceph-osd[31726]: osd.1 pg_epoch: 32 pg[3.19( empty local-lis/les=22/23 n=0 ec=31/22 lis/c=22/22 les/c/f=23/23/0 sis=31) [1,3,2] r=0 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Dec 6 03:18:40 localhost ceph-osd[31726]: osd.1 pg_epoch: 32 pg[3.17( empty local-lis/les=22/23 n=0 ec=31/22 lis/c=22/22 les/c/f=23/23/0 sis=31) [1,3,2] r=0 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Dec 6 03:18:40 localhost ceph-osd[31726]: osd.1 pg_epoch: 32 pg[3.16( empty local-lis/les=22/23 n=0 ec=31/22 lis/c=22/22 les/c/f=23/23/0 sis=31) [1,3,2] r=0 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Dec 6 03:18:40 localhost ceph-osd[31726]: osd.1 pg_epoch: 32 pg[3.14( empty local-lis/les=22/23 n=0 ec=31/22 lis/c=22/22 les/c/f=23/23/0 sis=31) [1,3,2] r=0 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Dec 6 03:18:40 localhost ceph-osd[31726]: osd.1 pg_epoch: 32 pg[3.15( empty local-lis/les=22/23 n=0 ec=31/22 lis/c=22/22 les/c/f=23/23/0 sis=31) [1,3,2] r=0 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Dec 6 03:18:40 localhost ceph-osd[31726]: osd.1 pg_epoch: 32 pg[3.12( empty local-lis/les=22/23 n=0 ec=31/22 lis/c=22/22 les/c/f=23/23/0 sis=31) [1,3,2] r=0 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Dec 6 03:18:40 localhost ceph-osd[31726]: osd.1 pg_epoch: 32 pg[3.13( empty local-lis/les=22/23 n=0 ec=31/22 lis/c=22/22 les/c/f=23/23/0 sis=31) [1,3,2] r=0 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Dec 6 03:18:40 localhost ceph-osd[31726]: osd.1 pg_epoch: 32 pg[3.10( empty local-lis/les=22/23 n=0 ec=31/22 lis/c=22/22 les/c/f=23/23/0 sis=31) [1,3,2] r=0 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Dec 6 03:18:40 localhost ceph-osd[31726]: osd.1 pg_epoch: 32 pg[3.e( empty local-lis/les=22/23 n=0 ec=31/22 lis/c=22/22 les/c/f=23/23/0 sis=31) [1,3,2] r=0 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Dec 6 03:18:40 localhost ceph-osd[31726]: osd.1 pg_epoch: 32 pg[3.11( empty local-lis/les=22/23 n=0 ec=31/22 lis/c=22/22 les/c/f=23/23/0 sis=31) [1,3,2] r=0 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Dec 6 03:18:40 localhost ceph-osd[31726]: osd.1 pg_epoch: 32 pg[3.c( empty local-lis/les=22/23 n=0 ec=31/22 lis/c=22/22 les/c/f=23/23/0 sis=31) [1,3,2] r=0 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Dec 6 03:18:40 localhost ceph-osd[31726]: osd.1 pg_epoch: 32 pg[3.f( empty local-lis/les=22/23 n=0 ec=31/22 lis/c=22/22 les/c/f=23/23/0 sis=31) [1,3,2] r=0 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Dec 6 03:18:40 localhost ceph-osd[31726]: osd.1 pg_epoch: 32 pg[3.d( empty local-lis/les=22/23 n=0 ec=31/22 lis/c=22/22 les/c/f=23/23/0 sis=31) [1,3,2] r=0 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Dec 6 03:18:40 localhost ceph-osd[31726]: osd.1 pg_epoch: 32 pg[3.a( empty local-lis/les=22/23 n=0 ec=31/22 lis/c=22/22 les/c/f=23/23/0 sis=31) [1,3,2] r=0 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Dec 6 03:18:40 localhost ceph-osd[31726]: osd.1 pg_epoch: 32 pg[3.1( empty local-lis/les=22/23 n=0 ec=31/22 lis/c=22/22 les/c/f=23/23/0 sis=31) [1,3,2] r=0 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Dec 6 03:18:40 localhost ceph-osd[31726]: osd.1 pg_epoch: 32 pg[3.3( empty local-lis/les=22/23 n=0 ec=31/22 lis/c=22/22 les/c/f=23/23/0 sis=31) [1,3,2] r=0 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Dec 6 03:18:40 localhost ceph-osd[31726]: osd.1 pg_epoch: 32 pg[3.2( empty local-lis/les=22/23 n=0 ec=31/22 lis/c=22/22 les/c/f=23/23/0 sis=31) [1,3,2] r=0 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Dec 6 03:18:40 localhost ceph-osd[31726]: osd.1 pg_epoch: 32 pg[3.5( empty local-lis/les=22/23 n=0 ec=31/22 lis/c=22/22 les/c/f=23/23/0 sis=31) [1,3,2] r=0 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Dec 6 03:18:40 localhost ceph-osd[31726]: osd.1 pg_epoch: 32 pg[3.7( empty local-lis/les=22/23 n=0 ec=31/22 lis/c=22/22 les/c/f=23/23/0 sis=31) [1,3,2] r=0 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Dec 6 03:18:40 localhost ceph-osd[31726]: osd.1 pg_epoch: 32 pg[3.4( empty local-lis/les=22/23 n=0 ec=31/22 lis/c=22/22 les/c/f=23/23/0 sis=31) [1,3,2] r=0 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Dec 6 03:18:40 localhost ceph-osd[31726]: osd.1 pg_epoch: 32 pg[3.6( empty local-lis/les=22/23 n=0 ec=31/22 lis/c=22/22 les/c/f=23/23/0 sis=31) [1,3,2] r=0 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Dec 6 03:18:40 localhost ceph-osd[31726]: osd.1 pg_epoch: 32 pg[3.8( empty local-lis/les=22/23 n=0 ec=31/22 lis/c=22/22 les/c/f=23/23/0 sis=31) [1,3,2] r=0 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Dec 6 03:18:40 localhost ceph-osd[31726]: osd.1 pg_epoch: 32 pg[3.b( empty local-lis/les=22/23 n=0 ec=31/22 lis/c=22/22 les/c/f=23/23/0 sis=31) [1,3,2] r=0 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Dec 6 03:18:40 localhost ceph-osd[31726]: osd.1 pg_epoch: 32 pg[3.1a( empty local-lis/les=22/23 n=0 ec=31/22 lis/c=22/22 les/c/f=23/23/0 sis=31) [1,3,2] r=0 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Dec 6 03:18:40 localhost ceph-osd[31726]: osd.1 pg_epoch: 32 pg[3.1d( empty local-lis/les=22/23 n=0 ec=31/22 lis/c=22/22 les/c/f=23/23/0 sis=31) [1,3,2] r=0 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Dec 6 03:18:40 localhost ceph-osd[31726]: osd.1 pg_epoch: 32 pg[3.1c( empty local-lis/les=22/23 n=0 ec=31/22 lis/c=22/22 les/c/f=23/23/0 sis=31) [1,3,2] r=0 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Dec 6 03:18:40 localhost ceph-osd[31726]: osd.1 pg_epoch: 32 pg[3.1e( empty local-lis/les=22/23 n=0 ec=31/22 lis/c=22/22 les/c/f=23/23/0 sis=31) [1,3,2] r=0 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Dec 6 03:18:40 localhost ceph-osd[31726]: osd.1 pg_epoch: 32 pg[3.1f( empty local-lis/les=22/23 n=0 ec=31/22 lis/c=22/22 les/c/f=23/23/0 sis=31) [1,3,2] r=0 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Dec 6 03:18:40 localhost ceph-osd[31726]: osd.1 pg_epoch: 32 pg[3.9( empty local-lis/les=22/23 n=0 ec=31/22 lis/c=22/22 les/c/f=23/23/0 sis=31) [1,3,2] r=0 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Dec 6 03:18:40 localhost ceph-osd[31726]: osd.1 pg_epoch: 32 pg[3.0( empty local-lis/les=31/32 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=31) [1,3,2] r=0 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 6 03:18:40 localhost ceph-osd[31726]: osd.1 pg_epoch: 32 pg[3.1a( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=22/22 les/c/f=23/23/0 sis=31) [1,3,2] r=0 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 6 03:18:40 localhost ceph-osd[31726]: osd.1 pg_epoch: 32 pg[3.19( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=22/22 les/c/f=23/23/0 sis=31) [1,3,2] r=0 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 6 03:18:40 localhost ceph-osd[31726]: osd.1 pg_epoch: 32 pg[3.15( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=22/22 les/c/f=23/23/0 sis=31) [1,3,2] r=0 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 6 03:18:40 localhost ceph-osd[31726]: osd.1 pg_epoch: 32 pg[3.18( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=22/22 les/c/f=23/23/0 sis=31) [1,3,2] r=0 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 6 03:18:40 localhost ceph-osd[31726]: osd.1 pg_epoch: 32 pg[3.16( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=22/22 les/c/f=23/23/0 sis=31) [1,3,2] r=0 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 6 03:18:40 localhost ceph-osd[31726]: osd.1 pg_epoch: 32 pg[3.13( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=22/22 les/c/f=23/23/0 sis=31) [1,3,2] r=0 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 6 03:18:40 localhost ceph-osd[31726]: osd.1 pg_epoch: 32 pg[3.12( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=22/22 les/c/f=23/23/0 sis=31) [1,3,2] r=0 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 6 03:18:40 localhost ceph-osd[31726]: osd.1 pg_epoch: 32 pg[3.17( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=22/22 les/c/f=23/23/0 sis=31) [1,3,2] r=0 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 6 03:18:40 localhost ceph-osd[31726]: osd.1 pg_epoch: 32 pg[3.14( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=22/22 les/c/f=23/23/0 sis=31) [1,3,2] r=0 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 6 03:18:40 localhost ceph-osd[31726]: osd.1 pg_epoch: 32 pg[3.11( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=22/22 les/c/f=23/23/0 sis=31) [1,3,2] r=0 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 6 03:18:40 localhost ceph-osd[31726]: osd.1 pg_epoch: 32 pg[3.e( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=22/22 les/c/f=23/23/0 sis=31) [1,3,2] r=0 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 6 03:18:40 localhost ceph-osd[31726]: osd.1 pg_epoch: 32 pg[3.f( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=22/22 les/c/f=23/23/0 sis=31) [1,3,2] r=0 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 6 03:18:40 localhost ceph-osd[31726]: osd.1 pg_epoch: 32 pg[3.2( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=22/22 les/c/f=23/23/0 sis=31) [1,3,2] r=0 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 6 03:18:40 localhost ceph-osd[31726]: osd.1 pg_epoch: 32 pg[3.d( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=22/22 les/c/f=23/23/0 sis=31) [1,3,2] r=0 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 6 03:18:40 localhost ceph-osd[31726]: osd.1 pg_epoch: 32 pg[3.1( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=22/22 les/c/f=23/23/0 sis=31) [1,3,2] r=0 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 6 03:18:40 localhost ceph-osd[31726]: osd.1 pg_epoch: 32 pg[3.10( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=22/22 les/c/f=23/23/0 sis=31) [1,3,2] r=0 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 6 03:18:40 localhost ceph-osd[31726]: osd.1 pg_epoch: 32 pg[3.c( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=22/22 les/c/f=23/23/0 sis=31) [1,3,2] r=0 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 6 03:18:40 localhost ceph-osd[31726]: osd.1 pg_epoch: 32 pg[3.3( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=22/22 les/c/f=23/23/0 sis=31) [1,3,2] r=0 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 6 03:18:40 localhost ceph-osd[31726]: osd.1 pg_epoch: 32 pg[3.5( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=22/22 les/c/f=23/23/0 sis=31) [1,3,2] r=0 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 6 03:18:40 localhost ceph-osd[31726]: osd.1 pg_epoch: 32 pg[3.4( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=22/22 les/c/f=23/23/0 sis=31) [1,3,2] r=0 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 6 03:18:40 localhost ceph-osd[31726]: osd.1 pg_epoch: 32 pg[3.7( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=22/22 les/c/f=23/23/0 sis=31) [1,3,2] r=0 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 6 03:18:40 localhost ceph-osd[31726]: osd.1 pg_epoch: 32 pg[3.6( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=22/22 les/c/f=23/23/0 sis=31) [1,3,2] r=0 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 6 03:18:40 localhost ceph-osd[31726]: osd.1 pg_epoch: 32 pg[3.8( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=22/22 les/c/f=23/23/0 sis=31) [1,3,2] r=0 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 6 03:18:40 localhost ceph-osd[31726]: osd.1 pg_epoch: 32 pg[3.9( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=22/22 les/c/f=23/23/0 sis=31) [1,3,2] r=0 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 6 03:18:40 localhost ceph-osd[31726]: osd.1 pg_epoch: 32 pg[3.a( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=22/22 les/c/f=23/23/0 sis=31) [1,3,2] r=0 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 6 03:18:40 localhost ceph-osd[31726]: osd.1 pg_epoch: 32 pg[3.1d( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=22/22 les/c/f=23/23/0 sis=31) [1,3,2] r=0 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 6 03:18:40 localhost ceph-osd[31726]: osd.1 pg_epoch: 32 pg[3.1f( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=22/22 les/c/f=23/23/0 sis=31) [1,3,2] r=0 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 6 03:18:40 localhost ceph-osd[31726]: osd.1 pg_epoch: 32 pg[3.1c( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=22/22 les/c/f=23/23/0 sis=31) [1,3,2] r=0 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 6 03:18:40 localhost ceph-osd[31726]: osd.1 pg_epoch: 32 pg[3.1e( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=22/22 les/c/f=23/23/0 sis=31) [1,3,2] r=0 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 6 03:18:40 localhost ceph-osd[31726]: osd.1 pg_epoch: 32 pg[3.1b( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=22/22 les/c/f=23/23/0 sis=31) [1,3,2] r=0 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 6 03:18:40 localhost ceph-osd[31726]: osd.1 pg_epoch: 32 pg[3.b( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=22/22 les/c/f=23/23/0 sis=31) [1,3,2] r=0 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 6 03:18:41 localhost ceph-osd[32665]: osd.4 pg_epoch: 33 pg[5.0( empty local-lis/les=25/26 n=0 ec=25/25 lis/c=25/25 les/c/f=26/26/0 sis=33 pruub=8.847756386s) [4,5,3] r=0 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 active pruub 1117.332031250s@ mbc={}] start_peering_interval up [4,5,3] -> [4,5,3], acting [4,5,3] -> [4,5,3], acting_primary 4 -> 4, up_primary 4 -> 4, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 6 03:18:41 localhost ceph-osd[32665]: osd.4 pg_epoch: 33 pg[5.0( empty local-lis/les=25/26 n=0 ec=25/25 lis/c=25/25 les/c/f=26/26/0 sis=33 pruub=8.847756386s) [4,5,3] r=0 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown pruub 1117.332031250s@ mbc={}] state: transitioning to Primary
Dec 6 03:18:42 localhost ceph-osd[32665]: osd.4 pg_epoch: 34 pg[5.19( empty local-lis/les=25/26 n=0 ec=33/25 lis/c=25/25 les/c/f=26/26/0 sis=33) [4,5,3] r=0 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Dec 6 03:18:42 localhost ceph-osd[32665]: osd.4 pg_epoch: 34 pg[5.16( empty local-lis/les=25/26 n=0 ec=33/25 lis/c=25/25 les/c/f=26/26/0 sis=33) [4,5,3] r=0 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Dec 6 03:18:42 localhost ceph-osd[32665]: osd.4 pg_epoch: 34 pg[5.15( empty local-lis/les=25/26 n=0 ec=33/25 lis/c=25/25 les/c/f=26/26/0 sis=33) [4,5,3] r=0 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Dec 6 03:18:42 localhost ceph-osd[32665]: osd.4 pg_epoch: 34 pg[5.14( empty local-lis/les=25/26 n=0 ec=33/25 lis/c=25/25 les/c/f=26/26/0 sis=33) [4,5,3] r=0 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Dec 6 03:18:42 localhost ceph-osd[32665]: osd.4 pg_epoch: 34 pg[5.17( empty local-lis/les=25/26 n=0 ec=33/25 lis/c=25/25 les/c/f=26/26/0 sis=33) [4,5,3] r=0 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Dec 6 03:18:42 localhost ceph-osd[32665]: osd.4 pg_epoch: 34 pg[5.13( empty local-lis/les=25/26 n=0 ec=33/25 lis/c=25/25 les/c/f=26/26/0 sis=33) [4,5,3] r=0 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Dec 6 03:18:42 localhost ceph-osd[32665]: osd.4 pg_epoch: 34 pg[5.11( empty local-lis/les=25/26 n=0 ec=33/25 lis/c=25/25 les/c/f=26/26/0 sis=33) [4,5,3] r=0 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Dec 6 03:18:42 localhost ceph-osd[32665]: osd.4 pg_epoch: 34 pg[5.12( empty local-lis/les=25/26 n=0 ec=33/25 lis/c=25/25 les/c/f=26/26/0 sis=33) [4,5,3] r=0 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Dec 6 03:18:42 localhost ceph-osd[32665]: osd.4 pg_epoch: 34 pg[5.10( empty local-lis/les=25/26 n=0 ec=33/25 lis/c=25/25 les/c/f=26/26/0 sis=33) [4,5,3] r=0 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Dec 6 03:18:42 localhost ceph-osd[32665]: osd.4 pg_epoch: 34 pg[5.e( empty local-lis/les=25/26 n=0 ec=33/25 lis/c=25/25 les/c/f=26/26/0 sis=33) [4,5,3] r=0 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Dec 6 03:18:42 localhost ceph-osd[32665]: osd.4 pg_epoch: 34 pg[5.f( empty local-lis/les=25/26 n=0 ec=33/25 lis/c=25/25 les/c/f=26/26/0 sis=33) [4,5,3] r=0 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Dec 6 03:18:42 localhost ceph-osd[32665]: osd.4 pg_epoch: 34 pg[5.d( empty local-lis/les=25/26 n=0 ec=33/25 lis/c=25/25 les/c/f=26/26/0 sis=33) [4,5,3] r=0 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Dec 6 03:18:42 localhost ceph-osd[32665]: osd.4 pg_epoch: 34 pg[5.c( empty local-lis/les=25/26 n=0 ec=33/25 lis/c=25/25 les/c/f=26/26/0 sis=33) [4,5,3] r=0 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Dec 6 03:18:42 localhost ceph-osd[32665]: osd.4 pg_epoch: 34 pg[5.4( empty local-lis/les=25/26 n=0 ec=33/25 lis/c=25/25 les/c/f=26/26/0 sis=33) [4,5,3] r=0 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Dec 6 03:18:42 localhost ceph-osd[32665]: osd.4 pg_epoch: 34 pg[5.1b( empty local-lis/les=25/26 n=0 ec=33/25 lis/c=25/25 les/c/f=26/26/0 sis=33) [4,5,3] r=0 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Dec 6 03:18:42 localhost ceph-osd[32665]: osd.4 pg_epoch: 34 pg[5.b( empty local-lis/les=25/26 n=0 ec=33/25 lis/c=25/25 les/c/f=26/26/0 sis=33) [4,5,3] r=0 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Dec 6 03:18:42 localhost ceph-osd[32665]: osd.4 pg_epoch: 34 pg[5.2( empty local-lis/les=25/26 n=0 ec=33/25 lis/c=25/25 les/c/f=26/26/0 sis=33) [4,5,3] r=0 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Dec 6 03:18:42 localhost ceph-osd[32665]: osd.4 pg_epoch: 34 pg[5.1( empty local-lis/les=25/26 n=0 ec=33/25 lis/c=25/25 les/c/f=26/26/0 sis=33) [4,5,3] r=0 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Dec 6 03:18:42 localhost ceph-osd[32665]: osd.4 pg_epoch: 34 pg[5.18( empty local-lis/les=25/26 n=0 ec=33/25 lis/c=25/25 les/c/f=26/26/0 sis=33) [4,5,3] r=0 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Dec 6 03:18:42 localhost ceph-osd[32665]: osd.4 pg_epoch: 34 pg[5.3( empty local-lis/les=25/26 n=0 ec=33/25 lis/c=25/25 les/c/f=26/26/0 sis=33) [4,5,3] r=0 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Dec 6 03:18:42 localhost ceph-osd[32665]: osd.4 pg_epoch: 34 pg[5.5( empty local-lis/les=25/26 n=0 ec=33/25 lis/c=25/25 les/c/f=26/26/0 sis=33) [4,5,3] r=0 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Dec 6 03:18:42 localhost ceph-osd[32665]: osd.4 pg_epoch: 34 pg[5.6( empty local-lis/les=25/26 n=0 ec=33/25 lis/c=25/25 les/c/f=26/26/0 sis=33) [4,5,3] r=0 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Dec 6 03:18:42 localhost ceph-osd[32665]: osd.4 pg_epoch: 34 pg[5.7( empty local-lis/les=25/26 n=0 ec=33/25 lis/c=25/25 les/c/f=26/26/0 sis=33) [4,5,3] r=0 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Dec 6 03:18:42 localhost ceph-osd[32665]: osd.4 pg_epoch: 34 pg[5.1a( empty local-lis/les=25/26 n=0 ec=33/25 lis/c=25/25 les/c/f=26/26/0 sis=33) [4,5,3] r=0 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Dec 6 03:18:42 localhost ceph-osd[32665]: osd.4 pg_epoch: 34 pg[5.8( empty local-lis/les=25/26 n=0 ec=33/25 lis/c=25/25 les/c/f=26/26/0 sis=33) [4,5,3] r=0 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Dec 6 03:18:42 localhost ceph-osd[32665]: osd.4 pg_epoch: 34 pg[5.a( empty local-lis/les=25/26 n=0 ec=33/25 lis/c=25/25 les/c/f=26/26/0 sis=33) [4,5,3] r=0 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Dec 6 03:18:42 localhost ceph-osd[32665]: osd.4 pg_epoch: 34 pg[5.9( empty local-lis/les=25/26 n=0 ec=33/25 lis/c=25/25 les/c/f=26/26/0 sis=33) [4,5,3] r=0 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Dec 6 03:18:42 localhost ceph-osd[32665]: osd.4 pg_epoch: 34 pg[5.1c( empty local-lis/les=25/26 n=0 ec=33/25 lis/c=25/25 les/c/f=26/26/0 sis=33) [4,5,3] r=0 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Dec 6 03:18:42 localhost ceph-osd[32665]: osd.4 pg_epoch: 34 pg[5.1f( empty local-lis/les=25/26 n=0 ec=33/25 lis/c=25/25 les/c/f=26/26/0 sis=33) [4,5,3] r=0 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Dec 6 03:18:42 localhost ceph-osd[32665]: osd.4 pg_epoch: 34 pg[5.1e( empty local-lis/les=25/26 n=0 ec=33/25 lis/c=25/25 les/c/f=26/26/0 sis=33) [4,5,3] r=0 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Dec 6 03:18:42 localhost ceph-osd[32665]: osd.4 pg_epoch: 34 pg[5.1d( empty local-lis/les=25/26 n=0 ec=33/25 lis/c=25/25 les/c/f=26/26/0 sis=33) [4,5,3] r=0 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Dec 6 03:18:42 localhost ceph-osd[32665]: osd.4 pg_epoch: 34 pg[5.0( empty local-lis/les=33/34 n=0 ec=25/25 lis/c=25/25 les/c/f=26/26/0 sis=33) [4,5,3] r=0 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 6 03:18:42 localhost ceph-osd[32665]: osd.4 pg_epoch: 34 pg[5.e( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=25/25 les/c/f=26/26/0 sis=33) [4,5,3] r=0 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 6 03:18:42 localhost ceph-osd[32665]: osd.4 pg_epoch: 34 pg[5.c( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=25/25 les/c/f=26/26/0 sis=33) [4,5,3] r=0 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 6 03:18:42 localhost ceph-osd[32665]: osd.4 pg_epoch: 34 pg[5.4( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=25/25 les/c/f=26/26/0 sis=33) [4,5,3] r=0 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 6 03:18:42 localhost ceph-osd[32665]: osd.4 pg_epoch: 34 pg[5.d( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=25/25 les/c/f=26/26/0 sis=33) [4,5,3] r=0 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 6 03:18:42 localhost ceph-osd[32665]: osd.4 pg_epoch: 34 pg[5.19( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=25/25 les/c/f=26/26/0 sis=33) [4,5,3] r=0 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 6 03:18:42 localhost ceph-osd[32665]: osd.4 pg_epoch: 34 pg[5.17( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=25/25 les/c/f=26/26/0 sis=33) [4,5,3] r=0 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 6 03:18:42 localhost ceph-osd[32665]: osd.4 pg_epoch: 34 pg[5.11( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=25/25 les/c/f=26/26/0 sis=33) [4,5,3] r=0 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 6 03:18:42 localhost ceph-osd[32665]: osd.4 pg_epoch: 34 pg[5.15( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=25/25 les/c/f=26/26/0 sis=33) [4,5,3] r=0 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 6 03:18:42 localhost ceph-osd[32665]: osd.4 pg_epoch: 34 pg[5.13( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=25/25 les/c/f=26/26/0 sis=33) [4,5,3] r=0 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 6 03:18:42 localhost ceph-osd[32665]: osd.4 pg_epoch: 34 pg[5.16( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=25/25 les/c/f=26/26/0 sis=33) [4,5,3] r=0 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 6 03:18:42 localhost ceph-osd[32665]: osd.4 pg_epoch: 34 pg[5.10( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=25/25 les/c/f=26/26/0 sis=33) [4,5,3] r=0 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 6 03:18:42 localhost ceph-osd[32665]: osd.4 pg_epoch: 34 pg[5.14( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=25/25 les/c/f=26/26/0 sis=33) [4,5,3] r=0 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 6 03:18:42 localhost ceph-osd[32665]: osd.4 pg_epoch: 34 pg[5.9( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=25/25 les/c/f=26/26/0 sis=33) [4,5,3] r=0 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 6 03:18:42 localhost ceph-osd[32665]: osd.4 pg_epoch: 34 pg[5.12( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=25/25 les/c/f=26/26/0 sis=33) [4,5,3] r=0 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 6 03:18:42 localhost ceph-osd[32665]: osd.4 pg_epoch: 34 pg[5.1e( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=25/25 les/c/f=26/26/0 sis=33) [4,5,3] r=0 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 6 03:18:42 localhost ceph-osd[32665]: osd.4 pg_epoch: 34 pg[5.f( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=25/25 les/c/f=26/26/0 sis=33) [4,5,3] r=0 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 6 03:18:42 localhost ceph-osd[32665]: osd.4 pg_epoch: 34 pg[5.7( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=25/25 les/c/f=26/26/0 sis=33) [4,5,3] r=0 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 6 03:18:42 localhost ceph-osd[32665]: osd.4 pg_epoch: 34 pg[5.8( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=25/25 les/c/f=26/26/0 sis=33) [4,5,3] r=0 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 6 03:18:42 localhost ceph-osd[32665]: osd.4 pg_epoch: 34 pg[5.5( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=25/25 les/c/f=26/26/0 sis=33) [4,5,3] r=0 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 6 03:18:42 localhost ceph-osd[32665]: osd.4 pg_epoch: 34 pg[5.1f( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=25/25 les/c/f=26/26/0 sis=33) [4,5,3] r=0 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 6 03:18:42 localhost ceph-osd[32665]: osd.4 pg_epoch: 34 pg[5.b( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=25/25 les/c/f=26/26/0 sis=33) [4,5,3] r=0 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 6 03:18:42 localhost ceph-osd[32665]: osd.4 pg_epoch: 34 pg[5.3( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=25/25 les/c/f=26/26/0 sis=33) [4,5,3] r=0 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 6 03:18:42 localhost ceph-osd[32665]: osd.4 pg_epoch: 34 pg[5.2( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=25/25 les/c/f=26/26/0 sis=33) [4,5,3] r=0 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 6 03:18:42 localhost ceph-osd[32665]: osd.4 pg_epoch: 34 pg[5.1c( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=25/25 les/c/f=26/26/0 sis=33) [4,5,3] r=0 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 6 03:18:42 localhost ceph-osd[32665]: osd.4 pg_epoch: 34 pg[5.1( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=25/25 les/c/f=26/26/0 sis=33) [4,5,3] r=0 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 6 03:18:42 localhost ceph-osd[32665]: osd.4 pg_epoch: 34 pg[5.18( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=25/25 les/c/f=26/26/0 sis=33) [4,5,3] r=0 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 6 03:18:42 localhost ceph-osd[32665]: osd.4 pg_epoch: 34 pg[5.a( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=25/25 les/c/f=26/26/0 sis=33) [4,5,3] r=0 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 6 03:18:42 localhost ceph-osd[32665]: osd.4 pg_epoch: 34 pg[5.1d( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=25/25 les/c/f=26/26/0 sis=33) [4,5,3] r=0 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 6 03:18:42 localhost ceph-osd[32665]: osd.4 pg_epoch: 34 pg[5.1a( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=25/25 les/c/f=26/26/0 sis=33) [4,5,3] r=0 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 6 03:18:42 localhost ceph-osd[32665]: osd.4 pg_epoch: 34 pg[5.1b( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=25/25 les/c/f=26/26/0 sis=33) [4,5,3] r=0 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 6 03:18:42 localhost ceph-osd[32665]: osd.4 pg_epoch: 34 pg[5.6( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=25/25 les/c/f=26/26/0 sis=33) [4,5,3] r=0 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Dec 6 03:18:44 localhost ceph-osd[32665]: log_channel(cluster) log [DBG] : 5.0 deep-scrub starts
Dec 6 03:18:44 localhost ceph-osd[32665]: log_channel(cluster) log [DBG] : 5.0 deep-scrub ok
Dec 6 03:18:45 localhost ceph-osd[31726]: log_channel(cluster) log [DBG] : 3.0 deep-scrub starts
Dec 6 03:18:45 localhost ceph-osd[31726]: log_channel(cluster) log [DBG] : 3.0 deep-scrub ok
Dec 6 03:18:45 localhost ceph-osd[32665]: log_channel(cluster) log [DBG] : 5.e scrub starts
Dec 6 03:18:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.
Dec 6 03:18:45 localhost systemd[1]: tmp-crun.1VrhIS.mount: Deactivated successfully.
Dec 6 03:18:45 localhost podman[55647]: 2025-12-06 08:18:45.925667846 +0000 UTC m=+0.085035641 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., batch=17.1_20251118.1, tcib_managed=true, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, release=1761123044, distribution-scope=public, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, url=https://www.redhat.com, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=)
Dec 6 03:18:46 localhost podman[55647]: 2025-12-06 08:18:46.122114945 +0000 UTC m=+0.281482780 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.expose-services=, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, build-date=2025-11-18T22:49:46Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, managed_by=tripleo_ansible, vcs-type=git, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd)
Dec 6 
03:18:46 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 03:18:46 localhost ceph-osd[31726]: osd.1 pg_epoch: 35 pg[6.0( empty local-lis/les=0/0 n=0 ec=35/35 lis/c=0/0 les/c/f=0/0/0 sis=35) [5,0,1] r=2 lpr=35 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 6 03:18:46 localhost ceph-osd[32665]: log_channel(cluster) log [DBG] : 5.19 scrub starts Dec 6 03:18:46 localhost ceph-osd[32665]: log_channel(cluster) log [DBG] : 5.19 scrub ok Dec 6 03:18:47 localhost ceph-osd[32665]: log_channel(cluster) log [DBG] : 5.4 deep-scrub starts Dec 6 03:18:47 localhost ceph-osd[32665]: log_channel(cluster) log [DBG] : 5.4 deep-scrub ok Dec 6 03:18:48 localhost ceph-osd[31726]: log_channel(cluster) log [DBG] : 3.1a scrub starts Dec 6 03:18:48 localhost ceph-osd[31726]: log_channel(cluster) log [DBG] : 3.1a scrub ok Dec 6 03:18:48 localhost ceph-osd[32665]: log_channel(cluster) log [DBG] : 5.d scrub starts Dec 6 03:18:48 localhost ceph-osd[32665]: log_channel(cluster) log [DBG] : 5.d scrub ok Dec 6 03:18:48 localhost ceph-osd[31726]: osd.1 pg_epoch: 37 pg[7.0( empty local-lis/les=0/0 n=0 ec=37/37 lis/c=0/0 les/c/f=0/0/0 sis=37) [0,1,5] r=1 lpr=37 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 6 03:18:49 localhost ceph-osd[31726]: log_channel(cluster) log [DBG] : 3.15 scrub starts Dec 6 03:18:49 localhost ceph-osd[31726]: log_channel(cluster) log [DBG] : 3.15 scrub ok Dec 6 03:18:50 localhost ceph-osd[31726]: log_channel(cluster) log [DBG] : 3.17 scrub starts Dec 6 03:18:50 localhost ceph-osd[31726]: log_channel(cluster) log [DBG] : 3.17 scrub ok Dec 6 03:18:50 localhost ceph-osd[32665]: log_channel(cluster) log [DBG] : 5.c scrub starts Dec 6 03:18:50 localhost ceph-osd[32665]: log_channel(cluster) log [DBG] : 5.c scrub ok Dec 6 03:18:52 localhost ceph-osd[31726]: log_channel(cluster) log [DBG] : 3.18 scrub starts Dec 6 03:18:52 localhost ceph-osd[31726]: 
log_channel(cluster) log [DBG] : 3.18 scrub ok Dec 6 03:18:53 localhost ceph-osd[32665]: osd.4 pg_epoch: 41 pg[3.9( empty local-lis/les=0/0 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41) [4,3,2] r=0 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 6 03:18:53 localhost ceph-osd[32665]: osd.4 pg_epoch: 41 pg[3.8( empty local-lis/les=0/0 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41) [4,5,0] r=0 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 6 03:18:53 localhost ceph-osd[32665]: osd.4 pg_epoch: 41 pg[5.1f( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.340268135s) [3,4,2] r=1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active pruub 1133.518920898s@ mbc={}] start_peering_interval up [4,5,3] -> [3,4,2], acting [4,5,3] -> [3,4,2], acting_primary 4 -> 3, up_primary 4 -> 3, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[32665]: osd.4 pg_epoch: 41 pg[5.1f( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.340203285s) [3,4,2] r=1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.518920898s@ mbc={}] state: transitioning to Stray Dec 6 03:18:53 localhost ceph-osd[32665]: osd.4 pg_epoch: 41 pg[2.18( empty local-lis/les=0/0 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41) [4,3,2] r=0 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 6 03:18:53 localhost ceph-osd[32665]: osd.4 pg_epoch: 41 pg[3.11( empty local-lis/les=0/0 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41) [4,0,2] r=0 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 6 03:18:53 localhost ceph-osd[32665]: osd.4 pg_epoch: 41 pg[5.11( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.338585854s) [3,4,5] r=1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active pruub 1133.518066406s@ mbc={}] 
start_peering_interval up [4,5,3] -> [3,4,5], acting [4,5,3] -> [3,4,5], acting_primary 4 -> 3, up_primary 4 -> 3, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[32665]: osd.4 pg_epoch: 41 pg[5.1e( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.338965416s) [5,3,4] r=2 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active pruub 1133.518676758s@ mbc={}] start_peering_interval up [4,5,3] -> [5,3,4], acting [4,5,3] -> [5,3,4], acting_primary 4 -> 5, up_primary 4 -> 5, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[32665]: osd.4 pg_epoch: 41 pg[5.1c( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.339303017s) [0,4,2] r=1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active pruub 1133.519042969s@ mbc={}] start_peering_interval up [4,5,3] -> [0,4,2], acting [4,5,3] -> [0,4,2], acting_primary 4 -> 0, up_primary 4 -> 0, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[32665]: osd.4 pg_epoch: 41 pg[5.1d( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.339650154s) [5,0,4] r=2 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active pruub 1133.519287109s@ mbc={}] start_peering_interval up [4,5,3] -> [5,0,4], acting [4,5,3] -> [5,0,4], acting_primary 4 -> 5, up_primary 4 -> 5, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[32665]: osd.4 pg_epoch: 41 pg[5.1c( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.339261055s) [0,4,2] r=1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.519042969s@ mbc={}] state: transitioning to Stray Dec 6 03:18:53 localhost ceph-osd[32665]: osd.4 pg_epoch: 41 pg[5.1e( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.338898659s) [5,3,4] r=2 
lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.518676758s@ mbc={}] state: transitioning to Stray Dec 6 03:18:53 localhost ceph-osd[32665]: osd.4 pg_epoch: 41 pg[5.1d( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.339530945s) [5,0,4] r=2 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.519287109s@ mbc={}] state: transitioning to Stray Dec 6 03:18:53 localhost ceph-osd[32665]: osd.4 pg_epoch: 41 pg[5.9( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.338674545s) [0,4,2] r=1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active pruub 1133.518676758s@ mbc={}] start_peering_interval up [4,5,3] -> [0,4,2], acting [4,5,3] -> [0,4,2], acting_primary 4 -> 0, up_primary 4 -> 0, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[32665]: osd.4 pg_epoch: 41 pg[5.11( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.338496208s) [3,4,5] r=1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.518066406s@ mbc={}] state: transitioning to Stray Dec 6 03:18:53 localhost ceph-osd[32665]: osd.4 pg_epoch: 41 pg[5.9( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.338641167s) [0,4,2] r=1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.518676758s@ mbc={}] state: transitioning to Stray Dec 6 03:18:53 localhost ceph-osd[32665]: osd.4 pg_epoch: 41 pg[5.7( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.338445663s) [3,5,4] r=2 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active pruub 1133.518676758s@ mbc={}] start_peering_interval up [4,5,3] -> [3,5,4], acting [4,5,3] -> [3,5,4], acting_primary 4 -> 3, up_primary 4 -> 3, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[32665]: osd.4 pg_epoch: 41 pg[5.a( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 
les/c/f=34/34/0 sis=41 pruub=13.339014053s) [2,1,3] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active pruub 1133.519287109s@ mbc={}] start_peering_interval up [4,5,3] -> [2,1,3], acting [4,5,3] -> [2,1,3], acting_primary 4 -> 2, up_primary 4 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[32665]: osd.4 pg_epoch: 41 pg[5.7( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.338404655s) [3,5,4] r=2 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.518676758s@ mbc={}] state: transitioning to Stray Dec 6 03:18:53 localhost ceph-osd[32665]: osd.4 pg_epoch: 41 pg[5.a( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.338947296s) [2,1,3] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.519287109s@ mbc={}] state: transitioning to Stray Dec 6 03:18:53 localhost ceph-osd[32665]: osd.4 pg_epoch: 41 pg[5.1a( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.338918686s) [1,0,5] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active pruub 1133.519287109s@ mbc={}] start_peering_interval up [4,5,3] -> [1,0,5], acting [4,5,3] -> [1,0,5], acting_primary 4 -> 1, up_primary 4 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[32665]: osd.4 pg_epoch: 41 pg[5.8( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.338329315s) [1,2,3] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active pruub 1133.518798828s@ mbc={}] start_peering_interval up [4,5,3] -> [1,2,3], acting [4,5,3] -> [1,2,3], acting_primary 4 -> 1, up_primary 4 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[32665]: osd.4 pg_epoch: 41 pg[5.8( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.338288307s) [1,2,3] r=-1 lpr=41 pi=[33,41)/1 
crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.518798828s@ mbc={}] state: transitioning to Stray Dec 6 03:18:53 localhost ceph-osd[32665]: osd.4 pg_epoch: 41 pg[5.1a( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.338768959s) [1,0,5] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.519287109s@ mbc={}] state: transitioning to Stray Dec 6 03:18:53 localhost ceph-osd[32665]: osd.4 pg_epoch: 41 pg[5.6( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.338772774s) [2,0,4] r=2 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active pruub 1133.519287109s@ mbc={}] start_peering_interval up [4,5,3] -> [2,0,4], acting [4,5,3] -> [2,0,4], acting_primary 4 -> 2, up_primary 4 -> 2, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[32665]: osd.4 pg_epoch: 41 pg[5.6( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.338633537s) [2,0,4] r=2 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.519287109s@ mbc={}] state: transitioning to Stray Dec 6 03:18:53 localhost ceph-osd[32665]: osd.4 pg_epoch: 41 pg[5.18( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.338482857s) [0,1,5] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active pruub 1133.519165039s@ mbc={}] start_peering_interval up [4,5,3] -> [0,1,5], acting [4,5,3] -> [0,1,5], acting_primary 4 -> 0, up_primary 4 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[32665]: osd.4 pg_epoch: 41 pg[5.5( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.338046074s) [5,3,1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active pruub 1133.518798828s@ mbc={}] start_peering_interval up [4,5,3] -> [5,3,1], acting [4,5,3] -> [5,3,1], acting_primary 4 -> 5, up_primary 4 -> 5, role 0 -> -1, features acting 4540138322906710015 upacting 
4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[32665]: osd.4 pg_epoch: 41 pg[5.18( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.338452339s) [0,1,5] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.519165039s@ mbc={}] state: transitioning to Stray Dec 6 03:18:53 localhost ceph-osd[32665]: osd.4 pg_epoch: 41 pg[5.5( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.337989807s) [5,3,1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.518798828s@ mbc={}] state: transitioning to Stray Dec 6 03:18:53 localhost ceph-osd[32665]: osd.4 pg_epoch: 41 pg[5.3( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.338069916s) [2,3,1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active pruub 1133.518920898s@ mbc={}] start_peering_interval up [4,5,3] -> [2,3,1], acting [4,5,3] -> [2,3,1], acting_primary 4 -> 2, up_primary 4 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[32665]: osd.4 pg_epoch: 41 pg[5.1( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.338170052s) [3,5,1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active pruub 1133.519165039s@ mbc={}] start_peering_interval up [4,5,3] -> [3,5,1], acting [4,5,3] -> [3,5,1], acting_primary 4 -> 3, up_primary 4 -> 3, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[32665]: osd.4 pg_epoch: 41 pg[5.1( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.338128090s) [3,5,1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.519165039s@ mbc={}] state: transitioning to Stray Dec 6 03:18:53 localhost ceph-osd[32665]: osd.4 pg_epoch: 41 pg[5.2( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.337833405s) [0,5,1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 
0'0 active pruub 1133.518920898s@ mbc={}] start_peering_interval up [4,5,3] -> [0,5,1], acting [4,5,3] -> [0,5,1], acting_primary 4 -> 0, up_primary 4 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[32665]: osd.4 pg_epoch: 41 pg[5.3( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.337875366s) [2,3,1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.518920898s@ mbc={}] state: transitioning to Stray Dec 6 03:18:53 localhost ceph-osd[32665]: osd.4 pg_epoch: 41 pg[5.1b( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.338096619s) [0,5,4] r=2 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active pruub 1133.519287109s@ mbc={}] start_peering_interval up [4,5,3] -> [0,5,4], acting [4,5,3] -> [0,5,4], acting_primary 4 -> 0, up_primary 4 -> 0, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[32665]: osd.4 pg_epoch: 41 pg[5.1b( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.338067055s) [0,5,4] r=2 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.519287109s@ mbc={}] state: transitioning to Stray Dec 6 03:18:53 localhost ceph-osd[32665]: osd.4 pg_epoch: 41 pg[5.2( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.337771416s) [0,5,1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.518920898s@ mbc={}] state: transitioning to Stray Dec 6 03:18:53 localhost ceph-osd[32665]: osd.4 pg_epoch: 41 pg[5.4( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.336598396s) [1,2,3] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active pruub 1133.517944336s@ mbc={}] start_peering_interval up [4,5,3] -> [1,2,3], acting [4,5,3] -> [1,2,3], acting_primary 4 -> 1, up_primary 4 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 
4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[32665]: osd.4 pg_epoch: 41 pg[5.4( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.336528778s) [1,2,3] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.517944336s@ mbc={}] state: transitioning to Stray Dec 6 03:18:53 localhost ceph-osd[32665]: osd.4 pg_epoch: 41 pg[2.12( empty local-lis/les=0/0 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41) [4,2,3] r=0 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 6 03:18:53 localhost ceph-osd[32665]: osd.4 pg_epoch: 41 pg[5.15( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.336084366s) [3,2,1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active pruub 1133.518310547s@ mbc={}] start_peering_interval up [4,5,3] -> [3,2,1], acting [4,5,3] -> [3,2,1], acting_primary 4 -> 3, up_primary 4 -> 3, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[32665]: osd.4 pg_epoch: 41 pg[5.c( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.335754395s) [5,0,4] r=2 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active pruub 1133.517944336s@ mbc={}] start_peering_interval up [4,5,3] -> [5,0,4], acting [4,5,3] -> [5,0,4], acting_primary 4 -> 5, up_primary 4 -> 5, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[32665]: osd.4 pg_epoch: 41 pg[5.c( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.335641861s) [5,0,4] r=2 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.517944336s@ mbc={}] state: transitioning to Stray Dec 6 03:18:53 localhost ceph-osd[32665]: osd.4 pg_epoch: 41 pg[5.d( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.335768700s) [4,0,2] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active pruub 1133.518066406s@ mbc={}] 
start_peering_interval up [4,5,3] -> [4,0,2], acting [4,5,3] -> [4,0,2], acting_primary 4 -> 4, up_primary 4 -> 4, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[32665]: osd.4 pg_epoch: 41 pg[5.d( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.335768700s) [4,0,2] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown pruub 1133.518066406s@ mbc={}] state: transitioning to Primary Dec 6 03:18:53 localhost ceph-osd[32665]: osd.4 pg_epoch: 41 pg[5.f( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.336028099s) [0,1,2] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active pruub 1133.518676758s@ mbc={}] start_peering_interval up [4,5,3] -> [0,1,2], acting [4,5,3] -> [0,1,2], acting_primary 4 -> 0, up_primary 4 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[32665]: osd.4 pg_epoch: 41 pg[5.f( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.335985184s) [0,1,2] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.518676758s@ mbc={}] state: transitioning to Stray Dec 6 03:18:53 localhost ceph-osd[32665]: osd.4 pg_epoch: 41 pg[5.10( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.335522652s) [0,4,5] r=1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active pruub 1133.518310547s@ mbc={}] start_peering_interval up [4,5,3] -> [0,4,5], acting [4,5,3] -> [0,4,5], acting_primary 4 -> 0, up_primary 4 -> 0, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[32665]: osd.4 pg_epoch: 41 pg[5.14( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.335630417s) [2,4,0] r=1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active pruub 1133.518554688s@ mbc={}] start_peering_interval up [4,5,3] -> [2,4,0], acting [4,5,3] -> [2,4,0], 
acting_primary 4 -> 2, up_primary 4 -> 2, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[32665]: osd.4 pg_epoch: 41 pg[5.10( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.335442543s) [0,4,5] r=1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.518310547s@ mbc={}] state: transitioning to Stray Dec 6 03:18:53 localhost ceph-osd[32665]: osd.4 pg_epoch: 41 pg[5.14( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.335517883s) [2,4,0] r=1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.518554688s@ mbc={}] state: transitioning to Stray Dec 6 03:18:53 localhost ceph-osd[32665]: osd.4 pg_epoch: 41 pg[5.12( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.335613251s) [1,3,5] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active pruub 1133.518676758s@ mbc={}] start_peering_interval up [4,5,3] -> [1,3,5], acting [4,5,3] -> [1,3,5], acting_primary 4 -> 1, up_primary 4 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[32665]: osd.4 pg_epoch: 41 pg[5.15( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.335433006s) [3,2,1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.518310547s@ mbc={}] state: transitioning to Stray Dec 6 03:18:53 localhost ceph-osd[32665]: osd.4 pg_epoch: 41 pg[5.12( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.335539818s) [1,3,5] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.518676758s@ mbc={}] state: transitioning to Stray Dec 6 03:18:53 localhost ceph-osd[32665]: osd.4 pg_epoch: 41 pg[5.17( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.334833145s) [5,1,0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active pruub 1133.517944336s@ mbc={}] 
start_peering_interval up [4,5,3] -> [5,1,0], acting [4,5,3] -> [5,1,0], acting_primary 4 -> 5, up_primary 4 -> 5, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[32665]: osd.4 pg_epoch: 41 pg[5.17( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.334788322s) [5,1,0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.517944336s@ mbc={}] state: transitioning to Stray Dec 6 03:18:53 localhost ceph-osd[32665]: osd.4 pg_epoch: 41 pg[5.16( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.334922791s) [3,5,1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active pruub 1133.518310547s@ mbc={}] start_peering_interval up [4,5,3] -> [3,5,1], acting [4,5,3] -> [3,5,1], acting_primary 4 -> 3, up_primary 4 -> 3, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[32665]: osd.4 pg_epoch: 41 pg[5.19( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.334559441s) [2,3,1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active pruub 1133.517944336s@ mbc={}] start_peering_interval up [4,5,3] -> [2,3,1], acting [4,5,3] -> [2,3,1], acting_primary 4 -> 2, up_primary 4 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[32665]: osd.4 pg_epoch: 41 pg[5.16( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.334863663s) [3,5,1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.518310547s@ mbc={}] state: transitioning to Stray Dec 6 03:18:53 localhost ceph-osd[32665]: osd.4 pg_epoch: 41 pg[5.19( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.334429741s) [2,3,1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.517944336s@ mbc={}] state: transitioning to Stray Dec 6 03:18:53 localhost 
ceph-osd[32665]: osd.4 pg_epoch: 41 pg[2.10( empty local-lis/les=0/0 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41) [4,5,3] r=0 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[5.1a( empty local-lis/les=0/0 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41) [1,0,5] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[3.1f( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.289616585s) [2,3,4] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1135.385009766s@ mbc={}] start_peering_interval up [1,3,2] -> [2,3,4], acting [1,3,2] -> [2,3,4], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[3.1f( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.289537430s) [2,3,4] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1135.385009766s@ mbc={}] state: transitioning to Stray Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[3.b( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.289740562s) [2,0,1] r=2 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1135.385375977s@ mbc={}] start_peering_interval up [1,3,2] -> [2,0,1], acting [1,3,2] -> [2,0,1], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[2.1b( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.242539406s) [1,0,2] r=0 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 active pruub 1133.338378906s@ mbc={}] start_peering_interval up [3,1,5] -> [1,0,2], acting [3,1,5] -> [1,0,2], acting_primary 3 -> 1, 
up_primary 3 -> 1, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[2.a( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.242082596s) [1,5,3] r=0 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 active pruub 1133.337890625s@ mbc={}] start_peering_interval up [3,1,5] -> [1,5,3], acting [3,1,5] -> [1,5,3], acting_primary 3 -> 1, up_primary 3 -> 1, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[2.1b( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.242539406s) [1,0,2] r=0 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown pruub 1133.338378906s@ mbc={}] state: transitioning to Primary Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[3.1d( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.288921356s) [1,0,2] r=0 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1135.384887695s@ mbc={}] start_peering_interval up [1,3,2] -> [1,0,2], acting [1,3,2] -> [1,0,2], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[2.a( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.242082596s) [1,5,3] r=0 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown pruub 1133.337890625s@ mbc={}] state: transitioning to Primary Dec 6 03:18:53 localhost ceph-osd[32665]: osd.4 pg_epoch: 41 pg[2.f( empty local-lis/les=0/0 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41) [4,0,5] r=0 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[3.1d( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.288921356s) [1,0,2] r=0 lpr=41 pi=[31,41)/1 
crt=0'0 mlcod 0'0 unknown pruub 1135.384887695s@ mbc={}] state: transitioning to Primary Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[3.b( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.289569855s) [2,0,1] r=2 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1135.385375977s@ mbc={}] state: transitioning to Stray Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[3.8( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.286011696s) [4,5,0] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1135.382202148s@ mbc={}] start_peering_interval up [1,3,2] -> [4,5,0], acting [1,3,2] -> [4,5,0], acting_primary 1 -> 4, up_primary 1 -> 4, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[3.9( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.288366318s) [4,3,2] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1135.384765625s@ mbc={}] start_peering_interval up [1,3,2] -> [4,3,2], acting [1,3,2] -> [4,3,2], acting_primary 1 -> 4, up_primary 1 -> 4, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[3.8( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.285967827s) [4,5,0] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1135.382202148s@ mbc={}] state: transitioning to Stray Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[3.9( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.288305283s) [4,3,2] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1135.384765625s@ mbc={}] state: transitioning to Stray Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[2.7( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 
pruub=9.240776062s) [3,1,2] r=1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 active pruub 1133.337524414s@ mbc={}] start_peering_interval up [3,1,5] -> [3,1,2], acting [3,1,5] -> [3,1,2], acting_primary 3 -> 3, up_primary 3 -> 3, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[3.4( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.285346985s) [2,1,3] r=1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1135.382080078s@ mbc={}] start_peering_interval up [1,3,2] -> [2,1,3], acting [1,3,2] -> [2,1,3], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[32665]: osd.4 pg_epoch: 41 pg[3.1b( empty local-lis/les=0/0 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41) [4,0,2] r=0 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[3.4( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.285290718s) [2,1,3] r=1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1135.382080078s@ mbc={}] state: transitioning to Stray Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[2.7( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.240674973s) [3,1,2] r=1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.337524414s@ mbc={}] state: transitioning to Stray Dec 6 03:18:53 localhost ceph-osd[32665]: osd.4 pg_epoch: 41 pg[3.1a( empty local-lis/les=0/0 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41) [4,5,0] r=0 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[2.5( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.240542412s) [1,2,0] r=0 lpr=41 
pi=[29,41)/1 crt=0'0 mlcod 0'0 active pruub 1133.337890625s@ mbc={}] start_peering_interval up [3,1,5] -> [1,2,0], acting [3,1,5] -> [1,2,0], acting_primary 3 -> 1, up_primary 3 -> 1, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[2.5( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.240542412s) [1,2,0] r=0 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown pruub 1133.337890625s@ mbc={}] state: transitioning to Primary Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[2.3( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.239907265s) [3,2,1] r=2 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 active pruub 1133.337524414s@ mbc={}] start_peering_interval up [3,1,5] -> [3,2,1], acting [3,1,5] -> [3,2,1], acting_primary 3 -> 3, up_primary 3 -> 3, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[2.3( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.239867210s) [3,2,1] r=2 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.337524414s@ mbc={}] state: transitioning to Stray Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[3.5( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.284351349s) [3,5,4] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1135.381958008s@ mbc={}] start_peering_interval up [1,3,2] -> [3,5,4], acting [1,3,2] -> [3,5,4], acting_primary 1 -> 3, up_primary 1 -> 3, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[5.4( empty local-lis/les=0/0 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41) [1,2,3] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 6 03:18:53 localhost 
ceph-osd[31726]: osd.1 pg_epoch: 41 pg[3.5( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.284281731s) [3,5,4] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1135.381958008s@ mbc={}] state: transitioning to Stray Dec 6 03:18:53 localhost ceph-osd[32665]: osd.4 pg_epoch: 41 pg[2.1c( empty local-lis/les=0/0 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41) [4,3,2] r=0 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[2.2( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.245015144s) [3,5,4] r=-1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 active pruub 1133.343261719s@ mbc={}] start_peering_interval up [3,1,5] -> [3,5,4], acting [3,1,5] -> [3,5,4], acting_primary 3 -> 3, up_primary 3 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[3.7( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.283912659s) [2,1,3] r=1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1135.382080078s@ mbc={}] start_peering_interval up [1,3,2] -> [2,1,3], acting [1,3,2] -> [2,1,3], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[3.7( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.283845901s) [2,1,3] r=1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1135.382080078s@ mbc={}] state: transitioning to Stray Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[2.2( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.244854927s) [3,5,4] r=-1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.343261719s@ mbc={}] state: transitioning to Stray Dec 6 03:18:53 
localhost ceph-osd[32665]: osd.4 pg_epoch: 41 pg[2.1d( empty local-lis/les=0/0 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41) [4,3,2] r=0 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[2.b( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.238761902s) [1,3,2] r=0 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 active pruub 1133.337524414s@ mbc={}] start_peering_interval up [3,1,5] -> [1,3,2], acting [3,1,5] -> [1,3,2], acting_primary 3 -> 1, up_primary 3 -> 1, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[3.d( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.282959938s) [3,1,2] r=1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1135.381713867s@ mbc={}] start_peering_interval up [1,3,2] -> [3,1,2], acting [1,3,2] -> [3,1,2], acting_primary 1 -> 3, up_primary 1 -> 3, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[2.b( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.238761902s) [1,3,2] r=0 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown pruub 1133.337524414s@ mbc={}] state: transitioning to Primary Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[3.a( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.285868645s) [3,5,1] r=2 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1135.384887695s@ mbc={}] start_peering_interval up [1,3,2] -> [3,5,1], acting [1,3,2] -> [3,5,1], acting_primary 1 -> 3, up_primary 1 -> 3, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[3.d( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 
pruub=11.282713890s) [3,1,2] r=1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1135.381713867s@ mbc={}] state: transitioning to Stray Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[3.1( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.282619476s) [2,3,4] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1135.381713867s@ mbc={}] start_peering_interval up [1,3,2] -> [2,3,4], acting [1,3,2] -> [2,3,4], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[3.1( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.282588005s) [2,3,4] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1135.381713867s@ mbc={}] state: transitioning to Stray Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[3.a( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.285821915s) [3,5,1] r=2 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1135.384887695s@ mbc={}] state: transitioning to Stray Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[2.d( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.237878799s) [1,0,5] r=0 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 active pruub 1133.337280273s@ mbc={}] start_peering_interval up [3,1,5] -> [1,0,5], acting [3,1,5] -> [1,0,5], acting_primary 3 -> 1, up_primary 3 -> 1, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[2.d( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.237878799s) [1,0,5] r=0 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown pruub 1133.337280273s@ mbc={}] state: transitioning to Primary Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[5.8( empty local-lis/les=0/0 n=0 
ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41) [1,2,3] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[2.c( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.237279892s) [1,5,0] r=0 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 active pruub 1133.337158203s@ mbc={}] start_peering_interval up [3,1,5] -> [1,5,0], acting [3,1,5] -> [1,5,0], acting_primary 3 -> 1, up_primary 3 -> 1, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[2.c( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.237279892s) [1,5,0] r=0 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown pruub 1133.337158203s@ mbc={}] state: transitioning to Primary Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[3.13( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.279194832s) [3,2,1] r=2 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1135.379760742s@ mbc={}] start_peering_interval up [1,3,2] -> [3,2,1], acting [1,3,2] -> [3,2,1], acting_primary 1 -> 3, up_primary 1 -> 3, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[3.13( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.279093742s) [3,2,1] r=2 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1135.379760742s@ mbc={}] state: transitioning to Stray Dec 6 03:18:53 localhost ceph-osd[32665]: osd.4 pg_epoch: 41 pg[4.1f( empty local-lis/les=0/0 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41) [4,3,2] r=0 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[2.12( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 
pruub=9.235630035s) [4,2,3] r=-1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 active pruub 1133.336669922s@ mbc={}] start_peering_interval up [3,1,5] -> [4,2,3], acting [3,1,5] -> [4,2,3], acting_primary 3 -> 4, up_primary 3 -> 4, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[2.12( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.235581398s) [4,2,3] r=-1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.336669922s@ mbc={}] state: transitioning to Stray Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[3.10( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.280607224s) [3,1,5] r=1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1135.381835938s@ mbc={}] start_peering_interval up [1,3,2] -> [3,1,5], acting [1,3,2] -> [3,1,5], acting_primary 1 -> 3, up_primary 1 -> 3, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[3.12( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.278309822s) [2,3,1] r=2 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1135.379760742s@ mbc={}] start_peering_interval up [1,3,2] -> [2,3,1], acting [1,3,2] -> [2,3,1], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[3.11( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.280846596s) [4,0,2] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1135.381591797s@ mbc={}] start_peering_interval up [1,3,2] -> [4,0,2], acting [1,3,2] -> [4,0,2], acting_primary 1 -> 4, up_primary 1 -> 4, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 
pg[3.11( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.279998779s) [4,0,2] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1135.381591797s@ mbc={}] state: transitioning to Stray Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[2.13( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.236655235s) [1,0,2] r=0 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 active pruub 1133.338256836s@ mbc={}] start_peering_interval up [3,1,5] -> [1,0,2], acting [3,1,5] -> [1,0,2], acting_primary 3 -> 1, up_primary 3 -> 1, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[2.13( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.236655235s) [1,0,2] r=0 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown pruub 1133.338256836s@ mbc={}] state: transitioning to Primary Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[3.10( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.280501366s) [3,1,5] r=1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1135.381835938s@ mbc={}] state: transitioning to Stray Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[3.14( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.277237892s) [3,4,2] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1135.380004883s@ mbc={}] start_peering_interval up [1,3,2] -> [3,4,2], acting [1,3,2] -> [3,4,2], acting_primary 1 -> 3, up_primary 1 -> 3, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[3.15( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.276803970s) [1,3,5] r=0 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1135.379516602s@ mbc={}] start_peering_interval up [1,3,2] -> [1,3,5], 
acting [1,3,2] -> [1,3,5], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[2.14( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.234052658s) [3,4,2] r=-1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 active pruub 1133.336669922s@ mbc={}] start_peering_interval up [3,1,5] -> [3,4,2], acting [3,1,5] -> [3,4,2], acting_primary 3 -> 3, up_primary 3 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[3.12( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.277863503s) [2,3,1] r=2 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1135.379760742s@ mbc={}] state: transitioning to Stray Dec 6 03:18:53 localhost ceph-osd[32665]: osd.4 pg_epoch: 41 pg[4.6( empty local-lis/les=0/0 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41) [4,5,0] r=0 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[2.15( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.233534813s) [1,2,3] r=0 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 active pruub 1133.336669922s@ mbc={}] start_peering_interval up [3,1,5] -> [1,2,3], acting [3,1,5] -> [1,2,3], acting_primary 3 -> 1, up_primary 3 -> 1, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[2.15( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.233534813s) [1,2,3] r=0 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown pruub 1133.336669922s@ mbc={}] state: transitioning to Primary Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[3.15( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 
pruub=11.276803970s) [1,3,5] r=0 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown pruub 1135.379516602s@ mbc={}] state: transitioning to Primary Dec 6 03:18:53 localhost ceph-osd[32665]: osd.4 pg_epoch: 41 pg[4.1d( empty local-lis/les=0/0 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41) [4,3,5] r=0 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[3.14( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.276491165s) [3,4,2] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1135.380004883s@ mbc={}] state: transitioning to Stray Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[5.12( empty local-lis/les=0/0 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41) [1,3,5] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[2.14( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.232672691s) [3,4,2] r=-1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.336669922s@ mbc={}] state: transitioning to Stray Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[2.17( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.238903999s) [3,1,2] r=1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 active pruub 1133.343261719s@ mbc={}] start_peering_interval up [3,1,5] -> [3,1,2], acting [3,1,5] -> [3,1,2], acting_primary 3 -> 3, up_primary 3 -> 3, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[32665]: osd.4 pg_epoch: 41 pg[4.1( empty local-lis/les=0/0 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41) [4,0,2] r=0 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[2.17( empty local-lis/les=29/30 n=0 
ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.238856316s) [3,1,2] r=1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.343261719s@ mbc={}] state: transitioning to Stray Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[2.18( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.238903999s) [4,3,2] r=-1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 active pruub 1133.343505859s@ mbc={}] start_peering_interval up [3,1,5] -> [4,3,2], acting [3,1,5] -> [4,3,2], acting_primary 3 -> 4, up_primary 3 -> 4, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[3.16( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.275229454s) [3,2,4] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1135.379638672s@ mbc={}] start_peering_interval up [1,3,2] -> [3,2,4], acting [1,3,2] -> [3,2,4], acting_primary 1 -> 3, up_primary 1 -> 3, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[32665]: osd.4 pg_epoch: 41 pg[4.14( empty local-lis/les=0/0 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41) [4,2,0] r=0 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[4.1c( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.271645546s) [1,2,3] r=0 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1135.376586914s@ mbc={}] start_peering_interval up [5,0,1] -> [1,2,3], acting [5,0,1] -> [1,2,3], acting_primary 5 -> 1, up_primary 5 -> 1, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[2.18( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.238822937s) [4,3,2] r=-1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown 
NOTIFY pruub 1133.343505859s@ mbc={}] state: transitioning to Stray Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[4.1c( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.271645546s) [1,2,3] r=0 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown pruub 1135.376586914s@ mbc={}] state: transitioning to Primary Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[2.1a( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.238750458s) [3,4,2] r=-1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 active pruub 1133.343505859s@ mbc={}] start_peering_interval up [3,1,5] -> [3,4,2], acting [3,1,5] -> [3,4,2], acting_primary 3 -> 3, up_primary 3 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[3.16( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.274788857s) [3,2,4] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1135.379638672s@ mbc={}] state: transitioning to Stray Dec 6 03:18:53 localhost ceph-osd[32665]: osd.4 pg_epoch: 41 pg[4.15( empty local-lis/les=0/0 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41) [4,2,0] r=0 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[3.1b( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.279878616s) [4,0,2] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1135.385009766s@ mbc={}] start_peering_interval up [1,3,2] -> [4,0,2], acting [1,3,2] -> [4,0,2], acting_primary 1 -> 4, up_primary 1 -> 4, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[3.1b( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.279705048s) [4,0,2] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 
unknown NOTIFY pruub 1135.385009766s@ mbc={}] state: transitioning to Stray Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[2.19( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.237913132s) [2,0,4] r=-1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 active pruub 1133.343505859s@ mbc={}] start_peering_interval up [3,1,5] -> [2,0,4], acting [3,1,5] -> [2,0,4], acting_primary 3 -> 2, up_primary 3 -> 2, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[2.19( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.237832069s) [2,0,4] r=-1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.343505859s@ mbc={}] state: transitioning to Stray Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[4.1e( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.270583153s) [2,0,1] r=2 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1135.376586914s@ mbc={}] start_peering_interval up [5,0,1] -> [2,0,1], acting [5,0,1] -> [2,0,1], acting_primary 5 -> 2, up_primary 5 -> 2, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[3.18( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.273570061s) [5,1,3] r=1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1135.379516602s@ mbc={}] start_peering_interval up [1,3,2] -> [5,1,3], acting [1,3,2] -> [5,1,3], acting_primary 1 -> 5, up_primary 1 -> 5, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[3.18( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.273510933s) [5,1,3] r=1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1135.379516602s@ mbc={}] state: transitioning to Stray 
Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[4.1f( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.269839287s) [4,3,2] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1135.376098633s@ mbc={}] start_peering_interval up [5,0,1] -> [4,3,2], acting [5,0,1] -> [4,3,2], acting_primary 5 -> 4, up_primary 5 -> 4, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[2.1a( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.238442421s) [3,4,2] r=-1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.343505859s@ mbc={}] state: transitioning to Stray Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[4.1f( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.269789696s) [4,3,2] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1135.376098633s@ mbc={}] state: transitioning to Stray Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[3.19( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.272933960s) [5,0,1] r=2 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1135.379516602s@ mbc={}] start_peering_interval up [1,3,2] -> [5,0,1], acting [1,3,2] -> [5,0,1], acting_primary 1 -> 5, up_primary 1 -> 5, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[4.11( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.270020485s) [2,0,4] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1135.376464844s@ mbc={}] start_peering_interval up [5,0,1] -> [2,0,4], acting [5,0,1] -> [2,0,4], acting_primary 5 -> 2, up_primary 5 -> 2, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[4.1e( 
empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.270485878s) [2,0,1] r=2 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1135.376586914s@ mbc={}] state: transitioning to Stray Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[4.10( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.269218445s) [5,4,3] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1135.376098633s@ mbc={}] start_peering_interval up [5,0,1] -> [5,4,3], acting [5,0,1] -> [5,4,3], acting_primary 5 -> 5, up_primary 5 -> 5, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[4.10( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.269165993s) [5,4,3] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1135.376098633s@ mbc={}] state: transitioning to Stray Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[3.17( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.272994995s) [5,3,1] r=2 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1135.379882812s@ mbc={}] start_peering_interval up [1,3,2] -> [5,3,1], acting [1,3,2] -> [5,3,1], acting_primary 1 -> 5, up_primary 1 -> 5, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[4.11( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.269561768s) [2,0,4] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1135.376464844s@ mbc={}] state: transitioning to Stray Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[4.13( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.272998810s) [0,1,5] r=1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1135.380126953s@ mbc={}] start_peering_interval up [5,0,1] -> [0,1,5], 
acting [5,0,1] -> [0,1,5], acting_primary 5 -> 0, up_primary 5 -> 0, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[4.13( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.272933960s) [0,1,5] r=1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1135.380126953s@ mbc={}] state: transitioning to Stray Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[4.12( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.268793106s) [5,1,3] r=1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1135.376098633s@ mbc={}] start_peering_interval up [5,0,1] -> [5,1,3], acting [5,0,1] -> [5,1,3], acting_primary 5 -> 5, up_primary 5 -> 5, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[4.12( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.268742561s) [5,1,3] r=1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1135.376098633s@ mbc={}] state: transitioning to Stray Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[4.15( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.267452240s) [4,2,0] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1135.374877930s@ mbc={}] start_peering_interval up [5,0,1] -> [4,2,0], acting [5,0,1] -> [4,2,0], acting_primary 5 -> 4, up_primary 5 -> 4, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[3.19( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.272860527s) [5,0,1] r=2 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1135.379516602s@ mbc={}] state: transitioning to Stray Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[4.14( empty 
local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.267168999s) [4,2,0] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1135.374755859s@ mbc={}] start_peering_interval up [5,0,1] -> [4,2,0], acting [5,0,1] -> [4,2,0], acting_primary 5 -> 4, up_primary 5 -> 4, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[3.17( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.272921562s) [5,3,1] r=2 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1135.379882812s@ mbc={}] state: transitioning to Stray Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[4.14( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.267083168s) [4,2,0] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1135.374755859s@ mbc={}] state: transitioning to Stray Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[4.17( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.268669128s) [5,3,4] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1135.376586914s@ mbc={}] start_peering_interval up [5,0,1] -> [5,3,4], acting [5,0,1] -> [5,3,4], acting_primary 5 -> 5, up_primary 5 -> 5, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[4.15( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.267400742s) [4,2,0] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1135.374877930s@ mbc={}] state: transitioning to Stray Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[2.10( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.229127884s) [4,5,3] r=-1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 active pruub 1133.337280273s@ mbc={}] start_peering_interval up [3,1,5] -> [4,5,3], 
acting [3,1,5] -> [4,5,3], acting_primary 3 -> 4, up_primary 3 -> 4, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[2.10( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.229074478s) [4,5,3] r=-1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.337280273s@ mbc={}] state: transitioning to Stray Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[2.11( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.228689194s) [0,2,4] r=-1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 active pruub 1133.337036133s@ mbc={}] start_peering_interval up [3,1,5] -> [0,2,4], acting [3,1,5] -> [0,2,4], acting_primary 3 -> 0, up_primary 3 -> 0, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[4.9( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.266458511s) [1,5,0] r=0 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1135.374755859s@ mbc={}] start_peering_interval up [5,0,1] -> [1,5,0], acting [5,0,1] -> [1,5,0], acting_primary 5 -> 1, up_primary 5 -> 1, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[2.f( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.228619576s) [4,0,5] r=-1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 active pruub 1133.337036133s@ mbc={}] start_peering_interval up [3,1,5] -> [4,0,5], acting [3,1,5] -> [4,0,5], acting_primary 3 -> 4, up_primary 3 -> 4, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[4.9( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.266458511s) [1,5,0] r=0 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown 
pruub 1135.374755859s@ mbc={}] state: transitioning to Primary Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[2.11( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.228550911s) [0,2,4] r=-1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.337036133s@ mbc={}] state: transitioning to Stray Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[4.17( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.268606186s) [5,3,4] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1135.376586914s@ mbc={}] state: transitioning to Stray Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[2.16( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.234749794s) [0,1,2] r=1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 active pruub 1133.343383789s@ mbc={}] start_peering_interval up [3,1,5] -> [0,1,2], acting [3,1,5] -> [0,1,2], acting_primary 3 -> 0, up_primary 3 -> 0, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[4.8( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.265679359s) [1,3,5] r=0 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1135.374389648s@ mbc={}] start_peering_interval up [5,0,1] -> [1,3,5], acting [5,0,1] -> [1,3,5], acting_primary 5 -> 1, up_primary 5 -> 1, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[2.16( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.234657288s) [0,1,2] r=1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.343383789s@ mbc={}] state: transitioning to Stray Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[2.e( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.228191376s) [2,4,3] 
r=-1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 active pruub 1133.337036133s@ mbc={}] start_peering_interval up [3,1,5] -> [2,4,3], acting [3,1,5] -> [2,4,3], acting_primary 3 -> 2, up_primary 3 -> 2, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[4.8( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.265679359s) [1,3,5] r=0 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown pruub 1135.374389648s@ mbc={}] state: transitioning to Primary Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[2.e( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.228139877s) [2,4,3] r=-1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.337036133s@ mbc={}] state: transitioning to Stray Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[3.f( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.272604942s) [0,1,2] r=1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1135.381591797s@ mbc={}] start_peering_interval up [1,3,2] -> [0,1,2], acting [1,3,2] -> [0,1,2], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[2.f( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.228571892s) [4,0,5] r=-1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.337036133s@ mbc={}] state: transitioning to Stray Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[3.c( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.272802353s) [0,5,1] r=2 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1135.381958008s@ mbc={}] start_peering_interval up [1,3,2] -> [0,5,1], acting [1,3,2] -> [0,5,1], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> 2, features acting 
4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[3.c( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.272746086s) [0,5,1] r=2 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1135.381958008s@ mbc={}] state: transitioning to Stray Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[3.f( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.272253990s) [0,1,2] r=1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1135.381591797s@ mbc={}] state: transitioning to Stray Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[4.a( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.264956474s) [0,5,1] r=2 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1135.374389648s@ mbc={}] start_peering_interval up [5,0,1] -> [0,5,1], acting [5,0,1] -> [0,5,1], acting_primary 5 -> 0, up_primary 5 -> 0, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[4.b( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.270475388s) [2,0,4] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1135.380004883s@ mbc={}] start_peering_interval up [5,0,1] -> [2,0,4], acting [5,0,1] -> [2,0,4], acting_primary 5 -> 2, up_primary 5 -> 2, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[4.b( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.270392418s) [2,0,4] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1135.380004883s@ mbc={}] state: transitioning to Stray Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[4.d( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.264524460s) [3,1,5] r=1 lpr=41 
pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1135.374267578s@ mbc={}] start_peering_interval up [5,0,1] -> [3,1,5], acting [5,0,1] -> [3,1,5], acting_primary 5 -> 3, up_primary 5 -> 3, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[4.6( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.264867783s) [4,5,0] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1135.374633789s@ mbc={}] start_peering_interval up [5,0,1] -> [4,5,0], acting [5,0,1] -> [4,5,0], acting_primary 5 -> 4, up_primary 5 -> 4, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[4.d( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.264428139s) [3,1,5] r=1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1135.374267578s@ mbc={}] state: transitioning to Stray Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[4.6( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.264818192s) [4,5,0] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1135.374633789s@ mbc={}] state: transitioning to Stray Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[4.7( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.264459610s) [2,1,3] r=1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1135.374389648s@ mbc={}] start_peering_interval up [5,0,1] -> [2,1,3], acting [5,0,1] -> [2,1,3], acting_primary 5 -> 2, up_primary 5 -> 2, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[4.7( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.264404297s) [2,1,3] r=1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1135.374389648s@ mbc={}] 
state: transitioning to Stray Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[2.1( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.227999687s) [2,4,3] r=-1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 active pruub 1133.338134766s@ mbc={}] start_peering_interval up [3,1,5] -> [2,4,3], acting [3,1,5] -> [2,4,3], acting_primary 3 -> 2, up_primary 3 -> 2, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[2.6( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.227627754s) [2,1,3] r=1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 active pruub 1133.337768555s@ mbc={}] start_peering_interval up [3,1,5] -> [2,1,3], acting [3,1,5] -> [2,1,3], acting_primary 3 -> 2, up_primary 3 -> 2, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[2.1( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.227949142s) [2,4,3] r=-1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.338134766s@ mbc={}] state: transitioning to Stray Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[2.6( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.227521896s) [2,1,3] r=1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.337768555s@ mbc={}] state: transitioning to Stray Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[4.4( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.263625145s) [5,1,3] r=1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1135.374023438s@ mbc={}] start_peering_interval up [5,0,1] -> [5,1,3], acting [5,0,1] -> [5,1,3], acting_primary 5 -> 5, up_primary 5 -> 5, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 
pg_epoch: 41 pg[4.4( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.263570786s) [5,1,3] r=1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1135.374023438s@ mbc={}] state: transitioning to Stray Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[3.3( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.271447182s) [0,2,4] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1135.381958008s@ mbc={}] start_peering_interval up [1,3,2] -> [0,2,4], acting [1,3,2] -> [0,2,4], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[4.5( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.264107704s) [3,1,2] r=1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1135.374633789s@ mbc={}] start_peering_interval up [5,0,1] -> [3,1,2], acting [5,0,1] -> [3,1,2], acting_primary 5 -> 3, up_primary 5 -> 3, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[3.3( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.271395683s) [0,2,4] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1135.381958008s@ mbc={}] state: transitioning to Stray Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[4.5( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.264052391s) [3,1,2] r=1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1135.374633789s@ mbc={}] state: transitioning to Stray Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[3.2( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.270923615s) [5,4,0] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1135.381591797s@ mbc={}] start_peering_interval up 
[1,3,2] -> [5,4,0], acting [1,3,2] -> [5,4,0], acting_primary 1 -> 5, up_primary 1 -> 5, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[4.a( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.264909744s) [0,5,1] r=2 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1135.374389648s@ mbc={}] state: transitioning to Stray Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[4.2( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.263178825s) [1,3,2] r=0 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1135.374023438s@ mbc={}] start_peering_interval up [5,0,1] -> [1,3,2], acting [5,0,1] -> [1,3,2], acting_primary 5 -> 1, up_primary 5 -> 1, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[3.2( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.270862579s) [5,4,0] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1135.381591797s@ mbc={}] state: transitioning to Stray Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[2.4( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.226953506s) [5,1,3] r=1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 active pruub 1133.337890625s@ mbc={}] start_peering_interval up [3,1,5] -> [5,1,3], acting [3,1,5] -> [5,1,3], acting_primary 3 -> 5, up_primary 3 -> 5, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[2.4( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.226895332s) [5,1,3] r=1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.337890625s@ mbc={}] state: transitioning to Stray Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[4.3( 
empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.262713432s) [1,3,2] r=0 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1135.374145508s@ mbc={}] start_peering_interval up [5,0,1] -> [1,3,2], acting [5,0,1] -> [1,3,2], acting_primary 5 -> 1, up_primary 5 -> 1, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[3.6( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.270758629s) [5,0,4] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1135.382080078s@ mbc={}] start_peering_interval up [1,3,2] -> [5,0,4], acting [1,3,2] -> [5,0,4], acting_primary 1 -> 5, up_primary 1 -> 5, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[4.e( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.261864662s) [3,4,2] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1135.373291016s@ mbc={}] start_peering_interval up [5,0,1] -> [3,4,2], acting [5,0,1] -> [3,4,2], acting_primary 5 -> 3, up_primary 5 -> 3, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[4.2( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.263178825s) [1,3,2] r=0 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown pruub 1135.374023438s@ mbc={}] state: transitioning to Primary Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[4.3( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.262713432s) [1,3,2] r=0 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown pruub 1135.374145508s@ mbc={}] state: transitioning to Primary Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[4.e( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 
pruub=11.261789322s) [3,4,2] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1135.373291016s@ mbc={}] state: transitioning to Stray Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[3.6( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.270571709s) [5,0,4] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1135.382080078s@ mbc={}] state: transitioning to Stray Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[4.1( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.261681557s) [4,0,2] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1135.373413086s@ mbc={}] start_peering_interval up [5,0,1] -> [4,0,2], acting [5,0,1] -> [4,0,2], acting_primary 5 -> 4, up_primary 5 -> 4, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[2.8( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.231292725s) [0,1,2] r=1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 active pruub 1133.343017578s@ mbc={}] start_peering_interval up [3,1,5] -> [0,1,2], acting [3,1,5] -> [0,1,2], acting_primary 3 -> 0, up_primary 3 -> 0, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[2.8( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.231239319s) [0,1,2] r=1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.343017578s@ mbc={}] state: transitioning to Stray Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[2.9( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.226190567s) [5,0,4] r=-1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 active pruub 1133.338012695s@ mbc={}] start_peering_interval up [3,1,5] -> [5,0,4], acting [3,1,5] -> [5,0,4], acting_primary 3 -> 5, up_primary 3 -> 5, role 1 -> -1, 
features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[4.1( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.261618614s) [4,0,2] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1135.373413086s@ mbc={}] state: transitioning to Stray Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[4.f( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.262058258s) [2,4,3] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1135.374145508s@ mbc={}] start_peering_interval up [5,0,1] -> [2,4,3], acting [5,0,1] -> [2,4,3], acting_primary 5 -> 2, up_primary 5 -> 2, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[2.9( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.225929260s) [5,0,4] r=-1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.338012695s@ mbc={}] state: transitioning to Stray Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[4.c( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.260646820s) [0,5,1] r=2 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1135.373168945s@ mbc={}] start_peering_interval up [5,0,1] -> [0,5,1], acting [5,0,1] -> [0,5,1], acting_primary 5 -> 0, up_primary 5 -> 0, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[4.c( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.260542870s) [0,5,1] r=2 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1135.373168945s@ mbc={}] state: transitioning to Stray Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[4.1d( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.260731697s) 
[4,3,5] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1135.373413086s@ mbc={}] start_peering_interval up [5,0,1] -> [4,3,5], acting [5,0,1] -> [4,3,5], acting_primary 5 -> 4, up_primary 5 -> 4, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[3.1a( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.266613007s) [4,5,0] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1135.379516602s@ mbc={}] start_peering_interval up [1,3,2] -> [4,5,0], acting [1,3,2] -> [4,5,0], acting_primary 1 -> 4, up_primary 1 -> 4, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[4.1d( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.260600090s) [4,3,5] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1135.373413086s@ mbc={}] state: transitioning to Stray Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[3.1a( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.266546249s) [4,5,0] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1135.379516602s@ mbc={}] state: transitioning to Stray Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[4.1a( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.260270119s) [3,2,4] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1135.373168945s@ mbc={}] start_peering_interval up [5,0,1] -> [3,2,4], acting [5,0,1] -> [3,2,4], acting_primary 5 -> 3, up_primary 5 -> 3, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[4.1a( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.260220528s) [3,2,4] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 
1135.373168945s@ mbc={}] state: transitioning to Stray Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[2.1c( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.230233192s) [4,3,2] r=-1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 active pruub 1133.343383789s@ mbc={}] start_peering_interval up [3,1,5] -> [4,3,2], acting [3,1,5] -> [4,3,2], acting_primary 3 -> 4, up_primary 3 -> 4, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[4.1b( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.261491776s) [3,5,1] r=2 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1135.374633789s@ mbc={}] start_peering_interval up [5,0,1] -> [3,5,1], acting [5,0,1] -> [3,5,1], acting_primary 5 -> 3, up_primary 5 -> 3, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[3.1c( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.271749496s) [0,5,1] r=2 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1135.385009766s@ mbc={}] start_peering_interval up [1,3,2] -> [0,5,1], acting [1,3,2] -> [0,5,1], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[2.1c( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.230112076s) [4,3,2] r=-1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.343383789s@ mbc={}] state: transitioning to Stray Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[3.1c( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.271697044s) [0,5,1] r=2 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1135.385009766s@ mbc={}] state: transitioning to Stray Dec 6 03:18:53 
localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[4.1b( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.261339188s) [3,5,1] r=2 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1135.374633789s@ mbc={}] state: transitioning to Stray Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[2.1d( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.229953766s) [4,3,2] r=-1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 active pruub 1133.343505859s@ mbc={}] start_peering_interval up [3,1,5] -> [4,3,2], acting [3,1,5] -> [4,3,2], acting_primary 3 -> 4, up_primary 3 -> 4, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[4.18( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.259502411s) [3,2,1] r=2 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1135.373168945s@ mbc={}] start_peering_interval up [5,0,1] -> [3,2,1], acting [5,0,1] -> [3,2,1], acting_primary 5 -> 3, up_primary 5 -> 3, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[2.1e( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.229522705s) [5,1,3] r=1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 active pruub 1133.343261719s@ mbc={}] start_peering_interval up [3,1,5] -> [5,1,3], acting [3,1,5] -> [5,1,3], acting_primary 3 -> 5, up_primary 3 -> 5, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[4.18( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.259428978s) [3,2,1] r=2 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1135.373168945s@ mbc={}] state: transitioning to Stray Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[4.19( empty 
local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.259214401s) [1,2,0] r=0 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1135.373168945s@ mbc={}] start_peering_interval up [5,0,1] -> [1,2,0], acting [5,0,1] -> [1,2,0], acting_primary 5 -> 1, up_primary 5 -> 1, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[4.f( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.260226250s) [2,4,3] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1135.374145508s@ mbc={}] state: transitioning to Stray Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[4.19( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.259214401s) [1,2,0] r=0 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown pruub 1135.373168945s@ mbc={}] state: transitioning to Primary Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[3.1e( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.270941734s) [5,4,3] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1135.385009766s@ mbc={}] start_peering_interval up [1,3,2] -> [5,4,3], acting [1,3,2] -> [5,4,3], acting_primary 1 -> 5, up_primary 1 -> 5, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[2.1e( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.229468346s) [5,1,3] r=1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.343261719s@ mbc={}] state: transitioning to Stray Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[3.1e( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.270884514s) [5,4,3] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1135.385009766s@ mbc={}] state: transitioning to Stray Dec 6 03:18:53 
localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[2.1d( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.229892731s) [4,3,2] r=-1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.343505859s@ mbc={}] state: transitioning to Stray Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[2.1f( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.228734016s) [2,3,4] r=-1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 active pruub 1133.343261719s@ mbc={}] start_peering_interval up [3,1,5] -> [2,3,4], acting [3,1,5] -> [2,3,4], acting_primary 3 -> 2, up_primary 3 -> 2, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:18:53 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[2.1f( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.228308678s) [2,3,4] r=-1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.343261719s@ mbc={}] state: transitioning to Stray Dec 6 03:18:54 localhost ceph-osd[32665]: osd.4 pg_epoch: 41 pg[3.6( empty local-lis/les=0/0 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41) [5,0,4] r=2 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 6 03:18:54 localhost ceph-osd[32665]: osd.4 pg_epoch: 41 pg[4.10( empty local-lis/les=0/0 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41) [5,4,3] r=1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 6 03:18:54 localhost ceph-osd[32665]: osd.4 pg_epoch: 41 pg[4.17( empty local-lis/les=0/0 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41) [5,3,4] r=2 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 6 03:18:54 localhost ceph-osd[32665]: osd.4 pg_epoch: 41 pg[3.2( empty local-lis/les=0/0 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41) [5,4,0] r=1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 6 03:18:54 localhost 
ceph-osd[32665]: osd.4 pg_epoch: 41 pg[2.9( empty local-lis/les=0/0 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41) [5,0,4] r=2 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 6 03:18:54 localhost ceph-osd[32665]: osd.4 pg_epoch: 41 pg[3.1e( empty local-lis/les=0/0 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41) [5,4,3] r=1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 6 03:18:54 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[5.5( empty local-lis/les=0/0 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41) [5,3,1] r=2 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 6 03:18:54 localhost ceph-osd[32665]: osd.4 pg_epoch: 41 pg[2.1a( empty local-lis/les=0/0 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41) [3,4,2] r=1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 6 03:18:54 localhost ceph-osd[32665]: osd.4 pg_epoch: 41 pg[2.14( empty local-lis/les=0/0 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41) [3,4,2] r=1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 6 03:18:54 localhost ceph-osd[32665]: osd.4 pg_epoch: 41 pg[3.16( empty local-lis/les=0/0 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41) [3,2,4] r=2 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 6 03:18:54 localhost ceph-osd[32665]: osd.4 pg_epoch: 41 pg[3.14( empty local-lis/les=0/0 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41) [3,4,2] r=1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 6 03:18:54 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[5.17( empty local-lis/les=0/0 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41) [5,1,0] r=1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 6 03:18:54 localhost ceph-osd[32665]: osd.4 pg_epoch: 41 pg[4.f( empty local-lis/les=0/0 n=0 ec=31/24 
lis/c=31/31 les/c/f=32/32/0 sis=41) [2,4,3] r=1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 6 03:18:54 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[5.16( empty local-lis/les=0/0 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41) [3,5,1] r=2 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 6 03:18:54 localhost ceph-osd[32665]: osd.4 pg_epoch: 41 pg[2.1f( empty local-lis/les=0/0 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41) [2,3,4] r=2 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 6 03:18:54 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[5.15( empty local-lis/les=0/0 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41) [3,2,1] r=2 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 6 03:18:54 localhost ceph-osd[32665]: osd.4 pg_epoch: 41 pg[2.2( empty local-lis/les=0/0 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41) [3,5,4] r=2 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 6 03:18:54 localhost ceph-osd[32665]: osd.4 pg_epoch: 41 pg[3.5( empty local-lis/les=0/0 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41) [3,5,4] r=2 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 6 03:18:54 localhost ceph-osd[32665]: osd.4 pg_epoch: 41 pg[4.e( empty local-lis/les=0/0 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41) [3,4,2] r=1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 6 03:18:54 localhost ceph-osd[32665]: osd.4 pg_epoch: 41 pg[2.1( empty local-lis/les=0/0 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41) [2,4,3] r=1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 6 03:18:54 localhost ceph-osd[32665]: osd.4 pg_epoch: 41 pg[4.b( empty local-lis/les=0/0 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41) [2,0,4] r=2 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 
unknown mbc={}] state: transitioning to Stray Dec 6 03:18:54 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[5.19( empty local-lis/les=0/0 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41) [2,3,1] r=2 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 6 03:18:54 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[5.3( empty local-lis/les=0/0 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41) [2,3,1] r=2 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 6 03:18:54 localhost ceph-osd[32665]: osd.4 pg_epoch: 41 pg[2.e( empty local-lis/les=0/0 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41) [2,4,3] r=1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 6 03:18:54 localhost ceph-osd[32665]: osd.4 pg_epoch: 41 pg[4.11( empty local-lis/les=0/0 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41) [2,0,4] r=2 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 6 03:18:54 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[5.a( empty local-lis/les=0/0 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41) [2,1,3] r=1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 6 03:18:54 localhost ceph-osd[32665]: osd.4 pg_epoch: 41 pg[2.19( empty local-lis/les=0/0 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41) [2,0,4] r=2 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 6 03:18:54 localhost ceph-osd[32665]: osd.4 pg_epoch: 41 pg[3.1( empty local-lis/les=0/0 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41) [2,3,4] r=2 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 6 03:18:54 localhost ceph-osd[32665]: osd.4 pg_epoch: 41 pg[4.1a( empty local-lis/les=0/0 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41) [3,2,4] r=2 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 6 03:18:54 localhost ceph-osd[32665]: 
osd.4 pg_epoch: 41 pg[3.1f( empty local-lis/les=0/0 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41) [2,3,4] r=2 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 6 03:18:54 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[5.1( empty local-lis/les=0/0 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41) [3,5,1] r=2 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 6 03:18:54 localhost ceph-osd[32665]: osd.4 pg_epoch: 41 pg[3.3( empty local-lis/les=0/0 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41) [0,2,4] r=2 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 6 03:18:54 localhost ceph-osd[32665]: osd.4 pg_epoch: 41 pg[2.11( empty local-lis/les=0/0 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41) [0,2,4] r=2 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 6 03:18:54 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[5.18( empty local-lis/les=0/0 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41) [0,1,5] r=1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 6 03:18:54 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[5.2( empty local-lis/les=0/0 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41) [0,5,1] r=2 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 6 03:18:54 localhost ceph-osd[32665]: osd.4 pg_epoch: 42 pg[2.18( empty local-lis/les=41/42 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41) [4,3,2] r=0 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 6 03:18:54 localhost ceph-osd[32665]: osd.4 pg_epoch: 42 pg[2.12( empty local-lis/les=41/42 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41) [4,2,3] r=0 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 6 03:18:54 localhost ceph-osd[32665]: osd.4 pg_epoch: 42 pg[4.1f( empty 
local-lis/les=41/42 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41) [4,3,2] r=0 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 6 03:18:54 localhost ceph-osd[31726]: osd.1 pg_epoch: 41 pg[5.f( empty local-lis/les=0/0 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41) [0,1,2] r=1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 6 03:18:54 localhost ceph-osd[32665]: osd.4 pg_epoch: 42 pg[2.10( empty local-lis/les=41/42 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41) [4,5,3] r=0 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 6 03:18:54 localhost ceph-osd[32665]: osd.4 pg_epoch: 42 pg[4.6( empty local-lis/les=41/42 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41) [4,5,0] r=0 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 6 03:18:54 localhost ceph-osd[31726]: osd.1 pg_epoch: 42 pg[2.a( empty local-lis/les=41/42 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41) [1,5,3] r=0 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 6 03:18:54 localhost ceph-osd[32665]: osd.4 pg_epoch: 42 pg[4.15( empty local-lis/les=41/42 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41) [4,2,0] r=0 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 6 03:18:54 localhost ceph-osd[32665]: osd.4 pg_epoch: 42 pg[4.14( empty local-lis/les=41/42 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41) [4,2,0] r=0 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 6 03:18:54 localhost ceph-osd[32665]: osd.4 pg_epoch: 42 pg[2.f( empty local-lis/les=41/42 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41) [4,0,5] r=0 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating 
complete Dec 6 03:18:54 localhost ceph-osd[32665]: osd.4 pg_epoch: 42 pg[4.1( empty local-lis/les=41/42 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41) [4,0,2] r=0 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 6 03:18:54 localhost ceph-osd[31726]: osd.1 pg_epoch: 42 pg[4.8( empty local-lis/les=41/42 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41) [1,3,5] r=0 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 6 03:18:54 localhost ceph-osd[31726]: osd.1 pg_epoch: 42 pg[4.3( empty local-lis/les=41/42 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41) [1,3,2] r=0 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 6 03:18:54 localhost ceph-osd[32665]: osd.4 pg_epoch: 42 pg[2.1c( empty local-lis/les=41/42 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41) [4,3,2] r=0 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 6 03:18:54 localhost ceph-osd[32665]: osd.4 pg_epoch: 42 pg[2.1d( empty local-lis/les=41/42 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41) [4,3,2] r=0 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 6 03:18:54 localhost ceph-osd[32665]: osd.4 pg_epoch: 42 pg[3.9( empty local-lis/les=41/42 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41) [4,3,2] r=0 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 6 03:18:54 localhost ceph-osd[32665]: osd.4 pg_epoch: 42 pg[3.11( empty local-lis/les=41/42 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41) [4,0,2] r=0 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 6 03:18:54 localhost ceph-osd[31726]: osd.1 pg_epoch: 42 pg[4.19( empty local-lis/les=41/42 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41) 
[1,2,0] r=0 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 6 03:18:54 localhost ceph-osd[31726]: osd.1 pg_epoch: 42 pg[4.2( empty local-lis/les=41/42 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41) [1,3,2] r=0 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 6 03:18:54 localhost ceph-osd[31726]: osd.1 pg_epoch: 42 pg[2.5( empty local-lis/les=41/42 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41) [1,2,0] r=0 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 6 03:18:54 localhost ceph-osd[31726]: osd.1 pg_epoch: 42 pg[5.4( empty local-lis/les=41/42 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41) [1,2,3] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 6 03:18:54 localhost ceph-osd[31726]: osd.1 pg_epoch: 42 pg[2.1b( empty local-lis/les=41/42 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41) [1,0,2] r=0 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 6 03:18:54 localhost ceph-osd[31726]: osd.1 pg_epoch: 42 pg[2.d( empty local-lis/les=41/42 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41) [1,0,5] r=0 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 6 03:18:54 localhost ceph-osd[31726]: osd.1 pg_epoch: 42 pg[2.b( empty local-lis/les=41/42 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41) [1,3,2] r=0 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 6 03:18:54 localhost ceph-osd[31726]: osd.1 pg_epoch: 42 pg[5.1a( empty local-lis/les=41/42 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41) [1,0,5] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 6 03:18:54 localhost 
ceph-osd[31726]: osd.1 pg_epoch: 42 pg[2.c( empty local-lis/les=41/42 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41) [1,5,0] r=0 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 6 03:18:54 localhost ceph-osd[31726]: osd.1 pg_epoch: 42 pg[4.9( empty local-lis/les=41/42 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41) [1,5,0] r=0 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 6 03:18:54 localhost ceph-osd[31726]: osd.1 pg_epoch: 42 pg[2.13( empty local-lis/les=41/42 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41) [1,0,2] r=0 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 6 03:18:54 localhost ceph-osd[31726]: osd.1 pg_epoch: 42 pg[4.1c( empty local-lis/les=41/42 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41) [1,2,3] r=0 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 6 03:18:54 localhost ceph-osd[31726]: osd.1 pg_epoch: 42 pg[5.8( empty local-lis/les=41/42 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41) [1,2,3] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 6 03:18:54 localhost ceph-osd[31726]: osd.1 pg_epoch: 42 pg[2.15( empty local-lis/les=41/42 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41) [1,2,3] r=0 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 6 03:18:54 localhost ceph-osd[32665]: osd.4 pg_epoch: 42 pg[4.1d( empty local-lis/les=41/42 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41) [4,3,5] r=0 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 6 03:18:54 localhost ceph-osd[32665]: osd.4 pg_epoch: 42 pg[3.8( empty local-lis/les=41/42 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41) [4,5,0] r=0 lpr=41 pi=[31,41)/1 
crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 6 03:18:54 localhost ceph-osd[32665]: osd.4 pg_epoch: 42 pg[3.1b( empty local-lis/les=41/42 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41) [4,0,2] r=0 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 6 03:18:54 localhost ceph-osd[32665]: osd.4 pg_epoch: 42 pg[5.d( empty local-lis/les=41/42 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41) [4,0,2] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 6 03:18:54 localhost ceph-osd[32665]: osd.4 pg_epoch: 42 pg[3.1a( empty local-lis/les=41/42 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41) [4,5,0] r=0 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 6 03:18:54 localhost ceph-osd[31726]: osd.1 pg_epoch: 42 pg[5.12( empty local-lis/les=41/42 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41) [1,3,5] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 6 03:18:54 localhost ceph-osd[31726]: osd.1 pg_epoch: 42 pg[3.15( empty local-lis/les=41/42 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41) [1,3,5] r=0 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 6 03:18:54 localhost ceph-osd[31726]: osd.1 pg_epoch: 42 pg[3.1d( empty local-lis/les=41/42 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41) [1,0,2] r=0 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 6 03:18:58 localhost ceph-osd[32665]: log_channel(cluster) log [DBG] : 5.13 scrub starts Dec 6 03:18:58 localhost ceph-osd[31726]: log_channel(cluster) log [DBG] : 3.e scrub starts Dec 6 03:18:59 localhost ceph-osd[31726]: log_channel(cluster) log [DBG] : 3.e scrub ok Dec 6 03:19:03 localhost ceph-osd[31726]: 
log_channel(cluster) log [DBG] : 3.1d deep-scrub starts Dec 6 03:19:03 localhost ceph-osd[31726]: log_channel(cluster) log [DBG] : 3.1d deep-scrub ok Dec 6 03:19:04 localhost ceph-osd[32665]: log_channel(cluster) log [DBG] : 5.b scrub starts Dec 6 03:19:04 localhost ceph-osd[32665]: log_channel(cluster) log [DBG] : 5.b scrub ok Dec 6 03:19:08 localhost ceph-osd[31726]: log_channel(cluster) log [DBG] : 4.9 scrub starts Dec 6 03:19:08 localhost ceph-osd[31726]: log_channel(cluster) log [DBG] : 4.9 scrub ok Dec 6 03:19:08 localhost ceph-osd[32665]: log_channel(cluster) log [DBG] : 4.1f scrub starts Dec 6 03:19:08 localhost ceph-osd[32665]: log_channel(cluster) log [DBG] : 4.1f scrub ok Dec 6 03:19:09 localhost ceph-osd[31726]: log_channel(cluster) log [DBG] : 2.d scrub starts Dec 6 03:19:09 localhost ceph-osd[31726]: log_channel(cluster) log [DBG] : 2.d scrub ok Dec 6 03:19:10 localhost ceph-osd[31726]: log_channel(cluster) log [DBG] : 4.8 scrub starts Dec 6 03:19:10 localhost ceph-osd[31726]: log_channel(cluster) log [DBG] : 4.8 scrub ok Dec 6 03:19:10 localhost ceph-osd[32665]: log_channel(cluster) log [DBG] : 4.6 scrub starts Dec 6 03:19:10 localhost ceph-osd[32665]: log_channel(cluster) log [DBG] : 4.6 scrub ok Dec 6 03:19:11 localhost ceph-osd[31726]: log_channel(cluster) log [DBG] : 2.c scrub starts Dec 6 03:19:11 localhost ceph-osd[31726]: log_channel(cluster) log [DBG] : 2.c scrub ok Dec 6 03:19:12 localhost ceph-osd[32665]: log_channel(cluster) log [DBG] : 4.14 scrub starts Dec 6 03:19:12 localhost ceph-osd[32665]: log_channel(cluster) log [DBG] : 4.14 scrub ok Dec 6 03:19:13 localhost ceph-osd[31726]: log_channel(cluster) log [DBG] : 2.a deep-scrub starts Dec 6 03:19:13 localhost ceph-osd[31726]: log_channel(cluster) log [DBG] : 2.a deep-scrub ok Dec 6 03:19:16 localhost ceph-osd[31726]: log_channel(cluster) log [DBG] : 2.1b scrub starts Dec 6 03:19:16 localhost ceph-osd[31726]: log_channel(cluster) log [DBG] : 2.1b scrub ok Dec 6 03:19:16 localhost 
systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. Dec 6 03:19:16 localhost ceph-osd[32665]: log_channel(cluster) log [DBG] : 4.15 scrub starts Dec 6 03:19:16 localhost ceph-osd[32665]: log_channel(cluster) log [DBG] : 4.15 scrub ok Dec 6 03:19:16 localhost podman[55721]: 2025-12-06 08:19:16.920607594 +0000 UTC m=+0.083521684 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step1, io.openshift.expose-services=, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, build-date=2025-11-18T22:49:46Z, batch=17.1_20251118.1, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, architecture=x86_64, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 6 03:19:17 localhost podman[55721]: 2025-12-06 08:19:17.118236499 +0000 UTC m=+0.281150589 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, build-date=2025-11-18T22:49:46Z, container_name=metrics_qdr, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-qdrouterd, url=https://www.redhat.com, io.buildah.version=1.41.4, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 6 03:19:17 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. 
Dec 6 03:19:17 localhost ceph-osd[31726]: log_channel(cluster) log [DBG] : 2.5 scrub starts Dec 6 03:19:17 localhost ceph-osd[31726]: log_channel(cluster) log [DBG] : 2.5 scrub ok Dec 6 03:19:18 localhost ceph-osd[31726]: log_channel(cluster) log [DBG] : 2.b scrub starts Dec 6 03:19:18 localhost ceph-osd[31726]: log_channel(cluster) log [DBG] : 2.b scrub ok Dec 6 03:19:18 localhost ceph-osd[32665]: log_channel(cluster) log [DBG] : 3.9 scrub starts Dec 6 03:19:19 localhost ceph-osd[32665]: log_channel(cluster) log [DBG] : 3.9 scrub ok Dec 6 03:19:19 localhost ceph-osd[32665]: log_channel(cluster) log [DBG] : 4.1 scrub starts Dec 6 03:19:19 localhost ceph-osd[32665]: log_channel(cluster) log [DBG] : 4.1 scrub ok Dec 6 03:19:20 localhost ceph-osd[31726]: log_channel(cluster) log [DBG] : 2.13 scrub starts Dec 6 03:19:20 localhost ceph-osd[31726]: log_channel(cluster) log [DBG] : 2.13 scrub ok Dec 6 03:19:22 localhost python3[55767]: ansible-file Invoked with path=/var/lib/tripleo-config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 03:19:24 localhost python3[55783]: ansible-file Invoked with path=/var/lib/tripleo-config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 03:19:25 localhost ceph-osd[32665]: log_channel(cluster) log [DBG] : 4.1d scrub starts Dec 6 03:19:25 localhost ceph-osd[32665]: log_channel(cluster) log [DBG] : 4.1d scrub ok Dec 6 03:19:26 localhost python3[55799]: 
ansible-file Invoked with path=/var/lib/tripleo-config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 03:19:27 localhost ceph-osd[31726]: log_channel(cluster) log [DBG] : 2.15 scrub starts Dec 6 03:19:27 localhost ceph-osd[31726]: log_channel(cluster) log [DBG] : 2.15 scrub ok Dec 6 03:19:29 localhost python3[55847]: ansible-ansible.legacy.stat Invoked with path=/var/lib/tripleo-config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 6 03:19:29 localhost python3[55890]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765009168.8126051-92087-50419872536182/source dest=/var/lib/tripleo-config/ceph/ceph.client.openstack.keyring mode=600 _original_basename=ceph.client.openstack.keyring follow=False checksum=9d631b6552ddeaa0e75a39b18f2bdb583e0e85e3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 03:19:30 localhost ceph-osd[31726]: log_channel(cluster) log [DBG] : 4.1c scrub starts Dec 6 03:19:30 localhost ceph-osd[31726]: log_channel(cluster) log [DBG] : 4.1c scrub ok Dec 6 03:19:31 localhost ceph-osd[32665]: log_channel(cluster) log [DBG] : 3.8 scrub starts Dec 6 03:19:31 localhost ceph-osd[32665]: log_channel(cluster) log [DBG] : 3.8 scrub ok Dec 6 03:19:32 localhost ceph-osd[32665]: log_channel(cluster) log [DBG] : 3.11 scrub starts Dec 6 03:19:32 localhost ceph-osd[32665]: log_channel(cluster) log [DBG] : 3.11 scrub ok Dec 6 03:19:33 localhost 
ceph-osd[31726]: log_channel(cluster) log [DBG] : 5.12 scrub starts Dec 6 03:19:33 localhost ceph-osd[31726]: log_channel(cluster) log [DBG] : 5.12 scrub ok Dec 6 03:19:34 localhost ceph-osd[31726]: log_channel(cluster) log [DBG] : 5.1a scrub starts Dec 6 03:19:34 localhost ceph-osd[31726]: log_channel(cluster) log [DBG] : 5.1a scrub ok Dec 6 03:19:34 localhost python3[55953]: ansible-ansible.legacy.stat Invoked with path=/var/lib/tripleo-config/ceph/ceph.client.manila.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 6 03:19:35 localhost python3[55996]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765009174.3886807-92087-195413774088246/source dest=/var/lib/tripleo-config/ceph/ceph.client.manila.keyring mode=600 _original_basename=ceph.client.manila.keyring follow=False checksum=04fcaa63c42fa3b2b702e4421ebc774041538ebd backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 03:19:35 localhost ceph-osd[32665]: log_channel(cluster) log [DBG] : 2.12 scrub starts Dec 6 03:19:35 localhost ceph-osd[32665]: log_channel(cluster) log [DBG] : 2.12 scrub ok Dec 6 03:19:36 localhost ceph-osd[32665]: osd.4 43 crush map has features 432629239337189376, adjusting msgr requires for clients Dec 6 03:19:36 localhost ceph-osd[32665]: osd.4 43 crush map has features 432629239337189376 was 288514051259245057, adjusting msgr requires for mons Dec 6 03:19:36 localhost ceph-osd[32665]: osd.4 43 crush map has features 3314933000854323200, adjusting msgr requires for osds Dec 6 03:19:36 localhost ceph-osd[32665]: osd.4 pg_epoch: 43 pg[2.1f( empty local-lis/les=41/42 n=0 ec=29/20 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=13.512729645s) [5,3,4] r=2 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 
1177.215209961s@ mbc={}] start_peering_interval up [2,3,4] -> [5,3,4], acting [2,3,4] -> [5,3,4], acting_primary 2 -> 5, up_primary 2 -> 5, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:19:36 localhost ceph-osd[32665]: osd.4 pg_epoch: 43 pg[2.1f( empty local-lis/les=41/42 n=0 ec=29/20 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=13.512590408s) [5,3,4] r=2 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1177.215209961s@ mbc={}] state: transitioning to Stray Dec 6 03:19:37 localhost ceph-osd[31726]: osd.1 43 crush map has features 432629239337189376, adjusting msgr requires for clients Dec 6 03:19:37 localhost ceph-osd[31726]: osd.1 43 crush map has features 432629239337189376 was 288514051259245057, adjusting msgr requires for mons Dec 6 03:19:37 localhost ceph-osd[31726]: osd.1 43 crush map has features 3314933000854323200, adjusting msgr requires for osds Dec 6 03:19:39 localhost python3[56058]: ansible-ansible.legacy.stat Invoked with path=/var/lib/tripleo-config/ceph/ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 6 03:19:40 localhost python3[56101]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765009179.7421753-92087-77202260007466/source dest=/var/lib/tripleo-config/ceph/ceph.conf mode=644 _original_basename=ceph.conf follow=False checksum=0cb3e740065655621c29366f25db5e0ef0002cd5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 03:19:40 localhost ceph-osd[31726]: log_channel(cluster) log [DBG] : 4.2 scrub starts Dec 6 03:19:40 localhost ceph-osd[31726]: log_channel(cluster) log [DBG] : 4.2 scrub ok Dec 6 03:19:41 localhost ceph-osd[31726]: osd.1 pg_epoch: 46 pg[6.0( empty local-lis/les=35/36 n=0 ec=35/35 lis/c=35/35 
les/c/f=36/36/0 sis=46 pruub=9.460670471s) [5,0,1] r=2 lpr=46 pi=[35,46)/1 crt=0'0 mlcod 0'0 active pruub 1181.762695312s@ mbc={}] start_peering_interval up [5,0,1] -> [5,0,1], acting [5,0,1] -> [5,0,1], acting_primary 5 -> 5, up_primary 5 -> 5, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:19:41 localhost ceph-osd[31726]: osd.1 pg_epoch: 46 pg[6.0( empty local-lis/les=35/36 n=0 ec=35/35 lis/c=35/35 les/c/f=36/36/0 sis=46 pruub=9.457298279s) [5,0,1] r=2 lpr=46 pi=[35,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1181.762695312s@ mbc={}] state: transitioning to Stray Dec 6 03:19:42 localhost ceph-osd[31726]: osd.1 pg_epoch: 47 pg[6.18( empty local-lis/les=35/36 n=0 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [5,0,1] r=2 lpr=46 pi=[35,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 6 03:19:42 localhost ceph-osd[31726]: osd.1 pg_epoch: 47 pg[6.1a( empty local-lis/les=35/36 n=0 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [5,0,1] r=2 lpr=46 pi=[35,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 6 03:19:42 localhost ceph-osd[31726]: osd.1 pg_epoch: 47 pg[6.1f( empty local-lis/les=35/36 n=0 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [5,0,1] r=2 lpr=46 pi=[35,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 6 03:19:42 localhost ceph-osd[31726]: osd.1 pg_epoch: 47 pg[6.19( empty local-lis/les=35/36 n=0 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [5,0,1] r=2 lpr=46 pi=[35,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 6 03:19:42 localhost ceph-osd[31726]: osd.1 pg_epoch: 47 pg[6.e( empty local-lis/les=35/36 n=0 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [5,0,1] r=2 lpr=46 pi=[35,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 6 03:19:42 localhost ceph-osd[31726]: osd.1 pg_epoch: 47 pg[6.c( empty local-lis/les=35/36 n=0 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) 
[5,0,1] r=2 lpr=46 pi=[35,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 6 03:19:42 localhost ceph-osd[31726]: osd.1 pg_epoch: 47 pg[6.3( empty local-lis/les=35/36 n=0 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [5,0,1] r=2 lpr=46 pi=[35,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 6 03:19:42 localhost ceph-osd[31726]: osd.1 pg_epoch: 47 pg[6.1( empty local-lis/les=35/36 n=0 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [5,0,1] r=2 lpr=46 pi=[35,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 6 03:19:42 localhost ceph-osd[31726]: osd.1 pg_epoch: 47 pg[6.7( empty local-lis/les=35/36 n=0 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [5,0,1] r=2 lpr=46 pi=[35,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 6 03:19:42 localhost ceph-osd[31726]: osd.1 pg_epoch: 47 pg[6.6( empty local-lis/les=35/36 n=0 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [5,0,1] r=2 lpr=46 pi=[35,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 6 03:19:42 localhost ceph-osd[31726]: osd.1 pg_epoch: 47 pg[6.2( empty local-lis/les=35/36 n=0 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [5,0,1] r=2 lpr=46 pi=[35,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 6 03:19:42 localhost ceph-osd[31726]: osd.1 pg_epoch: 47 pg[6.5( empty local-lis/les=35/36 n=0 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [5,0,1] r=2 lpr=46 pi=[35,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 6 03:19:42 localhost ceph-osd[31726]: osd.1 pg_epoch: 47 pg[6.4( empty local-lis/les=35/36 n=0 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [5,0,1] r=2 lpr=46 pi=[35,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 6 03:19:42 localhost ceph-osd[31726]: osd.1 pg_epoch: 47 pg[6.f( empty local-lis/les=35/36 n=0 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [5,0,1] r=2 lpr=46 
pi=[35,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 6 03:19:42 localhost ceph-osd[31726]: osd.1 pg_epoch: 47 pg[6.8( empty local-lis/les=35/36 n=0 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [5,0,1] r=2 lpr=46 pi=[35,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 6 03:19:42 localhost ceph-osd[31726]: osd.1 pg_epoch: 47 pg[6.a( empty local-lis/les=35/36 n=0 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [5,0,1] r=2 lpr=46 pi=[35,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 6 03:19:42 localhost ceph-osd[31726]: osd.1 pg_epoch: 47 pg[6.1b( empty local-lis/les=35/36 n=0 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [5,0,1] r=2 lpr=46 pi=[35,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 6 03:19:42 localhost ceph-osd[31726]: osd.1 pg_epoch: 47 pg[6.9( empty local-lis/les=35/36 n=0 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [5,0,1] r=2 lpr=46 pi=[35,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 6 03:19:42 localhost ceph-osd[31726]: osd.1 pg_epoch: 47 pg[6.b( empty local-lis/les=35/36 n=0 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [5,0,1] r=2 lpr=46 pi=[35,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 6 03:19:42 localhost ceph-osd[31726]: osd.1 pg_epoch: 47 pg[6.14( empty local-lis/les=35/36 n=0 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [5,0,1] r=2 lpr=46 pi=[35,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 6 03:19:42 localhost ceph-osd[31726]: osd.1 pg_epoch: 47 pg[6.16( empty local-lis/les=35/36 n=0 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [5,0,1] r=2 lpr=46 pi=[35,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 6 03:19:42 localhost ceph-osd[31726]: osd.1 pg_epoch: 47 pg[6.17( empty local-lis/les=35/36 n=0 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [5,0,1] r=2 lpr=46 pi=[35,46)/1 
crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 6 03:19:42 localhost ceph-osd[31726]: osd.1 pg_epoch: 47 pg[6.15( empty local-lis/les=35/36 n=0 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [5,0,1] r=2 lpr=46 pi=[35,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 6 03:19:42 localhost ceph-osd[31726]: osd.1 pg_epoch: 47 pg[6.10( empty local-lis/les=35/36 n=0 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [5,0,1] r=2 lpr=46 pi=[35,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 6 03:19:42 localhost ceph-osd[31726]: osd.1 pg_epoch: 47 pg[6.13( empty local-lis/les=35/36 n=0 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [5,0,1] r=2 lpr=46 pi=[35,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 6 03:19:42 localhost ceph-osd[31726]: osd.1 pg_epoch: 47 pg[6.1c( empty local-lis/les=35/36 n=0 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [5,0,1] r=2 lpr=46 pi=[35,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 6 03:19:42 localhost ceph-osd[31726]: osd.1 pg_epoch: 47 pg[6.12( empty local-lis/les=35/36 n=0 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [5,0,1] r=2 lpr=46 pi=[35,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 6 03:19:42 localhost ceph-osd[31726]: osd.1 pg_epoch: 47 pg[6.1d( empty local-lis/les=35/36 n=0 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [5,0,1] r=2 lpr=46 pi=[35,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 6 03:19:42 localhost ceph-osd[31726]: osd.1 pg_epoch: 47 pg[6.11( empty local-lis/les=35/36 n=0 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [5,0,1] r=2 lpr=46 pi=[35,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 6 03:19:42 localhost ceph-osd[31726]: osd.1 pg_epoch: 47 pg[6.1e( empty local-lis/les=35/36 n=0 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [5,0,1] r=2 lpr=46 pi=[35,46)/1 crt=0'0 mlcod 
0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 6 03:19:42 localhost ceph-osd[31726]: osd.1 pg_epoch: 47 pg[6.d( empty local-lis/les=35/36 n=0 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [5,0,1] r=2 lpr=46 pi=[35,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 6 03:19:43 localhost ceph-osd[31726]: osd.1 pg_epoch: 48 pg[7.0( v 40'39 (0'0,40'39] local-lis/les=37/38 n=22 ec=37/37 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=9.503730774s) [0,1,5] r=1 lpr=48 pi=[37,48)/1 luod=0'0 lua=40'37 crt=40'39 lcod 40'38 mlcod 0'0 active pruub 1183.820800781s@ mbc={}] start_peering_interval up [0,1,5] -> [0,1,5], acting [0,1,5] -> [0,1,5], acting_primary 0 -> 0, up_primary 0 -> 0, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:19:43 localhost ceph-osd[31726]: osd.1 pg_epoch: 48 pg[7.0( v 40'39 lc 0'0 (0'0,40'39] local-lis/les=37/38 n=1 ec=37/37 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=9.501980782s) [0,1,5] r=1 lpr=48 pi=[37,48)/1 crt=40'39 lcod 40'38 mlcod 0'0 unknown NOTIFY pruub 1183.820800781s@ mbc={}] state: transitioning to Stray Dec 6 03:19:44 localhost ceph-osd[31726]: osd.1 pg_epoch: 49 pg[7.c( v 40'39 lc 0'0 (0'0,40'39] local-lis/les=37/38 n=1 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [0,1,5] r=1 lpr=48 pi=[37,48)/1 crt=40'39 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 6 03:19:44 localhost ceph-osd[31726]: osd.1 pg_epoch: 49 pg[7.a( v 40'39 lc 0'0 (0'0,40'39] local-lis/les=37/38 n=1 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [0,1,5] r=1 lpr=48 pi=[37,48)/1 crt=40'39 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 6 03:19:44 localhost ceph-osd[31726]: osd.1 pg_epoch: 49 pg[7.b( v 40'39 lc 0'0 (0'0,40'39] local-lis/les=37/38 n=1 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [0,1,5] r=1 lpr=48 pi=[37,48)/1 crt=40'39 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 6 03:19:44 localhost ceph-osd[31726]: osd.1 pg_epoch: 49 pg[7.9( v 
40'39 lc 0'0 (0'0,40'39] local-lis/les=37/38 n=1 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [0,1,5] r=1 lpr=48 pi=[37,48)/1 crt=40'39 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 6 03:19:44 localhost ceph-osd[31726]: osd.1 pg_epoch: 49 pg[7.8( v 40'39 lc 0'0 (0'0,40'39] local-lis/les=37/38 n=1 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [0,1,5] r=1 lpr=48 pi=[37,48)/1 crt=40'39 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 6 03:19:44 localhost ceph-osd[31726]: osd.1 pg_epoch: 49 pg[7.e( v 40'39 lc 0'0 (0'0,40'39] local-lis/les=37/38 n=1 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [0,1,5] r=1 lpr=48 pi=[37,48)/1 crt=40'39 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 6 03:19:44 localhost ceph-osd[31726]: osd.1 pg_epoch: 49 pg[7.4( v 40'39 lc 0'0 (0'0,40'39] local-lis/les=37/38 n=2 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [0,1,5] r=1 lpr=48 pi=[37,48)/1 crt=40'39 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 6 03:19:44 localhost ceph-osd[31726]: osd.1 pg_epoch: 49 pg[7.5( v 40'39 lc 0'0 (0'0,40'39] local-lis/les=37/38 n=2 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [0,1,5] r=1 lpr=48 pi=[37,48)/1 crt=40'39 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 6 03:19:44 localhost ceph-osd[31726]: osd.1 pg_epoch: 49 pg[7.3( v 40'39 lc 0'0 (0'0,40'39] local-lis/les=37/38 n=2 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [0,1,5] r=1 lpr=48 pi=[37,48)/1 crt=40'39 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 6 03:19:44 localhost ceph-osd[31726]: osd.1 pg_epoch: 49 pg[7.7( v 40'39 lc 0'0 (0'0,40'39] local-lis/les=37/38 n=1 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [0,1,5] r=1 lpr=48 pi=[37,48)/1 crt=40'39 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 6 03:19:44 localhost ceph-osd[31726]: osd.1 pg_epoch: 49 pg[7.6( v 40'39 lc 0'0 (0'0,40'39] local-lis/les=37/38 n=2 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [0,1,5] r=1 
lpr=48 pi=[37,48)/1 crt=40'39 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 6 03:19:44 localhost ceph-osd[31726]: osd.1 pg_epoch: 49 pg[7.1( v 40'39 (0'0,40'39] local-lis/les=37/38 n=2 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [0,1,5] r=1 lpr=48 pi=[37,48)/1 crt=40'39 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 6 03:19:44 localhost ceph-osd[31726]: osd.1 pg_epoch: 49 pg[7.d( v 40'39 lc 0'0 (0'0,40'39] local-lis/les=37/38 n=1 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [0,1,5] r=1 lpr=48 pi=[37,48)/1 crt=40'39 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 6 03:19:44 localhost ceph-osd[31726]: osd.1 pg_epoch: 49 pg[7.2( v 40'39 lc 0'0 (0'0,40'39] local-lis/les=37/38 n=2 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [0,1,5] r=1 lpr=48 pi=[37,48)/1 crt=40'39 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 6 03:19:44 localhost ceph-osd[31726]: osd.1 pg_epoch: 49 pg[7.f( v 40'39 lc 0'0 (0'0,40'39] local-lis/les=37/38 n=1 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [0,1,5] r=1 lpr=48 pi=[37,48)/1 crt=40'39 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Dec 6 03:19:44 localhost ceph-osd[32665]: log_channel(cluster) log [DBG] : 3.1b scrub starts Dec 6 03:19:44 localhost ceph-osd[32665]: log_channel(cluster) log [DBG] : 3.1b scrub ok Dec 6 03:19:45 localhost python3[56163]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/config_step.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 6 03:19:46 localhost python3[56208]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/config_step.json force=True mode=0600 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765009185.5373125-92412-279512322772049/source _original_basename=tmp57evwyi0 follow=False checksum=f17091ee142621a3c8290c8c96b5b52d67b3a864 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER 
validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 03:19:46 localhost ceph-osd[31726]: log_channel(cluster) log [DBG] : 4.3 scrub starts Dec 6 03:19:46 localhost ceph-osd[32665]: log_channel(cluster) log [DBG] : 2.18 scrub starts Dec 6 03:19:46 localhost ceph-osd[31726]: log_channel(cluster) log [DBG] : 4.3 scrub ok Dec 6 03:19:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. Dec 6 03:19:47 localhost systemd[1]: tmp-crun.vu8IPH.mount: Deactivated successfully. Dec 6 03:19:47 localhost podman[56271]: 2025-12-06 08:19:47.446532368 +0000 UTC m=+0.082722959 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:46Z, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, release=1761123044, version=17.1.12, batch=17.1_20251118.1, io.buildah.version=1.41.4, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, managed_by=tripleo_ansible, architecture=x86_64) Dec 6 03:19:47 localhost ceph-osd[31726]: log_channel(cluster) log [DBG] : 4.19 deep-scrub starts Dec 6 03:19:47 localhost python3[56270]: ansible-ansible.legacy.stat Invoked with path=/usr/local/sbin/containers-tmpwatch follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 6 03:19:47 localhost ceph-osd[31726]: log_channel(cluster) log [DBG] : 4.19 deep-scrub ok Dec 6 03:19:47 localhost podman[56271]: 2025-12-06 08:19:47.630686345 +0000 UTC m=+0.266876996 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-qdrouterd, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, url=https://www.redhat.com, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., vcs-type=git, managed_by=tripleo_ansible, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, release=1761123044) Dec 6 03:19:47 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 03:19:47 localhost ceph-osd[32665]: log_channel(cluster) log [DBG] : 2.10 scrub starts Dec 6 03:19:47 localhost ceph-osd[32665]: log_channel(cluster) log [DBG] : 2.10 scrub ok Dec 6 03:19:47 localhost python3[56342]: ansible-ansible.legacy.copy Invoked with dest=/usr/local/sbin/containers-tmpwatch group=root mode=493 owner=root src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765009187.1791785-92498-160004399388075/source _original_basename=tmp0ven4ftp follow=False checksum=84397b037dad9813fed388c4bcdd4871f384cd22 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 03:19:48 localhost python3[56372]: ansible-cron Invoked with job=/usr/local/sbin/containers-tmpwatch name=Remove old logs special_time=daily user=root state=present backup=False minute=* hour=* day=* month=* weekday=* disabled=False env=False cron_file=None insertafter=None insertbefore=None Dec 6 03:19:48 localhost ceph-osd[31726]: log_channel(cluster) log [DBG] : 5.4 scrub starts Dec 6 03:19:48 localhost ceph-osd[31726]: log_channel(cluster) log [DBG] : 5.4 scrub ok Dec 6 03:19:48 localhost ceph-osd[32665]: log_channel(cluster) log [DBG] : 2.f scrub starts Dec 6 03:19:48 localhost ceph-osd[32665]: log_channel(cluster) log [DBG] : 2.f scrub ok Dec 6 03:19:48 localhost python3[56390]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_2 follow=False get_md5=False get_checksum=True get_mime=True 
get_attributes=True checksum_algorithm=sha1 Dec 6 03:19:49 localhost ceph-osd[31726]: osd.1 pg_epoch: 50 pg[6.1e( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.028770447s) [4,0,2] r=-1 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 active pruub 1189.329711914s@ mbc={}] start_peering_interval up [5,0,1] -> [4,0,2], acting [5,0,1] -> [4,0,2], acting_primary 5 -> 4, up_primary 5 -> 4, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:19:49 localhost ceph-osd[31726]: osd.1 pg_epoch: 50 pg[6.13( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.029058456s) [2,4,3] r=-1 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 active pruub 1189.330200195s@ mbc={}] start_peering_interval up [5,0,1] -> [2,4,3], acting [5,0,1] -> [2,4,3], acting_primary 5 -> 2, up_primary 5 -> 2, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:19:49 localhost ceph-osd[31726]: osd.1 pg_epoch: 50 pg[6.1e( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.028700829s) [4,0,2] r=-1 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.329711914s@ mbc={}] state: transitioning to Stray Dec 6 03:19:49 localhost ceph-osd[31726]: osd.1 pg_epoch: 50 pg[6.13( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.029010773s) [2,4,3] r=-1 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.330200195s@ mbc={}] state: transitioning to Stray Dec 6 03:19:49 localhost ceph-osd[31726]: osd.1 pg_epoch: 50 pg[6.1c( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.028232574s) [1,5,3] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 active pruub 1189.329223633s@ mbc={}] start_peering_interval up [5,0,1] -> [1,5,3], acting [5,0,1] -> [1,5,3], acting_primary 5 -> 1, up_primary 5 -> 1, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:19:49 localhost 
ceph-osd[31726]: osd.1 pg_epoch: 50 pg[6.d( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.028406143s) [0,5,1] r=2 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 active pruub 1189.330200195s@ mbc={}] start_peering_interval up [5,0,1] -> [0,5,1], acting [5,0,1] -> [0,5,1], acting_primary 5 -> 0, up_primary 5 -> 0, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:19:49 localhost ceph-osd[31726]: osd.1 pg_epoch: 50 pg[6.1d( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.023814201s) [5,4,3] r=-1 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 active pruub 1189.325195312s@ mbc={}] start_peering_interval up [5,0,1] -> [5,4,3], acting [5,0,1] -> [5,4,3], acting_primary 5 -> 5, up_primary 5 -> 5, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:19:49 localhost ceph-osd[31726]: osd.1 pg_epoch: 50 pg[6.12( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.026974678s) [4,0,2] r=-1 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 active pruub 1189.328857422s@ mbc={}] start_peering_interval up [5,0,1] -> [4,0,2], acting [5,0,1] -> [4,0,2], acting_primary 5 -> 4, up_primary 5 -> 4, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:19:49 localhost ceph-osd[31726]: osd.1 pg_epoch: 50 pg[6.11( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.027729988s) [5,1,3] r=1 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 active pruub 1189.329589844s@ mbc={}] start_peering_interval up [5,0,1] -> [5,1,3], acting [5,0,1] -> [5,1,3], acting_primary 5 -> 5, up_primary 5 -> 5, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:19:49 localhost ceph-osd[31726]: osd.1 pg_epoch: 50 pg[6.11( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.027685165s) [5,1,3] r=1 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 
1189.329589844s@ mbc={}] state: transitioning to Stray Dec 6 03:19:49 localhost ceph-osd[31726]: osd.1 pg_epoch: 50 pg[6.1c( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.028232574s) [1,5,3] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown pruub 1189.329223633s@ mbc={}] state: transitioning to Primary Dec 6 03:19:49 localhost ceph-osd[32665]: osd.4 pg_epoch: 50 pg[6.1e( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50) [4,0,2] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 6 03:19:49 localhost ceph-osd[31726]: osd.1 pg_epoch: 50 pg[6.12( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.026546478s) [4,0,2] r=-1 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.328857422s@ mbc={}] state: transitioning to Stray Dec 6 03:19:49 localhost ceph-osd[31726]: osd.1 pg_epoch: 50 pg[6.10( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.027506828s) [2,1,3] r=1 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 active pruub 1189.329956055s@ mbc={}] start_peering_interval up [5,0,1] -> [2,1,3], acting [5,0,1] -> [2,1,3], acting_primary 5 -> 2, up_primary 5 -> 2, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:19:49 localhost ceph-osd[31726]: osd.1 pg_epoch: 50 pg[6.10( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.027451515s) [2,1,3] r=1 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.329956055s@ mbc={}] state: transitioning to Stray Dec 6 03:19:49 localhost ceph-osd[31726]: osd.1 pg_epoch: 50 pg[6.17( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.027018547s) [1,2,0] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 active pruub 1189.329589844s@ mbc={}] start_peering_interval up [5,0,1] -> [1,2,0], acting [5,0,1] -> [1,2,0], acting_primary 5 -> 1, up_primary 5 -> 1, role 2 -> 0, features 
acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:19:49 localhost ceph-osd[31726]: osd.1 pg_epoch: 50 pg[6.d( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.027871132s) [0,5,1] r=2 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.330200195s@ mbc={}] state: transitioning to Stray Dec 6 03:19:49 localhost ceph-osd[32665]: osd.4 pg_epoch: 50 pg[6.12( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50) [4,0,2] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 6 03:19:49 localhost ceph-osd[31726]: osd.1 pg_epoch: 50 pg[6.1d( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.022918701s) [5,4,3] r=-1 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.325195312s@ mbc={}] state: transitioning to Stray Dec 6 03:19:49 localhost ceph-osd[31726]: osd.1 pg_epoch: 50 pg[6.16( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.027281761s) [5,0,4] r=-1 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 active pruub 1189.330078125s@ mbc={}] start_peering_interval up [5,0,1] -> [5,0,4], acting [5,0,1] -> [5,0,4], acting_primary 5 -> 5, up_primary 5 -> 5, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:19:49 localhost ceph-osd[31726]: osd.1 pg_epoch: 50 pg[6.17( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.027018547s) [1,2,0] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown pruub 1189.329589844s@ mbc={}] state: transitioning to Primary Dec 6 03:19:49 localhost ceph-osd[31726]: osd.1 pg_epoch: 50 pg[6.16( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.026957512s) [5,0,4] r=-1 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.330078125s@ mbc={}] state: transitioning to Stray Dec 6 03:19:49 localhost ceph-osd[31726]: osd.1 pg_epoch: 50 pg[6.15( empty local-lis/les=46/47 n=0 
ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.026705742s) [3,4,5] r=-1 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 active pruub 1189.329711914s@ mbc={}] start_peering_interval up [5,0,1] -> [3,4,5], acting [5,0,1] -> [3,4,5], acting_primary 5 -> 3, up_primary 5 -> 3, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:19:49 localhost ceph-osd[31726]: osd.1 pg_epoch: 50 pg[6.15( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.026460648s) [3,4,5] r=-1 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.329711914s@ mbc={}] state: transitioning to Stray Dec 6 03:19:49 localhost ceph-osd[31726]: osd.1 pg_epoch: 50 pg[6.14( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.025653839s) [5,3,4] r=-1 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 active pruub 1189.329711914s@ mbc={}] start_peering_interval up [5,0,1] -> [5,3,4], acting [5,0,1] -> [5,3,4], acting_primary 5 -> 5, up_primary 5 -> 5, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:19:49 localhost ceph-osd[31726]: osd.1 pg_epoch: 50 pg[6.a( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.025733948s) [0,5,4] r=-1 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 active pruub 1189.329956055s@ mbc={}] start_peering_interval up [5,0,1] -> [0,5,4], acting [5,0,1] -> [0,5,4], acting_primary 5 -> 0, up_primary 5 -> 0, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:19:49 localhost ceph-osd[31726]: osd.1 pg_epoch: 50 pg[6.a( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.025692940s) [0,5,4] r=-1 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.329956055s@ mbc={}] state: transitioning to Stray Dec 6 03:19:49 localhost ceph-osd[31726]: osd.1 pg_epoch: 50 pg[6.9( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.020556450s) [5,1,0] r=1 
lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 active pruub 1189.324951172s@ mbc={}] start_peering_interval up [5,0,1] -> [5,1,0], acting [5,0,1] -> [5,1,0], acting_primary 5 -> 5, up_primary 5 -> 5, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:19:49 localhost ceph-osd[31726]: osd.1 pg_epoch: 50 pg[6.f( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.024387360s) [5,4,0] r=-1 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 active pruub 1189.328857422s@ mbc={}] start_peering_interval up [5,0,1] -> [5,4,0], acting [5,0,1] -> [5,4,0], acting_primary 5 -> 5, up_primary 5 -> 5, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:19:49 localhost ceph-osd[31726]: osd.1 pg_epoch: 50 pg[6.9( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.020507812s) [5,1,0] r=1 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.324951172s@ mbc={}] state: transitioning to Stray Dec 6 03:19:49 localhost ceph-osd[31726]: osd.1 pg_epoch: 50 pg[6.b( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.020248413s) [2,3,1] r=2 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 active pruub 1189.324829102s@ mbc={}] start_peering_interval up [5,0,1] -> [2,3,1], acting [5,0,1] -> [2,3,1], acting_primary 5 -> 2, up_primary 5 -> 2, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:19:49 localhost ceph-osd[31726]: osd.1 pg_epoch: 50 pg[6.b( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.020225525s) [2,3,1] r=2 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.324829102s@ mbc={}] state: transitioning to Stray Dec 6 03:19:49 localhost ceph-osd[31726]: osd.1 pg_epoch: 50 pg[6.f( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.024341583s) [5,4,0] r=-1 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.328857422s@ mbc={}] 
state: transitioning to Stray Dec 6 03:19:49 localhost ceph-osd[31726]: osd.1 pg_epoch: 50 pg[6.8( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.020699501s) [3,1,5] r=1 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 active pruub 1189.325561523s@ mbc={}] start_peering_interval up [5,0,1] -> [3,1,5], acting [5,0,1] -> [3,1,5], acting_primary 5 -> 3, up_primary 5 -> 3, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:19:49 localhost ceph-osd[31726]: osd.1 pg_epoch: 50 pg[6.5( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.023706436s) [0,1,2] r=1 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 active pruub 1189.328735352s@ mbc={}] start_peering_interval up [5,0,1] -> [0,1,2], acting [5,0,1] -> [0,1,2], acting_primary 5 -> 0, up_primary 5 -> 0, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:19:49 localhost ceph-osd[31726]: osd.1 pg_epoch: 50 pg[6.2( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.023640633s) [3,5,4] r=-1 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 active pruub 1189.328735352s@ mbc={}] start_peering_interval up [5,0,1] -> [3,5,4], acting [5,0,1] -> [3,5,4], acting_primary 5 -> 3, up_primary 5 -> 3, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:19:49 localhost ceph-osd[31726]: osd.1 pg_epoch: 50 pg[6.2( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.023591042s) [3,5,4] r=-1 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.328735352s@ mbc={}] state: transitioning to Stray Dec 6 03:19:49 localhost ceph-osd[31726]: osd.1 pg_epoch: 50 pg[6.6( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.020242691s) [2,3,1] r=2 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 active pruub 1189.325439453s@ mbc={}] start_peering_interval up [5,0,1] -> [2,3,1], acting [5,0,1] -> [2,3,1], acting_primary 5 -> 
2, up_primary 5 -> 2, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:19:49 localhost ceph-osd[31726]: osd.1 pg_epoch: 50 pg[6.6( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.020191193s) [2,3,1] r=2 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.325439453s@ mbc={}] state: transitioning to Stray Dec 6 03:19:49 localhost ceph-osd[31726]: osd.1 pg_epoch: 50 pg[6.7( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.019382477s) [3,5,4] r=-1 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 active pruub 1189.324829102s@ mbc={}] start_peering_interval up [5,0,1] -> [3,5,4], acting [5,0,1] -> [3,5,4], acting_primary 5 -> 3, up_primary 5 -> 3, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:19:49 localhost ceph-osd[31726]: osd.1 pg_epoch: 50 pg[6.14( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.025566101s) [5,3,4] r=-1 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.329711914s@ mbc={}] state: transitioning to Stray Dec 6 03:19:49 localhost ceph-osd[31726]: osd.1 pg_epoch: 50 pg[6.1( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.023789406s) [1,0,2] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 active pruub 1189.329223633s@ mbc={}] start_peering_interval up [5,0,1] -> [1,0,2], acting [5,0,1] -> [1,0,2], acting_primary 5 -> 1, up_primary 5 -> 1, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:19:49 localhost ceph-osd[31726]: osd.1 pg_epoch: 50 pg[6.1( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.023789406s) [1,0,2] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown pruub 1189.329223633s@ mbc={}] state: transitioning to Primary Dec 6 03:19:49 localhost ceph-osd[31726]: osd.1 pg_epoch: 50 pg[6.7( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 
pruub=9.019342422s) [3,5,4] r=-1 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.324829102s@ mbc={}] state: transitioning to Stray Dec 6 03:19:49 localhost ceph-osd[31726]: osd.1 pg_epoch: 50 pg[6.8( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.019848824s) [3,1,5] r=1 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.325561523s@ mbc={}] state: transitioning to Stray Dec 6 03:19:49 localhost ceph-osd[31726]: osd.1 pg_epoch: 50 pg[6.c( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.022847176s) [2,0,4] r=-1 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 active pruub 1189.328735352s@ mbc={}] start_peering_interval up [5,0,1] -> [2,0,4], acting [5,0,1] -> [2,0,4], acting_primary 5 -> 2, up_primary 5 -> 2, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:19:49 localhost ceph-osd[31726]: osd.1 pg_epoch: 50 pg[6.3( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.022686005s) [0,4,2] r=-1 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 active pruub 1189.328735352s@ mbc={}] start_peering_interval up [5,0,1] -> [0,4,2], acting [5,0,1] -> [0,4,2], acting_primary 5 -> 0, up_primary 5 -> 0, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:19:49 localhost ceph-osd[31726]: osd.1 pg_epoch: 50 pg[6.5( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.022943497s) [0,1,2] r=1 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.328735352s@ mbc={}] state: transitioning to Stray Dec 6 03:19:49 localhost ceph-osd[31726]: osd.1 pg_epoch: 50 pg[6.3( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.022561073s) [0,4,2] r=-1 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.328735352s@ mbc={}] state: transitioning to Stray Dec 6 03:19:49 localhost ceph-osd[31726]: osd.1 pg_epoch: 50 pg[6.e( empty 
local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.018326759s) [3,2,4] r=-1 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 active pruub 1189.324707031s@ mbc={}] start_peering_interval up [5,0,1] -> [3,2,4], acting [5,0,1] -> [3,2,4], acting_primary 5 -> 3, up_primary 5 -> 3, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:19:49 localhost ceph-osd[31726]: osd.1 pg_epoch: 50 pg[6.e( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.018283844s) [3,2,4] r=-1 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.324707031s@ mbc={}] state: transitioning to Stray Dec 6 03:19:49 localhost ceph-osd[31726]: osd.1 pg_epoch: 50 pg[6.18( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.018775940s) [5,3,4] r=-1 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 active pruub 1189.325317383s@ mbc={}] start_peering_interval up [5,0,1] -> [5,3,4], acting [5,0,1] -> [5,3,4], acting_primary 5 -> 5, up_primary 5 -> 5, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:19:49 localhost ceph-osd[31726]: osd.1 pg_epoch: 50 pg[6.19( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.017786980s) [0,5,1] r=2 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 active pruub 1189.324462891s@ mbc={}] start_peering_interval up [5,0,1] -> [0,5,1], acting [5,0,1] -> [0,5,1], acting_primary 5 -> 0, up_primary 5 -> 0, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:19:49 localhost ceph-osd[31726]: osd.1 pg_epoch: 50 pg[6.18( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.018623352s) [5,3,4] r=-1 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.325317383s@ mbc={}] state: transitioning to Stray Dec 6 03:19:49 localhost ceph-osd[31726]: osd.1 pg_epoch: 50 pg[6.19( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 
pruub=9.017730713s) [0,5,1] r=2 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.324462891s@ mbc={}] state: transitioning to Stray Dec 6 03:19:49 localhost ceph-osd[31726]: osd.1 pg_epoch: 50 pg[6.1b( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.018284798s) [1,3,2] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 active pruub 1189.325317383s@ mbc={}] start_peering_interval up [5,0,1] -> [1,3,2], acting [5,0,1] -> [1,3,2], acting_primary 5 -> 1, up_primary 5 -> 1, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:19:49 localhost ceph-osd[31726]: osd.1 pg_epoch: 50 pg[6.1b( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.018284798s) [1,3,2] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown pruub 1189.325317383s@ mbc={}] state: transitioning to Primary Dec 6 03:19:49 localhost ceph-osd[31726]: osd.1 pg_epoch: 50 pg[6.1f( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.017577171s) [5,4,0] r=-1 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 active pruub 1189.324462891s@ mbc={}] start_peering_interval up [5,0,1] -> [5,4,0], acting [5,0,1] -> [5,4,0], acting_primary 5 -> 5, up_primary 5 -> 5, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:19:49 localhost ceph-osd[31726]: osd.1 pg_epoch: 50 pg[6.1f( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.017308235s) [5,4,0] r=-1 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.324462891s@ mbc={}] state: transitioning to Stray Dec 6 03:19:49 localhost ceph-osd[31726]: osd.1 pg_epoch: 50 pg[6.c( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.022798538s) [2,0,4] r=-1 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.328735352s@ mbc={}] state: transitioning to Stray Dec 6 03:19:49 localhost ceph-osd[31726]: osd.1 pg_epoch: 50 pg[6.1a( empty local-lis/les=46/47 
n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.022010803s) [3,4,5] r=-1 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 active pruub 1189.330078125s@ mbc={}] start_peering_interval up [5,0,1] -> [3,4,5], acting [5,0,1] -> [3,4,5], acting_primary 5 -> 3, up_primary 5 -> 3, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:19:49 localhost ceph-osd[31726]: osd.1 pg_epoch: 50 pg[6.1a( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.021780014s) [3,4,5] r=-1 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.330078125s@ mbc={}] state: transitioning to Stray Dec 6 03:19:49 localhost ceph-osd[32665]: log_channel(cluster) log [DBG] : 2.1d deep-scrub starts Dec 6 03:19:49 localhost ceph-osd[32665]: log_channel(cluster) log [DBG] : 2.1d deep-scrub ok Dec 6 03:19:50 localhost ceph-osd[32665]: osd.4 pg_epoch: 50 pg[6.1d( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50) [5,4,3] r=1 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 6 03:19:50 localhost ceph-osd[32665]: osd.4 pg_epoch: 50 pg[6.18( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50) [5,3,4] r=2 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 6 03:19:50 localhost ceph-osd[32665]: osd.4 pg_epoch: 50 pg[6.f( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50) [5,4,0] r=1 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 6 03:19:50 localhost ceph-osd[32665]: osd.4 pg_epoch: 50 pg[6.e( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50) [3,2,4] r=2 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 6 03:19:50 localhost ceph-osd[32665]: osd.4 pg_epoch: 50 pg[6.16( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50) [5,0,4] r=2 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] 
state: transitioning to Stray Dec 6 03:19:50 localhost ceph-osd[32665]: osd.4 pg_epoch: 50 pg[6.15( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50) [3,4,5] r=1 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 6 03:19:50 localhost ceph-osd[32665]: osd.4 pg_epoch: 50 pg[6.14( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50) [5,3,4] r=2 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 6 03:19:50 localhost ceph-osd[32665]: osd.4 pg_epoch: 50 pg[6.c( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50) [2,0,4] r=2 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 6 03:19:50 localhost ceph-osd[32665]: osd.4 pg_epoch: 50 pg[6.1f( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50) [5,4,0] r=1 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 6 03:19:50 localhost ceph-osd[32665]: osd.4 pg_epoch: 50 pg[6.7( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50) [3,5,4] r=2 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 6 03:19:50 localhost ceph-osd[32665]: osd.4 pg_epoch: 50 pg[6.13( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50) [2,4,3] r=1 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 6 03:19:50 localhost ceph-osd[32665]: osd.4 pg_epoch: 50 pg[6.2( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50) [3,5,4] r=2 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 6 03:19:50 localhost ceph-osd[32665]: osd.4 pg_epoch: 50 pg[6.1a( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50) [3,4,5] r=1 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 6 03:19:50 localhost ceph-osd[32665]: osd.4 pg_epoch: 50 
pg[6.a( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50) [0,5,4] r=2 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 6 03:19:50 localhost ceph-osd[32665]: osd.4 pg_epoch: 50 pg[6.3( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50) [0,4,2] r=1 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 6 03:19:50 localhost ceph-osd[31726]: osd.1 pg_epoch: 51 pg[6.1c( empty local-lis/les=50/51 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50) [1,5,3] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 6 03:19:50 localhost ceph-osd[31726]: osd.1 pg_epoch: 51 pg[6.1b( empty local-lis/les=50/51 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50) [1,3,2] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 6 03:19:50 localhost ceph-osd[31726]: osd.1 pg_epoch: 51 pg[6.1( empty local-lis/les=50/51 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50) [1,0,2] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 6 03:19:50 localhost ceph-osd[32665]: osd.4 pg_epoch: 51 pg[6.1e( empty local-lis/les=50/51 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50) [4,0,2] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 6 03:19:50 localhost ceph-osd[32665]: osd.4 pg_epoch: 51 pg[6.12( empty local-lis/les=50/51 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50) [4,0,2] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 6 03:19:50 localhost ceph-osd[31726]: osd.1 pg_epoch: 51 pg[6.17( empty local-lis/les=50/51 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50) [1,2,0] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete 
Dec 6 03:19:50 localhost ansible-async_wrapper.py[56562]: Invoked with 102559012711 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1765009189.9660482-92654-46361919254174/AnsiballZ_command.py _ Dec 6 03:19:50 localhost ansible-async_wrapper.py[56565]: Starting module and watcher Dec 6 03:19:50 localhost ansible-async_wrapper.py[56565]: Start watching 56566 (3600) Dec 6 03:19:50 localhost ansible-async_wrapper.py[56566]: Start module (56566) Dec 6 03:19:50 localhost ansible-async_wrapper.py[56562]: Return async_wrapper task started. Dec 6 03:19:50 localhost python3[56587]: ansible-ansible.legacy.async_status Invoked with jid=102559012711.56562 mode=status _async_dir=/tmp/.ansible_async Dec 6 03:19:51 localhost ceph-osd[32665]: log_channel(cluster) log [DBG] : 2.1c scrub starts Dec 6 03:19:51 localhost ceph-osd[32665]: log_channel(cluster) log [DBG] : 2.1c scrub ok Dec 6 03:19:52 localhost ceph-osd[32665]: log_channel(cluster) log [DBG] : 6.12 scrub starts Dec 6 03:19:52 localhost ceph-osd[31726]: log_channel(cluster) log [DBG] : 5.8 scrub starts Dec 6 03:19:52 localhost ceph-osd[32665]: log_channel(cluster) log [DBG] : 6.12 scrub ok Dec 6 03:19:53 localhost ceph-osd[32665]: log_channel(cluster) log [DBG] : 6.1e scrub starts Dec 6 03:19:53 localhost ceph-osd[32665]: log_channel(cluster) log [DBG] : 6.1e scrub ok Dec 6 03:19:54 localhost puppet-user[56582]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5 Dec 6 03:19:54 localhost puppet-user[56582]: (file: /etc/puppet/hiera.yaml) Dec 6 03:19:54 localhost puppet-user[56582]: Warning: Undefined variable '::deploy_config_name'; Dec 6 03:19:54 localhost puppet-user[56582]: (file & line not available) Dec 6 03:19:54 localhost puppet-user[56582]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. 
See https://puppet.com/docs/puppet/7.10/deprecated_language.html Dec 6 03:19:54 localhost puppet-user[56582]: (file & line not available) Dec 6 03:19:54 localhost puppet-user[56582]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, line: 89, column: 8) Dec 6 03:19:54 localhost puppet-user[56582]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/packages.pp, line: 39, column: 69) Dec 6 03:19:54 localhost puppet-user[56582]: Notice: Compiled catalog for np0005548789.localdomain in environment production in 0.13 seconds Dec 6 03:19:54 localhost puppet-user[56582]: Notice: Applied catalog in 0.03 seconds Dec 6 03:19:54 localhost puppet-user[56582]: Application: Dec 6 03:19:54 localhost puppet-user[56582]: Initial environment: production Dec 6 03:19:54 localhost puppet-user[56582]: Converged environment: production Dec 6 03:19:54 localhost puppet-user[56582]: Run mode: user Dec 6 03:19:54 localhost puppet-user[56582]: Changes: Dec 6 03:19:54 localhost puppet-user[56582]: Events: Dec 6 03:19:54 localhost puppet-user[56582]: Resources: Dec 6 03:19:54 localhost puppet-user[56582]: Total: 10 Dec 6 03:19:54 localhost puppet-user[56582]: Time: Dec 6 03:19:54 localhost puppet-user[56582]: Schedule: 0.00 Dec 6 03:19:54 localhost puppet-user[56582]: File: 0.00 Dec 6 03:19:54 localhost puppet-user[56582]: Exec: 0.00 Dec 6 03:19:54 localhost puppet-user[56582]: Augeas: 0.01 Dec 6 03:19:54 localhost puppet-user[56582]: Transaction evaluation: 0.02 Dec 6 03:19:54 localhost puppet-user[56582]: Catalog application: 0.03 Dec 6 03:19:54 localhost puppet-user[56582]: Config retrieval: 0.16 Dec 6 03:19:54 localhost puppet-user[56582]: Last run: 1765009194 Dec 6 03:19:54 localhost puppet-user[56582]: Filebucket: 0.00 Dec 6 03:19:54 localhost puppet-user[56582]: Total: 0.04 Dec 6 03:19:54 localhost puppet-user[56582]: Version: Dec 6 03:19:54 localhost 
puppet-user[56582]: Config: 1765009194 Dec 6 03:19:54 localhost puppet-user[56582]: Puppet: 7.10.0 Dec 6 03:19:54 localhost ansible-async_wrapper.py[56566]: Module complete (56566) Dec 6 03:19:55 localhost ansible-async_wrapper.py[56565]: Done in kid B. Dec 6 03:19:56 localhost ceph-osd[31726]: osd.1 pg_epoch: 52 pg[7.6( v 40'39 (0'0,40'39] local-lis/les=48/49 n=2 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=11.540736198s) [2,1,0] r=1 lpr=52 pi=[48,52)/1 luod=0'0 crt=40'39 lcod 0'0 mlcod 0'0 active pruub 1199.366455078s@ mbc={}] start_peering_interval up [0,1,5] -> [2,1,0], acting [0,1,5] -> [2,1,0], acting_primary 0 -> 2, up_primary 0 -> 2, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:19:56 localhost ceph-osd[31726]: osd.1 pg_epoch: 52 pg[7.a( v 40'39 (0'0,40'39] local-lis/les=48/49 n=1 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=11.540463448s) [2,1,0] r=1 lpr=52 pi=[48,52)/1 luod=0'0 crt=40'39 lcod 0'0 mlcod 0'0 active pruub 1199.366210938s@ mbc={}] start_peering_interval up [0,1,5] -> [2,1,0], acting [0,1,5] -> [2,1,0], acting_primary 0 -> 2, up_primary 0 -> 2, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:19:56 localhost ceph-osd[31726]: osd.1 pg_epoch: 52 pg[7.2( v 40'39 (0'0,40'39] local-lis/les=48/49 n=2 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=11.540756226s) [2,1,0] r=1 lpr=52 pi=[48,52)/1 luod=0'0 crt=40'39 lcod 0'0 mlcod 0'0 active pruub 1199.366577148s@ mbc={}] start_peering_interval up [0,1,5] -> [2,1,0], acting [0,1,5] -> [2,1,0], acting_primary 0 -> 2, up_primary 0 -> 2, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:19:56 localhost ceph-osd[31726]: osd.1 pg_epoch: 52 pg[7.6( v 40'39 (0'0,40'39] local-lis/les=48/49 n=2 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=11.540627480s) [2,1,0] r=1 lpr=52 pi=[48,52)/1 crt=40'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1199.366455078s@ mbc={}] state: transitioning to 
Stray Dec 6 03:19:56 localhost ceph-osd[31726]: osd.1 pg_epoch: 52 pg[7.a( v 40'39 (0'0,40'39] local-lis/les=48/49 n=1 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=11.540381432s) [2,1,0] r=1 lpr=52 pi=[48,52)/1 crt=40'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1199.366210938s@ mbc={}] state: transitioning to Stray Dec 6 03:19:56 localhost ceph-osd[31726]: osd.1 pg_epoch: 52 pg[7.2( v 40'39 (0'0,40'39] local-lis/les=48/49 n=2 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=11.540667534s) [2,1,0] r=1 lpr=52 pi=[48,52)/1 crt=40'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1199.366577148s@ mbc={}] state: transitioning to Stray Dec 6 03:19:56 localhost ceph-osd[31726]: osd.1 pg_epoch: 52 pg[7.e( v 40'39 (0'0,40'39] local-lis/les=48/49 n=1 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=11.541256905s) [2,1,0] r=1 lpr=52 pi=[48,52)/1 luod=0'0 crt=40'39 lcod 0'0 mlcod 0'0 active pruub 1199.366943359s@ mbc={}] start_peering_interval up [0,1,5] -> [2,1,0], acting [0,1,5] -> [2,1,0], acting_primary 0 -> 2, up_primary 0 -> 2, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:19:56 localhost ceph-osd[31726]: osd.1 pg_epoch: 52 pg[7.e( v 40'39 (0'0,40'39] local-lis/les=48/49 n=1 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=11.540686607s) [2,1,0] r=1 lpr=52 pi=[48,52)/1 crt=40'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1199.366943359s@ mbc={}] state: transitioning to Stray Dec 6 03:19:57 localhost ceph-osd[32665]: log_channel(cluster) log [DBG] : 5.e deep-scrub starts Dec 6 03:19:57 localhost ceph-osd[32665]: log_channel(cluster) log [DBG] : 5.e deep-scrub ok Dec 6 03:19:59 localhost ceph-osd[31726]: osd.1 pg_epoch: 54 pg[7.b( v 40'39 (0'0,40'39] local-lis/les=48/49 n=1 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=54 pruub=9.096426964s) [5,3,4] r=-1 lpr=54 pi=[48,54)/1 luod=0'0 crt=40'39 lcod 0'0 mlcod 0'0 active pruub 1199.366455078s@ mbc={}] start_peering_interval up [0,1,5] -> [5,3,4], acting [0,1,5] -> [5,3,4], acting_primary 0 -> 
5, up_primary 0 -> 5, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:19:59 localhost ceph-osd[31726]: osd.1 pg_epoch: 54 pg[7.b( v 40'39 (0'0,40'39] local-lis/les=48/49 n=1 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=54 pruub=9.096349716s) [5,3,4] r=-1 lpr=54 pi=[48,54)/1 crt=40'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1199.366455078s@ mbc={}] state: transitioning to Stray Dec 6 03:19:59 localhost ceph-osd[31726]: osd.1 pg_epoch: 54 pg[7.7( v 40'39 (0'0,40'39] local-lis/les=48/49 n=1 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=54 pruub=9.096376419s) [5,3,4] r=-1 lpr=54 pi=[48,54)/1 luod=0'0 crt=40'39 lcod 0'0 mlcod 0'0 active pruub 1199.366455078s@ mbc={}] start_peering_interval up [0,1,5] -> [5,3,4], acting [0,1,5] -> [5,3,4], acting_primary 0 -> 5, up_primary 0 -> 5, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:19:59 localhost ceph-osd[31726]: osd.1 pg_epoch: 54 pg[7.f( v 40'39 (0'0,40'39] local-lis/les=48/49 n=1 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=54 pruub=9.096095085s) [5,3,4] r=-1 lpr=54 pi=[48,54)/1 luod=0'0 crt=40'39 lcod 0'0 mlcod 0'0 active pruub 1199.366210938s@ mbc={}] start_peering_interval up [0,1,5] -> [5,3,4], acting [0,1,5] -> [5,3,4], acting_primary 0 -> 5, up_primary 0 -> 5, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:19:59 localhost ceph-osd[31726]: osd.1 pg_epoch: 54 pg[7.3( v 40'39 (0'0,40'39] local-lis/les=48/49 n=2 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=54 pruub=9.096679688s) [5,3,4] r=-1 lpr=54 pi=[48,54)/1 luod=0'0 crt=40'39 lcod 0'0 mlcod 0'0 active pruub 1199.366821289s@ mbc={}] start_peering_interval up [0,1,5] -> [5,3,4], acting [0,1,5] -> [5,3,4], acting_primary 0 -> 5, up_primary 0 -> 5, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:19:59 localhost ceph-osd[31726]: osd.1 pg_epoch: 54 pg[7.f( v 40'39 (0'0,40'39] local-lis/les=48/49 n=1 ec=48/37 lis/c=48/48 
les/c/f=49/49/0 sis=54 pruub=9.096002579s) [5,3,4] r=-1 lpr=54 pi=[48,54)/1 crt=40'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1199.366210938s@ mbc={}] state: transitioning to Stray Dec 6 03:19:59 localhost ceph-osd[31726]: osd.1 pg_epoch: 54 pg[7.7( v 40'39 (0'0,40'39] local-lis/les=48/49 n=1 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=54 pruub=9.096216202s) [5,3,4] r=-1 lpr=54 pi=[48,54)/1 crt=40'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1199.366455078s@ mbc={}] state: transitioning to Stray Dec 6 03:19:59 localhost ceph-osd[31726]: osd.1 pg_epoch: 54 pg[7.3( v 40'39 (0'0,40'39] local-lis/les=48/49 n=2 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=54 pruub=9.096534729s) [5,3,4] r=-1 lpr=54 pi=[48,54)/1 crt=40'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1199.366821289s@ mbc={}] state: transitioning to Stray Dec 6 03:20:00 localhost ceph-osd[32665]: osd.4 pg_epoch: 54 pg[7.b( empty local-lis/les=0/0 n=0 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=54) [5,3,4] r=2 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 6 03:20:00 localhost ceph-osd[32665]: osd.4 pg_epoch: 54 pg[7.3( empty local-lis/les=0/0 n=0 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=54) [5,3,4] r=2 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 6 03:20:00 localhost ceph-osd[32665]: osd.4 pg_epoch: 54 pg[7.7( empty local-lis/les=0/0 n=0 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=54) [5,3,4] r=2 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 6 03:20:00 localhost ceph-osd[32665]: osd.4 pg_epoch: 54 pg[7.f( empty local-lis/les=0/0 n=0 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=54) [5,3,4] r=2 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 6 03:20:00 localhost ceph-osd[31726]: log_channel(cluster) log [DBG] : 6.1 scrub starts Dec 6 03:20:00 localhost ceph-osd[31726]: log_channel(cluster) log [DBG] : 6.1 scrub ok Dec 6 03:20:01 localhost python3[56842]: 
ansible-ansible.legacy.async_status Invoked with jid=102559012711.56562 mode=status _async_dir=/tmp/.ansible_async Dec 6 03:20:01 localhost ceph-osd[32665]: log_channel(cluster) log [DBG] : 5.13 scrub starts Dec 6 03:20:01 localhost ceph-osd[31726]: osd.1 pg_epoch: 56 pg[7.4( v 40'39 (0'0,40'39] local-lis/les=48/49 n=2 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=56 pruub=14.609109879s) [2,3,4] r=-1 lpr=56 pi=[48,56)/1 luod=0'0 crt=40'39 lcod 0'0 mlcod 0'0 active pruub 1207.366333008s@ mbc={}] start_peering_interval up [0,1,5] -> [2,3,4], acting [0,1,5] -> [2,3,4], acting_primary 0 -> 2, up_primary 0 -> 2, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:20:01 localhost ceph-osd[31726]: osd.1 pg_epoch: 56 pg[7.c( v 40'39 (0'0,40'39] local-lis/les=48/49 n=1 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=56 pruub=14.609762192s) [2,3,4] r=-1 lpr=56 pi=[48,56)/1 luod=0'0 crt=40'39 lcod 0'0 mlcod 0'0 active pruub 1207.366943359s@ mbc={}] start_peering_interval up [0,1,5] -> [2,3,4], acting [0,1,5] -> [2,3,4], acting_primary 0 -> 2, up_primary 0 -> 2, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:20:01 localhost ceph-osd[31726]: osd.1 pg_epoch: 56 pg[7.4( v 40'39 (0'0,40'39] local-lis/les=48/49 n=2 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=56 pruub=14.609000206s) [2,3,4] r=-1 lpr=56 pi=[48,56)/1 crt=40'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1207.366333008s@ mbc={}] state: transitioning to Stray Dec 6 03:20:01 localhost ceph-osd[31726]: osd.1 pg_epoch: 56 pg[7.c( v 40'39 (0'0,40'39] local-lis/les=48/49 n=1 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=56 pruub=14.609653473s) [2,3,4] r=-1 lpr=56 pi=[48,56)/1 crt=40'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1207.366943359s@ mbc={}] state: transitioning to Stray Dec 6 03:20:01 localhost ceph-osd[32665]: log_channel(cluster) log [DBG] : 5.13 scrub ok Dec 6 03:20:01 localhost python3[56858]: ansible-file Invoked with path=/var/lib/container-puppet/puppetlabs 
state=directory setype=svirt_sandbox_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Dec 6 03:20:02 localhost python3[56874]: ansible-stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 6 03:20:02 localhost ceph-osd[32665]: osd.4 pg_epoch: 56 pg[7.4( empty local-lis/les=0/0 n=0 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=56) [2,3,4] r=2 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 6 03:20:02 localhost ceph-osd[32665]: osd.4 pg_epoch: 56 pg[7.c( empty local-lis/les=0/0 n=0 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=56) [2,3,4] r=2 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 6 03:20:02 localhost python3[56924]: ansible-ansible.legacy.stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 6 03:20:03 localhost python3[56942]: ansible-ansible.legacy.file Invoked with setype=svirt_sandbox_file_t selevel=s0 dest=/var/lib/container-puppet/puppetlabs/facter.conf _original_basename=tmppqet6sy3 recurse=False state=file path=/var/lib/container-puppet/puppetlabs/facter.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Dec 6 03:20:03 localhost ceph-osd[31726]: log_channel(cluster) log [DBG] : 6.1b scrub starts Dec 6 03:20:03 localhost ceph-osd[31726]: log_channel(cluster) log [DBG] : 6.1b 
scrub ok Dec 6 03:20:03 localhost ceph-osd[32665]: osd.4 pg_epoch: 58 pg[7.d( empty local-lis/les=0/0 n=0 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=58) [4,5,0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 6 03:20:03 localhost ceph-osd[32665]: osd.4 pg_epoch: 58 pg[7.5( empty local-lis/les=0/0 n=0 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=58) [4,5,0] r=0 lpr=58 pi=[48,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 6 03:20:03 localhost ceph-osd[31726]: osd.1 pg_epoch: 58 pg[7.d( v 40'39 (0'0,40'39] local-lis/les=48/49 n=1 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=58 pruub=12.849601746s) [4,5,0] r=-1 lpr=58 pi=[48,58)/1 luod=0'0 crt=40'39 lcod 0'0 mlcod 0'0 active pruub 1207.367187500s@ mbc={}] start_peering_interval up [0,1,5] -> [4,5,0], acting [0,1,5] -> [4,5,0], acting_primary 0 -> 4, up_primary 0 -> 4, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:20:03 localhost ceph-osd[31726]: osd.1 pg_epoch: 58 pg[7.d( v 40'39 (0'0,40'39] local-lis/les=48/49 n=1 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=58 pruub=12.849519730s) [4,5,0] r=-1 lpr=58 pi=[48,58)/1 crt=40'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1207.367187500s@ mbc={}] state: transitioning to Stray Dec 6 03:20:03 localhost ceph-osd[31726]: osd.1 pg_epoch: 58 pg[7.5( v 40'39 (0'0,40'39] local-lis/les=48/49 n=2 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=58 pruub=12.849435806s) [4,5,0] r=-1 lpr=58 pi=[48,58)/1 luod=0'0 crt=40'39 lcod 0'0 mlcod 0'0 active pruub 1207.367187500s@ mbc={}] start_peering_interval up [0,1,5] -> [4,5,0], acting [0,1,5] -> [4,5,0], acting_primary 0 -> 4, up_primary 0 -> 4, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:20:03 localhost ceph-osd[31726]: osd.1 pg_epoch: 58 pg[7.5( v 40'39 (0'0,40'39] local-lis/les=48/49 n=2 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=58 pruub=12.849357605s) [4,5,0] r=-1 lpr=58 pi=[48,58)/1 crt=40'39 lcod 0'0 
mlcod 0'0 unknown NOTIFY pruub 1207.367187500s@ mbc={}] state: transitioning to Stray Dec 6 03:20:03 localhost python3[56972]: ansible-file Invoked with path=/opt/puppetlabs/facter state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 03:20:03 localhost ceph-osd[32665]: log_channel(cluster) log [DBG] : 2.18 scrub starts Dec 6 03:20:03 localhost ceph-osd[32665]: log_channel(cluster) log [DBG] : 2.18 scrub ok Dec 6 03:20:04 localhost ceph-osd[32665]: osd.4 pg_epoch: 59 pg[7.5( v 40'39 lc 40'7 (0'0,40'39] local-lis/les=58/59 n=2 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=58) [4,5,0] r=0 lpr=58 pi=[48,58)/1 crt=40'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(2+0)=2}}] state: react AllReplicasActivated Activating complete Dec 6 03:20:04 localhost ceph-osd[32665]: osd.4 pg_epoch: 59 pg[7.d( v 40'39 lc 40'8 (0'0,40'39] local-lis/les=58/59 n=1 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=58) [4,5,0] r=0 lpr=58 pi=[48,58)/1 crt=40'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(2+0)=2}}] state: react AllReplicasActivated Activating complete Dec 6 03:20:04 localhost python3[57076]: ansible-ansible.posix.synchronize Invoked with src=/opt/puppetlabs/ dest=/var/lib/container-puppet/puppetlabs/ _local_rsync_path=rsync _local_rsync_password=NOT_LOGGING_PARAMETER rsync_path=None delete=False _substitute_controller=False archive=True checksum=False compress=True existing_only=False dirs=False copy_links=False set_remote_user=True rsync_timeout=0 rsync_opts=[] ssh_connection_multiplexing=False partial=False verify_host=False mode=push dest_port=None private_key=None recursive=None links=None perms=None times=None owner=None group=None ssh_args=None link_dest=None Dec 6 03:20:05 localhost 
ceph-osd[31726]: osd.1 pg_epoch: 60 pg[7.e( v 40'39 (0'0,40'39] local-lis/les=52/53 n=1 ec=48/37 lis/c=52/52 les/c/f=53/53/0 sis=60 pruub=8.290819168s) [5,0,4] r=-1 lpr=60 pi=[52,60)/1 luod=0'0 crt=40'39 lcod 0'0 mlcod 0'0 active pruub 1204.848999023s@ mbc={}] start_peering_interval up [2,1,0] -> [5,0,4], acting [2,1,0] -> [5,0,4], acting_primary 2 -> 5, up_primary 2 -> 5, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:20:05 localhost ceph-osd[31726]: osd.1 pg_epoch: 60 pg[7.6( v 40'39 (0'0,40'39] local-lis/les=52/53 n=2 ec=48/37 lis/c=52/52 les/c/f=53/53/0 sis=60 pruub=8.287181854s) [5,0,4] r=-1 lpr=60 pi=[52,60)/1 luod=0'0 crt=40'39 lcod 0'0 mlcod 0'0 active pruub 1204.845581055s@ mbc={}] start_peering_interval up [2,1,0] -> [5,0,4], acting [2,1,0] -> [5,0,4], acting_primary 2 -> 5, up_primary 2 -> 5, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:20:05 localhost ceph-osd[31726]: osd.1 pg_epoch: 60 pg[7.6( v 40'39 (0'0,40'39] local-lis/les=52/53 n=2 ec=48/37 lis/c=52/52 les/c/f=53/53/0 sis=60 pruub=8.286940575s) [5,0,4] r=-1 lpr=60 pi=[52,60)/1 crt=40'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1204.845581055s@ mbc={}] state: transitioning to Stray Dec 6 03:20:05 localhost ceph-osd[31726]: osd.1 pg_epoch: 60 pg[7.e( v 40'39 (0'0,40'39] local-lis/les=52/53 n=1 ec=48/37 lis/c=52/52 les/c/f=53/53/0 sis=60 pruub=8.290492058s) [5,0,4] r=-1 lpr=60 pi=[52,60)/1 crt=40'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1204.848999023s@ mbc={}] state: transitioning to Stray Dec 6 03:20:05 localhost python3[57095]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None 
Dec 6 03:20:06 localhost ceph-osd[31726]: log_channel(cluster) log [DBG] : 6.1c scrub starts Dec 6 03:20:06 localhost ceph-osd[32665]: osd.4 pg_epoch: 61 pg[7.f( v 40'39 (0'0,40'39] local-lis/les=54/55 n=1 ec=48/37 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=10.122196198s) [3,1,5] r=-1 lpr=61 pi=[54,61)/1 luod=0'0 crt=40'39 mlcod 0'0 active pruub 1203.795776367s@ mbc={}] start_peering_interval up [5,3,4] -> [3,1,5], acting [5,3,4] -> [3,1,5], acting_primary 5 -> 3, up_primary 5 -> 3, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:20:06 localhost ceph-osd[32665]: osd.4 pg_epoch: 61 pg[7.f( v 40'39 (0'0,40'39] local-lis/les=54/55 n=1 ec=48/37 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=10.122116089s) [3,1,5] r=-1 lpr=61 pi=[54,61)/1 crt=40'39 mlcod 0'0 unknown NOTIFY pruub 1203.795776367s@ mbc={}] state: transitioning to Stray Dec 6 03:20:06 localhost ceph-osd[32665]: osd.4 pg_epoch: 60 pg[7.6( empty local-lis/les=0/0 n=0 ec=48/37 lis/c=52/52 les/c/f=53/53/0 sis=60) [5,0,4] r=2 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 6 03:20:06 localhost ceph-osd[32665]: osd.4 pg_epoch: 60 pg[7.e( empty local-lis/les=0/0 n=0 ec=48/37 lis/c=52/52 les/c/f=53/53/0 sis=60) [5,0,4] r=2 lpr=60 pi=[52,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 6 03:20:06 localhost ceph-osd[32665]: osd.4 pg_epoch: 61 pg[7.7( v 40'39 (0'0,40'39] local-lis/les=54/55 n=1 ec=48/37 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=10.119771004s) [3,1,5] r=-1 lpr=61 pi=[54,61)/1 luod=0'0 crt=40'39 mlcod 0'0 active pruub 1203.795654297s@ mbc={}] start_peering_interval up [5,3,4] -> [3,1,5], acting [5,3,4] -> [3,1,5], acting_primary 5 -> 3, up_primary 5 -> 3, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:20:06 localhost ceph-osd[32665]: osd.4 pg_epoch: 61 pg[7.7( v 40'39 (0'0,40'39] local-lis/les=54/55 n=1 ec=48/37 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=10.119685173s) 
[3,1,5] r=-1 lpr=61 pi=[54,61)/1 crt=40'39 mlcod 0'0 unknown NOTIFY pruub 1203.795654297s@ mbc={}] state: transitioning to Stray Dec 6 03:20:06 localhost python3[57127]: ansible-stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 6 03:20:07 localhost python3[57177]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-container-shutdown follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 6 03:20:07 localhost python3[57195]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-container-shutdown _original_basename=tripleo-container-shutdown recurse=False state=file path=/usr/libexec/tripleo-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 03:20:07 localhost ceph-osd[31726]: osd.1 pg_epoch: 61 pg[7.7( empty local-lis/les=0/0 n=0 ec=48/37 lis/c=54/54 les/c/f=55/55/0 sis=61) [3,1,5] r=1 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 6 03:20:07 localhost ceph-osd[31726]: osd.1 pg_epoch: 61 pg[7.f( empty local-lis/les=0/0 n=0 ec=48/37 lis/c=54/54 les/c/f=55/55/0 sis=61) [3,1,5] r=1 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 6 03:20:07 localhost python3[57257]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-start-podman-container follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 6 03:20:08 localhost python3[57275]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-start-podman-container 
_original_basename=tripleo-start-podman-container recurse=False state=file path=/usr/libexec/tripleo-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 03:20:08 localhost ceph-osd[32665]: log_channel(cluster) log [DBG] : 7.d scrub starts Dec 6 03:20:08 localhost ceph-osd[32665]: log_channel(cluster) log [DBG] : 7.d scrub ok Dec 6 03:20:08 localhost python3[57337]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/tripleo-container-shutdown.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 6 03:20:08 localhost python3[57355]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/tripleo-container-shutdown.service _original_basename=tripleo-container-shutdown-service recurse=False state=file path=/usr/lib/systemd/system/tripleo-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 03:20:09 localhost ceph-osd[31726]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Dec 6 03:20:09 localhost ceph-osd[31726]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 5030 writes, 22K keys, 5030 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 5030 writes, 506 syncs, 9.94 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1771 writes, 6287 keys, 1771 commit groups, 1.0 writes per commit group, ingest: 2.42 MB, 
0.00 MB/s#012Interval WAL: 1771 writes, 361 syncs, 4.91 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 2/0 2.61 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.005 0 0 0.0 0.0#012 Sum 2/0 2.61 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.005 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.2 0.01 0.00 1 0.005 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 
level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55d1180b62d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-0] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, 
interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55d1180b62d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-1] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 
0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtab Dec 6 03:20:09 localhost ceph-osd[31726]: osd.1 pg_epoch: 63 pg[7.8( v 40'39 (0'0,40'39] local-lis/les=48/49 n=1 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=63 pruub=15.088534355s) [2,0,1] r=2 lpr=63 pi=[48,63)/1 luod=0'0 crt=40'39 lcod 0'0 mlcod 0'0 active pruub 1215.366455078s@ mbc={}] start_peering_interval up [0,1,5] -> [2,0,1], acting [0,1,5] -> [2,0,1], acting_primary 0 -> 2, up_primary 0 -> 2, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:20:09 localhost ceph-osd[31726]: osd.1 pg_epoch: 63 pg[7.8( v 40'39 (0'0,40'39] local-lis/les=48/49 n=1 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=63 pruub=15.088434219s) [2,0,1] r=2 lpr=63 pi=[48,63)/1 crt=40'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1215.366455078s@ mbc={}] state: transitioning to Stray Dec 6 03:20:09 localhost ceph-osd[31726]: log_channel(cluster) log [DBG] : 6.17 scrub starts Dec 6 03:20:09 localhost python3[57417]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 6 03:20:09 localhost ceph-osd[31726]: log_channel(cluster) log [DBG] : 6.17 scrub ok Dec 6 03:20:09 localhost python3[57435]: 
ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset _original_basename=91-tripleo-container-shutdown-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 03:20:10 localhost python3[57465]: ansible-systemd Invoked with name=tripleo-container-shutdown state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Dec 6 03:20:10 localhost systemd[1]: Reloading. Dec 6 03:20:10 localhost systemd-sysv-generator[57491]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 6 03:20:10 localhost systemd-rc-local-generator[57488]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 6 03:20:10 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Dec 6 03:20:11 localhost python3[57551]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/netns-placeholder.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 6 03:20:11 localhost ceph-osd[31726]: osd.1 pg_epoch: 65 pg[7.9( v 40'39 (0'0,40'39] local-lis/les=48/49 n=1 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=65 pruub=13.055438042s) [5,4,3] r=-1 lpr=65 pi=[48,65)/1 luod=0'0 crt=40'39 lcod 0'0 mlcod 0'0 active pruub 1215.366699219s@ mbc={}] start_peering_interval up [0,1,5] -> [5,4,3], acting [0,1,5] -> [5,4,3], acting_primary 0 -> 5, up_primary 0 -> 5, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:20:11 localhost ceph-osd[31726]: osd.1 pg_epoch: 65 pg[7.9( v 40'39 (0'0,40'39] local-lis/les=48/49 n=1 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=65 pruub=13.055321693s) [5,4,3] r=-1 lpr=65 pi=[48,65)/1 crt=40'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1215.366699219s@ mbc={}] state: transitioning to Stray Dec 6 03:20:11 localhost python3[57569]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/usr/lib/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 03:20:12 localhost python3[57631]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 6 03:20:12 localhost ceph-osd[32665]: osd.4 pg_epoch: 65 pg[7.9( empty local-lis/les=0/0 n=0 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=65) [5,4,3] r=1 lpr=65 pi=[48,65)/1 
crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 6 03:20:12 localhost python3[57649]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 03:20:12 localhost ceph-osd[32665]: log_channel(cluster) log [DBG] : 7.5 scrub starts Dec 6 03:20:12 localhost ceph-osd[31726]: log_channel(cluster) log [DBG] : 5.8 scrub starts Dec 6 03:20:12 localhost ceph-osd[31726]: log_channel(cluster) log [DBG] : 5.8 scrub ok Dec 6 03:20:12 localhost ceph-osd[32665]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Dec 6 03:20:12 localhost ceph-osd[32665]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.2 total, 600.0 interval#012Cumulative writes: 4343 writes, 20K keys, 4343 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 4343 writes, 459 syncs, 9.46 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 955 writes, 3346 keys, 955 commit groups, 1.0 writes per commit group, ingest: 1.76 MB, 0.00 MB/s#012Interval WAL: 955 writes, 261 syncs, 3.66 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 2/0 2.61 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012 Sum 2/0 2.61 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55e1468ea2d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 
3.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-0] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for 
pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55e1468ea2d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-1] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 
0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memta Dec 6 03:20:12 localhost python3[57679]: ansible-systemd Invoked with name=netns-placeholder state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Dec 6 03:20:13 localhost systemd[1]: Reloading. Dec 6 03:20:13 localhost systemd-rc-local-generator[57704]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 6 03:20:13 localhost systemd-sysv-generator[57709]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 6 03:20:13 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 6 03:20:13 localhost systemd[1]: Starting Create netns directory... Dec 6 03:20:13 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. Dec 6 03:20:13 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Dec 6 03:20:13 localhost systemd[1]: Finished Create netns directory. 
Dec 6 03:20:13 localhost ceph-osd[32665]: log_channel(cluster) log [DBG] : 7.5 scrub starts Dec 6 03:20:13 localhost ceph-osd[32665]: log_channel(cluster) log [DBG] : 7.5 scrub ok Dec 6 03:20:13 localhost ceph-osd[31726]: log_channel(cluster) log [DBG] : 6.1c scrub starts Dec 6 03:20:13 localhost ceph-osd[31726]: log_channel(cluster) log [DBG] : 6.1c scrub ok Dec 6 03:20:13 localhost python3[57737]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6 Dec 6 03:20:15 localhost python3[57794]: ansible-tripleo_container_manage Invoked with config_id=tripleo_step2 config_dir=/var/lib/tripleo-config/container-startup-config/step_2 config_patterns=*.json config_overrides={} concurrency=5 log_base_path=/var/log/containers/stdouts debug=False Dec 6 03:20:15 localhost podman[57869]: 2025-12-06 08:20:15.800919239 +0000 UTC m=+0.081660831 container create 15b3289dfb7ed52ab0d10f0af104ac3227bc5ecb093aa34d613c9c265f6e2f89 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud_init_logs, build-date=2025-11-19T00:35:22Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1765008053'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, config_id=tripleo_step2, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtqemud_init_logs, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, url=https://www.redhat.com, io.openshift.expose-services=, tcib_managed=true, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, name=rhosp17/openstack-nova-libvirt) Dec 6 03:20:15 localhost podman[57870]: 2025-12-06 08:20:15.831272785 +0000 UTC m=+0.106069756 container create 0d3b158edc684f8600676f0dcf0cb5a14357db1593e587d02be14e52e3f8b304 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute_init_log, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, distribution-scope=public, managed_by=tripleo_ansible, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, io.buildah.version=1.41.4, container_name=nova_compute_init_log, url=https://www.redhat.com, vcs-type=git, build-date=2025-11-19T00:36:58Z, version=17.1.12, 
vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1765008053'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step2, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Dec 6 03:20:15 localhost podman[57869]: 2025-12-06 08:20:15.751270595 +0000 UTC m=+0.032012227 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Dec 6 03:20:15 localhost podman[57870]: 2025-12-06 08:20:15.768023986 +0000 UTC m=+0.042821027 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 Dec 6 03:20:15 localhost systemd[1]: Started libpod-conmon-15b3289dfb7ed52ab0d10f0af104ac3227bc5ecb093aa34d613c9c265f6e2f89.scope. Dec 6 03:20:15 localhost systemd[1]: Started libpod-conmon-0d3b158edc684f8600676f0dcf0cb5a14357db1593e587d02be14e52e3f8b304.scope. Dec 6 03:20:15 localhost systemd[1]: Started libcrun container. Dec 6 03:20:15 localhost systemd[1]: Started libcrun container. 
Dec 6 03:20:15 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89ee8dead5a29d0553b978375c66d4bc010ba2732baca36dcfe5a54e3214c8ff/merged/var/log/swtpm supports timestamps until 2038 (0x7fffffff) Dec 6 03:20:15 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d7c22414f3b3b03ee747009a3ba1860a523968c599aef8234ba1bea94f6d58e/merged/var/log/nova supports timestamps until 2038 (0x7fffffff) Dec 6 03:20:15 localhost podman[57869]: 2025-12-06 08:20:15.909366357 +0000 UTC m=+0.190107949 container init 15b3289dfb7ed52ab0d10f0af104ac3227bc5ecb093aa34d613c9c265f6e2f89 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud_init_logs, name=rhosp17/openstack-nova-libvirt, version=17.1.12, config_id=tripleo_step2, maintainer=OpenStack TripleO Team, architecture=x86_64, release=1761123044, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1765008053'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}, managed_by=tripleo_ansible, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, vcs-type=git, summary=Red Hat 
OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, container_name=nova_virtqemud_init_logs, distribution-scope=public, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-11-19T00:35:22Z, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-libvirt) Dec 6 03:20:15 localhost podman[57869]: 2025-12-06 08:20:15.920442455 +0000 UTC m=+0.201184027 container start 15b3289dfb7ed52ab0d10f0af104ac3227bc5ecb093aa34d613c9c265f6e2f89 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud_init_logs, name=rhosp17/openstack-nova-libvirt, vcs-type=git, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, container_name=nova_virtqemud_init_logs, architecture=x86_64, io.buildah.version=1.41.4, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step2, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:35:22Z, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-libvirt-container, distribution-scope=public, config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1765008053'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}, url=https://www.redhat.com, tcib_managed=true, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 6 03:20:15 localhost python3[57794]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtqemud_init_logs --conmon-pidfile /run/nova_virtqemud_init_logs.pid --detach=True --env TRIPLEO_DEPLOY_IDENTIFIER=1765008053 --label config_id=tripleo_step2 --label container_name=nova_virtqemud_init_logs --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1765008053'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtqemud_init_logs.log --network none --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --user root --volume /var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 /bin/bash -c chown -R tss:tss /var/log/swtpm Dec 6 03:20:15 localhost systemd[1]: libpod-15b3289dfb7ed52ab0d10f0af104ac3227bc5ecb093aa34d613c9c265f6e2f89.scope: Deactivated successfully. 
Dec 6 03:20:15 localhost podman[57870]: 2025-12-06 08:20:15.958788984 +0000 UTC m=+0.233585955 container init 0d3b158edc684f8600676f0dcf0cb5a14357db1593e587d02be14e52e3f8b304 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute_init_log, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1765008053'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, vcs-type=git, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step2, release=1761123044, architecture=x86_64, container_name=nova_compute_init_log, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, version=17.1.12, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc.) 
Dec 6 03:20:15 localhost podman[57870]: 2025-12-06 08:20:15.973865254 +0000 UTC m=+0.248662255 container start 0d3b158edc684f8600676f0dcf0cb5a14357db1593e587d02be14e52e3f8b304 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute_init_log, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, vendor=Red Hat, Inc., tcib_managed=true, managed_by=tripleo_ansible, release=1761123044, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute_init_log, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1765008053'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vcs-type=git, config_id=tripleo_step2, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute) Dec 6 03:20:15 localhost systemd[1]: libpod-0d3b158edc684f8600676f0dcf0cb5a14357db1593e587d02be14e52e3f8b304.scope: Deactivated successfully. 
Dec 6 03:20:15 localhost python3[57794]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_compute_init_log --conmon-pidfile /run/nova_compute_init_log.pid --detach=True --env TRIPLEO_DEPLOY_IDENTIFIER=1765008053 --label config_id=tripleo_step2 --label container_name=nova_compute_init_log --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1765008053'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_compute_init_log.log --network none --privileged=False --user root --volume /var/log/containers/nova:/var/log/nova:z registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 /bin/bash -c chown -R nova:nova /var/log/nova Dec 6 03:20:16 localhost podman[57908]: 2025-12-06 08:20:16.004039334 +0000 UTC m=+0.060650291 container died 15b3289dfb7ed52ab0d10f0af104ac3227bc5ecb093aa34d613c9c265f6e2f89 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud_init_logs, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_virtqemud_init_logs, config_id=tripleo_step2, managed_by=tripleo_ansible, config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1765008053'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
build-date=2025-11-19T00:35:22Z, tcib_managed=true, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, architecture=x86_64, name=rhosp17/openstack-nova-libvirt, vendor=Red Hat, Inc., url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-libvirt-container, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, release=1761123044, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-libvirt) Dec 6 03:20:16 localhost podman[57933]: 2025-12-06 08:20:16.041970251 +0000 UTC m=+0.056824434 container died 0d3b158edc684f8600676f0dcf0cb5a14357db1593e587d02be14e52e3f8b304 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute_init_log, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute_init_log, distribution-scope=public, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1765008053'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, config_id=tripleo_step2, managed_by=tripleo_ansible, vcs-type=git, version=17.1.12, io.openshift.expose-services=, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 6 03:20:16 localhost podman[57908]: 2025-12-06 08:20:16.153157022 +0000 UTC m=+0.209767909 container cleanup 15b3289dfb7ed52ab0d10f0af104ac3227bc5ecb093aa34d613c9c265f6e2f89 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud_init_logs, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1765008053'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, config_id=tripleo_step2, url=https://www.redhat.com, 
build-date=2025-11-19T00:35:22Z, io.openshift.expose-services=, vendor=Red Hat, Inc., release=1761123044, name=rhosp17/openstack-nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-libvirt-container, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_virtqemud_init_logs, distribution-scope=public, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 6 03:20:16 localhost systemd[1]: libpod-conmon-15b3289dfb7ed52ab0d10f0af104ac3227bc5ecb093aa34d613c9c265f6e2f89.scope: Deactivated successfully. Dec 6 03:20:16 localhost podman[57933]: 2025-12-06 08:20:16.19865768 +0000 UTC m=+0.213511823 container cleanup 0d3b158edc684f8600676f0dcf0cb5a14357db1593e587d02be14e52e3f8b304 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute_init_log, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, release=1761123044, io.buildah.version=1.41.4, config_id=tripleo_step2, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, distribution-scope=public, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., url=https://www.redhat.com, 
vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1765008053'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, container_name=nova_compute_init_log, vcs-type=git, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Dec 6 03:20:16 localhost systemd[1]: libpod-conmon-0d3b158edc684f8600676f0dcf0cb5a14357db1593e587d02be14e52e3f8b304.scope: Deactivated successfully. Dec 6 03:20:16 localhost podman[58045]: 2025-12-06 08:20:16.585617031 +0000 UTC m=+0.078704531 container create fe916b04b5a4965a58b15296c4cfd066cc2cd03ab9317bd7e14d1e94989cda28 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, description=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, url=https://www.redhat.com, release=1761123044, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, name=rhosp17/openstack-nova-libvirt, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.buildah.version=1.41.4, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, config_id=tripleo_step2, vcs-type=git, container_name=create_virtlogd_wrapper, architecture=x86_64, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1765008053'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, com.redhat.component=openstack-nova-libvirt-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:35:22Z, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-libvirt) Dec 6 03:20:16 localhost systemd[1]: Started libpod-conmon-fe916b04b5a4965a58b15296c4cfd066cc2cd03ab9317bd7e14d1e94989cda28.scope. Dec 6 03:20:16 localhost systemd[1]: Started libcrun container. 
Dec 6 03:20:16 localhost podman[58045]: 2025-12-06 08:20:16.537740701 +0000 UTC m=+0.030828251 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Dec 6 03:20:16 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b65786436fb22482b0004739b377157c398689a3d99c4b69d96c34a55e2067d5/merged/var/lib/container-config-scripts supports timestamps until 2038 (0x7fffffff) Dec 6 03:20:16 localhost podman[58045]: 2025-12-06 08:20:16.650613583 +0000 UTC m=+0.143701093 container init fe916b04b5a4965a58b15296c4cfd066cc2cd03ab9317bd7e14d1e94989cda28 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, io.openshift.expose-services=, container_name=create_virtlogd_wrapper, io.buildah.version=1.41.4, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step2, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1765008053'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:35:22Z, name=rhosp17/openstack-nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, tcib_managed=true, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, release=1761123044) Dec 6 03:20:16 localhost podman[58045]: 2025-12-06 08:20:16.657318879 +0000 UTC m=+0.150406349 container start fe916b04b5a4965a58b15296c4cfd066cc2cd03ab9317bd7e14d1e94989cda28 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, distribution-scope=public, vcs-type=git, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-11-19T00:35:22Z, com.redhat.component=openstack-nova-libvirt-container, io.openshift.expose-services=, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=create_virtlogd_wrapper, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, config_id=tripleo_step2, vendor=Red Hat, Inc., 
config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1765008053'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, name=rhosp17/openstack-nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 6 03:20:16 localhost podman[58045]: 2025-12-06 08:20:16.657546135 +0000 UTC m=+0.150633695 container attach fe916b04b5a4965a58b15296c4cfd066cc2cd03ab9317bd7e14d1e94989cda28 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, io.openshift.tags=rhosp osp openstack 
osp-17.1 openstack-nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, build-date=2025-11-19T00:35:22Z, config_id=tripleo_step2, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1765008053'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.component=openstack-nova-libvirt-container, io.buildah.version=1.41.4, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=create_virtlogd_wrapper, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 
nova-libvirt, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., url=https://www.redhat.com, io.openshift.expose-services=) Dec 6 03:20:16 localhost podman[58069]: 2025-12-06 08:20:16.687839789 +0000 UTC m=+0.133973007 container create 19fd50ca84f305510197cafcf52300c0564622e4e13459d1a5f1f90f7f835db6 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, io.openshift.expose-services=, architecture=x86_64, build-date=2025-11-19T00:14:25Z, managed_by=tripleo_ansible, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=create_haproxy_wrapper, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step2, vcs-type=git, vendor=Red Hat, Inc.) Dec 6 03:20:16 localhost systemd[1]: Started libpod-conmon-19fd50ca84f305510197cafcf52300c0564622e4e13459d1a5f1f90f7f835db6.scope. Dec 6 03:20:16 localhost systemd[1]: Started libcrun container. 
Dec 6 03:20:16 localhost podman[58069]: 2025-12-06 08:20:16.636449321 +0000 UTC m=+0.082582580 image pull registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 Dec 6 03:20:16 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17bd947360a6f617c72dcee5b3dca8b1bd8a672c96dfaa4c4c4ee08fb50892d9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 6 03:20:16 localhost podman[58069]: 2025-12-06 08:20:16.744861618 +0000 UTC m=+0.190994836 container init 19fd50ca84f305510197cafcf52300c0564622e4e13459d1a5f1f90f7f835db6 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step2, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, version=17.1.12, container_name=create_haproxy_wrapper, name=rhosp17/openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, vendor=Red Hat, Inc., config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, vcs-type=git, io.buildah.version=1.41.4, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn) Dec 6 03:20:16 localhost podman[58069]: 2025-12-06 08:20:16.753939675 +0000 UTC m=+0.200072893 container start 19fd50ca84f305510197cafcf52300c0564622e4e13459d1a5f1f90f7f835db6 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, version=17.1.12, name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include 
::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, io.buildah.version=1.41.4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:14:25Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, tcib_managed=true, container_name=create_haproxy_wrapper, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, managed_by=tripleo_ansible) Dec 6 03:20:16 localhost podman[58069]: 2025-12-06 08:20:16.754607316 +0000 UTC m=+0.200740584 container attach 19fd50ca84f305510197cafcf52300c0564622e4e13459d1a5f1f90f7f835db6 
(image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, io.openshift.expose-services=, managed_by=tripleo_ansible, release=1761123044, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, architecture=x86_64, version=17.1.12, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, 
container_name=create_haproxy_wrapper, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step2, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2025-11-19T00:14:25Z) Dec 6 03:20:16 localhost systemd[1]: var-lib-containers-storage-overlay-7d7c22414f3b3b03ee747009a3ba1860a523968c599aef8234ba1bea94f6d58e-merged.mount: Deactivated successfully. Dec 6 03:20:16 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0d3b158edc684f8600676f0dcf0cb5a14357db1593e587d02be14e52e3f8b304-userdata-shm.mount: Deactivated successfully. Dec 6 03:20:16 localhost systemd[1]: var-lib-containers-storage-overlay-89ee8dead5a29d0553b978375c66d4bc010ba2732baca36dcfe5a54e3214c8ff-merged.mount: Deactivated successfully. Dec 6 03:20:16 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-15b3289dfb7ed52ab0d10f0af104ac3227bc5ecb093aa34d613c9c265f6e2f89-userdata-shm.mount: Deactivated successfully. Dec 6 03:20:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. 
Dec 6 03:20:17 localhost podman[58125]: 2025-12-06 08:20:17.917120181 +0000 UTC m=+0.081073203 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, vendor=Red Hat, Inc., managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, name=rhosp17/openstack-qdrouterd, tcib_managed=true, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, 
config_id=tripleo_step1, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, version=17.1.12, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 6 03:20:18 localhost podman[58125]: 2025-12-06 08:20:18.085983091 +0000 UTC m=+0.249936103 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, name=rhosp17/openstack-qdrouterd, architecture=x86_64, vcs-type=git, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, vendor=Red Hat, Inc., url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, tcib_managed=true, version=17.1.12, maintainer=OpenStack TripleO Team, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, container_name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 6 03:20:18 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 03:20:18 localhost ovs-vsctl[58186]: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory) Dec 6 03:20:18 localhost systemd[1]: libpod-fe916b04b5a4965a58b15296c4cfd066cc2cd03ab9317bd7e14d1e94989cda28.scope: Deactivated successfully. Dec 6 03:20:18 localhost systemd[1]: libpod-fe916b04b5a4965a58b15296c4cfd066cc2cd03ab9317bd7e14d1e94989cda28.scope: Consumed 2.186s CPU time. 
Dec 6 03:20:18 localhost podman[58336]: 2025-12-06 08:20:18.96469432 +0000 UTC m=+0.054655397 container died fe916b04b5a4965a58b15296c4cfd066cc2cd03ab9317bd7e14d1e94989cda28 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:35:22Z, description=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, io.buildah.version=1.41.4, vcs-type=git, name=rhosp17/openstack-nova-libvirt, release=1761123044, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1765008053'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, version=17.1.12, container_name=create_virtlogd_wrapper, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, architecture=x86_64, config_id=tripleo_step2, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, managed_by=tripleo_ansible, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 6 03:20:18 localhost systemd[1]: tmp-crun.AsIgMl.mount: Deactivated successfully. Dec 6 03:20:18 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fe916b04b5a4965a58b15296c4cfd066cc2cd03ab9317bd7e14d1e94989cda28-userdata-shm.mount: Deactivated successfully. Dec 6 03:20:19 localhost podman[58336]: 2025-12-06 08:20:19.006050252 +0000 UTC m=+0.096011279 container cleanup fe916b04b5a4965a58b15296c4cfd066cc2cd03ab9317bd7e14d1e94989cda28 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=create_virtlogd_wrapper, version=17.1.12, build-date=2025-11-19T00:35:22Z, io.openshift.expose-services=, batch=17.1_20251118.1, com.redhat.component=openstack-nova-libvirt-container, name=rhosp17/openstack-nova-libvirt, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, config_id=tripleo_step2, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 
'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1765008053'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1761123044, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, vendor=Red Hat, Inc.) Dec 6 03:20:19 localhost systemd[1]: libpod-conmon-fe916b04b5a4965a58b15296c4cfd066cc2cd03ab9317bd7e14d1e94989cda28.scope: Deactivated successfully. 
Dec 6 03:20:19 localhost python3[57794]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name create_virtlogd_wrapper --cgroupns=host --conmon-pidfile /run/create_virtlogd_wrapper.pid --detach=False --env TRIPLEO_DEPLOY_IDENTIFIER=1765008053 --label config_id=tripleo_step2 --label container_name=create_virtlogd_wrapper --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1765008053'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/create_virtlogd_wrapper.log --network host --pid host --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume 
/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 /container_puppet_apply.sh 4 file include ::tripleo::profile::base::nova::virtlogd_wrapper Dec 6 03:20:19 localhost ceph-osd[31726]: osd.1 pg_epoch: 67 pg[7.a( v 40'39 (0'0,40'39] local-lis/les=52/53 n=1 ec=48/37 lis/c=52/52 les/c/f=53/53/0 sis=67 pruub=10.368765831s) [4,2,3] r=-1 lpr=67 pi=[52,67)/1 luod=0'0 crt=40'39 lcod 0'0 mlcod 0'0 active pruub 1220.845825195s@ mbc={}] start_peering_interval up [2,1,0] -> [4,2,3], acting [2,1,0] -> [4,2,3], acting_primary 2 -> 4, up_primary 2 -> 4, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:20:19 localhost ceph-osd[31726]: osd.1 pg_epoch: 67 pg[7.a( v 40'39 (0'0,40'39] local-lis/les=52/53 n=1 ec=48/37 lis/c=52/52 les/c/f=53/53/0 sis=67 pruub=10.368695259s) [4,2,3] r=-1 lpr=67 pi=[52,67)/1 crt=40'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1220.845825195s@ mbc={}] state: transitioning to Stray Dec 6 03:20:19 localhost ceph-osd[32665]: osd.4 pg_epoch: 67 pg[7.a( empty local-lis/les=0/0 n=0 ec=48/37 lis/c=52/52 les/c/f=53/53/0 sis=67) [4,2,3] r=0 lpr=67 pi=[52,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Dec 6 03:20:19 localhost systemd[1]: libpod-19fd50ca84f305510197cafcf52300c0564622e4e13459d1a5f1f90f7f835db6.scope: Deactivated successfully. Dec 6 03:20:19 localhost systemd[1]: libpod-19fd50ca84f305510197cafcf52300c0564622e4e13459d1a5f1f90f7f835db6.scope: Consumed 2.154s CPU time. 
Dec 6 03:20:19 localhost podman[58069]: 2025-12-06 08:20:19.645271867 +0000 UTC m=+3.091405135 container died 19fd50ca84f305510197cafcf52300c0564622e4e13459d1a5f1f90f7f835db6 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, url=https://www.redhat.com, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, version=17.1.12, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, config_id=tripleo_step2, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, release=1761123044, 
build-date=2025-11-19T00:14:25Z, architecture=x86_64, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, container_name=create_haproxy_wrapper, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c) Dec 6 03:20:19 localhost podman[58375]: 2025-12-06 08:20:19.765903966 +0000 UTC m=+0.111259244 container cleanup 19fd50ca84f305510197cafcf52300c0564622e4e13459d1a5f1f90f7f835db6 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, architecture=x86_64, io.openshift.expose-services=, config_id=tripleo_step2, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, io.buildah.version=1.41.4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, tcib_managed=true, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=create_haproxy_wrapper, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible) Dec 6 03:20:19 localhost systemd[1]: libpod-conmon-19fd50ca84f305510197cafcf52300c0564622e4e13459d1a5f1f90f7f835db6.scope: Deactivated successfully. 
Dec 6 03:20:19 localhost python3[57794]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name create_haproxy_wrapper --conmon-pidfile /run/create_haproxy_wrapper.pid --detach=False --label config_id=tripleo_step2 --label container_name=create_haproxy_wrapper --label managed_by=tripleo_ansible --label config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/create_haproxy_wrapper.log --network host --pid host --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume 
/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z --volume /var/lib/neutron:/var/lib/neutron:shared,z registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 /container_puppet_apply.sh 4 file include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers Dec 6 03:20:19 localhost systemd[1]: var-lib-containers-storage-overlay-17bd947360a6f617c72dcee5b3dca8b1bd8a672c96dfaa4c4c4ee08fb50892d9-merged.mount: Deactivated successfully. Dec 6 03:20:19 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-19fd50ca84f305510197cafcf52300c0564622e4e13459d1a5f1f90f7f835db6-userdata-shm.mount: Deactivated successfully. Dec 6 03:20:19 localhost systemd[1]: var-lib-containers-storage-overlay-b65786436fb22482b0004739b377157c398689a3d99c4b69d96c34a55e2067d5-merged.mount: Deactivated successfully. 
Dec 6 03:20:20 localhost ceph-osd[32665]: osd.4 pg_epoch: 68 pg[7.a( v 40'39 (0'0,40'39] local-lis/les=67/68 n=1 ec=48/37 lis/c=52/52 les/c/f=53/53/0 sis=67) [4,2,3] r=0 lpr=67 pi=[52,67)/1 crt=40'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Dec 6 03:20:20 localhost python3[58431]: ansible-file Invoked with path=/var/lib/container-puppet/container-puppet-tasks2.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 03:20:22 localhost python3[58552]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=True puppet_config=/var/lib/container-puppet/container-puppet-tasks2.json short_hostname=np0005548789 step=2 update_config_hash_only=False Dec 6 03:20:22 localhost ceph-osd[32665]: log_channel(cluster) log [DBG] : 7.a scrub starts Dec 6 03:20:22 localhost python3[58568]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 03:20:22 localhost ceph-osd[32665]: log_channel(cluster) log [DBG] : 7.a scrub ok Dec 6 03:20:22 localhost python3[58584]: ansible-container_config_data Invoked with config_path=/var/lib/tripleo-config/container-puppet-config/step_2 config_pattern=container-puppet-*.json config_overrides={} debug=True Dec 6 03:20:29 localhost ceph-osd[32665]: osd.4 pg_epoch: 70 pg[7.c( v 40'39 (0'0,40'39] 
local-lis/les=56/57 n=1 ec=48/37 lis/c=56/56 les/c/f=57/57/0 sis=70 pruub=13.193599701s) [0,5,4] r=2 lpr=70 pi=[56,70)/1 luod=0'0 crt=40'39 mlcod 0'0 active pruub 1229.589599609s@ mbc={}] start_peering_interval up [2,3,4] -> [0,5,4], acting [2,3,4] -> [0,5,4], acting_primary 2 -> 0, up_primary 2 -> 0, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:20:29 localhost ceph-osd[32665]: osd.4 pg_epoch: 70 pg[7.c( v 40'39 (0'0,40'39] local-lis/les=56/57 n=1 ec=48/37 lis/c=56/56 les/c/f=57/57/0 sis=70 pruub=13.193515778s) [0,5,4] r=2 lpr=70 pi=[56,70)/1 crt=40'39 mlcod 0'0 unknown NOTIFY pruub 1229.589599609s@ mbc={}] state: transitioning to Stray Dec 6 03:20:31 localhost ceph-osd[32665]: osd.4 pg_epoch: 72 pg[7.d( v 40'39 (0'0,40'39] local-lis/les=58/59 n=2 ec=48/37 lis/c=58/58 les/c/f=59/59/0 sis=72 pruub=13.232522011s) [3,2,1] r=-1 lpr=72 pi=[58,72)/1 crt=40'39 mlcod 0'0 active pruub 1231.655029297s@ mbc={255={}}] start_peering_interval up [4,5,0] -> [3,2,1], acting [4,5,0] -> [3,2,1], acting_primary 4 -> 3, up_primary 4 -> 3, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:20:31 localhost ceph-osd[32665]: osd.4 pg_epoch: 72 pg[7.d( v 40'39 (0'0,40'39] local-lis/les=58/59 n=2 ec=48/37 lis/c=58/58 les/c/f=59/59/0 sis=72 pruub=13.232336998s) [3,2,1] r=-1 lpr=72 pi=[58,72)/1 crt=40'39 mlcod 0'0 unknown NOTIFY pruub 1231.655029297s@ mbc={}] state: transitioning to Stray Dec 6 03:20:32 localhost ceph-osd[31726]: osd.1 pg_epoch: 72 pg[7.d( empty local-lis/les=0/0 n=0 ec=48/37 lis/c=58/58 les/c/f=59/59/0 sis=72) [3,2,1] r=2 lpr=72 pi=[58,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 6 03:20:33 localhost ceph-osd[32665]: osd.4 pg_epoch: 74 pg[7.e( v 40'39 (0'0,40'39] local-lis/les=60/61 n=1 ec=48/37 lis/c=60/60 les/c/f=61/61/0 sis=74 pruub=13.228281975s) [2,1,3] r=-1 lpr=74 pi=[60,74)/1 luod=0'0 crt=40'39 mlcod 0'0 active pruub 1233.692016602s@ mbc={}] 
start_peering_interval up [5,0,4] -> [2,1,3], acting [5,0,4] -> [2,1,3], acting_primary 5 -> 2, up_primary 5 -> 2, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:20:33 localhost ceph-osd[32665]: osd.4 pg_epoch: 74 pg[7.e( v 40'39 (0'0,40'39] local-lis/les=60/61 n=1 ec=48/37 lis/c=60/60 les/c/f=61/61/0 sis=74 pruub=13.228202820s) [2,1,3] r=-1 lpr=74 pi=[60,74)/1 crt=40'39 mlcod 0'0 unknown NOTIFY pruub 1233.692016602s@ mbc={}] state: transitioning to Stray Dec 6 03:20:34 localhost ceph-osd[31726]: osd.1 pg_epoch: 74 pg[7.e( empty local-lis/les=0/0 n=0 ec=48/37 lis/c=60/60 les/c/f=61/61/0 sis=74) [2,1,3] r=1 lpr=74 pi=[60,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 6 03:20:35 localhost ceph-osd[31726]: osd.1 pg_epoch: 76 pg[7.f( v 40'39 (0'0,40'39] local-lis/les=61/62 n=1 ec=48/37 lis/c=61/61 les/c/f=62/62/0 sis=76 pruub=12.203485489s) [2,4,3] r=-1 lpr=76 pi=[61,76)/1 luod=0'0 crt=40'39 mlcod 0'0 active pruub 1238.626586914s@ mbc={}] start_peering_interval up [3,1,5] -> [2,4,3], acting [3,1,5] -> [2,4,3], acting_primary 3 -> 2, up_primary 3 -> 2, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Dec 6 03:20:35 localhost ceph-osd[31726]: osd.1 pg_epoch: 76 pg[7.f( v 40'39 (0'0,40'39] local-lis/les=61/62 n=1 ec=48/37 lis/c=61/61 les/c/f=62/62/0 sis=76 pruub=12.203265190s) [2,4,3] r=-1 lpr=76 pi=[61,76)/1 crt=40'39 mlcod 0'0 unknown NOTIFY pruub 1238.626586914s@ mbc={}] state: transitioning to Stray Dec 6 03:20:36 localhost ceph-osd[32665]: osd.4 pg_epoch: 76 pg[7.f( empty local-lis/les=0/0 n=0 ec=48/37 lis/c=61/61 les/c/f=62/62/0 sis=76) [2,4,3] r=1 lpr=76 pi=[61,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Dec 6 03:20:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. Dec 6 03:20:48 localhost systemd[1]: tmp-crun.27yuHl.mount: Deactivated successfully. 
Dec 6 03:20:48 localhost podman[58585]: 2025-12-06 08:20:48.915151982 +0000 UTC m=+0.077363071 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:46Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, 
container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, vendor=Red Hat, Inc., managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, version=17.1.12, architecture=x86_64, io.openshift.expose-services=) Dec 6 03:20:49 localhost podman[58585]: 2025-12-06 08:20:49.115163942 +0000 UTC m=+0.277375051 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, managed_by=tripleo_ansible, version=17.1.12, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.buildah.version=1.41.4, tcib_managed=true, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, architecture=x86_64, distribution-scope=public, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 6 03:20:49 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 03:21:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. 
Dec 6 03:21:19 localhost podman[58690]: 2025-12-06 08:21:19.93106655 +0000 UTC m=+0.091035327 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, tcib_managed=true, managed_by=tripleo_ansible, io.buildah.version=1.41.4, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, vendor=Red Hat, Inc., version=17.1.12, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, 
io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 6 03:21:20 localhost podman[58690]: 2025-12-06 08:21:20.108313637 +0000 UTC m=+0.268282454 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, config_id=tripleo_step1, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., version=17.1.12, batch=17.1_20251118.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public) Dec 6 03:21:20 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 03:21:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. 
Dec 6 03:21:50 localhost podman[58720]: 2025-12-06 08:21:50.916704142 +0000 UTC m=+0.075751081 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, architecture=x86_64, name=rhosp17/openstack-qdrouterd, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.openshift.expose-services=, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044) Dec 6 03:21:51 localhost podman[58720]: 2025-12-06 08:21:51.085390187 +0000 UTC m=+0.244437026 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.expose-services=, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, tcib_managed=true, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, build-date=2025-11-18T22:49:46Z, vendor=Red Hat, Inc., batch=17.1_20251118.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, io.buildah.version=1.41.4) Dec 6 03:21:51 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. 
Dec 6 03:21:59 localhost podman[58849]: 2025-12-06 08:21:59.660964923 +0000 UTC m=+0.080508357 container exec ecefb99ff092b207f760d60e8fd97152c624df0e1016549024435ae641ee2e06 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548789, release=1763362218, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, description=Red Hat Ceph Storage 7, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, name=rhceph, io.buildah.version=1.41.4, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, distribution-scope=public, vendor=Red Hat, Inc., version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 6 03:21:59 localhost podman[58849]: 2025-12-06 08:21:59.781092546 +0000 UTC m=+0.200635950 container exec_died ecefb99ff092b207f760d60e8fd97152c624df0e1016549024435ae641ee2e06 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548789, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , name=rhceph, release=1763362218, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, io.buildah.version=1.41.4, architecture=x86_64, distribution-scope=public, vendor=Red Hat, Inc., version=7, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Dec 6 03:22:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. Dec 6 03:22:21 localhost systemd[1]: tmp-crun.V9l7At.mount: Deactivated successfully. 
Dec 6 03:22:21 localhost podman[58994]: 2025-12-06 08:22:21.938515005 +0000 UTC m=+0.091178776 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, 
version=17.1.12, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vcs-type=git, release=1761123044, tcib_managed=true, batch=17.1_20251118.1, distribution-scope=public, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step1) Dec 6 03:22:22 localhost podman[58994]: 2025-12-06 08:22:22.13425045 +0000 UTC m=+0.286914211 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, tcib_managed=true, vcs-type=git, version=17.1.12, distribution-scope=public, vendor=Red Hat, Inc., url=https://www.redhat.com, io.buildah.version=1.41.4, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-11-18T22:49:46Z, batch=17.1_20251118.1, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd) Dec 6 03:22:22 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 03:22:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. 
Dec 6 03:22:52 localhost podman[59022]: 2025-12-06 08:22:52.92957614 +0000 UTC m=+0.087538056 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, vendor=Red Hat, Inc., version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.4, managed_by=tripleo_ansible, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 6 03:22:53 localhost podman[59022]: 2025-12-06 08:22:53.152226301 +0000 UTC m=+0.310188177 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, managed_by=tripleo_ansible, release=1761123044, tcib_managed=true, vcs-type=git, build-date=2025-11-18T22:49:46Z, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, vendor=Red Hat, Inc., distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 
'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, config_id=tripleo_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team) Dec 6 03:22:53 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 03:23:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. 
Dec 6 03:23:23 localhost podman[59128]: 2025-12-06 08:23:23.924226286 +0000 UTC m=+0.086837684 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, version=17.1.12, architecture=x86_64, tcib_managed=true, 
vcs-type=git, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 6 03:23:24 localhost podman[59128]: 2025-12-06 08:23:24.131382377 +0000 UTC m=+0.293993775 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, maintainer=OpenStack TripleO Team, distribution-scope=public, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:46Z, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, tcib_managed=true, io.openshift.expose-services=, managed_by=tripleo_ansible, architecture=x86_64, name=rhosp17/openstack-qdrouterd) Dec 6 03:23:24 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 03:23:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. 
Dec 6 03:23:54 localhost podman[59159]: 2025-12-06 08:23:54.920747937 +0000 UTC m=+0.081524393 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, distribution-scope=public, managed_by=tripleo_ansible, version=17.1.12, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, description=Red 
Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, release=1761123044, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:46Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, name=rhosp17/openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, container_name=metrics_qdr) Dec 6 03:23:55 localhost podman[59159]: 2025-12-06 08:23:55.117184583 +0000 UTC m=+0.277961079 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1761123044, config_id=tripleo_step1, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd) Dec 6 03:23:55 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 03:24:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. Dec 6 03:24:25 localhost systemd[1]: tmp-crun.VydYiQ.mount: Deactivated successfully. 
Dec 6 03:24:25 localhost podman[59265]: 2025-12-06 08:24:25.924219477 +0000 UTC m=+0.086568713 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.12, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.buildah.version=1.41.4, managed_by=tripleo_ansible, container_name=metrics_qdr, build-date=2025-11-18T22:49:46Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, config_id=tripleo_step1) Dec 6 03:24:26 localhost podman[59265]: 2025-12-06 08:24:26.123000348 +0000 UTC m=+0.285349564 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-qdrouterd, vcs-type=git, build-date=2025-11-18T22:49:46Z, managed_by=tripleo_ansible, io.openshift.expose-services=) Dec 6 03:24:26 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. 
Dec 6 03:24:52 localhost python3[59342]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/config_step.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 6 03:24:52 localhost python3[59387]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/config_step.json force=True mode=0600 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765009492.2425036-98735-234126235061465/source _original_basename=tmp7nhgmnu9 follow=False checksum=62439dd24dde40c90e7a39f6a1b31cc6061fe59b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 03:24:53 localhost python3[59417]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 6 03:24:55 localhost ansible-async_wrapper.py[59589]: Invoked with 965851980157 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1765009495.0948784-98891-228612148580821/AnsiballZ_command.py _ Dec 6 03:24:55 localhost ansible-async_wrapper.py[59592]: Starting module and watcher Dec 6 03:24:55 localhost ansible-async_wrapper.py[59592]: Start watching 59593 (3600) Dec 6 03:24:55 localhost ansible-async_wrapper.py[59593]: Start module (59593) Dec 6 03:24:55 localhost ansible-async_wrapper.py[59589]: Return async_wrapper task started. Dec 6 03:24:55 localhost python3[59613]: ansible-ansible.legacy.async_status Invoked with jid=965851980157.59589 mode=status _async_dir=/tmp/.ansible_async Dec 6 03:24:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. Dec 6 03:24:56 localhost systemd[1]: tmp-crun.28v0AJ.mount: Deactivated successfully. 
Dec 6 03:24:56 localhost podman[59627]: 2025-12-06 08:24:56.883217903 +0000 UTC m=+0.049053995 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:46Z, container_name=metrics_qdr, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.4, version=17.1.12, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 6 03:24:57 localhost podman[59627]: 2025-12-06 08:24:57.037479409 +0000 UTC m=+0.203315521 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, vendor=Red Hat, Inc., url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, io.buildah.version=1.41.4, version=17.1.12, build-date=2025-11-18T22:49:46Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}) Dec 6 03:24:57 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 03:24:59 localhost puppet-user[59612]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5 Dec 6 03:24:59 localhost puppet-user[59612]: (file: /etc/puppet/hiera.yaml) Dec 6 03:24:59 localhost puppet-user[59612]: Warning: Undefined variable '::deploy_config_name'; Dec 6 03:24:59 localhost puppet-user[59612]: (file & line not available) Dec 6 03:24:59 localhost puppet-user[59612]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. 
See https://puppet.com/docs/puppet/7.10/deprecated_language.html Dec 6 03:24:59 localhost puppet-user[59612]: (file & line not available) Dec 6 03:24:59 localhost puppet-user[59612]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, line: 89, column: 8) Dec 6 03:24:59 localhost puppet-user[59612]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/packages.pp, line: 39, column: 69) Dec 6 03:24:59 localhost puppet-user[59612]: Notice: Compiled catalog for np0005548789.localdomain in environment production in 0.11 seconds Dec 6 03:24:59 localhost puppet-user[59612]: Notice: Applied catalog in 0.04 seconds Dec 6 03:24:59 localhost puppet-user[59612]: Application: Dec 6 03:24:59 localhost puppet-user[59612]: Initial environment: production Dec 6 03:24:59 localhost puppet-user[59612]: Converged environment: production Dec 6 03:24:59 localhost puppet-user[59612]: Run mode: user Dec 6 03:24:59 localhost puppet-user[59612]: Changes: Dec 6 03:24:59 localhost puppet-user[59612]: Events: Dec 6 03:24:59 localhost puppet-user[59612]: Resources: Dec 6 03:24:59 localhost puppet-user[59612]: Total: 10 Dec 6 03:24:59 localhost puppet-user[59612]: Time: Dec 6 03:24:59 localhost puppet-user[59612]: Schedule: 0.00 Dec 6 03:24:59 localhost puppet-user[59612]: File: 0.00 Dec 6 03:24:59 localhost puppet-user[59612]: Augeas: 0.01 Dec 6 03:24:59 localhost puppet-user[59612]: Exec: 0.01 Dec 6 03:24:59 localhost puppet-user[59612]: Transaction evaluation: 0.03 Dec 6 03:24:59 localhost puppet-user[59612]: Catalog application: 0.04 Dec 6 03:24:59 localhost puppet-user[59612]: Config retrieval: 0.15 Dec 6 03:24:59 localhost puppet-user[59612]: Last run: 1765009499 Dec 6 03:24:59 localhost puppet-user[59612]: Filebucket: 0.00 Dec 6 03:24:59 localhost puppet-user[59612]: Total: 0.04 Dec 6 03:24:59 localhost puppet-user[59612]: Version: Dec 6 03:24:59 localhost 
puppet-user[59612]: Config: 1765009499 Dec 6 03:24:59 localhost puppet-user[59612]: Puppet: 7.10.0 Dec 6 03:24:59 localhost ansible-async_wrapper.py[59593]: Module complete (59593) Dec 6 03:25:00 localhost ansible-async_wrapper.py[59592]: Done in kid B. Dec 6 03:25:06 localhost python3[59845]: ansible-ansible.legacy.async_status Invoked with jid=965851980157.59589 mode=status _async_dir=/tmp/.ansible_async Dec 6 03:25:07 localhost python3[59861]: ansible-file Invoked with path=/var/lib/container-puppet/puppetlabs state=directory setype=svirt_sandbox_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Dec 6 03:25:07 localhost python3[59877]: ansible-stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 6 03:25:08 localhost python3[59927]: ansible-ansible.legacy.stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 6 03:25:08 localhost python3[59945]: ansible-ansible.legacy.file Invoked with setype=svirt_sandbox_file_t selevel=s0 dest=/var/lib/container-puppet/puppetlabs/facter.conf _original_basename=tmpr43d29l3 recurse=False state=file path=/var/lib/container-puppet/puppetlabs/facter.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Dec 6 03:25:08 localhost python3[59975]: ansible-file Invoked with path=/opt/puppetlabs/facter state=absent recurse=False force=False 
follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 03:25:09 localhost python3[60078]: ansible-ansible.posix.synchronize Invoked with src=/opt/puppetlabs/ dest=/var/lib/container-puppet/puppetlabs/ _local_rsync_path=rsync _local_rsync_password=NOT_LOGGING_PARAMETER rsync_path=None delete=False _substitute_controller=False archive=True checksum=False compress=True existing_only=False dirs=False copy_links=False set_remote_user=True rsync_timeout=0 rsync_opts=[] ssh_connection_multiplexing=False partial=False verify_host=False mode=push dest_port=None private_key=None recursive=None links=None perms=None times=None owner=None group=None ssh_args=None link_dest=None Dec 6 03:25:10 localhost python3[60097]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 03:25:11 localhost python3[60129]: ansible-stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 6 03:25:12 localhost python3[60179]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-container-shutdown follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 6 03:25:12 localhost python3[60197]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-container-shutdown _original_basename=tripleo-container-shutdown 
recurse=False state=file path=/usr/libexec/tripleo-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 03:25:12 localhost python3[60259]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-start-podman-container follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 6 03:25:13 localhost python3[60277]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-start-podman-container _original_basename=tripleo-start-podman-container recurse=False state=file path=/usr/libexec/tripleo-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 03:25:13 localhost python3[60339]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/tripleo-container-shutdown.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 6 03:25:14 localhost python3[60357]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/tripleo-container-shutdown.service _original_basename=tripleo-container-shutdown-service recurse=False state=file path=/usr/lib/systemd/system/tripleo-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 03:25:14 localhost python3[60419]: ansible-ansible.legacy.stat Invoked with 
path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 6 03:25:14 localhost python3[60437]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset _original_basename=91-tripleo-container-shutdown-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 03:25:15 localhost python3[60467]: ansible-systemd Invoked with name=tripleo-container-shutdown state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Dec 6 03:25:15 localhost systemd[1]: Reloading. Dec 6 03:25:15 localhost systemd-rc-local-generator[60490]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 6 03:25:15 localhost systemd-sysv-generator[60494]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 6 03:25:15 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Dec 6 03:25:16 localhost python3[60552]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/netns-placeholder.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 6 03:25:16 localhost python3[60570]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/usr/lib/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 03:25:17 localhost python3[60632]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 6 03:25:17 localhost python3[60650]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 03:25:18 localhost python3[60680]: ansible-systemd Invoked with name=netns-placeholder state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Dec 6 03:25:18 localhost systemd[1]: Reloading. Dec 6 03:25:18 localhost systemd-sysv-generator[60709]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. 
Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 6 03:25:18 localhost systemd-rc-local-generator[60705]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 6 03:25:18 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 6 03:25:18 localhost systemd[1]: Starting Create netns directory... Dec 6 03:25:18 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. Dec 6 03:25:18 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Dec 6 03:25:18 localhost systemd[1]: Finished Create netns directory. Dec 6 03:25:19 localhost python3[60736]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6 Dec 6 03:25:21 localhost python3[60795]: ansible-tripleo_container_manage Invoked with config_id=tripleo_step3 config_dir=/var/lib/tripleo-config/container-startup-config/step_3 config_patterns=*.json config_overrides={} concurrency=5 log_base_path=/var/log/containers/stdouts debug=False Dec 6 03:25:21 localhost podman[60940]: 2025-12-06 08:25:21.412354083 +0000 UTC m=+0.068153200 container create afa881aa4ca20970e313be385764e9619d76190f61a0727580f0c398406f954f (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=rsyslog, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.expose-services=, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-rsyslog-container, io.buildah.version=1.41.4, release=1761123044, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, name=rhosp17/openstack-rsyslog, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:49Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7a657a42c3cbd75086c59cf211d6fafe'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 rsyslog, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, config_id=tripleo_step3, version=17.1.12) Dec 6 03:25:21 localhost systemd[1]: Started 
libpod-conmon-afa881aa4ca20970e313be385764e9619d76190f61a0727580f0c398406f954f.scope. Dec 6 03:25:21 localhost podman[60948]: 2025-12-06 08:25:21.453450551 +0000 UTC m=+0.091858005 container create c55a3fa9476956f37d3ecfbe7a06aced3ea8b321c934918e4f504b9cf2d8fc82 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, build-date=2025-11-19T00:35:22Z, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, name=rhosp17/openstack-nova-libvirt, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_virtlogd_wrapper, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-libvirt-container, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, config_id=tripleo_step3) Dec 6 03:25:21 localhost systemd[1]: Started libcrun container. 
Dec 6 03:25:21 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39b7e1aa52a8c5999fa4a60374f7871a9f2a511ae7fd70de9cb8a012d08ea2a8/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff) Dec 6 03:25:21 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39b7e1aa52a8c5999fa4a60374f7871a9f2a511ae7fd70de9cb8a012d08ea2a8/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff) Dec 6 03:25:21 localhost podman[60940]: 2025-12-06 08:25:21.383629882 +0000 UTC m=+0.039428999 image pull registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1 Dec 6 03:25:21 localhost podman[60940]: 2025-12-06 08:25:21.482280535 +0000 UTC m=+0.138079652 container init afa881aa4ca20970e313be385764e9619d76190f61a0727580f0c398406f954f (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, container_name=rsyslog, version=17.1.12, release=1761123044, architecture=x86_64, build-date=2025-11-18T22:49:49Z, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7a657a42c3cbd75086c59cf211d6fafe'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 rsyslog, vcs-type=git, name=rhosp17/openstack-rsyslog, managed_by=tripleo_ansible, com.redhat.component=openstack-rsyslog-container, summary=Red Hat OpenStack Platform 17.1 rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., tcib_managed=true, batch=17.1_20251118.1) Dec 6 03:25:21 localhost podman[60940]: 2025-12-06 08:25:21.48930331 +0000 UTC m=+0.145102417 container start afa881aa4ca20970e313be385764e9619d76190f61a0727580f0c398406f954f (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, build-date=2025-11-18T22:49:49Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., name=rhosp17/openstack-rsyslog, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, version=17.1.12, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=rsyslog, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, io.openshift.expose-services=, com.redhat.component=openstack-rsyslog-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, distribution-scope=public, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7a657a42c3cbd75086c59cf211d6fafe'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}) Dec 6 03:25:21 localhost python3[60795]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name rsyslog --conmon-pidfile /run/rsyslog.pid --detach=True --env 
KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=7a657a42c3cbd75086c59cf211d6fafe --label config_id=tripleo_step3 --label container_name=rsyslog --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7a657a42c3cbd75086c59cf211d6fafe'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/rsyslog.log --network host --privileged=True --security-opt label=disable --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume 
/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro --volume /var/log/containers:/var/log/containers:ro --volume /var/log/containers/rsyslog:/var/log/rsyslog:rw,z --volume /var/log:/var/log/host:ro --volume /var/lib/rsyslog.container:/var/lib/rsyslog:rw,z registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1 Dec 6 03:25:21 localhost podman[61002]: 2025-12-06 08:25:21.49680929 +0000 UTC m=+0.075909976 container create ef28ead9699a3a453a7c64489f8848159f375497f80472e3334975e7b928c9e1 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_init_log, maintainer=OpenStack TripleO Team, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20251118.1, vendor=Red Hat, Inc., build-date=2025-11-19T00:12:45Z, container_name=ceilometer_init_log, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, 
io.openshift.expose-services=, managed_by=tripleo_ansible, url=https://www.redhat.com, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4) Dec 6 03:25:21 localhost systemd[1]: Started libpod-conmon-c55a3fa9476956f37d3ecfbe7a06aced3ea8b321c934918e4f504b9cf2d8fc82.scope. Dec 6 03:25:21 localhost podman[60948]: 2025-12-06 08:25:21.407747051 +0000 UTC m=+0.046154505 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Dec 6 03:25:21 localhost systemd[1]: Started libcrun container. Dec 6 03:25:21 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef97a8ce410352459d2cd2d839f0c4f3d007fd27d6a886085c43dfe3ff9df394/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff) Dec 6 03:25:21 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef97a8ce410352459d2cd2d839f0c4f3d007fd27d6a886085c43dfe3ff9df394/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff) Dec 6 03:25:21 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef97a8ce410352459d2cd2d839f0c4f3d007fd27d6a886085c43dfe3ff9df394/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff) Dec 6 03:25:21 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef97a8ce410352459d2cd2d839f0c4f3d007fd27d6a886085c43dfe3ff9df394/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff) Dec 6 03:25:21 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef97a8ce410352459d2cd2d839f0c4f3d007fd27d6a886085c43dfe3ff9df394/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Dec 6 03:25:21 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef97a8ce410352459d2cd2d839f0c4f3d007fd27d6a886085c43dfe3ff9df394/merged/var/lib/nova 
supports timestamps until 2038 (0x7fffffff) Dec 6 03:25:21 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef97a8ce410352459d2cd2d839f0c4f3d007fd27d6a886085c43dfe3ff9df394/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff) Dec 6 03:25:21 localhost podman[60954]: 2025-12-06 08:25:21.42011953 +0000 UTC m=+0.058955288 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 Dec 6 03:25:21 localhost podman[60954]: 2025-12-06 08:25:21.523669933 +0000 UTC m=+0.162505661 container create b995fb7c95a2e9bcba8263167f958b9d9a1aa5d19b80b46a281a635d52d8c08d (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_statedir_owner, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, config_id=tripleo_step3, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': 
{'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1765008053', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, url=https://www.redhat.com, release=1761123044, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Dec 6 03:25:21 localhost systemd[1]: Started libpod-conmon-ef28ead9699a3a453a7c64489f8848159f375497f80472e3334975e7b928c9e1.scope. Dec 6 03:25:21 localhost systemd[1]: Started libcrun container. Dec 6 03:25:21 localhost podman[60989]: 2025-12-06 08:25:21.552390434 +0000 UTC m=+0.143712076 container create 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, maintainer=OpenStack TripleO Team, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, build-date=2025-11-18T22:51:28Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, 
managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, name=rhosp17/openstack-collectd, io.openshift.expose-services=, distribution-scope=public, batch=17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.component=openstack-collectd-container) Dec 6 03:25:21 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e232d99afeeb95c94065c4aa6c90831e0f37d94aede849daf1e3af8b69b5b465/merged/var/log/ceilometer supports timestamps until 2038 (0x7fffffff) Dec 6 03:25:21 localhost systemd[1]: Started libpod-conmon-b995fb7c95a2e9bcba8263167f958b9d9a1aa5d19b80b46a281a635d52d8c08d.scope. 
Dec 6 03:25:21 localhost systemd[1]: libpod-afa881aa4ca20970e313be385764e9619d76190f61a0727580f0c398406f954f.scope: Deactivated successfully. Dec 6 03:25:21 localhost podman[61002]: 2025-12-06 08:25:21.463076707 +0000 UTC m=+0.042177383 image pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 Dec 6 03:25:21 localhost podman[61002]: 2025-12-06 08:25:21.565100122 +0000 UTC m=+0.144200808 container init ef28ead9699a3a453a7c64489f8848159f375497f80472e3334975e7b928c9e1 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_init_log, managed_by=tripleo_ansible, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, build-date=2025-11-19T00:12:45Z, batch=17.1_20251118.1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-ipmi, distribution-scope=public, tcib_managed=true, container_name=ceilometer_init_log, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step3) Dec 6 03:25:21 localhost systemd[1]: Started libcrun container. Dec 6 03:25:21 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/761acef0a8624aecb4fcf3ced8fe890c9e02485b13878d736631aa63ee8f2874/merged/container-config-scripts supports timestamps until 2038 (0x7fffffff) Dec 6 03:25:21 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/761acef0a8624aecb4fcf3ced8fe890c9e02485b13878d736631aa63ee8f2874/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff) Dec 6 03:25:21 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/761acef0a8624aecb4fcf3ced8fe890c9e02485b13878d736631aa63ee8f2874/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Dec 6 03:25:21 localhost podman[61002]: 2025-12-06 08:25:21.572718406 +0000 UTC m=+0.151819082 container start ef28ead9699a3a453a7c64489f8848159f375497f80472e3334975e7b928c9e1 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_init_log, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, build-date=2025-11-19T00:12:45Z, vcs-type=git, url=https://www.redhat.com, batch=17.1_20251118.1, tcib_managed=true, io.openshift.expose-services=, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-ipmi, distribution-scope=public, version=17.1.12, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, container_name=ceilometer_init_log, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step3, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team) Dec 6 03:25:21 localhost systemd[1]: libpod-ef28ead9699a3a453a7c64489f8848159f375497f80472e3334975e7b928c9e1.scope: Deactivated successfully. 
Dec 6 03:25:21 localhost podman[60948]: 2025-12-06 08:25:21.577908315 +0000 UTC m=+0.216315759 container init c55a3fa9476956f37d3ecfbe7a06aced3ea8b321c934918e4f504b9cf2d8fc82 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, vendor=Red Hat, Inc., url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-11-19T00:35:22Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-libvirt-container, maintainer=OpenStack TripleO Team, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, managed_by=tripleo_ansible, distribution-scope=public, batch=17.1_20251118.1, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_virtlogd_wrapper, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, tcib_managed=true) Dec 6 03:25:21 localhost podman[60954]: 2025-12-06 08:25:21.580534116 +0000 UTC m=+0.219369864 container init b995fb7c95a2e9bcba8263167f958b9d9a1aa5d19b80b46a281a635d52d8c08d (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, 
name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_statedir_owner, distribution-scope=public, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, io.openshift.expose-services=, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, architecture=x86_64, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1765008053', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 6 03:25:21 localhost systemd[1]: Started libpod-conmon-2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.scope. 
Dec 6 03:25:21 localhost podman[60948]: 2025-12-06 08:25:21.591141811 +0000 UTC m=+0.229549255 container start c55a3fa9476956f37d3ecfbe7a06aced3ea8b321c934918e4f504b9cf2d8fc82 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, com.redhat.component=openstack-nova-libvirt-container, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', 
'/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, container_name=nova_virtlogd_wrapper, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, distribution-scope=public, name=rhosp17/openstack-nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-11-19T00:35:22Z, tcib_managed=true, managed_by=tripleo_ansible, version=17.1.12, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt) Dec 6 03:25:21 localhost systemd[1]: Started libcrun container. 
Dec 6 03:25:21 localhost python3[60795]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtlogd_wrapper --cgroupns=host --conmon-pidfile /run/nova_virtlogd_wrapper.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=179caa3982511c1fd3314b961771f96c --label config_id=tripleo_step3 --label container_name=nova_virtlogd_wrapper --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtlogd_wrapper.log --network host --pid host --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro --volume 
/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Dec 6 03:25:21 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d980d54738e5f040d62ff40bb9abb4b1931da0f4c80c1ba3031e7feabd416146/merged/scripts supports timestamps until 2038 (0x7fffffff) Dec 6 03:25:21 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d980d54738e5f040d62ff40bb9abb4b1931da0f4c80c1ba3031e7feabd416146/merged/var/log/collectd supports timestamps until 2038 (0x7fffffff) Dec 6 03:25:21 localhost podman[60989]: 2025-12-06 08:25:21.507514189 +0000 UTC m=+0.098835821 image pull registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1 Dec 6 03:25:21 localhost systemd-logind[766]: Existing logind session ID 28 used by new audit session, ignoring. Dec 6 03:25:21 localhost systemd[1]: Created slice User Slice of UID 0. Dec 6 03:25:21 localhost systemd[1]: Starting User Runtime Directory /run/user/0... 
Dec 6 03:25:21 localhost podman[61059]: 2025-12-06 08:25:21.633643903 +0000 UTC m=+0.056060919 container died afa881aa4ca20970e313be385764e9619d76190f61a0727580f0c398406f954f (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.buildah.version=1.41.4, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-rsyslog, version=17.1.12, batch=17.1_20251118.1, build-date=2025-11-18T22:49:49Z, tcib_managed=true, release=1761123044, com.redhat.component=openstack-rsyslog-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 rsyslog, vcs-type=git, description=Red Hat OpenStack Platform 17.1 rsyslog, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7a657a42c3cbd75086c59cf211d6fafe'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, architecture=x86_64) Dec 6 03:25:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. Dec 6 03:25:21 localhost podman[60989]: 2025-12-06 08:25:21.638624655 +0000 UTC m=+0.229946277 container init 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., tcib_managed=true, batch=17.1_20251118.1, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, com.redhat.component=openstack-collectd-container, vcs-type=git, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, url=https://www.redhat.com, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, build-date=2025-11-18T22:51:28Z, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, config_id=tripleo_step3) Dec 6 03:25:21 localhost systemd[1]: libpod-b995fb7c95a2e9bcba8263167f958b9d9a1aa5d19b80b46a281a635d52d8c08d.scope: Deactivated successfully. Dec 6 03:25:21 localhost systemd[1]: Finished User Runtime Directory /run/user/0. Dec 6 03:25:21 localhost systemd[1]: Starting User Manager for UID 0... Dec 6 03:25:21 localhost systemd-logind[766]: Existing logind session ID 28 used by new audit session, ignoring. 
Dec 6 03:25:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. Dec 6 03:25:21 localhost podman[60989]: 2025-12-06 08:25:21.672203714 +0000 UTC m=+0.263525336 container start 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_id=tripleo_step3, io.openshift.expose-services=, url=https://www.redhat.com, vendor=Red Hat, Inc., batch=17.1_20251118.1, managed_by=tripleo_ansible, build-date=2025-11-18T22:51:28Z, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.component=openstack-collectd-container, architecture=x86_64, container_name=collectd, maintainer=OpenStack TripleO Team, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044) Dec 6 03:25:21 localhost python3[60795]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name collectd --cap-add IPC_LOCK --conmon-pidfile /run/collectd.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=4767aaabc3de112d8791c290aa2b669d --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step3 --label container_name=collectd --label managed_by=tripleo_ansible --label config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/collectd.log --memory 512m --network host --pid host --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro --volume /var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro --volume /var/log/containers/collectd:/var/log/collectd:rw,z --volume /var/lib/container-config-scripts:/config-scripts:ro --volume /var/lib/container-user-scripts:/scripts:z --volume /run:/run:rw --volume /sys/fs/cgroup:/sys/fs/cgroup:ro registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1 Dec 6 03:25:21 localhost 
podman[61070]: 2025-12-06 08:25:21.684000756 +0000 UTC m=+0.096544769 container died ef28ead9699a3a453a7c64489f8848159f375497f80472e3334975e7b928c9e1 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_init_log, build-date=2025-11-19T00:12:45Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_init_log, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, release=1761123044, vcs-type=git, name=rhosp17/openstack-ceilometer-ipmi, io.buildah.version=1.41.4, config_id=tripleo_step3, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 6 03:25:21 localhost podman[61059]: 2025-12-06 08:25:21.761196941 +0000 UTC m=+0.183613927 container cleanup afa881aa4ca20970e313be385764e9619d76190f61a0727580f0c398406f954f 
(image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7a657a42c3cbd75086c59cf211d6fafe'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, io.buildah.version=1.41.4, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-rsyslog, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, url=https://www.redhat.com, 
summary=Red Hat OpenStack Platform 17.1 rsyslog, build-date=2025-11-18T22:49:49Z, com.redhat.component=openstack-rsyslog-container, version=17.1.12, config_id=tripleo_step3, batch=17.1_20251118.1, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog) Dec 6 03:25:21 localhost systemd[1]: libpod-conmon-afa881aa4ca20970e313be385764e9619d76190f61a0727580f0c398406f954f.scope: Deactivated successfully. Dec 6 03:25:21 localhost systemd[61115]: Queued start job for default target Main User Target. Dec 6 03:25:21 localhost systemd[61115]: Created slice User Application Slice. Dec 6 03:25:21 localhost systemd[61115]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system). Dec 6 03:25:21 localhost systemd[61115]: Started Daily Cleanup of User's Temporary Directories. Dec 6 03:25:21 localhost systemd[61115]: Reached target Paths. Dec 6 03:25:21 localhost systemd[61115]: Reached target Timers. 
Dec 6 03:25:21 localhost podman[60954]: 2025-12-06 08:25:21.79184537 +0000 UTC m=+0.430681088 container start b995fb7c95a2e9bcba8263167f958b9d9a1aa5d19b80b46a281a635d52d8c08d (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step3, batch=17.1_20251118.1, vcs-type=git, managed_by=tripleo_ansible, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_statedir_owner, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vendor=Red Hat, Inc., io.buildah.version=1.41.4, config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1765008053', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, io.openshift.tags=rhosp osp 
openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, distribution-scope=public) Dec 6 03:25:21 localhost podman[60954]: 2025-12-06 08:25:21.792371176 +0000 UTC m=+0.431206904 container attach b995fb7c95a2e9bcba8263167f958b9d9a1aa5d19b80b46a281a635d52d8c08d (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, distribution-scope=public, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1765008053', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, container_name=nova_statedir_owner, config_id=tripleo_step3, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, 
vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 6 03:25:21 localhost systemd[61115]: Starting D-Bus User Message Bus Socket... Dec 6 03:25:21 localhost podman[61114]: 2025-12-06 08:25:21.793818221 +0000 UTC m=+0.137455153 container died b995fb7c95a2e9bcba8263167f958b9d9a1aa5d19b80b46a281a635d52d8c08d (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, batch=17.1_20251118.1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, release=1761123044, config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1765008053', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, managed_by=tripleo_ansible, io.buildah.version=1.41.4, distribution-scope=public, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 nova-compute, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, architecture=x86_64, container_name=nova_statedir_owner, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z) Dec 6 03:25:21 localhost systemd[61115]: Starting Create User's Volatile Files and Directories... Dec 6 03:25:21 localhost systemd[61115]: Finished Create User's Volatile Files and Directories. Dec 6 03:25:21 localhost systemd[61115]: Listening on D-Bus User Message Bus Socket. Dec 6 03:25:21 localhost systemd[61115]: Reached target Sockets. Dec 6 03:25:21 localhost systemd[61115]: Reached target Basic System. Dec 6 03:25:21 localhost systemd[61115]: Reached target Main User Target. Dec 6 03:25:21 localhost systemd[61115]: Startup finished in 114ms. Dec 6 03:25:21 localhost systemd[1]: Started User Manager for UID 0. 
Dec 6 03:25:21 localhost podman[61070]: 2025-12-06 08:25:21.812607187 +0000 UTC m=+0.225151170 container cleanup ef28ead9699a3a453a7c64489f8848159f375497f80472e3334975e7b928c9e1 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_init_log, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, distribution-scope=public, build-date=2025-11-19T00:12:45Z, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, config_id=tripleo_step3, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, io.openshift.expose-services=, vendor=Red Hat, Inc., managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_init_log, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 6 03:25:21 localhost systemd[1]: Started Session c1 of User root. Dec 6 03:25:21 localhost systemd[1]: Started Session c2 of User root. 
Dec 6 03:25:21 localhost systemd[1]: libpod-conmon-ef28ead9699a3a453a7c64489f8848159f375497f80472e3334975e7b928c9e1.scope: Deactivated successfully. Dec 6 03:25:21 localhost podman[61114]: 2025-12-06 08:25:21.884477468 +0000 UTC m=+0.228114420 container cleanup b995fb7c95a2e9bcba8263167f958b9d9a1aa5d19b80b46a281a635d52d8c08d (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1765008053', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step3, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, container_name=nova_statedir_owner, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
release=1761123044, vcs-type=git, com.redhat.component=openstack-nova-compute-container, version=17.1.12, tcib_managed=true, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., managed_by=tripleo_ansible) Dec 6 03:25:21 localhost python3[60795]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_statedir_owner --conmon-pidfile /run/nova_statedir_owner.pid --detach=False --env NOVA_STATEDIR_OWNERSHIP_SKIP=triliovault-mounts --env TRIPLEO_DEPLOY_IDENTIFIER=1765008053 --env __OS_DEBUG=true --label config_id=tripleo_step3 --label container_name=nova_statedir_owner --label managed_by=tripleo_ansible --label config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1765008053', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_statedir_owner.log --network none --privileged=False --security-opt label=disable --user root --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/container-config-scripts:/container-config-scripts:z registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 /container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py Dec 6 03:25:21 localhost systemd[1]: libpod-conmon-b995fb7c95a2e9bcba8263167f958b9d9a1aa5d19b80b46a281a635d52d8c08d.scope: Deactivated successfully. 
Dec 6 03:25:21 localhost systemd[1]: session-c1.scope: Deactivated successfully. Dec 6 03:25:21 localhost podman[61123]: 2025-12-06 08:25:21.801853407 +0000 UTC m=+0.123534476 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=starting, batch=17.1_20251118.1, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, release=1761123044, vcs-type=git, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, build-date=2025-11-18T22:51:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, managed_by=tripleo_ansible, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, com.redhat.component=openstack-collectd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, container_name=collectd, io.openshift.expose-services=) Dec 6 03:25:21 localhost systemd[1]: session-c2.scope: Deactivated successfully. 
Dec 6 03:25:21 localhost podman[61123]: 2025-12-06 08:25:21.932575072 +0000 UTC m=+0.254256101 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-collectd, managed_by=tripleo_ansible, version=17.1.12, tcib_managed=true, io.buildah.version=1.41.4, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, 
release=1761123044, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, vcs-type=git, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 6 03:25:21 localhost podman[61123]: unhealthy Dec 6 03:25:21 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Main process exited, code=exited, status=1/FAILURE Dec 6 03:25:21 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Failed with result 'exit-code'. 
Dec 6 03:25:22 localhost podman[61300]: 2025-12-06 08:25:22.230056298 +0000 UTC m=+0.075943848 container create 5519229fe01370c92f47b0e8e46cddb6cb973cc807c8388a053612d951244964 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, version=17.1.12, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, vendor=Red Hat, Inc., batch=17.1_20251118.1, build-date=2025-11-19T00:35:22Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, url=https://www.redhat.com, com.redhat.component=openstack-nova-libvirt-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt) Dec 6 03:25:22 localhost systemd[1]: Started libpod-conmon-5519229fe01370c92f47b0e8e46cddb6cb973cc807c8388a053612d951244964.scope. Dec 6 03:25:22 localhost systemd[1]: Started libcrun container. 
Dec 6 03:25:22 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a1de60586d08fb5b92518366abbb713eb2f96c306cc36f6078e1ea96b940056/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Dec 6 03:25:22 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a1de60586d08fb5b92518366abbb713eb2f96c306cc36f6078e1ea96b940056/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Dec 6 03:25:22 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a1de60586d08fb5b92518366abbb713eb2f96c306cc36f6078e1ea96b940056/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff) Dec 6 03:25:22 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a1de60586d08fb5b92518366abbb713eb2f96c306cc36f6078e1ea96b940056/merged/var/log/swtpm/libvirt supports timestamps until 2038 (0x7fffffff) Dec 6 03:25:22 localhost podman[61300]: 2025-12-06 08:25:22.201226074 +0000 UTC m=+0.047113584 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Dec 6 03:25:22 localhost podman[61300]: 2025-12-06 08:25:22.303862829 +0000 UTC m=+0.149750339 container init 5519229fe01370c92f47b0e8e46cddb6cb973cc807c8388a053612d951244964 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, architecture=x86_64, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-libvirt-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, description=Red Hat 
OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, release=1761123044, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:35:22Z, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, batch=17.1_20251118.1, name=rhosp17/openstack-nova-libvirt, vcs-type=git, tcib_managed=true) Dec 6 03:25:22 localhost podman[61300]: 2025-12-06 08:25:22.313431992 +0000 UTC m=+0.159319492 container start 5519229fe01370c92f47b0e8e46cddb6cb973cc807c8388a053612d951244964 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-11-19T00:35:22Z, vcs-type=git, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-libvirt-container, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, architecture=x86_64, vendor=Red Hat, Inc., url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.buildah.version=1.41.4) Dec 6 03:25:22 localhost systemd[1]: 
var-lib-containers-storage-overlay-39b7e1aa52a8c5999fa4a60374f7871a9f2a511ae7fd70de9cb8a012d08ea2a8-merged.mount: Deactivated successfully. Dec 6 03:25:22 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-afa881aa4ca20970e313be385764e9619d76190f61a0727580f0c398406f954f-userdata-shm.mount: Deactivated successfully. Dec 6 03:25:22 localhost python3[60795]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name ceilometer_init_log --conmon-pidfile /run/ceilometer_init_log.pid --detach=True --label config_id=tripleo_step3 --label container_name=ceilometer_init_log --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/ceilometer_init_log.log --network none --user root --volume /var/log/containers/ceilometer:/var/log/ceilometer:z registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 /bin/bash -c chown -R ceilometer:ceilometer /var/log/ceilometer Dec 6 03:25:23 localhost podman[61381]: 2025-12-06 08:25:23.013460601 +0000 UTC m=+0.066925282 container create 2914dfad5be61e80048556735bb44e4f1907a2e2df52ff8faede941ddfde7367 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, batch=17.1_20251118.1, com.redhat.component=openstack-nova-libvirt-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, io.openshift.expose-services=, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, architecture=x86_64, io.buildah.version=1.41.4, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, 
distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_virtsecretd, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, build-date=2025-11-19T00:35:22Z, vendor=Red Hat, Inc.) Dec 6 03:25:23 localhost systemd[1]: Started libpod-conmon-2914dfad5be61e80048556735bb44e4f1907a2e2df52ff8faede941ddfde7367.scope. Dec 6 03:25:23 localhost systemd[1]: Started libcrun container. Dec 6 03:25:23 localhost podman[61381]: 2025-12-06 08:25:22.984493674 +0000 UTC m=+0.037958365 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Dec 6 03:25:23 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fc0aec4e92feba574efc8a2831ff4547cdf669c9ba74f30ce1106436a335beb/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff) Dec 6 03:25:23 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fc0aec4e92feba574efc8a2831ff4547cdf669c9ba74f30ce1106436a335beb/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Dec 6 03:25:23 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fc0aec4e92feba574efc8a2831ff4547cdf669c9ba74f30ce1106436a335beb/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff) Dec 6 03:25:23 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fc0aec4e92feba574efc8a2831ff4547cdf669c9ba74f30ce1106436a335beb/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Dec 6 03:25:23 localhost kernel: xfs filesystem being remounted at 
/var/lib/containers/storage/overlay/5fc0aec4e92feba574efc8a2831ff4547cdf669c9ba74f30ce1106436a335beb/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff) Dec 6 03:25:23 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fc0aec4e92feba574efc8a2831ff4547cdf669c9ba74f30ce1106436a335beb/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff) Dec 6 03:25:23 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fc0aec4e92feba574efc8a2831ff4547cdf669c9ba74f30ce1106436a335beb/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff) Dec 6 03:25:23 localhost podman[61381]: 2025-12-06 08:25:23.095163254 +0000 UTC m=+0.148627945 container init 2914dfad5be61e80048556735bb44e4f1907a2e2df52ff8faede941ddfde7367 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, architecture=x86_64, tcib_managed=true, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-libvirt-container, managed_by=tripleo_ansible, version=17.1.12, io.buildah.version=1.41.4, config_id=tripleo_step3, container_name=nova_virtsecretd, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:35:22Z, name=rhosp17/openstack-nova-libvirt, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 6 
03:25:23 localhost podman[61381]: 2025-12-06 08:25:23.106993428 +0000 UTC m=+0.160458119 container start 2914dfad5be61e80048556735bb44e4f1907a2e2df52ff8faede941ddfde7367 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, managed_by=tripleo_ansible, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, version=17.1.12, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_virtsecretd, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, build-date=2025-11-19T00:35:22Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1761123044, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, config_id=tripleo_step3, architecture=x86_64, batch=17.1_20251118.1, name=rhosp17/openstack-nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git) Dec 6 03:25:23 localhost python3[60795]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtsecretd --cgroupns=host --conmon-pidfile /run/nova_virtsecretd.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=179caa3982511c1fd3314b961771f96c --label config_id=tripleo_step3 --label container_name=nova_virtsecretd --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtsecretd.log --network host --pid host --pids-limit 65536 --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume 
/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Dec 6 03:25:23 localhost systemd-logind[766]: Existing logind session ID 28 used by new audit session, ignoring. Dec 6 03:25:23 localhost systemd[1]: Started Session c3 of User root. Dec 6 03:25:23 localhost systemd[1]: session-c3.scope: Deactivated successfully. 
Dec 6 03:25:23 localhost podman[61517]: 2025-12-06 08:25:23.498557345 +0000 UTC m=+0.061622119 container create 77cac28f3c09b9f832ae0c5e203a7ac268e6e556a22ddb4e08fa5fd08b32fce5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, io.openshift.expose-services=, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, container_name=nova_virtnodedevd, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., version=17.1.12, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-libvirt-container, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, build-date=2025-11-19T00:35:22Z) Dec 6 03:25:23 localhost podman[61524]: 2025-12-06 08:25:23.540884693 +0000 UTC m=+0.089998740 container create b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, release=1761123044, com.redhat.component=openstack-iscsid-container, container_name=iscsid, architecture=x86_64, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, 
batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, config_id=tripleo_step3, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.4, vcs-type=git, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d) Dec 6 03:25:23 localhost systemd[1]: Started 
libpod-conmon-77cac28f3c09b9f832ae0c5e203a7ac268e6e556a22ddb4e08fa5fd08b32fce5.scope. Dec 6 03:25:23 localhost systemd[1]: Started libcrun container. Dec 6 03:25:23 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29aef16efd0d0d4913740c423de7a8c374d7bce415829f4c5401764f17811d20/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff) Dec 6 03:25:23 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29aef16efd0d0d4913740c423de7a8c374d7bce415829f4c5401764f17811d20/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff) Dec 6 03:25:23 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29aef16efd0d0d4913740c423de7a8c374d7bce415829f4c5401764f17811d20/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff) Dec 6 03:25:23 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29aef16efd0d0d4913740c423de7a8c374d7bce415829f4c5401764f17811d20/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Dec 6 03:25:23 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29aef16efd0d0d4913740c423de7a8c374d7bce415829f4c5401764f17811d20/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Dec 6 03:25:23 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29aef16efd0d0d4913740c423de7a8c374d7bce415829f4c5401764f17811d20/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff) Dec 6 03:25:23 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29aef16efd0d0d4913740c423de7a8c374d7bce415829f4c5401764f17811d20/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff) Dec 6 03:25:23 localhost podman[61517]: 2025-12-06 08:25:23.56303715 +0000 UTC m=+0.126101924 container init 77cac28f3c09b9f832ae0c5e203a7ac268e6e556a22ddb4e08fa5fd08b32fce5 
(image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, batch=17.1_20251118.1, build-date=2025-11-19T00:35:22Z, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, container_name=nova_virtnodedevd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-libvirt-container, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-libvirt, url=https://www.redhat.com, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', 
'/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, config_id=tripleo_step3, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, vendor=Red Hat, Inc., version=17.1.12) Dec 6 03:25:23 localhost podman[61517]: 2025-12-06 08:25:23.571525851 +0000 UTC m=+0.134590635 container start 77cac28f3c09b9f832ae0c5e203a7ac268e6e556a22ddb4e08fa5fd08b32fce5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, url=https://www.redhat.com, io.buildah.version=1.41.4, managed_by=tripleo_ansible, config_id=tripleo_step3, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, com.redhat.component=openstack-nova-libvirt-container, summary=Red Hat OpenStack 
Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, description=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, 
tcib_managed=true, vendor=Red Hat, Inc., container_name=nova_virtnodedevd, batch=17.1_20251118.1, build-date=2025-11-19T00:35:22Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vcs-type=git) Dec 6 03:25:23 localhost podman[61517]: 2025-12-06 08:25:23.46834462 +0000 UTC m=+0.031409414 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Dec 6 03:25:23 localhost python3[60795]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtnodedevd --cgroupns=host --conmon-pidfile /run/nova_virtnodedevd.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=179caa3982511c1fd3314b961771f96c --label config_id=tripleo_step3 --label container_name=nova_virtnodedevd --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtnodedevd.log --network host --pid host --pids-limit 65536 --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume 
/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Dec 6 03:25:23 localhost systemd[1]: Started libpod-conmon-b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.scope. Dec 6 03:25:23 localhost systemd[1]: Started libcrun container. Dec 6 03:25:23 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef5a2dfca972201470637fe24151a27f03799cbb5a942d988431f305c9ea334c/merged/etc/target supports timestamps until 2038 (0x7fffffff) Dec 6 03:25:23 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef5a2dfca972201470637fe24151a27f03799cbb5a942d988431f305c9ea334c/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff) Dec 6 03:25:23 localhost podman[61524]: 2025-12-06 08:25:23.493422888 +0000 UTC m=+0.042537005 image pull registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 Dec 6 03:25:23 localhost systemd-logind[766]: Existing logind session ID 28 used by new audit session, ignoring. Dec 6 03:25:23 localhost systemd[1]: Started Session c4 of User root. Dec 6 03:25:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. 
Dec 6 03:25:23 localhost podman[61524]: 2025-12-06 08:25:23.621002697 +0000 UTC m=+0.170116754 container init b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, container_name=iscsid, distribution-scope=public, config_id=tripleo_step3, vcs-type=git, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, build-date=2025-11-18T23:44:13Z, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, com.redhat.component=openstack-iscsid-container, batch=17.1_20251118.1, tcib_managed=true, maintainer=OpenStack TripleO Team) Dec 6 03:25:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. Dec 6 03:25:23 localhost podman[61524]: 2025-12-06 08:25:23.655711651 +0000 UTC m=+0.204825688 container start b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_id=tripleo_step3, io.buildah.version=1.41.4, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, name=rhosp17/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:44:13Z, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, container_name=iscsid, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, vcs-type=git) Dec 6 03:25:23 localhost systemd-logind[766]: Existing logind session ID 28 used by new audit session, ignoring. Dec 6 03:25:23 localhost systemd[1]: Started Session c5 of User root. 
Dec 6 03:25:23 localhost python3[60795]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name iscsid --conmon-pidfile /run/iscsid.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=18576754feb36b85b5c8742ad9b5643d --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step3 --label container_name=iscsid --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/iscsid.log --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume 
/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run:/run --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro --volume /etc/target:/etc/target:z --volume /var/lib/iscsi:/var/lib/iscsi:z registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 Dec 6 03:25:23 localhost systemd[1]: session-c4.scope: Deactivated successfully. Dec 6 03:25:23 localhost podman[61578]: 2025-12-06 08:25:23.724206919 +0000 UTC m=+0.060951259 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=starting, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, container_name=iscsid, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vcs-type=git, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-iscsid) Dec 6 03:25:23 localhost systemd[1]: session-c5.scope: Deactivated successfully. Dec 6 03:25:23 localhost kernel: Loading iSCSI transport class v2.0-870. 
Dec 6 03:25:23 localhost podman[61578]: 2025-12-06 08:25:23.759257823 +0000 UTC m=+0.096002183 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, container_name=iscsid, 
com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com, release=1761123044, architecture=x86_64, name=rhosp17/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, build-date=2025-11-18T23:44:13Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.buildah.version=1.41.4, vendor=Red Hat, Inc., managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git) Dec 6 03:25:23 localhost podman[61578]: unhealthy Dec 6 03:25:23 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Main process exited, code=exited, status=1/FAILURE Dec 6 03:25:23 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Failed with result 'exit-code'. Dec 6 03:25:25 localhost podman[61693]: 2025-12-06 08:25:25.117125989 +0000 UTC m=+0.101517271 container create 92a0134fb6ae7fa0506c791c4569a09e7c0cdb7fcb636d7ea6233a4978e1275d (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, io.buildah.version=1.41.4, distribution-scope=public, build-date=2025-11-19T00:35:22Z, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, io.openshift.expose-services=, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, 
url=https://www.redhat.com, release=1761123044, com.redhat.component=openstack-nova-libvirt-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-libvirt, container_name=nova_virtstoraged, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt) Dec 6 03:25:25 localhost podman[61693]: 2025-12-06 08:25:25.074229895 +0000 UTC m=+0.058621187 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Dec 6 03:25:25 localhost systemd[1]: Started libpod-conmon-92a0134fb6ae7fa0506c791c4569a09e7c0cdb7fcb636d7ea6233a4978e1275d.scope. Dec 6 03:25:25 localhost systemd[1]: Started libcrun container. Dec 6 03:25:25 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3718780a2d803326e6f6ad6743b607a5e1167128961c2a398f3fd685de43043/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff) Dec 6 03:25:25 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3718780a2d803326e6f6ad6743b607a5e1167128961c2a398f3fd685de43043/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Dec 6 03:25:25 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3718780a2d803326e6f6ad6743b607a5e1167128961c2a398f3fd685de43043/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff) Dec 6 03:25:25 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3718780a2d803326e6f6ad6743b607a5e1167128961c2a398f3fd685de43043/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Dec 6 03:25:25 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3718780a2d803326e6f6ad6743b607a5e1167128961c2a398f3fd685de43043/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff) Dec 6 03:25:25 localhost kernel: xfs filesystem being remounted at 
/var/lib/containers/storage/overlay/c3718780a2d803326e6f6ad6743b607a5e1167128961c2a398f3fd685de43043/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff) Dec 6 03:25:25 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3718780a2d803326e6f6ad6743b607a5e1167128961c2a398f3fd685de43043/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff) Dec 6 03:25:25 localhost podman[61693]: 2025-12-06 08:25:25.221978752 +0000 UTC m=+0.206370044 container init 92a0134fb6ae7fa0506c791c4569a09e7c0cdb7fcb636d7ea6233a4978e1275d (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, name=rhosp17/openstack-nova-libvirt, url=https://www.redhat.com, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, build-date=2025-11-19T00:35:22Z, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step3, architecture=x86_64, io.openshift.expose-services=, com.redhat.component=openstack-nova-libvirt-container, container_name=nova_virtstoraged, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team) Dec 6 03:25:25 localhost podman[61693]: 2025-12-06 08:25:25.229871733 +0000 UTC m=+0.214263025 container start 92a0134fb6ae7fa0506c791c4569a09e7c0cdb7fcb636d7ea6233a4978e1275d 
(image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-libvirt-container, name=rhosp17/openstack-nova-libvirt, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-type=git, url=https://www.redhat.com, distribution-scope=public, maintainer=OpenStack TripleO Team, release=1761123044, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:35:22Z, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtstoraged, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, vendor=Red Hat, Inc.) 
Dec 6 03:25:25 localhost python3[60795]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtstoraged --cgroupns=host --conmon-pidfile /run/nova_virtstoraged.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=179caa3982511c1fd3314b961771f96c --label config_id=tripleo_step3 --label container_name=nova_virtstoraged --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtstoraged.log --network host --pid host --pids-limit 65536 --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro 
registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Dec 6 03:25:25 localhost systemd-logind[766]: Existing logind session ID 28 used by new audit session, ignoring. Dec 6 03:25:25 localhost systemd[1]: Started Session c6 of User root. Dec 6 03:25:25 localhost systemd[1]: session-c6.scope: Deactivated successfully. Dec 6 03:25:25 localhost podman[61795]: 2025-12-06 08:25:25.692895272 +0000 UTC m=+0.063646372 container create e444006757a84d45c953d1ef31bc530b8507f3f86e52f8ba7761eaf744cfae6a (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, architecture=x86_64, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20251118.1, io.buildah.version=1.41.4, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-libvirt-container, build-date=2025-11-19T00:35:22Z, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1761123044, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, distribution-scope=public, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, name=rhosp17/openstack-nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_virtqemud, vcs-type=git, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 6 03:25:25 localhost systemd[1]: Started libpod-conmon-e444006757a84d45c953d1ef31bc530b8507f3f86e52f8ba7761eaf744cfae6a.scope. 
Dec 6 03:25:25 localhost podman[61795]: 2025-12-06 08:25:25.65532777 +0000 UTC m=+0.026078930 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Dec 6 03:25:25 localhost systemd[1]: Started libcrun container. Dec 6 03:25:25 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9893e3b6825fe8589fe9eca74d23476479efd735e739d26cb203759d2b267e35/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff) Dec 6 03:25:25 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9893e3b6825fe8589fe9eca74d23476479efd735e739d26cb203759d2b267e35/merged/var/log/swtpm supports timestamps until 2038 (0x7fffffff) Dec 6 03:25:25 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9893e3b6825fe8589fe9eca74d23476479efd735e739d26cb203759d2b267e35/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff) Dec 6 03:25:25 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9893e3b6825fe8589fe9eca74d23476479efd735e739d26cb203759d2b267e35/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff) Dec 6 03:25:25 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9893e3b6825fe8589fe9eca74d23476479efd735e739d26cb203759d2b267e35/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Dec 6 03:25:25 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9893e3b6825fe8589fe9eca74d23476479efd735e739d26cb203759d2b267e35/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Dec 6 03:25:25 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9893e3b6825fe8589fe9eca74d23476479efd735e739d26cb203759d2b267e35/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff) Dec 6 03:25:25 localhost kernel: xfs filesystem being remounted at 
/var/lib/containers/storage/overlay/9893e3b6825fe8589fe9eca74d23476479efd735e739d26cb203759d2b267e35/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff) Dec 6 03:25:25 localhost podman[61795]: 2025-12-06 08:25:25.784085015 +0000 UTC m=+0.154836105 container init e444006757a84d45c953d1ef31bc530b8507f3f86e52f8ba7761eaf744cfae6a (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, architecture=x86_64, version=17.1.12, vcs-type=git, config_id=tripleo_step3, com.redhat.component=openstack-nova-libvirt-container, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.buildah.version=1.41.4, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, name=rhosp17/openstack-nova-libvirt, build-date=2025-11-19T00:35:22Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 
'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, url=https://www.redhat.com, release=1761123044, container_name=nova_virtqemud, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt) Dec 6 03:25:25 localhost podman[61795]: 2025-12-06 08:25:25.802546381 +0000 UTC m=+0.173297481 container start e444006757a84d45c953d1ef31bc530b8507f3f86e52f8ba7761eaf744cfae6a (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, config_id=tripleo_step3, url=https://www.redhat.com, distribution-scope=public, io.buildah.version=1.41.4, config_data={'cgroupns': 'host', 'depends_on': 
['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, managed_by=tripleo_ansible, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-libvirt-container, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:35:22Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, release=1761123044, container_name=nova_virtqemud, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc.) Dec 6 03:25:25 localhost python3[60795]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtqemud --cgroupns=host --conmon-pidfile /run/nova_virtqemud.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=179caa3982511c1fd3314b961771f96c --label config_id=tripleo_step3 --label container_name=nova_virtqemud --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtqemud.log --network host --pid host --pids-limit 65536 --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume 
/dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro --volume /var/log/containers/libvirt/swtpm:/var/log/swtpm:z registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Dec 6 03:25:25 localhost systemd-logind[766]: Existing logind session ID 28 used by new audit session, ignoring. Dec 6 03:25:25 localhost systemd[1]: Started Session c7 of User root. Dec 6 03:25:25 localhost systemd[1]: session-c7.scope: Deactivated successfully. 
Dec 6 03:25:26 localhost podman[61903]: 2025-12-06 08:25:26.313198178 +0000 UTC m=+0.087806751 container create abf33a7ce64d174f5aeca10ae9ef2b118248dbae4e0fd3e1c43527aa9d26cefa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, build-date=2025-11-19T00:35:22Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-libvirt, container_name=nova_virtproxyd, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-libvirt-container, vendor=Red Hat, Inc., managed_by=tripleo_ansible, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, release=1761123044, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, batch=17.1_20251118.1, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 6 03:25:26 localhost systemd[1]: Started libpod-conmon-abf33a7ce64d174f5aeca10ae9ef2b118248dbae4e0fd3e1c43527aa9d26cefa.scope. Dec 6 03:25:26 localhost podman[61903]: 2025-12-06 08:25:26.269795158 +0000 UTC m=+0.044403781 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Dec 6 03:25:26 localhost systemd[1]: Started libcrun container. 
Dec 6 03:25:26 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c3c28666d804509c2b20602368c8cd77799587988a2d1a1789a75cb16c60a47/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff) Dec 6 03:25:26 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c3c28666d804509c2b20602368c8cd77799587988a2d1a1789a75cb16c60a47/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Dec 6 03:25:26 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c3c28666d804509c2b20602368c8cd77799587988a2d1a1789a75cb16c60a47/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff) Dec 6 03:25:26 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c3c28666d804509c2b20602368c8cd77799587988a2d1a1789a75cb16c60a47/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff) Dec 6 03:25:26 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c3c28666d804509c2b20602368c8cd77799587988a2d1a1789a75cb16c60a47/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff) Dec 6 03:25:26 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c3c28666d804509c2b20602368c8cd77799587988a2d1a1789a75cb16c60a47/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Dec 6 03:25:26 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c3c28666d804509c2b20602368c8cd77799587988a2d1a1789a75cb16c60a47/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff) Dec 6 03:25:26 localhost podman[61903]: 2025-12-06 08:25:26.386415661 +0000 UTC m=+0.161024254 container init abf33a7ce64d174f5aeca10ae9ef2b118248dbae4e0fd3e1c43527aa9d26cefa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, 
config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, managed_by=tripleo_ansible, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, container_name=nova_virtproxyd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, 
org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-libvirt-container, io.buildah.version=1.41.4, architecture=x86_64, vendor=Red Hat, Inc., distribution-scope=public, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-libvirt, batch=17.1_20251118.1, build-date=2025-11-19T00:35:22Z, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com) Dec 6 03:25:26 localhost podman[61903]: 2025-12-06 08:25:26.395794879 +0000 UTC m=+0.170403482 container start abf33a7ce64d174f5aeca10ae9ef2b118248dbae4e0fd3e1c43527aa9d26cefa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_virtproxyd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, name=rhosp17/openstack-nova-libvirt, architecture=x86_64, managed_by=tripleo_ansible, config_id=tripleo_step3, build-date=2025-11-19T00:35:22Z, version=17.1.12, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, vcs-type=git, distribution-scope=public) Dec 6 03:25:26 localhost python3[60795]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtproxyd --cgroupns=host --conmon-pidfile /run/nova_virtproxyd.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=179caa3982511c1fd3314b961771f96c --label config_id=tripleo_step3 --label container_name=nova_virtproxyd --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', 
'/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtproxyd.log --network host --pid host --pids-limit 65536 --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets 
--volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Dec 6 03:25:26 localhost systemd[1]: tmp-crun.tP470m.mount: Deactivated successfully. Dec 6 03:25:26 localhost systemd-logind[766]: Existing logind session ID 28 used by new audit session, ignoring. Dec 6 03:25:26 localhost systemd[1]: Started Session c8 of User root. Dec 6 03:25:26 localhost systemd[1]: session-c8.scope: Deactivated successfully. Dec 6 03:25:27 localhost python3[61984]: ansible-file Invoked with path=/etc/systemd/system/tripleo_collectd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 03:25:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. Dec 6 03:25:27 localhost systemd[1]: tmp-crun.wVdgkN.mount: Deactivated successfully. 
Dec 6 03:25:27 localhost podman[62000]: 2025-12-06 08:25:27.309922828 +0000 UTC m=+0.103822182 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, release=1761123044, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, io.openshift.expose-services=, build-date=2025-11-18T22:49:46Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.buildah.version=1.41.4, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.12, vcs-type=git) Dec 6 03:25:27 localhost python3[62001]: ansible-file Invoked with path=/etc/systemd/system/tripleo_iscsid.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 03:25:27 localhost podman[62000]: 2025-12-06 08:25:27.512978919 +0000 UTC m=+0.306878193 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 
1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, build-date=2025-11-18T22:49:46Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, tcib_managed=true, release=1761123044, io.openshift.expose-services=, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12) Dec 6 03:25:27 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. 
Dec 6 03:25:27 localhost python3[62045]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 03:25:27 localhost python3[62061]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 03:25:28 localhost python3[62077]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 03:25:28 localhost python3[62093]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 03:25:28 localhost python3[62109]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 03:25:28 localhost python3[62125]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 03:25:29 localhost python3[62141]: ansible-file Invoked with path=/etc/systemd/system/tripleo_rsyslog.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 03:25:29 localhost python3[62157]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_collectd_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 6 03:25:29 localhost python3[62173]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_iscsid_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 6 03:25:29 localhost sshd[62189]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 03:25:29 localhost python3[62190]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 6 03:25:30 localhost python3[62206]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 6 03:25:30 localhost python3[62222]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 6 03:25:30 localhost python3[62238]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 6 03:25:30 localhost python3[62254]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 6 03:25:31 localhost python3[62270]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 6 03:25:31 localhost python3[62286]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_rsyslog_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 6 03:25:31 localhost python3[62347]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765009531.4189758-100154-124419602684867/source dest=/etc/systemd/system/tripleo_collectd.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 03:25:32 localhost python3[62376]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765009531.4189758-100154-124419602684867/source dest=/etc/systemd/system/tripleo_iscsid.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 03:25:33 localhost python3[62405]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765009531.4189758-100154-124419602684867/source dest=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 03:25:33 localhost python3[62434]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765009531.4189758-100154-124419602684867/source dest=/etc/systemd/system/tripleo_nova_virtnodedevd.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 03:25:34 localhost python3[62463]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765009531.4189758-100154-124419602684867/source dest=/etc/systemd/system/tripleo_nova_virtproxyd.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 03:25:34 localhost python3[62492]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765009531.4189758-100154-124419602684867/source dest=/etc/systemd/system/tripleo_nova_virtqemud.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 03:25:35 localhost python3[62521]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765009531.4189758-100154-124419602684867/source dest=/etc/systemd/system/tripleo_nova_virtsecretd.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 03:25:35 localhost python3[62550]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765009531.4189758-100154-124419602684867/source dest=/etc/systemd/system/tripleo_nova_virtstoraged.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 03:25:36 localhost python3[62579]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765009531.4189758-100154-124419602684867/source dest=/etc/systemd/system/tripleo_rsyslog.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 03:25:36 localhost python3[62595]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 6 03:25:36 localhost systemd[1]: Reloading.
Dec 6 03:25:36 localhost systemd-rc-local-generator[62621]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 6 03:25:36 localhost systemd-sysv-generator[62624]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 6 03:25:36 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 6 03:25:36 localhost systemd[1]: Stopping User Manager for UID 0...
Dec 6 03:25:36 localhost systemd[61115]: Activating special unit Exit the Session...
Dec 6 03:25:36 localhost systemd[61115]: Stopped target Main User Target.
Dec 6 03:25:36 localhost systemd[61115]: Stopped target Basic System.
Dec 6 03:25:36 localhost systemd[61115]: Stopped target Paths.
Dec 6 03:25:36 localhost systemd[61115]: Stopped target Sockets.
Dec 6 03:25:36 localhost systemd[61115]: Stopped target Timers.
Dec 6 03:25:36 localhost systemd[61115]: Stopped Daily Cleanup of User's Temporary Directories.
Dec 6 03:25:36 localhost systemd[61115]: Closed D-Bus User Message Bus Socket.
Dec 6 03:25:36 localhost systemd[61115]: Stopped Create User's Volatile Files and Directories.
Dec 6 03:25:36 localhost systemd[61115]: Removed slice User Application Slice.
Dec 6 03:25:36 localhost systemd[61115]: Reached target Shutdown.
Dec 6 03:25:36 localhost systemd[61115]: Finished Exit the Session.
Dec 6 03:25:36 localhost systemd[61115]: Reached target Exit the Session.
Dec 6 03:25:36 localhost systemd[1]: user@0.service: Deactivated successfully.
Dec 6 03:25:36 localhost systemd[1]: Stopped User Manager for UID 0.
Dec 6 03:25:36 localhost systemd[1]: Stopping User Runtime Directory /run/user/0...
Dec 6 03:25:36 localhost systemd[1]: run-user-0.mount: Deactivated successfully.
Dec 6 03:25:36 localhost systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Dec 6 03:25:36 localhost systemd[1]: Stopped User Runtime Directory /run/user/0.
Dec 6 03:25:36 localhost systemd[1]: Removed slice User Slice of UID 0.
Dec 6 03:25:37 localhost python3[62648]: ansible-systemd Invoked with state=restarted name=tripleo_collectd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 6 03:25:37 localhost systemd[1]: Reloading.
Dec 6 03:25:37 localhost systemd-sysv-generator[62678]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 6 03:25:37 localhost systemd-rc-local-generator[62675]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 6 03:25:37 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 6 03:25:37 localhost systemd[1]: Starting collectd container...
Dec 6 03:25:37 localhost systemd[1]: Started collectd container.
Dec 6 03:25:38 localhost python3[62714]: ansible-systemd Invoked with state=restarted name=tripleo_iscsid.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 6 03:25:38 localhost systemd[1]: Reloading.
Dec 6 03:25:38 localhost systemd-sysv-generator[62743]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 6 03:25:38 localhost systemd-rc-local-generator[62739]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 6 03:25:38 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 6 03:25:38 localhost systemd[1]: Starting iscsid container...
Dec 6 03:25:38 localhost systemd[1]: Started iscsid container.
Dec 6 03:25:39 localhost python3[62780]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtlogd_wrapper.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 6 03:25:39 localhost systemd[1]: Reloading.
Dec 6 03:25:39 localhost systemd-sysv-generator[62810]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 6 03:25:39 localhost systemd-rc-local-generator[62806]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 6 03:25:39 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 6 03:25:39 localhost systemd[1]: Starting nova_virtlogd_wrapper container...
Dec 6 03:25:39 localhost systemd[1]: Started nova_virtlogd_wrapper container.
Dec 6 03:25:40 localhost python3[62847]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtnodedevd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 6 03:25:40 localhost systemd[1]: Reloading.
Dec 6 03:25:40 localhost systemd-rc-local-generator[62873]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 6 03:25:40 localhost systemd-sysv-generator[62879]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 6 03:25:40 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 6 03:25:40 localhost systemd[1]: Starting nova_virtnodedevd container...
Dec 6 03:25:41 localhost tripleo-start-podman-container[62887]: Creating additional drop-in dependency for "nova_virtnodedevd" (77cac28f3c09b9f832ae0c5e203a7ac268e6e556a22ddb4e08fa5fd08b32fce5)
Dec 6 03:25:41 localhost systemd[1]: Reloading.
Dec 6 03:25:41 localhost systemd-rc-local-generator[62943]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 6 03:25:41 localhost systemd-sysv-generator[62946]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 6 03:25:41 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 6 03:25:41 localhost systemd[1]: Started nova_virtnodedevd container.
Dec 6 03:25:42 localhost python3[62970]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtproxyd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 6 03:25:42 localhost systemd[1]: Reloading.
Dec 6 03:25:42 localhost systemd-rc-local-generator[63001]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 6 03:25:42 localhost systemd-sysv-generator[63005]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 6 03:25:42 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 6 03:25:42 localhost systemd[1]: Starting nova_virtproxyd container...
Dec 6 03:25:42 localhost tripleo-start-podman-container[63010]: Creating additional drop-in dependency for "nova_virtproxyd" (abf33a7ce64d174f5aeca10ae9ef2b118248dbae4e0fd3e1c43527aa9d26cefa)
Dec 6 03:25:42 localhost systemd[1]: Reloading.
Dec 6 03:25:42 localhost systemd-rc-local-generator[63072]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 6 03:25:42 localhost systemd-sysv-generator[63075]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 6 03:25:42 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 6 03:25:42 localhost systemd[1]: Started nova_virtproxyd container.
Dec 6 03:25:43 localhost python3[63096]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtqemud.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 6 03:25:44 localhost systemd[1]: Reloading.
Dec 6 03:25:44 localhost systemd-sysv-generator[63128]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 6 03:25:44 localhost systemd-rc-local-generator[63123]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 6 03:25:44 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 6 03:25:44 localhost systemd[1]: Starting nova_virtqemud container...
Dec 6 03:25:45 localhost tripleo-start-podman-container[63136]: Creating additional drop-in dependency for "nova_virtqemud" (e444006757a84d45c953d1ef31bc530b8507f3f86e52f8ba7761eaf744cfae6a)
Dec 6 03:25:45 localhost systemd[1]: Reloading.
Dec 6 03:25:45 localhost systemd-rc-local-generator[63196]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 6 03:25:45 localhost systemd-sysv-generator[63200]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 6 03:25:45 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 6 03:25:45 localhost systemd[1]: Started nova_virtqemud container.
Dec 6 03:25:45 localhost python3[63221]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtsecretd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 6 03:25:46 localhost systemd[1]: Reloading.
Dec 6 03:25:46 localhost systemd-rc-local-generator[63246]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 6 03:25:46 localhost systemd-sysv-generator[63252]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 6 03:25:46 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 6 03:25:46 localhost systemd[1]: Starting nova_virtsecretd container...
Dec 6 03:25:46 localhost tripleo-start-podman-container[63261]: Creating additional drop-in dependency for "nova_virtsecretd" (2914dfad5be61e80048556735bb44e4f1907a2e2df52ff8faede941ddfde7367)
Dec 6 03:25:46 localhost systemd[1]: Reloading.
Dec 6 03:25:46 localhost systemd-rc-local-generator[63317]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 6 03:25:46 localhost systemd-sysv-generator[63323]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 6 03:25:46 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 6 03:25:46 localhost systemd[1]: Started nova_virtsecretd container.
Dec 6 03:25:47 localhost python3[63344]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtstoraged.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 6 03:25:47 localhost systemd[1]: Reloading.
Dec 6 03:25:47 localhost systemd-sysv-generator[63370]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 6 03:25:47 localhost systemd-rc-local-generator[63367]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 6 03:25:47 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 6 03:25:47 localhost systemd[1]: Starting nova_virtstoraged container...
Dec 6 03:25:48 localhost tripleo-start-podman-container[63384]: Creating additional drop-in dependency for "nova_virtstoraged" (92a0134fb6ae7fa0506c791c4569a09e7c0cdb7fcb636d7ea6233a4978e1275d)
Dec 6 03:25:48 localhost systemd[1]: Reloading.
Dec 6 03:25:48 localhost systemd-rc-local-generator[63440]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 6 03:25:48 localhost systemd-sysv-generator[63443]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 6 03:25:48 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 6 03:25:48 localhost systemd[1]: Started nova_virtstoraged container.
Dec 6 03:25:49 localhost python3[63467]: ansible-systemd Invoked with state=restarted name=tripleo_rsyslog.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 6 03:25:49 localhost systemd[1]: Reloading.
Dec 6 03:25:49 localhost systemd-rc-local-generator[63496]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 6 03:25:49 localhost systemd-sysv-generator[63501]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 6 03:25:49 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 6 03:25:49 localhost systemd[1]: Starting rsyslog container...
Dec 6 03:25:49 localhost systemd[1]: tmp-crun.XJui79.mount: Deactivated successfully.
Dec 6 03:25:49 localhost systemd[1]: Started libcrun container.
Dec 6 03:25:49 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39b7e1aa52a8c5999fa4a60374f7871a9f2a511ae7fd70de9cb8a012d08ea2a8/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff)
Dec 6 03:25:49 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39b7e1aa52a8c5999fa4a60374f7871a9f2a511ae7fd70de9cb8a012d08ea2a8/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff)
Dec 6 03:25:49 localhost podman[63507]: 2025-12-06 08:25:49.685818382 +0000 UTC m=+0.136390400 container init afa881aa4ca20970e313be385764e9619d76190f61a0727580f0c398406f954f (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.openshift.expose-services=, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 rsyslog, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7a657a42c3cbd75086c59cf211d6fafe'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, architecture=x86_64, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, name=rhosp17/openstack-rsyslog, com.redhat.component=openstack-rsyslog-container, distribution-scope=public, build-date=2025-11-18T22:49:49Z, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, container_name=rsyslog, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, vcs-type=git, vendor=Red Hat, Inc., release=1761123044)
Dec 6 03:25:49 localhost podman[63507]: 2025-12-06 08:25:49.694332993 +0000 UTC m=+0.144905011 container start afa881aa4ca20970e313be385764e9619d76190f61a0727580f0c398406f954f (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-rsyslog-container, maintainer=OpenStack TripleO Team, release=1761123044, batch=17.1_20251118.1, build-date=2025-11-18T22:49:49Z, container_name=rsyslog, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7a657a42c3cbd75086c59cf211d6fafe'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., name=rhosp17/openstack-rsyslog, version=17.1.12, tcib_managed=true)
Dec 6 03:25:49 localhost podman[63507]: rsyslog
Dec 6 03:25:49 localhost systemd[1]: Started rsyslog container.
Dec 6 03:25:49 localhost systemd[1]: libpod-afa881aa4ca20970e313be385764e9619d76190f61a0727580f0c398406f954f.scope: Deactivated successfully.
Dec 6 03:25:49 localhost podman[63529]: 2025-12-06 08:25:49.842063249 +0000 UTC m=+0.050108636 container died afa881aa4ca20970e313be385764e9619d76190f61a0727580f0c398406f954f (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7a657a42c3cbd75086c59cf211d6fafe'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, config_id=tripleo_step3, container_name=rsyslog, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.expose-services=, managed_by=tripleo_ansible, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, build-date=2025-11-18T22:49:49Z, url=https://www.redhat.com, version=17.1.12, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-rsyslog-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-rsyslog, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git)
Dec 6 03:25:49 localhost podman[63529]: 2025-12-06 08:25:49.91485839 +0000 UTC m=+0.122903737 container cleanup afa881aa4ca20970e313be385764e9619d76190f61a0727580f0c398406f954f (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, tcib_managed=true, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 rsyslog, batch=17.1_20251118.1, container_name=rsyslog, name=rhosp17/openstack-rsyslog, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 rsyslog, release=1761123044, io.buildah.version=1.41.4, config_id=tripleo_step3, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:49Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, com.redhat.component=openstack-rsyslog-container, maintainer=OpenStack TripleO Team, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7a657a42c3cbd75086c59cf211d6fafe'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog)
Dec 6 03:25:49 localhost systemd[1]: tripleo_rsyslog.service: Main process exited, code=exited, status=1/FAILURE
Dec 6 03:25:50 localhost podman[63557]: 2025-12-06 08:25:50.00654819 +0000 UTC m=+0.058460163 container cleanup afa881aa4ca20970e313be385764e9619d76190f61a0727580f0c398406f954f (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7a657a42c3cbd75086c59cf211d6fafe'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, name=rhosp17/openstack-rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, container_name=rsyslog, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, build-date=2025-11-18T22:49:49Z, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 rsyslog, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, config_id=tripleo_step3, io.buildah.version=1.41.4, vendor=Red Hat,
Inc., maintainer=OpenStack TripleO Team, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 rsyslog, managed_by=tripleo_ansible, com.redhat.component=openstack-rsyslog-container, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, release=1761123044) Dec 6 03:25:50 localhost podman[63557]: rsyslog Dec 6 03:25:50 localhost systemd[1]: tripleo_rsyslog.service: Failed with result 'exit-code'. Dec 6 03:25:50 localhost python3[63583]: ansible-file Invoked with path=/var/lib/container-puppet/container-puppet-tasks3.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 03:25:50 localhost systemd[1]: tripleo_rsyslog.service: Scheduled restart job, restart counter is at 1. Dec 6 03:25:50 localhost systemd[1]: Stopped rsyslog container. Dec 6 03:25:50 localhost systemd[1]: Starting rsyslog container... Dec 6 03:25:50 localhost systemd[1]: Started libcrun container. 
Dec 6 03:25:50 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39b7e1aa52a8c5999fa4a60374f7871a9f2a511ae7fd70de9cb8a012d08ea2a8/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff) Dec 6 03:25:50 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39b7e1aa52a8c5999fa4a60374f7871a9f2a511ae7fd70de9cb8a012d08ea2a8/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff) Dec 6 03:25:50 localhost podman[63584]: 2025-12-06 08:25:50.451945487 +0000 UTC m=+0.108319100 container init afa881aa4ca20970e313be385764e9619d76190f61a0727580f0c398406f954f (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-rsyslog, url=https://www.redhat.com, com.redhat.component=openstack-rsyslog-container, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 rsyslog, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7a657a42c3cbd75086c59cf211d6fafe'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, container_name=rsyslog, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:49Z, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 rsyslog, managed_by=tripleo_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, version=17.1.12, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, release=1761123044, distribution-scope=public, config_id=tripleo_step3) Dec 6 03:25:50 localhost podman[63584]: 2025-12-06 08:25:50.461229341 +0000 UTC m=+0.117602904 container start afa881aa4ca20970e313be385764e9619d76190f61a0727580f0c398406f954f (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, managed_by=tripleo_ansible, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, name=rhosp17/openstack-rsyslog, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, vcs-type=git, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp 
openstack osp-17.1 openstack-rsyslog, release=1761123044, architecture=x86_64, io.buildah.version=1.41.4, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7a657a42c3cbd75086c59cf211d6fafe'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, build-date=2025-11-18T22:49:49Z, summary=Red Hat OpenStack Platform 17.1 rsyslog, container_name=rsyslog, com.redhat.component=openstack-rsyslog-container) Dec 6 03:25:50 localhost podman[63584]: rsyslog Dec 6 03:25:50 localhost systemd[1]: Started rsyslog container. Dec 6 03:25:50 localhost systemd[1]: libpod-afa881aa4ca20970e313be385764e9619d76190f61a0727580f0c398406f954f.scope: Deactivated successfully. 
Dec 6 03:25:50 localhost podman[63613]: 2025-12-06 08:25:50.615008583 +0000 UTC m=+0.054754918 container died afa881aa4ca20970e313be385764e9619d76190f61a0727580f0c398406f954f (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, container_name=rsyslog, build-date=2025-11-18T22:49:49Z, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, name=rhosp17/openstack-rsyslog, io.buildah.version=1.41.4, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 rsyslog, architecture=x86_64, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7a657a42c3cbd75086c59cf211d6fafe'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, vcs-type=git, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 rsyslog, tcib_managed=true, config_id=tripleo_step3, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-rsyslog-container, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, vendor=Red Hat, Inc.) Dec 6 03:25:50 localhost systemd[1]: tmp-crun.g0APtl.mount: Deactivated successfully. Dec 6 03:25:50 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-afa881aa4ca20970e313be385764e9619d76190f61a0727580f0c398406f954f-userdata-shm.mount: Deactivated successfully. Dec 6 03:25:50 localhost podman[63613]: 2025-12-06 08:25:50.650988836 +0000 UTC m=+0.090735151 container cleanup afa881aa4ca20970e313be385764e9619d76190f61a0727580f0c398406f954f (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7a657a42c3cbd75086c59cf211d6fafe'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, io.buildah.version=1.41.4, io.openshift.expose-services=, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, vcs-type=git, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, release=1761123044, name=rhosp17/openstack-rsyslog, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, distribution-scope=public, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, com.redhat.component=openstack-rsyslog-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=rsyslog, build-date=2025-11-18T22:49:49Z, description=Red Hat OpenStack Platform 17.1 rsyslog, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc.) 
Dec 6 03:25:50 localhost systemd[1]: tripleo_rsyslog.service: Main process exited, code=exited, status=1/FAILURE Dec 6 03:25:50 localhost podman[63652]: 2025-12-06 08:25:50.72978835 +0000 UTC m=+0.047604049 container cleanup afa881aa4ca20970e313be385764e9619d76190f61a0727580f0c398406f954f (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, com.redhat.component=openstack-rsyslog-container, io.buildah.version=1.41.4, url=https://www.redhat.com, vcs-type=git, name=rhosp17/openstack-rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7a657a42c3cbd75086c59cf211d6fafe'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, config_id=tripleo_step3, container_name=rsyslog, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:49Z, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 rsyslog, 
vendor=Red Hat, Inc., architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, distribution-scope=public) Dec 6 03:25:50 localhost podman[63652]: rsyslog Dec 6 03:25:50 localhost systemd[1]: tripleo_rsyslog.service: Failed with result 'exit-code'. Dec 6 03:25:51 localhost systemd[1]: tripleo_rsyslog.service: Scheduled restart job, restart counter is at 2. Dec 6 03:25:51 localhost systemd[1]: Stopped rsyslog container. Dec 6 03:25:51 localhost systemd[1]: Starting rsyslog container... Dec 6 03:25:51 localhost systemd[1]: Started libcrun container. 
Dec 6 03:25:51 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39b7e1aa52a8c5999fa4a60374f7871a9f2a511ae7fd70de9cb8a012d08ea2a8/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff) Dec 6 03:25:51 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39b7e1aa52a8c5999fa4a60374f7871a9f2a511ae7fd70de9cb8a012d08ea2a8/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff) Dec 6 03:25:51 localhost podman[63709]: 2025-12-06 08:25:51.200096341 +0000 UTC m=+0.112749735 container init afa881aa4ca20970e313be385764e9619d76190f61a0727580f0c398406f954f (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, vendor=Red Hat, Inc., vcs-type=git, batch=17.1_20251118.1, container_name=rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, architecture=x86_64, com.redhat.component=openstack-rsyslog-container, url=https://www.redhat.com, distribution-scope=public, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:49Z, version=17.1.12, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'7a657a42c3cbd75086c59cf211d6fafe'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 6 03:25:51 localhost podman[63709]: 2025-12-06 08:25:51.208013594 +0000 UTC m=+0.120666998 container start afa881aa4ca20970e313be385764e9619d76190f61a0727580f0c398406f954f (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7a657a42c3cbd75086c59cf211d6fafe'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.component=openstack-rsyslog-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 rsyslog, vcs-type=git, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, config_id=tripleo_step3, version=17.1.12, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, container_name=rsyslog, distribution-scope=public, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, vendor=Red Hat, Inc., io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-rsyslog, build-date=2025-11-18T22:49:49Z, description=Red Hat OpenStack Platform 17.1 rsyslog, release=1761123044, url=https://www.redhat.com) Dec 6 03:25:51 localhost podman[63709]: rsyslog Dec 6 03:25:51 localhost systemd[1]: Started rsyslog container. Dec 6 03:25:51 localhost systemd[1]: libpod-afa881aa4ca20970e313be385764e9619d76190f61a0727580f0c398406f954f.scope: Deactivated successfully. 
Dec 6 03:25:51 localhost podman[63747]: 2025-12-06 08:25:51.347076004 +0000 UTC m=+0.036057075 container died afa881aa4ca20970e313be385764e9619d76190f61a0727580f0c398406f954f (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, config_id=tripleo_step3, distribution-scope=public, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:49Z, com.redhat.component=openstack-rsyslog-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7a657a42c3cbd75086c59cf211d6fafe'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 rsyslog, vcs-type=git, batch=17.1_20251118.1, name=rhosp17/openstack-rsyslog, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 rsyslog, version=17.1.12, managed_by=tripleo_ansible, release=1761123044, container_name=rsyslog, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 6 03:25:51 localhost podman[63747]: 2025-12-06 08:25:51.370519353 +0000 UTC m=+0.059500404 container cleanup afa881aa4ca20970e313be385764e9619d76190f61a0727580f0c398406f954f (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, distribution-scope=public, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, batch=17.1_20251118.1, config_id=tripleo_step3, com.redhat.component=openstack-rsyslog-container, container_name=rsyslog, name=rhosp17/openstack-rsyslog, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 rsyslog, vcs-type=git, build-date=2025-11-18T22:49:49Z, architecture=x86_64, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7a657a42c3cbd75086c59cf211d6fafe'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, tcib_managed=true, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 rsyslog, io.buildah.version=1.41.4, url=https://www.redhat.com, version=17.1.12) Dec 6 03:25:51 localhost systemd[1]: tripleo_rsyslog.service: Main process exited, code=exited, status=1/FAILURE Dec 6 03:25:51 localhost podman[63761]: 2025-12-06 08:25:51.447840592 +0000 UTC m=+0.051260652 container cleanup afa881aa4ca20970e313be385764e9619d76190f61a0727580f0c398406f954f (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:49Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 rsyslog, version=17.1.12, tcib_managed=true, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-rsyslog-container, release=1761123044, container_name=rsyslog, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7a657a42c3cbd75086c59cf211d6fafe'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, io.buildah.version=1.41.4, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, batch=17.1_20251118.1, config_id=tripleo_step3, name=rhosp17/openstack-rsyslog, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 rsyslog) Dec 6 03:25:51 localhost podman[63761]: rsyslog Dec 6 03:25:51 localhost systemd[1]: tripleo_rsyslog.service: Failed with result 'exit-code'. Dec 6 03:25:51 localhost systemd[1]: tripleo_rsyslog.service: Scheduled restart job, restart counter is at 3. Dec 6 03:25:51 localhost systemd[1]: Stopped rsyslog container. Dec 6 03:25:51 localhost systemd[1]: Starting rsyslog container... Dec 6 03:25:51 localhost systemd[1]: var-lib-containers-storage-overlay-39b7e1aa52a8c5999fa4a60374f7871a9f2a511ae7fd70de9cb8a012d08ea2a8-merged.mount: Deactivated successfully. Dec 6 03:25:51 localhost systemd[1]: Started libcrun container. Dec 6 03:25:51 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39b7e1aa52a8c5999fa4a60374f7871a9f2a511ae7fd70de9cb8a012d08ea2a8/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff) Dec 6 03:25:51 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39b7e1aa52a8c5999fa4a60374f7871a9f2a511ae7fd70de9cb8a012d08ea2a8/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff) Dec 6 03:25:51 localhost podman[63796]: 2025-12-06 08:25:51.716308729 +0000 UTC m=+0.125522427 container init afa881aa4ca20970e313be385764e9619d76190f61a0727580f0c398406f954f (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, managed_by=tripleo_ansible, version=17.1.12, description=Red Hat OpenStack Platform 17.1 rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, release=1761123044, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7a657a42c3cbd75086c59cf211d6fafe'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': 
True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.openshift.expose-services=, config_id=tripleo_step3, build-date=2025-11-18T22:49:49Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, name=rhosp17/openstack-rsyslog, com.redhat.component=openstack-rsyslog-container, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=rsyslog, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog) Dec 6 03:25:51 localhost podman[63796]: 2025-12-06 08:25:51.72515842 +0000 UTC m=+0.134372118 container start afa881aa4ca20970e313be385764e9619d76190f61a0727580f0c398406f954f (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, summary=Red Hat OpenStack Platform 17.1 
rsyslog, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, build-date=2025-11-18T22:49:49Z, managed_by=tripleo_ansible, container_name=rsyslog, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vendor=Red Hat, Inc., architecture=x86_64, name=rhosp17/openstack-rsyslog, tcib_managed=true, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7a657a42c3cbd75086c59cf211d6fafe'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, com.redhat.component=openstack-rsyslog-container, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, 
maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 rsyslog, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 6 03:25:51 localhost podman[63796]: rsyslog Dec 6 03:25:51 localhost systemd[1]: Started rsyslog container. Dec 6 03:25:51 localhost python3[63813]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=True puppet_config=/var/lib/container-puppet/container-puppet-tasks3.json short_hostname=np0005548789 step=3 update_config_hash_only=False Dec 6 03:25:51 localhost systemd[1]: libpod-afa881aa4ca20970e313be385764e9619d76190f61a0727580f0c398406f954f.scope: Deactivated successfully. Dec 6 03:25:51 localhost podman[63824]: 2025-12-06 08:25:51.865273262 +0000 UTC m=+0.038223861 container died afa881aa4ca20970e313be385764e9619d76190f61a0727580f0c398406f954f (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, vcs-type=git, name=rhosp17/openstack-rsyslog, release=1761123044, distribution-scope=public, vendor=Red Hat, Inc., config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, build-date=2025-11-18T22:49:49Z, version=17.1.12, container_name=rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, batch=17.1_20251118.1, com.redhat.component=openstack-rsyslog-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, tcib_managed=true, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7a657a42c3cbd75086c59cf211d6fafe'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 6 03:25:51 localhost podman[63824]: 2025-12-06 08:25:51.889884806 +0000 UTC m=+0.062835355 container cleanup afa881aa4ca20970e313be385764e9619d76190f61a0727580f0c398406f954f (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-rsyslog, build-date=2025-11-18T22:49:49Z, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, version=17.1.12, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, architecture=x86_64, vcs-type=git, managed_by=tripleo_ansible, batch=17.1_20251118.1, tcib_managed=true, com.redhat.component=openstack-rsyslog-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7a657a42c3cbd75086c59cf211d6fafe'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 rsyslog, container_name=rsyslog, url=https://www.redhat.com, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 6 03:25:51 localhost systemd[1]: tripleo_rsyslog.service: Main process exited, code=exited, status=1/FAILURE Dec 6 03:25:51 localhost podman[63837]: 2025-12-06 08:25:51.978801471 +0000 UTC m=+0.057791492 container cleanup afa881aa4ca20970e313be385764e9619d76190f61a0727580f0c398406f954f (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., release=1761123044, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-rsyslog, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, managed_by=tripleo_ansible, version=17.1.12, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7a657a42c3cbd75086c59cf211d6fafe'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.component=openstack-rsyslog-container, description=Red Hat OpenStack Platform 17.1 rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=rsyslog, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:49Z, tcib_managed=true, io.openshift.expose-services=, architecture=x86_64, batch=17.1_20251118.1) Dec 6 03:25:51 localhost podman[63837]: rsyslog Dec 6 03:25:51 localhost systemd[1]: tripleo_rsyslog.service: Failed with result 'exit-code'. Dec 6 03:25:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. 
Dec 6 03:25:52 localhost podman[63850]: 2025-12-06 08:25:52.102573503 +0000 UTC m=+0.086088279 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=starting, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, batch=17.1_20251118.1, config_id=tripleo_step3, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, name=rhosp17/openstack-collectd, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, maintainer=OpenStack TripleO Team, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com) Dec 6 03:25:52 localhost systemd[1]: tripleo_rsyslog.service: Scheduled restart job, restart counter is at 4. Dec 6 03:25:52 localhost systemd[1]: Stopped rsyslog container. Dec 6 03:25:52 localhost systemd[1]: Starting rsyslog container... 
Dec 6 03:25:52 localhost podman[63850]: 2025-12-06 08:25:52.143206689 +0000 UTC m=+0.126721435 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, url=https://www.redhat.com, version=17.1.12, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1761123044, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd) Dec 6 03:25:52 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully. Dec 6 03:25:52 localhost systemd[1]: Started libcrun container. 
Dec 6 03:25:52 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39b7e1aa52a8c5999fa4a60374f7871a9f2a511ae7fd70de9cb8a012d08ea2a8/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff) Dec 6 03:25:52 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39b7e1aa52a8c5999fa4a60374f7871a9f2a511ae7fd70de9cb8a012d08ea2a8/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff) Dec 6 03:25:52 localhost podman[63870]: 2025-12-06 08:25:52.250814196 +0000 UTC m=+0.111834808 container init afa881aa4ca20970e313be385764e9619d76190f61a0727580f0c398406f954f (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, config_id=tripleo_step3, name=rhosp17/openstack-rsyslog, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7a657a42c3cbd75086c59cf211d6fafe'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, tcib_managed=true, build-date=2025-11-18T22:49:49Z, container_name=rsyslog, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 rsyslog, io.buildah.version=1.41.4, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-rsyslog-container, architecture=x86_64, batch=17.1_20251118.1, vcs-type=git, distribution-scope=public, release=1761123044) Dec 6 03:25:52 localhost podman[63870]: 2025-12-06 08:25:52.2594366 +0000 UTC m=+0.120457222 container start afa881aa4ca20970e313be385764e9619d76190f61a0727580f0c398406f954f (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 rsyslog, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, build-date=2025-11-18T22:49:49Z, version=17.1.12, name=rhosp17/openstack-rsyslog, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7a657a42c3cbd75086c59cf211d6fafe'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-rsyslog-container, tcib_managed=true, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, vendor=Red Hat, Inc.) Dec 6 03:25:52 localhost podman[63870]: rsyslog Dec 6 03:25:52 localhost systemd[1]: Started rsyslog container. Dec 6 03:25:52 localhost systemd[1]: libpod-afa881aa4ca20970e313be385764e9619d76190f61a0727580f0c398406f954f.scope: Deactivated successfully. 
Dec 6 03:25:52 localhost python3[63906]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 03:25:52 localhost podman[63909]: 2025-12-06 08:25:52.461681216 +0000 UTC m=+0.083588852 container died afa881aa4ca20970e313be385764e9619d76190f61a0727580f0c398406f954f (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, vendor=Red Hat, Inc., io.openshift.expose-services=, container_name=rsyslog, batch=17.1_20251118.1, io.buildah.version=1.41.4, version=17.1.12, config_id=tripleo_step3, url=https://www.redhat.com, vcs-type=git, name=rhosp17/openstack-rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 rsyslog, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:49Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7a657a42c3cbd75086c59cf211d6fafe'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-rsyslog-container, distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 6 03:25:52 localhost podman[63909]: 2025-12-06 08:25:52.486018833 +0000 UTC m=+0.107926419 container cleanup afa881aa4ca20970e313be385764e9619d76190f61a0727580f0c398406f954f (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, container_name=rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, architecture=x86_64, name=rhosp17/openstack-rsyslog, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, com.redhat.component=openstack-rsyslog-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 rsyslog, io.buildah.version=1.41.4, vendor=Red Hat, Inc., url=https://www.redhat.com, maintainer=OpenStack TripleO Team, version=17.1.12, 
io.openshift.expose-services=, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7a657a42c3cbd75086c59cf211d6fafe'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 rsyslog, build-date=2025-11-18T22:49:49Z) Dec 6 03:25:52 localhost systemd[1]: tripleo_rsyslog.service: Main process exited, code=exited, status=1/FAILURE Dec 6 03:25:52 localhost podman[63922]: 2025-12-06 08:25:52.57861785 +0000 UTC m=+0.056601085 container cleanup afa881aa4ca20970e313be385764e9619d76190f61a0727580f0c398406f954f (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, 
name=rsyslog, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, batch=17.1_20251118.1, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.component=openstack-rsyslog-container, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, architecture=x86_64, managed_by=tripleo_ansible, name=rhosp17/openstack-rsyslog, distribution-scope=public, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '7a657a42c3cbd75086c59cf211d6fafe'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', 
'/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, build-date=2025-11-18T22:49:49Z, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, container_name=rsyslog, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3) Dec 6 03:25:52 localhost podman[63922]: rsyslog Dec 6 03:25:52 localhost systemd[1]: tripleo_rsyslog.service: Failed with result 'exit-code'. Dec 6 03:25:52 localhost systemd[1]: var-lib-containers-storage-overlay-39b7e1aa52a8c5999fa4a60374f7871a9f2a511ae7fd70de9cb8a012d08ea2a8-merged.mount: Deactivated successfully. Dec 6 03:25:52 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-afa881aa4ca20970e313be385764e9619d76190f61a0727580f0c398406f954f-userdata-shm.mount: Deactivated successfully. Dec 6 03:25:52 localhost python3[63950]: ansible-container_config_data Invoked with config_path=/var/lib/tripleo-config/container-puppet-config/step_3 config_pattern=container-puppet-*.json config_overrides={} debug=True Dec 6 03:25:52 localhost systemd[1]: tripleo_rsyslog.service: Scheduled restart job, restart counter is at 5. Dec 6 03:25:52 localhost systemd[1]: Stopped rsyslog container. Dec 6 03:25:52 localhost systemd[1]: tripleo_rsyslog.service: Start request repeated too quickly. Dec 6 03:25:52 localhost systemd[1]: tripleo_rsyslog.service: Failed with result 'exit-code'. Dec 6 03:25:52 localhost systemd[1]: Failed to start rsyslog container. Dec 6 03:25:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. 
Dec 6 03:25:53 localhost podman[63951]: 2025-12-06 08:25:53.909454778 +0000 UTC m=+0.073898395 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=starting, description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, io.buildah.version=1.41.4, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, name=rhosp17/openstack-iscsid, tcib_managed=true, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-iscsid-container, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, io.openshift.expose-services=) Dec 6 03:25:53 localhost podman[63951]: 2025-12-06 08:25:53.924074325 +0000 UTC m=+0.088517982 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, 
io.openshift.expose-services=, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, name=rhosp17/openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, release=1761123044, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=iscsid, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, build-date=2025-11-18T23:44:13Z, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, distribution-scope=public, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 6 03:25:53 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully. Dec 6 03:25:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. 
Dec 6 03:25:57 localhost podman[63970]: 2025-12-06 08:25:57.915073493 +0000 UTC m=+0.078387183 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, architecture=x86_64, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat 
OpenStack Platform 17.1 qdrouterd, tcib_managed=true, name=rhosp17/openstack-qdrouterd, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:46Z, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, version=17.1.12, container_name=metrics_qdr) Dec 6 03:25:58 localhost podman[63970]: 2025-12-06 08:25:58.11110138 +0000 UTC m=+0.274415080 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, distribution-scope=public, maintainer=OpenStack TripleO Team, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1761123044, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, build-date=2025-11-18T22:49:46Z, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20251118.1, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 6 03:25:58 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 03:26:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. 
Dec 6 03:26:22 localhost podman[64077]: 2025-12-06 08:26:22.917564408 +0000 UTC m=+0.079208378 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc., build-date=2025-11-18T22:51:28Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container) Dec 6 03:26:22 localhost podman[64077]: 2025-12-06 08:26:22.947597908 +0000 UTC m=+0.109241888 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, architecture=x86_64, config_id=tripleo_step3, build-date=2025-11-18T22:51:28Z, distribution-scope=public, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, container_name=collectd, tcib_managed=true, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, maintainer=OpenStack TripleO Team) Dec 6 03:26:22 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully. Dec 6 03:26:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. 
Dec 6 03:26:24 localhost podman[64098]: 2025-12-06 08:26:24.914601658 +0000 UTC m=+0.077982170 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.component=openstack-iscsid-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, config_id=tripleo_step3, name=rhosp17/openstack-iscsid, tcib_managed=true, vcs-type=git, build-date=2025-11-18T23:44:13Z, container_name=iscsid, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible) Dec 6 03:26:24 localhost podman[64098]: 2025-12-06 08:26:24.949627591 +0000 UTC m=+0.113008103 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, build-date=2025-11-18T23:44:13Z, container_name=iscsid, vendor=Red Hat, Inc., config_id=tripleo_step3, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, distribution-scope=public, com.redhat.component=openstack-iscsid-container, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 
'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.4, release=1761123044, io.openshift.expose-services=, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team) Dec 6 03:26:24 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully. Dec 6 03:26:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. 
Dec 6 03:26:28 localhost podman[64118]: 2025-12-06 08:26:28.904023727 +0000 UTC m=+0.066941582 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, release=1761123044, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., config_id=tripleo_step1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 6 03:26:29 localhost podman[64118]: 2025-12-06 08:26:29.12017932 +0000 UTC m=+0.283097175 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, release=1761123044, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, version=17.1.12, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': 
False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, container_name=metrics_qdr, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 6 03:26:29 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 03:26:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. 
Dec 6 03:26:53 localhost podman[64146]: 2025-12-06 08:26:53.918719794 +0000 UTC m=+0.081659257 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, version=17.1.12, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, container_name=collectd, 
managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, build-date=2025-11-18T22:51:28Z, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 6 03:26:53 localhost podman[64146]: 2025-12-06 08:26:53.932396402 +0000 UTC m=+0.095335865 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, build-date=2025-11-18T22:51:28Z, container_name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., name=rhosp17/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, url=https://www.redhat.com, managed_by=tripleo_ansible, release=1761123044, description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, version=17.1.12, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 6 03:26:53 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully. Dec 6 03:26:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. Dec 6 03:26:55 localhost systemd[1]: tmp-crun.dYXxMM.mount: Deactivated successfully. 
Dec 6 03:26:55 localhost podman[64166]: 2025-12-06 08:26:55.913075302 +0000 UTC m=+0.067194415 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, version=17.1.12, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_id=tripleo_step3, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, 
org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, architecture=x86_64, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:44:13Z, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, container_name=iscsid, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 iscsid) Dec 6 03:26:55 localhost podman[64166]: 2025-12-06 08:26:55.950127437 +0000 UTC m=+0.104246530 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, build-date=2025-11-18T23:44:13Z, managed_by=tripleo_ansible, version=17.1.12, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-iscsid, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20251118.1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, tcib_managed=true, architecture=x86_64, container_name=iscsid, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, release=1761123044, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public) Dec 6 03:26:55 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully. Dec 6 03:26:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. Dec 6 03:26:59 localhost systemd[1]: tmp-crun.PZYkzZ.mount: Deactivated successfully. 
Dec 6 03:26:59 localhost podman[64185]: 2025-12-06 08:26:59.919980873 +0000 UTC m=+0.086510052 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, build-date=2025-11-18T22:49:46Z, url=https://www.redhat.com, name=rhosp17/openstack-qdrouterd, architecture=x86_64, batch=17.1_20251118.1, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, vendor=Red Hat, Inc., io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true) Dec 6 03:27:00 localhost podman[64185]: 2025-12-06 08:27:00.107515216 +0000 UTC m=+0.274044395 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.12, build-date=2025-11-18T22:49:46Z, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step1, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 6 03:27:00 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 03:27:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. 
Dec 6 03:27:24 localhost podman[64290]: 2025-12-06 08:27:24.907876711 +0000 UTC m=+0.072253585 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.expose-services=, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, vendor=Red Hat, Inc., release=1761123044, distribution-scope=public, container_name=collectd, managed_by=tripleo_ansible, build-date=2025-11-18T22:51:28Z, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, io.buildah.version=1.41.4, config_id=tripleo_step3, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 6 03:27:24 localhost podman[64290]: 2025-12-06 08:27:24.940860145 +0000 UTC m=+0.105237039 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, io.buildah.version=1.41.4, vendor=Red Hat, Inc., distribution-scope=public, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, managed_by=tripleo_ansible, io.openshift.expose-services=, config_id=tripleo_step3, 
name=rhosp17/openstack-collectd, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, architecture=x86_64, build-date=2025-11-18T22:51:28Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 6 03:27:24 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully. Dec 6 03:27:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. 
Dec 6 03:27:26 localhost podman[64310]: 2025-12-06 08:27:26.91968838 +0000 UTC m=+0.085589033 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, release=1761123044, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, 
org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, tcib_managed=true, vendor=Red Hat, Inc., version=17.1.12, batch=17.1_20251118.1, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, container_name=iscsid, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 6 03:27:26 localhost podman[64310]: 2025-12-06 08:27:26.928901096 +0000 UTC m=+0.094801759 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-iscsid-container, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., distribution-scope=public, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, architecture=x86_64, io.buildah.version=1.41.4, vcs-type=git, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 
'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, build-date=2025-11-18T23:44:13Z, release=1761123044, container_name=iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid) Dec 6 03:27:26 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully. Dec 6 03:27:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. Dec 6 03:27:30 localhost systemd[1]: tmp-crun.1vZO6Y.mount: Deactivated successfully. 
Dec 6 03:27:30 localhost podman[64329]: 2025-12-06 08:27:30.940880816 +0000 UTC m=+0.106095025 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, version=17.1.12, vcs-type=git, tcib_managed=true, config_id=tripleo_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 6 03:27:31 localhost podman[64329]: 2025-12-06 08:27:31.137153051 +0000 UTC m=+0.302367270 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, release=1761123044, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, batch=17.1_20251118.1, io.buildah.version=1.41.4, url=https://www.redhat.com, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, architecture=x86_64, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 6 03:27:31 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 03:27:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. 
Dec 6 03:27:55 localhost podman[64358]: 2025-12-06 08:27:55.919003425 +0000 UTC m=+0.080803621 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-collectd-container, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, maintainer=OpenStack TripleO Team, container_name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, managed_by=tripleo_ansible, tcib_managed=true, release=1761123044, vendor=Red Hat, Inc., url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git, build-date=2025-11-18T22:51:28Z) Dec 6 03:27:55 localhost podman[64358]: 2025-12-06 08:27:55.956290168 +0000 UTC m=+0.118090344 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1761123044, com.redhat.component=openstack-collectd-container, name=rhosp17/openstack-collectd, build-date=2025-11-18T22:51:28Z, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, tcib_managed=true, container_name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., version=17.1.12, vcs-type=git, distribution-scope=public, io.buildah.version=1.41.4, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com) Dec 6 03:27:55 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully. Dec 6 03:27:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. 
Dec 6 03:27:57 localhost podman[64378]: 2025-12-06 08:27:57.914418556 +0000 UTC m=+0.077252246 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com, container_name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, vcs-type=git, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, release=1761123044, vendor=Red Hat, Inc., io.buildah.version=1.41.4, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1) Dec 6 03:27:57 localhost podman[64378]: 2025-12-06 08:27:57.953111269 +0000 UTC m=+0.115944909 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, maintainer=OpenStack TripleO Team, release=1761123044, architecture=x86_64, config_id=tripleo_step3, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, distribution-scope=public, com.redhat.component=openstack-iscsid-container, vcs-type=git, build-date=2025-11-18T23:44:13Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, version=17.1.12, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid) Dec 6 03:27:57 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully. Dec 6 03:28:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. 
Dec 6 03:28:01 localhost podman[64396]: 2025-12-06 08:28:01.914399698 +0000 UTC m=+0.077046169 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.buildah.version=1.41.4, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, vendor=Red Hat, Inc., release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, container_name=metrics_qdr) Dec 6 03:28:02 localhost podman[64396]: 2025-12-06 08:28:02.132173534 +0000 UTC m=+0.294819965 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, build-date=2025-11-18T22:49:46Z, version=17.1.12, release=1761123044, vcs-type=git, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, distribution-scope=public, tcib_managed=true, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc.) Dec 6 03:28:02 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 03:28:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. 
Dec 6 03:28:26 localhost podman[64500]: 2025-12-06 08:28:26.915889186 +0000 UTC m=+0.078468482 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', 
'/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp17/openstack-collectd, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, container_name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, config_id=tripleo_step3, vcs-type=git) Dec 6 03:28:26 localhost podman[64500]: 2025-12-06 08:28:26.929079869 +0000 UTC m=+0.091659155 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, managed_by=tripleo_ansible, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, architecture=x86_64, name=rhosp17/openstack-collectd, release=1761123044, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, version=17.1.12, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., build-date=2025-11-18T22:51:28Z) Dec 6 03:28:26 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully. Dec 6 03:28:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. 
Dec 6 03:28:28 localhost podman[64521]: 2025-12-06 08:28:28.922279132 +0000 UTC m=+0.083849432 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.buildah.version=1.41.4, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, url=https://www.redhat.com, build-date=2025-11-18T23:44:13Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', 
'/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid, container_name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, version=17.1.12, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d) Dec 6 03:28:28 localhost podman[64521]: 2025-12-06 08:28:28.955789862 +0000 UTC m=+0.117360112 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, io.buildah.version=1.41.4, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, tcib_managed=true, build-date=2025-11-18T23:44:13Z, com.redhat.component=openstack-iscsid-container, container_name=iscsid, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, release=1761123044, vendor=Red Hat, Inc., name=rhosp17/openstack-iscsid, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git) Dec 6 03:28:28 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully. Dec 6 03:28:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. 
Dec 6 03:28:32 localhost podman[64539]: 2025-12-06 08:28:32.918079411 +0000 UTC m=+0.080078680 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, batch=17.1_20251118.1, release=1761123044, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, version=17.1.12, build-date=2025-11-18T22:49:46Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, io.openshift.expose-services=, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64) Dec 6 03:28:33 localhost podman[64539]: 2025-12-06 08:28:33.113238152 +0000 UTC m=+0.275237471 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, managed_by=tripleo_ansible, io.openshift.expose-services=, version=17.1.12, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 
'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 6 03:28:33 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 03:28:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. 
Dec 6 03:28:57 localhost podman[64568]: 2025-12-06 08:28:57.915904467 +0000 UTC m=+0.078596526 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, tcib_managed=true, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, batch=17.1_20251118.1, name=rhosp17/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, architecture=x86_64, distribution-scope=public, vcs-type=git, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, url=https://www.redhat.com, build-date=2025-11-18T22:51:28Z, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 6 03:28:57 localhost podman[64568]: 2025-12-06 08:28:57.923136033 +0000 UTC m=+0.085828102 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.component=openstack-collectd-container, vcs-type=git, vendor=Red Hat, Inc., container_name=collectd, release=1761123044, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, config_id=tripleo_step3, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:51:28Z, 
config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.4, name=rhosp17/openstack-collectd, architecture=x86_64, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, version=17.1.12) Dec 6 03:28:57 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully. Dec 6 03:28:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. 
Dec 6 03:28:59 localhost podman[64588]: 2025-12-06 08:28:59.919331605 +0000 UTC m=+0.077200533 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_id=tripleo_step3, version=17.1.12, name=rhosp17/openstack-iscsid, batch=17.1_20251118.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, vendor=Red Hat, Inc., org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, release=1761123044, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 6 03:28:59 localhost podman[64588]: 2025-12-06 08:28:59.956249026 +0000 UTC m=+0.114117954 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, build-date=2025-11-18T23:44:13Z, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, tcib_managed=true, distribution-scope=public, batch=17.1_20251118.1, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.buildah.version=1.41.4, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-iscsid-container, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, version=17.1.12, vcs-type=git) Dec 6 03:28:59 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully. Dec 6 03:29:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. 
Dec 6 03:29:03 localhost podman[64606]: 2025-12-06 08:29:03.893103197 +0000 UTC m=+0.059613970 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.expose-services=, url=https://www.redhat.com, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, container_name=metrics_qdr, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1761123044, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, config_id=tripleo_step1, version=17.1.12) Dec 6 03:29:04 localhost podman[64606]: 2025-12-06 08:29:04.101265756 +0000 UTC m=+0.267776519 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-11-18T22:49:46Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.expose-services=, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, distribution-scope=public, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, vendor=Red Hat, Inc., container_name=metrics_qdr, version=17.1.12) Dec 6 03:29:04 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 03:29:06 localhost sshd[64635]: main: sshd: ssh-rsa algorithm is disabled Dec 6 03:29:09 localhost sshd[64637]: main: sshd: ssh-rsa algorithm is disabled Dec 6 03:29:12 localhost sshd[64750]: main: sshd: ssh-rsa algorithm is disabled Dec 6 03:29:14 localhost sshd[64752]: main: sshd: ssh-rsa algorithm is disabled Dec 6 03:29:17 localhost sshd[64769]: main: sshd: ssh-rsa algorithm is disabled Dec 6 03:29:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. 
Dec 6 03:29:28 localhost podman[64772]: 2025-12-06 08:29:28.923346628 +0000 UTC m=+0.087531313 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, release=1761123044, architecture=x86_64, vendor=Red Hat, Inc., batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, name=rhosp17/openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:51:28Z, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, distribution-scope=public, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3) Dec 6 03:29:28 localhost podman[64772]: 2025-12-06 08:29:28.933390757 +0000 UTC m=+0.097575472 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, url=https://www.redhat.com, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, summary=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-collectd-container, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.openshift.expose-services=, tcib_managed=true, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, build-date=2025-11-18T22:51:28Z, vcs-type=git, container_name=collectd, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 6 03:29:28 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully. Dec 6 03:29:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. 
Dec 6 03:29:30 localhost podman[64793]: 2025-12-06 08:29:30.912461569 +0000 UTC m=+0.076820033 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, distribution-scope=public, vendor=Red Hat, Inc., version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2025-11-18T23:44:13Z, io.openshift.expose-services=, batch=17.1_20251118.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-iscsid-container, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, 
architecture=x86_64, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=iscsid, name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 6 03:29:30 localhost podman[64793]: 2025-12-06 08:29:30.92156282 +0000 UTC m=+0.085921234 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, architecture=x86_64, io.openshift.expose-services=, distribution-scope=public, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., config_id=tripleo_step3, com.redhat.component=openstack-iscsid-container, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', 
'/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:44:13Z, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, tcib_managed=true, url=https://www.redhat.com, maintainer=OpenStack TripleO Team) Dec 6 03:29:30 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully. Dec 6 03:29:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. 
Dec 6 03:29:34 localhost podman[64813]: 2025-12-06 08:29:34.928010827 +0000 UTC m=+0.088639515 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, container_name=metrics_qdr, config_id=tripleo_step1, release=1761123044, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:46Z, url=https://www.redhat.com, distribution-scope=public, io.k8s.description=Red Hat OpenStack 
Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, managed_by=tripleo_ansible, io.buildah.version=1.41.4) Dec 6 03:29:35 localhost podman[64813]: 2025-12-06 08:29:35.135191887 +0000 UTC m=+0.295820515 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.expose-services=, url=https://www.redhat.com, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, container_name=metrics_qdr, release=1761123044, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, vcs-type=git) Dec 6 03:29:35 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. 
Dec 6 03:29:44 localhost python3[64890]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/config_step.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 6 03:29:45 localhost python3[64935]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/config_step.json force=True mode=0600 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765009784.6719065-107269-30986519723805/source _original_basename=tmp1uftwivi follow=False checksum=ee48fb03297eb703b1954c8852d0f67fab51dac1 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 03:29:46 localhost python3[64997]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/recover_tripleo_nova_virtqemud.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 6 03:29:47 localhost python3[65040]: ansible-ansible.legacy.copy Invoked with dest=/usr/libexec/recover_tripleo_nova_virtqemud.sh mode=0755 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765009786.3400743-107379-225241340822480/source _original_basename=tmphmniqcct follow=False checksum=922b8aa8342176110bffc2e39abdccc2b39e53a9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 03:29:47 localhost python3[65102]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud_recover.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 6 03:29:47 localhost python3[65145]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/tripleo_nova_virtqemud_recover.service mode=0644 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765009787.2976177-107430-92736625283838/source _original_basename=tmpujw47kzv follow=False checksum=92f73544b703afc85885fa63ab07bdf8f8671554 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 03:29:48 localhost python3[65207]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud_recover.timer follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 6 03:29:48 localhost python3[65250]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/tripleo_nova_virtqemud_recover.timer mode=0644 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765009788.2169046-107487-40054258302436/source _original_basename=tmpilrv0q1p follow=False checksum=c6e5f76a53c0d6ccaf46c4b48d813dc2891ad8e9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 03:29:49 localhost python3[65280]: ansible-systemd Invoked with daemon_reload=True enabled=True name=tripleo_nova_virtqemud_recover.service daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec 6 03:29:49 localhost systemd[1]: Reloading.
Dec 6 03:29:49 localhost systemd-sysv-generator[65310]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 6 03:29:49 localhost systemd-rc-local-generator[65305]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 6 03:29:49 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 6 03:29:49 localhost systemd[1]: Reloading.
Dec 6 03:29:49 localhost systemd-sysv-generator[65343]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 6 03:29:49 localhost systemd-rc-local-generator[65339]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 6 03:29:49 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 6 03:29:50 localhost python3[65369]: ansible-systemd Invoked with daemon_reload=True enabled=True name=tripleo_nova_virtqemud_recover.timer state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 6 03:29:50 localhost systemd[1]: Reloading.
Dec 6 03:29:50 localhost systemd-sysv-generator[65400]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 6 03:29:50 localhost systemd-rc-local-generator[65395]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 6 03:29:50 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 6 03:29:51 localhost systemd[1]: Reloading.
Dec 6 03:29:51 localhost systemd-sysv-generator[65439]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 6 03:29:51 localhost systemd-rc-local-generator[65436]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 6 03:29:51 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 6 03:29:51 localhost systemd[1]: Started Check and recover tripleo_nova_virtqemud every 10m.
Dec 6 03:29:51 localhost python3[65461]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl enable --now tripleo_nova_virtqemud_recover.timer _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 6 03:29:51 localhost systemd[1]: Reloading.
Dec 6 03:29:51 localhost systemd-sysv-generator[65488]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 6 03:29:51 localhost systemd-rc-local-generator[65485]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 6 03:29:51 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 6 03:29:52 localhost python3[65545]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 6 03:29:53 localhost python3[65588]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/tripleo_nova_libvirt.target group=root mode=0644 owner=root src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765009792.268081-107625-97876280645722/source _original_basename=tmpsvnm9f6v follow=False checksum=c064b4a8e7d3d1d7c62d1f80a09e350659996afd backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 03:29:53 localhost python3[65618]: ansible-systemd Invoked with daemon_reload=True enabled=True name=tripleo_nova_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 6 03:29:53 localhost systemd[1]: Reloading.
Dec 6 03:29:53 localhost systemd-sysv-generator[65644]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 6 03:29:53 localhost systemd-rc-local-generator[65640]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 6 03:29:53 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 6 03:29:53 localhost systemd[1]: Reached target tripleo_nova_libvirt.target.
Dec 6 03:29:54 localhost python3[65672]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 6 03:29:56 localhost ansible-async_wrapper.py[65844]: Invoked with 436720557326 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1765009795.454371-107727-27560256888861/AnsiballZ_command.py _
Dec 6 03:29:56 localhost ansible-async_wrapper.py[65847]: Starting module and watcher
Dec 6 03:29:56 localhost ansible-async_wrapper.py[65847]: Start watching 65848 (3600)
Dec 6 03:29:56 localhost ansible-async_wrapper.py[65848]: Start module (65848)
Dec 6 03:29:56 localhost ansible-async_wrapper.py[65844]: Return async_wrapper task started.
Dec 6 03:29:56 localhost python3[65868]: ansible-ansible.legacy.async_status Invoked with jid=436720557326.65844 mode=status _async_dir=/tmp/.ansible_async
Dec 6 03:29:59 localhost puppet-user[65867]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Dec 6 03:29:59 localhost puppet-user[65867]: (file: /etc/puppet/hiera.yaml)
Dec 6 03:29:59 localhost puppet-user[65867]: Warning: Undefined variable '::deploy_config_name';
Dec 6 03:29:59 localhost puppet-user[65867]: (file & line not available)
Dec 6 03:29:59 localhost puppet-user[65867]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Dec 6 03:29:59 localhost puppet-user[65867]: (file & line not available)
Dec 6 03:29:59 localhost puppet-user[65867]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, line: 89, column: 8)
Dec 6 03:29:59 localhost puppet-user[65867]: Warning: This method is deprecated, please use match expressions with Stdlib::Compat::String instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at ["/etc/puppet/modules/snmp/manifests/params.pp", 310]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Dec 6 03:29:59 localhost puppet-user[65867]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Dec 6 03:29:59 localhost puppet-user[65867]: Warning: This method is deprecated, please use the stdlib validate_legacy function,
Dec 6 03:29:59 localhost puppet-user[65867]: with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 358]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Dec 6 03:29:59 localhost puppet-user[65867]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Dec 6 03:29:59 localhost puppet-user[65867]: Warning: This method is deprecated, please use the stdlib validate_legacy function,
Dec 6 03:29:59 localhost puppet-user[65867]: with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 367]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Dec 6 03:29:59 localhost puppet-user[65867]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Dec 6 03:29:59 localhost puppet-user[65867]: Warning: This method is deprecated, please use the stdlib validate_legacy function,
Dec 6 03:29:59 localhost puppet-user[65867]: with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 382]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Dec 6 03:29:59 localhost puppet-user[65867]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Dec 6 03:29:59 localhost puppet-user[65867]: Warning: This method is deprecated, please use the stdlib validate_legacy function,
Dec 6 03:29:59 localhost puppet-user[65867]: with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 388]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Dec 6 03:29:59 localhost puppet-user[65867]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Dec 6 03:29:59 localhost puppet-user[65867]: Warning: This method is deprecated, please use the stdlib validate_legacy function,
Dec 6 03:29:59 localhost puppet-user[65867]: with Pattern[]. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 393]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Dec 6 03:29:59 localhost puppet-user[65867]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Dec 6 03:29:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.
Dec 6 03:29:59 localhost puppet-user[65867]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/packages.pp, line: 39, column: 69)
Dec 6 03:29:59 localhost puppet-user[65867]: Notice: Compiled catalog for np0005548789.localdomain in environment production in 0.20 seconds
Dec 6 03:29:59 localhost systemd[1]: tmp-crun.Jmu2K1.mount: Deactivated successfully.
Dec 6 03:29:59 localhost podman[65978]: 2025-12-06 08:29:59.921667627 +0000 UTC m=+0.086171241 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, build-date=2025-11-18T22:51:28Z, name=rhosp17/openstack-collectd, tcib_managed=true, architecture=x86_64, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-type=git, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 6 03:29:59 localhost podman[65978]: 2025-12-06 08:29:59.963063762 +0000 UTC m=+0.127567376 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, name=rhosp17/openstack-collectd, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, config_id=tripleo_step3, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=collectd, vendor=Red Hat, Inc., io.buildah.version=1.41.4, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, release=1761123044)
Dec 6 03:29:59 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully.
Dec 6 03:30:01 localhost ansible-async_wrapper.py[65847]: 65848 still running (3600)
Dec 6 03:30:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.
Dec 6 03:30:01 localhost systemd[1]: tmp-crun.K8qj44.mount: Deactivated successfully.
Dec 6 03:30:01 localhost podman[66006]: 2025-12-06 08:30:01.927175938 +0000 UTC m=+0.088540922 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, architecture=x86_64, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, name=rhosp17/openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, distribution-scope=public, batch=17.1_20251118.1, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:44:13Z, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible)
Dec 6 03:30:01 localhost podman[66006]: 2025-12-06 08:30:01.966185102 +0000 UTC m=+0.127550056 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.buildah.version=1.41.4, tcib_managed=true, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, batch=17.1_20251118.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, architecture=x86_64, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d)
Dec 6 03:30:01 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully.
Dec 6 03:30:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.
Dec 6 03:30:05 localhost systemd[1]: tmp-crun.iHuGtZ.mount: Deactivated successfully.
Dec 6 03:30:05 localhost podman[66092]: 2025-12-06 08:30:05.924453818 +0000 UTC m=+0.084751762 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, vendor=Red Hat, Inc., batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1761123044, vcs-type=git, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team)
Dec 6 03:30:06 localhost ansible-async_wrapper.py[65847]: 65848 still running (3595)
Dec 6 03:30:06 localhost podman[66092]: 2025-12-06 08:30:06.122911205 +0000 UTC m=+0.283209099 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.buildah.version=1.41.4, release=1761123044, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., distribution-scope=public, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, vcs-type=git, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container, version=17.1.12, container_name=metrics_qdr, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 6 03:30:06 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully.
Dec 6 03:30:06 localhost python3[66137]: ansible-ansible.legacy.async_status Invoked with jid=436720557326.65844 mode=status _async_dir=/tmp/.ansible_async
Dec 6 03:30:06 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 6 03:30:06 localhost systemd[1]: Starting man-db-cache-update.service...
Dec 6 03:30:06 localhost systemd[1]: Reloading.
Dec 6 03:30:07 localhost systemd-sysv-generator[66221]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 6 03:30:07 localhost systemd-rc-local-generator[66212]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 6 03:30:07 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 6 03:30:07 localhost systemd[1]: Queuing reload/restart jobs for marked units…
Dec 6 03:30:07 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 6 03:30:07 localhost systemd[1]: Finished man-db-cache-update.service.
Dec 6 03:30:07 localhost systemd[1]: run-r37ea743efc694d4ba5d5352f0f592192.service: Deactivated successfully.
Dec 6 03:30:08 localhost puppet-user[65867]: Notice: /Stage[main]/Snmp/Package[snmpd]/ensure: created
Dec 6 03:30:08 localhost puppet-user[65867]: Notice: /Stage[main]/Snmp/File[snmpd.conf]/content: content changed '{sha256}2b743f970e80e2150759bfc66f2d8d0fbd8b31624f79e2991248d1a5ac57494e' to '{sha256}75e8aaaefd4e6fd8bd6c608f0585c92d05d76aa2865d7c4690c8ebae838476c5'
Dec 6 03:30:08 localhost puppet-user[65867]: Notice: /Stage[main]/Snmp/File[snmpd.sysconfig]/content: content changed '{sha256}b63afb2dee7419b6834471f88581d981c8ae5c8b27b9d329ba67a02f3ddd8221' to '{sha256}3917ee8bbc680ad50d77186ad4a1d2705c2025c32fc32f823abbda7f2328dfbd'
Dec 6 03:30:08 localhost puppet-user[65867]: Notice: /Stage[main]/Snmp/File[snmptrapd.conf]/content: content changed '{sha256}2e1ca894d609ef337b6243909bf5623c87fd5df98ecbd00c7d4c12cf12f03c4e' to '{sha256}3ecf18da1ba84ea3932607f2b903ee6a038b6f9ac4e1e371e48f3ef61c5052ea'
Dec 6 03:30:08 localhost puppet-user[65867]: Notice: /Stage[main]/Snmp/File[snmptrapd.sysconfig]/content: content changed '{sha256}86ee5797ad10cb1ea0f631e9dfa6ae278ecf4f4d16f4c80f831cdde45601b23c' to '{sha256}2244553364afcca151958f8e2003e4c182f5e2ecfbe55405cec73fd818581e97'
Dec 6 03:30:08 localhost puppet-user[65867]: Notice: /Stage[main]/Snmp/Service[snmptrapd]: Triggered 'refresh' from 2 events
Dec 6 03:30:09 localhost ceph-osd[31726]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 6 03:30:09 localhost ceph-osd[31726]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 5168 writes, 22K keys, 5168 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 5168 writes, 575 syncs, 8.99 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 138 writes, 383 keys, 138 commit groups, 1.0 writes per commit group, ingest: 0.37 MB, 0.00 MB/s#012Interval WAL: 138 writes, 69 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 6 03:30:11 localhost ansible-async_wrapper.py[65847]: 65848 still running (3590)
Dec 6 03:30:12 localhost ceph-osd[32665]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 6 03:30:12 localhost ceph-osd[32665]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.2 total, 600.0 interval#012Cumulative writes: 4467 writes, 20K keys, 4467 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 4467 writes, 521 syncs, 8.57 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 124 writes, 346 keys, 124 commit groups, 1.0 writes per commit group, ingest: 0.29 MB, 0.00 MB/s#012Interval WAL: 124 writes, 62 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 6 03:30:13 localhost puppet-user[65867]: Notice: /Stage[main]/Tripleo::Profile::Base::Snmp/Snmp::Snmpv3_user[ro_snmp_user]/Exec[create-snmpv3-user-ro_snmp_user]/returns: executed successfully
Dec 6 03:30:13 localhost systemd[1]: Reloading.
Dec 6 03:30:13 localhost systemd-rc-local-generator[67263]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 6 03:30:13 localhost systemd-sysv-generator[67267]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 6 03:30:13 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 6 03:30:14 localhost systemd[1]: Starting Simple Network Management Protocol (SNMP) Daemon....
Dec 6 03:30:14 localhost snmpd[67279]: Can't find directory of RPM packages
Dec 6 03:30:14 localhost snmpd[67279]: Duplicate IPv4 address detected, some interfaces may not be visible in IP-MIB
Dec 6 03:30:14 localhost systemd[1]: Started Simple Network Management Protocol (SNMP) Daemon..
Dec 6 03:30:14 localhost systemd[1]: Reloading.
Dec 6 03:30:14 localhost systemd-rc-local-generator[67307]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 6 03:30:14 localhost systemd-sysv-generator[67310]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 6 03:30:14 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 6 03:30:14 localhost systemd[1]: Reloading.
Dec 6 03:30:14 localhost systemd-sysv-generator[67348]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 6 03:30:14 localhost systemd-rc-local-generator[67342]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 6 03:30:14 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 6 03:30:14 localhost puppet-user[65867]: Notice: /Stage[main]/Snmp/Service[snmpd]/ensure: ensure changed 'stopped' to 'running' Dec 6 03:30:14 localhost puppet-user[65867]: Notice: Applied catalog in 14.94 seconds Dec 6 03:30:14 localhost puppet-user[65867]: Application: Dec 6 03:30:14 localhost puppet-user[65867]: Initial environment: production Dec 6 03:30:14 localhost puppet-user[65867]: Converged environment: production Dec 6 03:30:14 localhost puppet-user[65867]: Run mode: user Dec 6 03:30:14 localhost puppet-user[65867]: Changes: Dec 6 03:30:14 localhost puppet-user[65867]: Total: 8 Dec 6 03:30:14 localhost puppet-user[65867]: Events: Dec 6 03:30:14 localhost puppet-user[65867]: Success: 8 Dec 6 03:30:14 localhost puppet-user[65867]: Total: 8 Dec 6 03:30:14 localhost puppet-user[65867]: Resources: Dec 6 03:30:14 localhost puppet-user[65867]: Restarted: 1 Dec 6 03:30:14 localhost puppet-user[65867]: Changed: 8 Dec 6 03:30:14 localhost puppet-user[65867]: Out of sync: 8 Dec 6 03:30:14 localhost puppet-user[65867]: Total: 19 Dec 6 03:30:14 localhost puppet-user[65867]: Time: Dec 6 03:30:14 localhost puppet-user[65867]: Filebucket: 0.00 Dec 6 03:30:14 localhost puppet-user[65867]: Schedule: 0.00 Dec 6 03:30:14 localhost puppet-user[65867]: Augeas: 0.01 Dec 6 03:30:14 localhost puppet-user[65867]: File: 0.09 Dec 6 03:30:14 localhost puppet-user[65867]: Config retrieval: 0.26 Dec 6 03:30:14 localhost puppet-user[65867]: Service: 1.18 Dec 6 03:30:14 localhost puppet-user[65867]: Transaction evaluation: 14.93 Dec 6 03:30:14 localhost puppet-user[65867]: Catalog application: 14.94 Dec 6 03:30:14 localhost puppet-user[65867]: Last run: 1765009814 Dec 6 
03:30:14 localhost puppet-user[65867]: Exec: 5.05 Dec 6 03:30:14 localhost puppet-user[65867]: Package: 8.42 Dec 6 03:30:14 localhost puppet-user[65867]: Total: 14.94 Dec 6 03:30:14 localhost puppet-user[65867]: Version: Dec 6 03:30:14 localhost puppet-user[65867]: Config: 1765009799 Dec 6 03:30:14 localhost puppet-user[65867]: Puppet: 7.10.0 Dec 6 03:30:14 localhost ansible-async_wrapper.py[65848]: Module complete (65848) Dec 6 03:30:16 localhost ansible-async_wrapper.py[65847]: Done in kid B. Dec 6 03:30:16 localhost python3[67369]: ansible-ansible.legacy.async_status Invoked with jid=436720557326.65844 mode=status _async_dir=/tmp/.ansible_async Dec 6 03:30:17 localhost python3[67399]: ansible-file Invoked with path=/var/lib/container-puppet/puppetlabs state=directory setype=svirt_sandbox_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Dec 6 03:30:17 localhost python3[67431]: ansible-stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 6 03:30:18 localhost python3[67531]: ansible-ansible.legacy.stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 6 03:30:18 localhost python3[67566]: ansible-ansible.legacy.file Invoked with setype=svirt_sandbox_file_t selevel=s0 dest=/var/lib/container-puppet/puppetlabs/facter.conf _original_basename=tmp56jj7vza recurse=False state=file path=/var/lib/container-puppet/puppetlabs/facter.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None 
modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Dec 6 03:30:19 localhost python3[67611]: ansible-file Invoked with path=/opt/puppetlabs/facter state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 03:30:20 localhost python3[67714]: ansible-ansible.posix.synchronize Invoked with src=/opt/puppetlabs/ dest=/var/lib/container-puppet/puppetlabs/ _local_rsync_path=rsync _local_rsync_password=NOT_LOGGING_PARAMETER rsync_path=None delete=False _substitute_controller=False archive=True checksum=False compress=True existing_only=False dirs=False copy_links=False set_remote_user=True rsync_timeout=0 rsync_opts=[] ssh_connection_multiplexing=False partial=False verify_host=False mode=push dest_port=None private_key=None recursive=None links=None perms=None times=None owner=None group=None ssh_args=None link_dest=None Dec 6 03:30:20 localhost python3[67733]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 03:30:21 localhost python3[67765]: ansible-stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 6 03:30:22 localhost python3[67830]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-container-shutdown follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False 
get_mime=True get_attributes=True Dec 6 03:30:22 localhost python3[67848]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-container-shutdown _original_basename=tripleo-container-shutdown recurse=False state=file path=/usr/libexec/tripleo-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 03:30:23 localhost python3[67910]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-start-podman-container follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 6 03:30:23 localhost python3[67928]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-start-podman-container _original_basename=tripleo-start-podman-container recurse=False state=file path=/usr/libexec/tripleo-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 03:30:24 localhost python3[67990]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/tripleo-container-shutdown.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 6 03:30:24 localhost python3[68008]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/tripleo-container-shutdown.service _original_basename=tripleo-container-shutdown-service recurse=False state=file path=/usr/lib/systemd/system/tripleo-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S 
unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 03:30:24 localhost python3[68070]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 6 03:30:25 localhost python3[68088]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset _original_basename=91-tripleo-container-shutdown-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 03:30:25 localhost python3[68118]: ansible-systemd Invoked with name=tripleo-container-shutdown state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Dec 6 03:30:25 localhost systemd[1]: Reloading. Dec 6 03:30:25 localhost systemd-sysv-generator[68143]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 6 03:30:25 localhost systemd-rc-local-generator[68139]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 6 03:30:25 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Dec 6 03:30:26 localhost python3[68204]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/netns-placeholder.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 6 03:30:26 localhost python3[68222]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/usr/lib/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 03:30:27 localhost python3[68284]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 6 03:30:27 localhost python3[68302]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 03:30:28 localhost python3[68332]: ansible-systemd Invoked with name=netns-placeholder state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Dec 6 03:30:28 localhost systemd[1]: Reloading. Dec 6 03:30:28 localhost systemd-sysv-generator[68361]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. 
Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 6 03:30:28 localhost systemd-rc-local-generator[68358]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 6 03:30:28 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 6 03:30:28 localhost systemd[1]: Starting Create netns directory... Dec 6 03:30:28 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. Dec 6 03:30:28 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Dec 6 03:30:28 localhost systemd[1]: Finished Create netns directory. Dec 6 03:30:28 localhost python3[68389]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6 Dec 6 03:30:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. Dec 6 03:30:30 localhost systemd[1]: tmp-crun.dzboxF.mount: Deactivated successfully. 
Dec 6 03:30:30 localhost podman[68433]: 2025-12-06 08:30:30.935868342 +0000 UTC m=+0.091181958 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, release=1761123044, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, 
name=rhosp17/openstack-collectd, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, build-date=2025-11-18T22:51:28Z, maintainer=OpenStack TripleO Team, tcib_managed=true, version=17.1.12, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container) Dec 6 03:30:30 localhost podman[68433]: 2025-12-06 08:30:30.947023475 +0000 UTC m=+0.102337041 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2025-11-18T22:51:28Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, container_name=collectd, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, name=rhosp17/openstack-collectd, architecture=x86_64, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, distribution-scope=public, managed_by=tripleo_ansible, release=1761123044, batch=17.1_20251118.1) Dec 6 03:30:30 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully. 
Dec 6 03:30:31 localhost python3[68469]: ansible-tripleo_container_manage Invoked with config_id=tripleo_step4 config_dir=/var/lib/tripleo-config/container-startup-config/step_4 config_patterns=*.json config_overrides={} concurrency=5 log_base_path=/var/log/containers/stdouts debug=False Dec 6 03:30:31 localhost podman[68617]: 2025-12-06 08:30:31.437257104 +0000 UTC m=+0.070868185 container create b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, 
org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, io.buildah.version=1.41.4, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-ipmi, release=1761123044, distribution-scope=public, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4) Dec 6 03:30:31 localhost podman[68614]: 2025-12-06 08:30:31.459550518 +0000 UTC m=+0.096289065 container create 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:32Z, description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, batch=17.1_20251118.1, com.redhat.component=openstack-cron-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, io.buildah.version=1.41.4, 
summary=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=logrotate_crond, name=rhosp17/openstack-cron, vcs-type=git, architecture=x86_64, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com) Dec 6 03:30:31 localhost systemd[1]: Started libpod-conmon-b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.scope. Dec 6 03:30:31 localhost systemd[1]: Started libpod-conmon-04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.scope. Dec 6 03:30:31 localhost podman[68614]: 2025-12-06 08:30:31.395102611 +0000 UTC m=+0.031841178 image pull registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 Dec 6 03:30:31 localhost systemd[1]: Started libcrun container. Dec 6 03:30:31 localhost systemd[1]: Started libcrun container. 
Dec 6 03:30:31 localhost podman[68617]: 2025-12-06 08:30:31.401626901 +0000 UTC m=+0.035238012 image pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 Dec 6 03:30:31 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd6425452938e99a947d98ed440c416f97c1a47fc1a973380479b4612f15ab3d/merged/var/log/containers supports timestamps until 2038 (0x7fffffff) Dec 6 03:30:31 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7edd91eaf927f0cc6c745dda6c529d67c09cf793c73be3335bece938eeb713d/merged/var/log/ceilometer supports timestamps until 2038 (0x7fffffff) Dec 6 03:30:31 localhost podman[68631]: 2025-12-06 08:30:31.513976428 +0000 UTC m=+0.132662511 container create 192809174d5f26e681c631d5f2ef11ed9ed7f1d8dce6529b3a87b11ae6c7b9dd (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, config_id=tripleo_step4, batch=17.1_20251118.1, name=rhosp17/openstack-nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, version=17.1.12, container_name=nova_libvirt_init_secret, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1761123044, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, architecture=x86_64, distribution-scope=public, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:35:22Z, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 6 03:30:31 localhost podman[68631]: 2025-12-06 08:30:31.430163116 +0000 UTC m=+0.048849209 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Dec 6 03:30:31 localhost podman[68653]: 2025-12-06 08:30:31.451278344 +0000 UTC m=+0.036982735 image pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1 Dec 6 03:30:31 localhost podman[68669]: 2025-12-06 08:30:31.556523633 +0000 UTC m=+0.131508836 container create e15d86348a4b5f0d83945da433efd8fb855d49225082af18e8a77aab7058aff1 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, 
vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, tcib_managed=true, io.openshift.expose-services=, io.buildah.version=1.41.4, build-date=2025-11-18T23:34:05Z, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, architecture=x86_64, distribution-scope=public, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml); if [ X"$CMS_OPTS" != X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1765008053'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, url=https://www.redhat.com, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp 
openstack osp-17.1 openstack-ovn-controller, vcs-type=git, container_name=configure_cms_options, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 6 03:30:31 localhost systemd[1]: Started libpod-conmon-192809174d5f26e681c631d5f2ef11ed9ed7f1d8dce6529b3a87b11ae6c7b9dd.scope. Dec 6 03:30:31 localhost systemd[1]: Started libcrun container. Dec 6 03:30:31 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6427be8e8aab7c88f5d6611b3222842e716badb64f49ed94370155c549e54807/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff) Dec 6 03:30:31 localhost systemd[1]: Started libpod-conmon-e15d86348a4b5f0d83945da433efd8fb855d49225082af18e8a77aab7058aff1.scope. Dec 6 03:30:31 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6427be8e8aab7c88f5d6611b3222842e716badb64f49ed94370155c549e54807/merged/etc/nova supports timestamps until 2038 (0x7fffffff) Dec 6 03:30:31 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6427be8e8aab7c88f5d6611b3222842e716badb64f49ed94370155c549e54807/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Dec 6 03:30:31 localhost podman[68631]: 2025-12-06 08:30:31.58642507 +0000 UTC m=+0.205111153 container init 192809174d5f26e681c631d5f2ef11ed9ed7f1d8dce6529b3a87b11ae6c7b9dd (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_libvirt_init_secret, description=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': 
'179caa3982511c1fd3314b961771f96c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, architecture=x86_64, name=rhosp17/openstack-nova-libvirt, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, build-date=2025-11-19T00:35:22Z, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, io.buildah.version=1.41.4, 
com.redhat.component=openstack-nova-libvirt-container) Dec 6 03:30:31 localhost systemd[1]: Started libcrun container. Dec 6 03:30:31 localhost podman[68631]: 2025-12-06 08:30:31.594908751 +0000 UTC m=+0.213594834 container start 192809174d5f26e681c631d5f2ef11ed9ed7f1d8dce6529b3a87b11ae6c7b9dd (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_libvirt_init_secret, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, build-date=2025-11-19T00:35:22Z, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, config_id=tripleo_step4, batch=17.1_20251118.1, tcib_managed=true, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-libvirt-container, vcs-type=git) Dec 6 03:30:31 localhost podman[68631]: 2025-12-06 08:30:31.595600131 +0000 UTC m=+0.214286214 container attach 192809174d5f26e681c631d5f2ef11ed9ed7f1d8dce6529b3a87b11ae6c7b9dd (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:35:22Z, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, tcib_managed=true, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, container_name=nova_libvirt_init_secret, distribution-scope=public, io.openshift.expose-services=, batch=17.1_20251118.1, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, name=rhosp17/openstack-nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.4, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc.) 
Dec 6 03:30:31 localhost podman[68653]: 2025-12-06 08:30:31.611384616 +0000 UTC m=+0.197088987 container create a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-type=git, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, name=rhosp17/openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:11:48Z, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, release=1761123044, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 6 03:30:31 localhost podman[68669]: 2025-12-06 08:30:31.521338194 +0000 UTC m=+0.096323427 image pull registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 Dec 6 03:30:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6. 
Dec 6 03:30:31 localhost podman[68617]: 2025-12-06 08:30:31.631338108 +0000 UTC m=+0.264949189 container init b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, distribution-scope=public, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, managed_by=tripleo_ansible, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, container_name=ceilometer_agent_ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, vendor=Red Hat, Inc.) Dec 6 03:30:31 localhost podman[68669]: 2025-12-06 08:30:31.648529685 +0000 UTC m=+0.223514878 container init e15d86348a4b5f0d83945da433efd8fb855d49225082af18e8a77aab7058aff1 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, build-date=2025-11-18T23:34:05Z, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml); if [ X"$CMS_OPTS" != X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . 
external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1765008053'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.openshift.expose-services=, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=configure_cms_options, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git) Dec 6 03:30:31 localhost podman[68669]: 2025-12-06 08:30:31.654127237 +0000 UTC m=+0.229112430 container start e15d86348a4b5f0d83945da433efd8fb855d49225082af18e8a77aab7058aff1 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, version=17.1.12, url=https://www.redhat.com, build-date=2025-11-18T23:34:05Z, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml); if [ X"$CMS_OPTS" != X ]; then 
ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1765008053'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, io.buildah.version=1.41.4, container_name=configure_cms_options, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, config_id=tripleo_step4, managed_by=tripleo_ansible, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044) Dec 6 03:30:31 localhost podman[68669]: 2025-12-06 08:30:31.654523169 +0000 UTC m=+0.229508382 container attach e15d86348a4b5f0d83945da433efd8fb855d49225082af18e8a77aab7058aff1 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, config_id=tripleo_step4, build-date=2025-11-18T23:34:05Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, io.buildah.version=1.41.4, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., container_name=configure_cms_options, release=1761123044, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml); if [ X"$CMS_OPTS" != X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . 
external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1765008053'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}) Dec 6 03:30:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6. Dec 6 03:30:31 localhost podman[68617]: 2025-12-06 08:30:31.659598395 +0000 UTC m=+0.293209476 container start b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, release=1761123044, vcs-type=git, container_name=ceilometer_agent_ipmi, batch=17.1_20251118.1, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 
'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:12:45Z, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 6 03:30:31 localhost python3[68469]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name ceilometer_agent_ipmi --conmon-pidfile /run/ceilometer_agent_ipmi.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=728090aef247cfdd273031dadf6d1125 --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step4 --label container_name=ceilometer_agent_ipmi --label managed_by=tripleo_ansible --label 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/ceilometer_agent_ipmi.log --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro --volume /var/log/containers/ceilometer:/var/log/ceilometer:z registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 Dec 6 03:30:31 
localhost systemd[1]: Started /usr/bin/podman healthcheck run 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc. Dec 6 03:30:31 localhost podman[68614]: 2025-12-06 08:30:31.680837417 +0000 UTC m=+0.317575984 container init 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, name=rhosp17/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, build-date=2025-11-18T22:49:32Z, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, summary=Red 
Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, batch=17.1_20251118.1, tcib_managed=true, managed_by=tripleo_ansible, release=1761123044, container_name=logrotate_crond) Dec 6 03:30:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc. Dec 6 03:30:31 localhost systemd[1]: libpod-192809174d5f26e681c631d5f2ef11ed9ed7f1d8dce6529b3a87b11ae6c7b9dd.scope: Deactivated successfully. Dec 6 03:30:31 localhost podman[68631]: 2025-12-06 08:30:31.718381688 +0000 UTC m=+0.337067781 container died 192809174d5f26e681c631d5f2ef11ed9ed7f1d8dce6529b3a87b11ae6c7b9dd (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, managed_by=tripleo_ansible, version=17.1.12, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, container_name=nova_libvirt_init_secret, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, distribution-scope=public, architecture=x86_64, name=rhosp17/openstack-nova-libvirt, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20251118.1, url=https://www.redhat.com, build-date=2025-11-19T00:35:22Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, vcs-type=git, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 6 03:30:31 localhost podman[68730]: 2025-12-06 08:30:31.744882492 +0000 UTC m=+0.079379787 container health_status b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=starting, konflux.additional-tags=17.1.12 
17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, url=https://www.redhat.com, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, config_id=tripleo_step4, version=17.1.12, build-date=2025-11-19T00:12:45Z, release=1761123044, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, tcib_managed=true, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat 
OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676) Dec 6 03:30:31 localhost ovs-vsctl[68785]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . external_ids ovn-cms-options Dec 6 03:30:31 localhost systemd[1]: libpod-e15d86348a4b5f0d83945da433efd8fb855d49225082af18e8a77aab7058aff1.scope: Deactivated successfully. Dec 6 03:30:31 localhost podman[68669]: 2025-12-06 08:30:31.752438373 +0000 UTC m=+0.327423596 container died e15d86348a4b5f0d83945da433efd8fb855d49225082af18e8a77aab7058aff1 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=configure_cms_options, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml); if [ X"$CMS_OPTS" != X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . 
external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1765008053'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, release=1761123044, architecture=x86_64, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, build-date=2025-11-18T23:34:05Z) Dec 6 03:30:31 localhost podman[68614]: 2025-12-06 08:30:31.762613986 +0000 UTC m=+0.399352543 container start 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.buildah.version=1.41.4, distribution-scope=public, 
batch=17.1_20251118.1, container_name=logrotate_crond, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:32Z, version=17.1.12, release=1761123044, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, config_id=tripleo_step4, vcs-type=git, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, 
konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., url=https://www.redhat.com, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron) Dec 6 03:30:31 localhost python3[68469]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name logrotate_crond --conmon-pidfile /run/logrotate_crond.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=53ed83bb0cae779ff95edb2002262c6f --healthcheck-command /usr/share/openstack-tripleo-common/healthcheck/cron --label config_id=tripleo_step4 --label container_name=logrotate_crond --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/logrotate_crond.log --network none --pid host --privileged=True --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume 
/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro --volume /var/log/containers:/var/log/containers:z registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 Dec 6 03:30:31 localhost systemd[1]: Started libpod-conmon-a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.scope. Dec 6 03:30:31 localhost systemd[1]: Started libcrun container. Dec 6 03:30:31 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15de5573c617e73fedd1daaecfac821d4b4021582e250a3cae6d24e4b8e4cd51/merged/var/log/ceilometer supports timestamps until 2038 (0x7fffffff) Dec 6 03:30:31 localhost podman[68756]: 2025-12-06 08:30:31.825271097 +0000 UTC m=+0.112956756 container health_status 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=starting, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, build-date=2025-11-18T22:49:32Z, config_id=tripleo_step4, distribution-scope=public, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 
17.1 cron, name=rhosp17/openstack-cron, container_name=logrotate_crond, vcs-type=git, release=1761123044, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20251118.1, managed_by=tripleo_ansible) Dec 6 03:30:31 localhost podman[68730]: 2025-12-06 08:30:31.860574841 +0000 UTC m=+0.195072156 container exec_died b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, batch=17.1_20251118.1, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:12:45Z, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., vcs-type=git, io.buildah.version=1.41.4, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, tcib_managed=true, 
maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4) Dec 6 03:30:31 localhost podman[68730]: unhealthy Dec 6 03:30:31 localhost systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: Main process exited, code=exited, status=1/FAILURE Dec 6 03:30:31 localhost systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: Failed with result 'exit-code'. Dec 6 03:30:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9. Dec 6 03:30:31 localhost podman[68653]: 2025-12-06 08:30:31.886041992 +0000 UTC m=+0.471746383 container init a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, release=1761123044, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc., vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-compute, config_id=tripleo_step4, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 
'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, tcib_managed=true, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, batch=17.1_20251118.1) Dec 6 03:30:31 localhost podman[68756]: 2025-12-06 08:30:31.912470043 +0000 UTC m=+0.200155752 container exec_died 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, release=1761123044, io.openshift.expose-services=, batch=17.1_20251118.1, com.redhat.component=openstack-cron-container, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, url=https://www.redhat.com, architecture=x86_64, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, name=rhosp17/openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:32Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, 
summary=Red Hat OpenStack Platform 17.1 cron) Dec 6 03:30:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9. Dec 6 03:30:31 localhost systemd[1]: 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.service: Deactivated successfully. Dec 6 03:30:31 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-192809174d5f26e681c631d5f2ef11ed9ed7f1d8dce6529b3a87b11ae6c7b9dd-userdata-shm.mount: Deactivated successfully. Dec 6 03:30:31 localhost systemd[1]: var-lib-containers-storage-overlay-6427be8e8aab7c88f5d6611b3222842e716badb64f49ed94370155c549e54807-merged.mount: Deactivated successfully. Dec 6 03:30:31 localhost podman[68768]: 2025-12-06 08:30:31.965121068 +0000 UTC m=+0.235209337 container cleanup 192809174d5f26e681c631d5f2ef11ed9ed7f1d8dce6529b3a87b11ae6c7b9dd (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., container_name=nova_libvirt_init_secret, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-libvirt, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, distribution-scope=public, com.redhat.component=openstack-nova-libvirt-container, tcib_managed=true, batch=17.1_20251118.1, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 
'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, build-date=2025-11-19T00:35:22Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 6 03:30:31 localhost systemd[1]: libpod-conmon-192809174d5f26e681c631d5f2ef11ed9ed7f1d8dce6529b3a87b11ae6c7b9dd.scope: Deactivated successfully. 
Dec 6 03:30:31 localhost python3[68469]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_libvirt_init_secret --cgroupns=host --conmon-pidfile /run/nova_libvirt_init_secret.pid --detach=False --env LIBVIRT_DEFAULT_URI=qemu:///system --env TRIPLEO_CONFIG_HASH=179caa3982511c1fd3314b961771f96c --label config_id=tripleo_step4 --label container_name=nova_libvirt_init_secret --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_libvirt_init_secret.log --network host --privileged=False --security-opt label=disable --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume 
/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova --volume /etc/libvirt:/etc/libvirt --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro --volume /var/lib/tripleo-config/ceph:/etc/ceph:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 /nova_libvirt_init_secret.sh ceph:openstack Dec 6 03:30:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. Dec 6 03:30:32 localhost systemd[1]: var-lib-containers-storage-overlay-30c5044896505b9166e77885065d3af47cd1d5cde049e01332c2cc6c18ba5026-merged.mount: Deactivated successfully. Dec 6 03:30:32 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e15d86348a4b5f0d83945da433efd8fb855d49225082af18e8a77aab7058aff1-userdata-shm.mount: Deactivated successfully. 
Dec 6 03:30:32 localhost podman[68789]: 2025-12-06 08:30:32.024244742 +0000 UTC m=+0.258200322 container cleanup e15d86348a4b5f0d83945da433efd8fb855d49225082af18e8a77aab7058aff1 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, url=https://www.redhat.com, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, container_name=configure_cms_options, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, version=17.1.12, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml); if [ X"$CMS_OPTS" != X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1765008053'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, maintainer=OpenStack TripleO Team, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 6 03:30:32 localhost systemd[1]: libpod-conmon-e15d86348a4b5f0d83945da433efd8fb855d49225082af18e8a77aab7058aff1.scope: Deactivated successfully.
Dec 6 03:30:32 localhost python3[68469]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name configure_cms_options --conmon-pidfile /run/configure_cms_options.pid --detach=False --env TRIPLEO_DEPLOY_IDENTIFIER=1765008053 --label config_id=tripleo_step4 --label container_name=configure_cms_options --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml); if [ X"$CMS_OPTS" != X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1765008053'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/configure_cms_options.log --network host --privileged=True --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 /bin/bash -c CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml); if [ X"$CMS_OPTS" != X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . external_ids ovn-cms-options; fi
Dec 6 03:30:32 localhost podman[68653]: 2025-12-06 08:30:32.132610577 +0000 UTC m=+0.718314948 container start a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, version=17.1.12, container_name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, url=https://www.redhat.com, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.buildah.version=1.41.4, managed_by=tripleo_ansible)
Dec 6 03:30:32 localhost python3[68469]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name ceilometer_agent_compute --conmon-pidfile /run/ceilometer_agent_compute.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=728090aef247cfdd273031dadf6d1125 --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step4 --label container_name=ceilometer_agent_compute --label managed_by=tripleo_ansible --label config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/ceilometer_agent_compute.log --network host --privileged=False --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/log/containers/ceilometer:/var/log/ceilometer:z registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1
Dec 6 03:30:32 localhost podman[68861]: 2025-12-06 08:30:32.158612374 +0000 UTC m=+0.226681945 container health_status a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=starting, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, container_name=ceilometer_agent_compute, release=1761123044, architecture=x86_64, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, url=https://www.redhat.com, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, maintainer=OpenStack TripleO Team)
Dec 6 03:30:32 localhost podman[68861]: 2025-12-06 08:30:32.168121916 +0000 UTC m=+0.236191467 container exec_died a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, managed_by=tripleo_ansible, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, build-date=2025-11-19T00:11:48Z, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 6 03:30:32 localhost podman[68861]: unhealthy
Dec 6 03:30:32 localhost systemd[1]: a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.service: Main process exited, code=exited, status=1/FAILURE
Dec 6 03:30:32 localhost systemd[1]: a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.service: Failed with result 'exit-code'.
Dec 6 03:30:32 localhost podman[68901]: 2025-12-06 08:30:32.081632433 +0000 UTC m=+0.056696850 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, distribution-scope=public, io.openshift.expose-services=, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, version=17.1.12, name=rhosp17/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, com.redhat.component=openstack-iscsid-container, batch=17.1_20251118.1, architecture=x86_64, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, tcib_managed=true, container_name=iscsid, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 iscsid)
Dec 6 03:30:32 localhost podman[69010]: 2025-12-06 08:30:32.289980484 +0000 UTC m=+0.076503438 container create 237adc5bc2c172820a03ac5be6390ecc453f7c348da1b610d2cf6f305320a6af (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1765008053'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, url=https://www.redhat.com, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:14:25Z, vcs-type=git, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, name=rhosp17/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.expose-services=, distribution-scope=public, container_name=setup_ovs_manager, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c)
Dec 6 03:30:32 localhost podman[68901]: 2025-12-06 08:30:32.317042635 +0000 UTC m=+0.292107102 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, batch=17.1_20251118.1, container_name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, com.redhat.component=openstack-iscsid-container, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, managed_by=tripleo_ansible, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, name=rhosp17/openstack-iscsid, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, architecture=x86_64, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.buildah.version=1.41.4)
Dec 6 03:30:32 localhost systemd[1]: Started libpod-conmon-237adc5bc2c172820a03ac5be6390ecc453f7c348da1b610d2cf6f305320a6af.scope.
Dec 6 03:30:32 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully.
Dec 6 03:30:32 localhost systemd[1]: Started libcrun container.
Dec 6 03:30:32 localhost podman[69010]: 2025-12-06 08:30:32.34885005 +0000 UTC m=+0.135372994 container init 237adc5bc2c172820a03ac5be6390ecc453f7c348da1b610d2cf6f305320a6af (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, container_name=setup_ovs_manager, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, architecture=x86_64, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:14:25Z, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1765008053'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., batch=17.1_20251118.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, version=17.1.12, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container)
Dec 6 03:30:32 localhost podman[69010]: 2025-12-06 08:30:32.356396122 +0000 UTC m=+0.142919066 container start 237adc5bc2c172820a03ac5be6390ecc453f7c348da1b610d2cf6f305320a6af (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, container_name=setup_ovs_manager, vcs-type=git, managed_by=tripleo_ansible, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1765008053'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, config_id=tripleo_step4, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, url=https://www.redhat.com, release=1761123044, vendor=Red Hat, Inc., distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:14:25Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Dec 6 03:30:32 localhost podman[69010]: 2025-12-06 08:30:32.356543146 +0000 UTC m=+0.143066110 container attach 237adc5bc2c172820a03ac5be6390ecc453f7c348da1b610d2cf6f305320a6af (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, tcib_managed=true, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1765008053'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, version=17.1.12, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=setup_ovs_manager, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, release=1761123044, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64)
Dec 6 03:30:32 localhost podman[69010]: 2025-12-06 08:30:32.257618442 +0000 UTC m=+0.044141426 image pull registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1
Dec 6 03:30:32 localhost podman[69042]: 2025-12-06 08:30:32.394835052 +0000 UTC m=+0.062368846 container create 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, io.openshift.expose-services=, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, tcib_managed=true, release=1761123044, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, vendor=Red Hat, Inc., managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 6 03:30:32 localhost systemd[1]: Started libpod-conmon-38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.scope.
Dec 6 03:30:32 localhost systemd[1]: Started libcrun container.
Dec 6 03:30:32 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16e4342af8bf5958b38bc295034feee0dd1522d1c796e48d3acbadb880cc49ff/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec 6 03:30:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.
Dec 6 03:30:32 localhost podman[69042]: 2025-12-06 08:30:32.453417739 +0000 UTC m=+0.120951523 container init 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, architecture=x86_64, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, version=17.1.12, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, distribution-scope=public, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, config_id=tripleo_step4, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container) Dec 6 03:30:32 localhost podman[69042]: 2025-12-06 08:30:32.367501183 +0000 UTC m=+0.035034967 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 Dec 6 03:30:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b. Dec 6 03:30:32 localhost podman[69042]: 2025-12-06 08:30:32.483990886 +0000 UTC m=+0.151524700 container start 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, container_name=nova_migration_target, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com) Dec 6 03:30:32 localhost python3[68469]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_migration_target --conmon-pidfile /run/nova_migration_target.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=179caa3982511c1fd3314b961771f96c --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step4 --label container_name=nova_migration_target --label 
managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_migration_target.log --network host --privileged=True --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /etc/ssh:/host-ssh:ro 
--volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 Dec 6 03:30:32 localhost podman[69071]: 2025-12-06 08:30:32.577365221 +0000 UTC m=+0.086430893 container health_status 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=starting, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.expose-services=, version=17.1.12, name=rhosp17/openstack-nova-compute, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, architecture=x86_64, tcib_managed=true, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, vcs-type=git, distribution-scope=public) Dec 6 03:30:32 localhost podman[69071]: 2025-12-06 08:30:32.934137676 +0000 UTC m=+0.443203328 container exec_died 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, config_id=tripleo_step4, distribution-scope=public, architecture=x86_64, managed_by=tripleo_ansible, vendor=Red Hat, Inc., version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, io.buildah.version=1.41.4, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, io.openshift.expose-services=, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 6 03:30:32 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: Deactivated successfully. Dec 6 03:30:33 localhost kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure Dec 6 03:30:35 localhost ovs-vsctl[69249]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . 
manager_options @manager Dec 6 03:30:35 localhost systemd[1]: libpod-237adc5bc2c172820a03ac5be6390ecc453f7c348da1b610d2cf6f305320a6af.scope: Deactivated successfully. Dec 6 03:30:35 localhost systemd[1]: libpod-237adc5bc2c172820a03ac5be6390ecc453f7c348da1b610d2cf6f305320a6af.scope: Consumed 2.881s CPU time. Dec 6 03:30:35 localhost podman[69010]: 2025-12-06 08:30:35.237384826 +0000 UTC m=+3.023907830 container died 237adc5bc2c172820a03ac5be6390ecc453f7c348da1b610d2cf6f305320a6af (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, version=17.1.12, batch=17.1_20251118.1, tcib_managed=true, distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, url=https://www.redhat.com, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=setup_ovs_manager, vcs-type=git, release=1761123044, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1765008053'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2025-11-19T00:14:25Z) Dec 6 03:30:35 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-237adc5bc2c172820a03ac5be6390ecc453f7c348da1b610d2cf6f305320a6af-userdata-shm.mount: Deactivated successfully. Dec 6 03:30:35 localhost systemd[1]: var-lib-containers-storage-overlay-9a3d7308d621b6c88232a02a8b3043ca87b4d41445011f4603e8c51c260b2591-merged.mount: Deactivated successfully. 
Dec 6 03:30:35 localhost podman[69250]: 2025-12-06 08:30:35.344043778 +0000 UTC m=+0.096352346 container cleanup 237adc5bc2c172820a03ac5be6390ecc453f7c348da1b610d2cf6f305320a6af (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1765008053'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, 
vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, version=17.1.12, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:14:25Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=setup_ovs_manager, managed_by=tripleo_ansible, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 6 03:30:35 localhost systemd[1]: libpod-conmon-237adc5bc2c172820a03ac5be6390ecc453f7c348da1b610d2cf6f305320a6af.scope: Deactivated successfully. Dec 6 03:30:35 localhost python3[68469]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name setup_ovs_manager --conmon-pidfile /run/setup_ovs_manager.pid --detach=False --env TRIPLEO_DEPLOY_IDENTIFIER=1765008053 --label config_id=tripleo_step4 --label container_name=setup_ovs_manager --label managed_by=tripleo_ansible --label config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1765008053'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/setup_ovs_manager.log --network host --privileged=True --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 /container_puppet_apply.sh 4 exec include tripleo::profile::base::neutron::ovn_metadata Dec 6 03:30:35 localhost podman[69362]: 2025-12-06 08:30:35.717862537 +0000 UTC m=+0.072837665 container create 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, architecture=x86_64, release=1761123044, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.4, managed_by=tripleo_ansible, batch=17.1_20251118.1, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, build-date=2025-11-19T00:14:25Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., distribution-scope=public) Dec 6 03:30:35 localhost podman[69363]: 2025-12-06 08:30:35.751877961 +0000 UTC m=+0.097371159 container create 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, release=1761123044, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, batch=17.1_20251118.1, name=rhosp17/openstack-ovn-controller, version=17.1.12, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, managed_by=tripleo_ansible, distribution-scope=public, tcib_managed=true, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 6 03:30:35 localhost systemd[1]: Started libpod-conmon-87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.scope. Dec 6 03:30:35 localhost systemd[1]: Started libpod-conmon-1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.scope. Dec 6 03:30:35 localhost systemd[1]: Started libcrun container. Dec 6 03:30:35 localhost podman[69362]: 2025-12-06 08:30:35.67655975 +0000 UTC m=+0.031534948 image pull registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 Dec 6 03:30:35 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31be4bfa33f1fd50cae755746783d85d8683e10cd2caa7fdf7edb677e543b7f9/merged/var/log/neutron supports timestamps until 2038 (0x7fffffff) Dec 6 03:30:35 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31be4bfa33f1fd50cae755746783d85d8683e10cd2caa7fdf7edb677e543b7f9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 6 03:30:35 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31be4bfa33f1fd50cae755746783d85d8683e10cd2caa7fdf7edb677e543b7f9/merged/etc/neutron/kill_scripts supports timestamps until 2038 (0x7fffffff) Dec 6 03:30:35 localhost systemd[1]: Started libcrun container. 
Dec 6 03:30:35 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c9f3dee13341691faa8d64763052b5a13a4b5ff224f1a26e18c82d56bd99001/merged/run/ovn supports timestamps until 2038 (0x7fffffff) Dec 6 03:30:35 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c9f3dee13341691faa8d64763052b5a13a4b5ff224f1a26e18c82d56bd99001/merged/var/log/openvswitch supports timestamps until 2038 (0x7fffffff) Dec 6 03:30:35 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c9f3dee13341691faa8d64763052b5a13a4b5ff224f1a26e18c82d56bd99001/merged/var/log/ovn supports timestamps until 2038 (0x7fffffff) Dec 6 03:30:35 localhost podman[69363]: 2025-12-06 08:30:35.699769002 +0000 UTC m=+0.045262220 image pull registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 Dec 6 03:30:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. 
Dec 6 03:30:35 localhost podman[69362]: 2025-12-06 08:30:35.809095645 +0000 UTC m=+0.164070853 container init 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, release=1761123044, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.buildah.version=1.41.4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:14:25Z, version=17.1.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=) Dec 6 03:30:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. 
Dec 6 03:30:35 localhost podman[69363]: 2025-12-06 08:30:35.827930194 +0000 UTC m=+0.173423422 container init 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, com.redhat.component=openstack-ovn-controller-container, url=https://www.redhat.com, build-date=2025-11-18T23:34:05Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, distribution-scope=public, vendor=Red Hat, Inc., vcs-type=git, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, 
name=rhosp17/openstack-ovn-controller, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1) Dec 6 03:30:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. Dec 6 03:30:35 localhost systemd-logind[766]: Existing logind session ID 28 used by new audit session, ignoring. Dec 6 03:30:35 localhost podman[69362]: 2025-12-06 08:30:35.850605639 +0000 UTC m=+0.205580807 container start 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, architecture=x86_64, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, build-date=2025-11-19T00:14:25Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, version=17.1.12, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, vcs-type=git, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20251118.1, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 6 03:30:35 localhost systemd[1]: Created slice User Slice of UID 0. 
Dec 6 03:30:35 localhost python3[68469]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=270cf6e6b67cba1ef197c7fa89d5bb20 --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step4 --label container_name=ovn_metadata_agent --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/ovn_metadata_agent.log --network host --pid host --privileged=True --volume 
/etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/neutron:/var/log/neutron:z --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /run/netns:/run/netns:shared --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 Dec 6 03:30:35 localhost systemd[1]: Starting User Runtime Directory /run/user/0... Dec 6 03:30:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. Dec 6 03:30:35 localhost systemd[1]: Finished User Runtime Directory /run/user/0. Dec 6 03:30:35 localhost systemd[1]: Starting User Manager for UID 0... 
Dec 6 03:30:35 localhost podman[69363]: 2025-12-06 08:30:35.894671861 +0000 UTC m=+0.240165059 container start 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, name=rhosp17/openstack-ovn-controller, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, container_name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
ovn-controller, version=17.1.12, config_id=tripleo_step4, build-date=2025-11-18T23:34:05Z, release=1761123044, maintainer=OpenStack TripleO Team, tcib_managed=true) Dec 6 03:30:35 localhost python3[68469]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck 6642 --label config_id=tripleo_step4 --label container_name=ovn_controller --label managed_by=tripleo_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/ovn_controller.log --network host --privileged=True --user root --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/log/containers/openvswitch:/var/log/openvswitch:z --volume /var/log/containers/openvswitch:/var/log/ovn:z registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 Dec 6 03:30:35 localhost podman[69405]: 2025-12-06 08:30:35.966862706 +0000 UTC m=+0.107889511 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, 
health_status=starting, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, managed_by=tripleo_ansible, url=https://www.redhat.com, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, batch=17.1_20251118.1, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, architecture=x86_64, build-date=2025-11-19T00:14:25Z, config_id=tripleo_step4, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 6 03:30:36 localhost podman[69405]: 2025-12-06 08:30:36.006007557 +0000 UTC m=+0.147034352 container exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., vcs-type=git, container_name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:14:25Z, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, 
konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, batch=17.1_20251118.1, distribution-scope=public) Dec 6 03:30:36 localhost podman[69405]: unhealthy Dec 6 03:30:36 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Main process exited, code=exited, 
status=1/FAILURE Dec 6 03:30:36 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Failed with result 'exit-code'. Dec 6 03:30:36 localhost systemd[69420]: Queued start job for default target Main User Target. Dec 6 03:30:36 localhost systemd[69420]: Created slice User Application Slice. Dec 6 03:30:36 localhost systemd[69420]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system). Dec 6 03:30:36 localhost systemd[69420]: Started Daily Cleanup of User's Temporary Directories. Dec 6 03:30:36 localhost systemd[69420]: Reached target Paths. Dec 6 03:30:36 localhost systemd[69420]: Reached target Timers. Dec 6 03:30:36 localhost systemd[69420]: Starting D-Bus User Message Bus Socket... Dec 6 03:30:36 localhost systemd[69420]: Starting Create User's Volatile Files and Directories... Dec 6 03:30:36 localhost systemd[69420]: Finished Create User's Volatile Files and Directories. Dec 6 03:30:36 localhost systemd[69420]: Listening on D-Bus User Message Bus Socket. Dec 6 03:30:36 localhost systemd[69420]: Reached target Sockets. Dec 6 03:30:36 localhost systemd[69420]: Reached target Basic System. Dec 6 03:30:36 localhost systemd[69420]: Reached target Main User Target. Dec 6 03:30:36 localhost systemd[69420]: Startup finished in 120ms. Dec 6 03:30:36 localhost systemd[1]: Started User Manager for UID 0. Dec 6 03:30:36 localhost systemd[1]: Started Session c9 of User root. 
Dec 6 03:30:36 localhost podman[69419]: 2025-12-06 08:30:36.090975714 +0000 UTC m=+0.198873913 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=starting, version=17.1.12, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp17/openstack-ovn-controller, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, vendor=Red Hat, Inc., build-date=2025-11-18T23:34:05Z, io.openshift.expose-services=, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, container_name=ovn_controller, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vcs-type=git, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible) Dec 6 03:30:36 localhost podman[69419]: 2025-12-06 08:30:36.103978413 +0000 UTC m=+0.211876652 container exec_died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, io.k8s.description=Red Hat OpenStack 
Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, url=https://www.redhat.com, container_name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc.) Dec 6 03:30:36 localhost podman[69419]: unhealthy Dec 6 03:30:36 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Main process exited, code=exited, status=1/FAILURE Dec 6 03:30:36 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Failed with result 'exit-code'. Dec 6 03:30:36 localhost systemd[1]: session-c9.scope: Deactivated successfully. Dec 6 03:30:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. Dec 6 03:30:36 localhost kernel: device br-int entered promiscuous mode Dec 6 03:30:36 localhost NetworkManager[5973]: [1765009836.1762] manager: (br-int): new Generic device (/org/freedesktop/NetworkManager/Devices/11) Dec 6 03:30:36 localhost systemd-udevd[69524]: Network interface NamePolicy= disabled on kernel command line. 
Dec 6 03:30:36 localhost podman[69508]: 2025-12-06 08:30:36.208064226 +0000 UTC m=+0.049665565 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, container_name=metrics_qdr, vendor=Red Hat, Inc., io.openshift.expose-services=, batch=17.1_20251118.1, vcs-type=git, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, distribution-scope=public, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12)
Dec 6 03:30:36 localhost kernel: device genev_sys_6081 entered promiscuous mode
Dec 6 03:30:36 localhost NetworkManager[5973]: [1765009836.2598] device (genev_sys_6081): carrier: link connected
Dec 6 03:30:36 localhost systemd-udevd[69530]: Network interface NamePolicy= disabled on kernel command line.
Dec 6 03:30:36 localhost NetworkManager[5973]: [1765009836.2601] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/12)
Dec 6 03:30:36 localhost podman[69508]: 2025-12-06 08:30:36.390161212 +0000 UTC m=+0.231762571 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step1, io.openshift.expose-services=, build-date=2025-11-18T22:49:46Z, tcib_managed=true, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, description=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 6 03:30:36 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully.
Dec 6 03:30:36 localhost python3[69565]: ansible-file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 03:30:36 localhost python3[69581]: ansible-file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_ipmi.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 03:30:37 localhost python3[69597]: ansible-file Invoked with path=/etc/systemd/system/tripleo_logrotate_crond.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 03:30:37 localhost python3[69613]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 03:30:37 localhost python3[69629]: ansible-file Invoked with path=/etc/systemd/system/tripleo_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 03:30:37 localhost python3[69648]: ansible-file Invoked with path=/etc/systemd/system/tripleo_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 03:30:38 localhost python3[69665]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 6 03:30:38 localhost python3[69681]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_ipmi_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 6 03:30:38 localhost python3[69699]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_logrotate_crond_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 6 03:30:38 localhost python3[69717]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_migration_target_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 6 03:30:38 localhost python3[69733]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_ovn_controller_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 6 03:30:39 localhost python3[69749]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_ovn_metadata_agent_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 6 03:30:39 localhost python3[69810]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765009839.2854874-109029-179926828752600/source dest=/etc/systemd/system/tripleo_ceilometer_agent_compute.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 03:30:40 localhost python3[69839]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765009839.2854874-109029-179926828752600/source dest=/etc/systemd/system/tripleo_ceilometer_agent_ipmi.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 03:30:40 localhost python3[69868]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765009839.2854874-109029-179926828752600/source dest=/etc/systemd/system/tripleo_logrotate_crond.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 03:30:41 localhost python3[69897]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765009839.2854874-109029-179926828752600/source dest=/etc/systemd/system/tripleo_nova_migration_target.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 03:30:41 localhost python3[69926]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765009839.2854874-109029-179926828752600/source dest=/etc/systemd/system/tripleo_ovn_controller.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 03:30:42 localhost python3[69955]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765009839.2854874-109029-179926828752600/source dest=/etc/systemd/system/tripleo_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 03:30:42 localhost python3[69971]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 6 03:30:42 localhost systemd[1]: Reloading.
Dec 6 03:30:42 localhost systemd-sysv-generator[69999]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 6 03:30:42 localhost systemd-rc-local-generator[69996]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 6 03:30:42 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 6 03:30:43 localhost python3[70023]: ansible-systemd Invoked with state=restarted name=tripleo_ceilometer_agent_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 6 03:30:43 localhost systemd[1]: Reloading.
Dec 6 03:30:43 localhost systemd-rc-local-generator[70052]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 6 03:30:43 localhost systemd-sysv-generator[70057]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 6 03:30:43 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 6 03:30:44 localhost systemd[1]: Starting ceilometer_agent_compute container...
Dec 6 03:30:44 localhost tripleo-start-podman-container[70063]: Creating additional drop-in dependency for "ceilometer_agent_compute" (a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9)
Dec 6 03:30:44 localhost systemd[1]: Reloading.
Dec 6 03:30:44 localhost systemd-rc-local-generator[70111]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 6 03:30:44 localhost systemd-sysv-generator[70117]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 6 03:30:44 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 6 03:30:44 localhost systemd[1]: Started ceilometer_agent_compute container.
Dec 6 03:30:45 localhost python3[70146]: ansible-systemd Invoked with state=restarted name=tripleo_ceilometer_agent_ipmi.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 6 03:30:45 localhost systemd[1]: Reloading.
Dec 6 03:30:45 localhost systemd-sysv-generator[70176]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 6 03:30:45 localhost systemd-rc-local-generator[70168]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 6 03:30:45 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 6 03:30:45 localhost systemd[1]: Starting ceilometer_agent_ipmi container...
Dec 6 03:30:45 localhost systemd[1]: Started ceilometer_agent_ipmi container.
Dec 6 03:30:46 localhost python3[70212]: ansible-systemd Invoked with state=restarted name=tripleo_logrotate_crond.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 6 03:30:46 localhost systemd[1]: Stopping User Manager for UID 0...
Dec 6 03:30:46 localhost systemd[69420]: Activating special unit Exit the Session...
Dec 6 03:30:46 localhost systemd[69420]: Stopped target Main User Target.
Dec 6 03:30:46 localhost systemd[69420]: Stopped target Basic System.
Dec 6 03:30:46 localhost systemd[69420]: Stopped target Paths.
Dec 6 03:30:46 localhost systemd[69420]: Stopped target Sockets.
Dec 6 03:30:46 localhost systemd[69420]: Stopped target Timers.
Dec 6 03:30:46 localhost systemd[69420]: Stopped Daily Cleanup of User's Temporary Directories.
Dec 6 03:30:46 localhost systemd[69420]: Closed D-Bus User Message Bus Socket.
Dec 6 03:30:46 localhost systemd[69420]: Stopped Create User's Volatile Files and Directories.
Dec 6 03:30:46 localhost systemd[69420]: Removed slice User Application Slice.
Dec 6 03:30:46 localhost systemd[69420]: Reached target Shutdown.
Dec 6 03:30:46 localhost systemd[69420]: Finished Exit the Session.
Dec 6 03:30:46 localhost systemd[69420]: Reached target Exit the Session.
Dec 6 03:30:46 localhost systemd[1]: user@0.service: Deactivated successfully.
Dec 6 03:30:46 localhost systemd[1]: Stopped User Manager for UID 0.
Dec 6 03:30:46 localhost systemd[1]: Stopping User Runtime Directory /run/user/0...
Dec 6 03:30:46 localhost systemd[1]: Reloading.
Dec 6 03:30:46 localhost systemd-rc-local-generator[70238]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 6 03:30:46 localhost systemd-sysv-generator[70241]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 6 03:30:46 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 6 03:30:46 localhost systemd[1]: run-user-0.mount: Deactivated successfully.
Dec 6 03:30:46 localhost systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Dec 6 03:30:46 localhost systemd[1]: Stopped User Runtime Directory /run/user/0.
Dec 6 03:30:46 localhost systemd[1]: Removed slice User Slice of UID 0.
Dec 6 03:30:46 localhost systemd[1]: Starting logrotate_crond container...
Dec 6 03:30:46 localhost systemd[1]: Started logrotate_crond container.
Dec 6 03:30:47 localhost python3[70281]: ansible-systemd Invoked with state=restarted name=tripleo_nova_migration_target.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 6 03:30:47 localhost systemd[1]: Reloading.
Dec 6 03:30:47 localhost systemd-rc-local-generator[70305]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 6 03:30:47 localhost systemd-sysv-generator[70308]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 6 03:30:47 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 6 03:30:47 localhost systemd[1]: Starting nova_migration_target container...
Dec 6 03:30:47 localhost systemd[1]: Started nova_migration_target container.
Dec 6 03:30:48 localhost python3[70348]: ansible-systemd Invoked with state=restarted name=tripleo_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 6 03:30:48 localhost systemd[1]: Reloading.
Dec 6 03:30:48 localhost systemd-rc-local-generator[70373]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 6 03:30:48 localhost systemd-sysv-generator[70378]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 6 03:30:48 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 6 03:30:48 localhost systemd[1]: Starting ovn_controller container...
Dec 6 03:30:49 localhost tripleo-start-podman-container[70388]: Creating additional drop-in dependency for "ovn_controller" (1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076)
Dec 6 03:30:49 localhost systemd[1]: Reloading.
Dec 6 03:30:49 localhost systemd-sysv-generator[70451]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 6 03:30:49 localhost systemd-rc-local-generator[70448]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 6 03:30:49 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 6 03:30:49 localhost systemd[1]: Started ovn_controller container.
Dec 6 03:30:50 localhost python3[70472]: ansible-systemd Invoked with state=restarted name=tripleo_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 6 03:30:50 localhost systemd[1]: Reloading.
Dec 6 03:30:50 localhost systemd-sysv-generator[70504]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 6 03:30:50 localhost systemd-rc-local-generator[70496]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 6 03:30:50 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 6 03:30:50 localhost systemd[1]: Starting ovn_metadata_agent container...
Dec 6 03:30:50 localhost systemd[1]: Started ovn_metadata_agent container.
Dec 6 03:30:50 localhost python3[70554]: ansible-file Invoked with path=/var/lib/container-puppet/container-puppet-tasks4.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 03:30:52 localhost python3[70675]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=True puppet_config=/var/lib/container-puppet/container-puppet-tasks4.json short_hostname=np0005548789 step=4 update_config_hash_only=False
Dec 6 03:30:53 localhost python3[70691]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 03:30:53 localhost python3[70707]: ansible-container_config_data Invoked with config_path=/var/lib/tripleo-config/container-puppet-config/step_4 config_pattern=container-puppet-*.json config_overrides={} debug=True
Dec 6 03:30:56 localhost sshd[70709]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 03:31:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.
Dec 6 03:31:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.
Dec 6 03:31:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.
Dec 6 03:31:01 localhost podman[70712]: 2025-12-06 08:31:01.945341334 +0000 UTC m=+0.107858032 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, batch=17.1_20251118.1, build-date=2025-11-18T22:51:28Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, name=rhosp17/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 6 03:31:01 localhost podman[70712]: 2025-12-06 08:31:01.984040566 +0000 UTC m=+0.146557184 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, io.buildah.version=1.41.4, version=17.1.12, io.openshift.expose-services=, batch=17.1_20251118.1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2025-11-18T22:51:28Z, config_id=tripleo_step3, vcs-type=git, container_name=collectd, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, name=rhosp17/openstack-collectd, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container)
Dec 6 03:31:01 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully.
Dec 6 03:31:02 localhost systemd[1]: tmp-crun.oZvhxE.mount: Deactivated successfully.
Dec 6 03:31:02 localhost podman[70729]: 2025-12-06 08:31:02.039084107 +0000 UTC m=+0.084584184 container health_status b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=starting, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.buildah.version=1.41.4, container_name=ceilometer_agent_ipmi, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, name=rhosp17/openstack-ceilometer-ipmi, vcs-type=git, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, managed_by=tripleo_ansible, url=https://www.redhat.com, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Dec 6 03:31:02 localhost podman[70729]: 2025-12-06 08:31:02.072008561 +0000 UTC m=+0.117508648 container exec_died b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, build-date=2025-11-19T00:12:45Z, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, config_id=tripleo_step4, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.buildah.version=1.41.4, io.openshift.expose-services=)
Dec 6 03:31:02 localhost podman[70730]: 2025-12-06 08:31:02.084807772 +0000 UTC m=+0.129275637 container health_status 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1761123044, name=rhosp17/openstack-cron, distribution-scope=public, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.openshift.expose-services=, vcs-type=git, container_name=logrotate_crond, architecture=x86_64, io.buildah.version=1.41.4)
Dec 6 03:31:02 localhost systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: Deactivated successfully.
Dec 6 03:31:02 localhost podman[70730]: 2025-12-06 08:31:02.092905029 +0000 UTC m=+0.137372914 container exec_died 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vendor=Red Hat, Inc., version=17.1.12, release=1761123044, build-date=2025-11-18T22:49:32Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron,
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-cron-container, tcib_managed=true, url=https://www.redhat.com, io.openshift.expose-services=, architecture=x86_64, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, managed_by=tripleo_ansible, vcs-type=git, container_name=logrotate_crond, distribution-scope=public, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron) Dec 6 03:31:02 localhost systemd[1]: 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.service: Deactivated successfully. Dec 6 03:31:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9. Dec 6 03:31:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. 
Dec 6 03:31:02 localhost podman[70778]: 2025-12-06 08:31:02.896923783 +0000 UTC m=+0.060414996 container health_status a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=starting, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., release=1761123044, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, url=https://www.redhat.com, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, build-date=2025-11-19T00:11:48Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, vcs-type=git, batch=17.1_20251118.1) Dec 6 03:31:02 localhost podman[70778]: 2025-12-06 08:31:02.922423791 +0000 UTC m=+0.085914994 container exec_died a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, build-date=2025-11-19T00:11:48Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, distribution-scope=public, container_name=ceilometer_agent_compute, 
konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Dec 6 03:31:02 localhost systemd[1]: a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.service: Deactivated successfully. Dec 6 03:31:03 localhost systemd[1]: tmp-crun.i3Nmrg.mount: Deactivated successfully. 
Dec 6 03:31:03 localhost podman[70779]: 2025-12-06 08:31:03.002582068 +0000 UTC m=+0.163190223 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, url=https://www.redhat.com, build-date=2025-11-18T23:44:13Z, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp17/openstack-iscsid, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., architecture=x86_64, batch=17.1_20251118.1, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, managed_by=tripleo_ansible) Dec 6 03:31:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b. Dec 6 03:31:03 localhost podman[70779]: 2025-12-06 08:31:03.036202055 +0000 UTC m=+0.196810220 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, version=17.1.12, name=rhosp17/openstack-iscsid, container_name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2025-11-18T23:44:13Z, release=1761123044, batch=17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, tcib_managed=true) Dec 6 03:31:03 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully. 
Dec 6 03:31:03 localhost podman[70826]: 2025-12-06 08:31:03.087966044 +0000 UTC m=+0.063124578 container health_status 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vcs-type=git, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 
17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc.) Dec 6 03:31:03 localhost podman[70826]: 2025-12-06 08:31:03.484355225 +0000 UTC m=+0.459513769 container exec_died 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, batch=17.1_20251118.1, vcs-type=git, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 
'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, release=1761123044, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, config_id=tripleo_step4, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 6 03:31:03 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: Deactivated successfully. Dec 6 03:31:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. Dec 6 03:31:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. Dec 6 03:31:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. Dec 6 03:31:06 localhost systemd[1]: tmp-crun.UAQdFI.mount: Deactivated successfully. 
Dec 6 03:31:06 localhost podman[70850]: 2025-12-06 08:31:06.908553481 +0000 UTC m=+0.071486233 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, release=1761123044, tcib_managed=true, vendor=Red Hat, Inc., container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-11-18T22:49:46Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
url=https://www.redhat.com, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, batch=17.1_20251118.1, io.buildah.version=1.41.4, vcs-type=git, config_id=tripleo_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=) Dec 6 03:31:06 localhost systemd[1]: tmp-crun.upn2SK.mount: Deactivated successfully. Dec 6 03:31:06 localhost podman[70849]: 2025-12-06 08:31:06.970157042 +0000 UTC m=+0.133094095 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=starting, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, version=17.1.12, com.redhat.component=openstack-ovn-controller-container, name=rhosp17/openstack-ovn-controller, config_id=tripleo_step4, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, batch=17.1_20251118.1, tcib_managed=true, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:34:05Z, io.openshift.expose-services=, architecture=x86_64, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, url=https://www.redhat.com) Dec 6 03:31:07 localhost podman[70851]: 2025-12-06 08:31:07.01370203 +0000 UTC m=+0.172431674 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=starting, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, vendor=Red Hat, Inc., config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, io.buildah.version=1.41.4, tcib_managed=true, container_name=ovn_metadata_agent, io.openshift.expose-services=, version=17.1.12) Dec 6 03:31:07 
localhost podman[70849]: 2025-12-06 08:31:07.024231302 +0000 UTC m=+0.187168365 container exec_died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ovn-controller-container, name=rhosp17/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, tcib_managed=true, batch=17.1_20251118.1, config_id=tripleo_step4, release=1761123044, managed_by=tripleo_ansible, vcs-type=git, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., 
distribution-scope=public, build-date=2025-11-18T23:34:05Z, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller) Dec 6 03:31:07 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Deactivated successfully. Dec 6 03:31:07 localhost podman[70851]: 2025-12-06 08:31:07.05919074 +0000 UTC m=+0.217920394 container exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, release=1761123044, io.openshift.expose-services=, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, tcib_managed=true, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., version=17.1.12, config_id=tripleo_step4, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c) Dec 6 03:31:07 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Deactivated successfully. 
Dec 6 03:31:07 localhost podman[70850]: 2025-12-06 08:31:07.110545957 +0000 UTC m=+0.273478719 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, architecture=x86_64, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 6 03:31:07 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 03:31:14 localhost snmpd[67279]: empty variable list in _query Dec 6 03:31:14 localhost snmpd[67279]: empty variable list in _query Dec 6 03:31:23 localhost podman[71054]: Dec 6 03:31:23 localhost podman[71054]: 2025-12-06 08:31:23.597646981 +0000 UTC m=+0.076750754 container create 06eded59e56a478e3b341c8b5aa6fd712a3fe774dfa303980f3425e4ff3242df (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=relaxed_hypatia, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, distribution-scope=public, release=1763362218, name=rhceph, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, architecture=x86_64, version=7, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully 
featured and supported base image., ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux ) Dec 6 03:31:23 localhost systemd[1]: Started libpod-conmon-06eded59e56a478e3b341c8b5aa6fd712a3fe774dfa303980f3425e4ff3242df.scope. Dec 6 03:31:23 localhost podman[71054]: 2025-12-06 08:31:23.566000185 +0000 UTC m=+0.045103988 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 6 03:31:23 localhost systemd[1]: Started libcrun container. Dec 6 03:31:23 localhost podman[71054]: 2025-12-06 08:31:23.688597658 +0000 UTC m=+0.167701421 container init 06eded59e56a478e3b341c8b5aa6fd712a3fe774dfa303980f3425e4ff3242df (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=relaxed_hypatia, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , distribution-scope=public, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, CEPH_POINT_RELEASE=, architecture=x86_64, io.buildah.version=1.41.4, vendor=Red Hat, Inc., RELEASE=main, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, release=1763362218, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 6 03:31:23 localhost podman[71054]: 
2025-12-06 08:31:23.702198483 +0000 UTC m=+0.181302246 container start 06eded59e56a478e3b341c8b5aa6fd712a3fe774dfa303980f3425e4ff3242df (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=relaxed_hypatia, distribution-scope=public, release=1763362218, GIT_BRANCH=main, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, architecture=x86_64, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git) Dec 6 03:31:23 localhost podman[71054]: 2025-12-06 08:31:23.702479341 +0000 UTC m=+0.181583154 container attach 06eded59e56a478e3b341c8b5aa6fd712a3fe774dfa303980f3425e4ff3242df (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=relaxed_hypatia, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, GIT_BRANCH=main, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, architecture=x86_64, 
org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, vcs-type=git, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, io.openshift.expose-services=, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public) Dec 6 03:31:23 localhost relaxed_hypatia[71069]: 167 167 Dec 6 03:31:23 localhost systemd[1]: libpod-06eded59e56a478e3b341c8b5aa6fd712a3fe774dfa303980f3425e4ff3242df.scope: Deactivated successfully. Dec 6 03:31:23 localhost podman[71054]: 2025-12-06 08:31:23.70505231 +0000 UTC m=+0.184156073 container died 06eded59e56a478e3b341c8b5aa6fd712a3fe774dfa303980f3425e4ff3242df (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=relaxed_hypatia, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, vcs-type=git, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, 
GIT_CLEAN=True, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Dec 6 03:31:23 localhost podman[71074]: 2025-12-06 08:31:23.791823809 +0000 UTC m=+0.074744673 container remove 06eded59e56a478e3b341c8b5aa6fd712a3fe774dfa303980f3425e4ff3242df (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=relaxed_hypatia, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, io.buildah.version=1.41.4, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., RELEASE=main, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=) Dec 6 03:31:23 localhost systemd[1]: libpod-conmon-06eded59e56a478e3b341c8b5aa6fd712a3fe774dfa303980f3425e4ff3242df.scope: Deactivated successfully. 
Dec 6 03:31:23 localhost podman[71095]: Dec 6 03:31:23 localhost podman[71095]: 2025-12-06 08:31:23.998570599 +0000 UTC m=+0.069436640 container create 2e4b9aa03f61b6547271efbb4f49ca3ed56f6495247ff4adbe6ad3f8b1fa5e58 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=determined_carson, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, CEPH_POINT_RELEASE=, io.openshift.expose-services=, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, GIT_CLEAN=True, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, vendor=Red Hat, Inc., version=7, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, distribution-scope=public, io.buildah.version=1.41.4) Dec 6 03:31:24 localhost systemd[1]: Started libpod-conmon-2e4b9aa03f61b6547271efbb4f49ca3ed56f6495247ff4adbe6ad3f8b1fa5e58.scope. Dec 6 03:31:24 localhost systemd[1]: Started libcrun container. 
Dec 6 03:31:24 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c85c724648715eadbccbb0bc5f6d6728d35df2e52420deaaae74774b2b67e410/merged/rootfs supports timestamps until 2038 (0x7fffffff) Dec 6 03:31:24 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c85c724648715eadbccbb0bc5f6d6728d35df2e52420deaaae74774b2b67e410/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Dec 6 03:31:24 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c85c724648715eadbccbb0bc5f6d6728d35df2e52420deaaae74774b2b67e410/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Dec 6 03:31:24 localhost podman[71095]: 2025-12-06 08:31:24.054747364 +0000 UTC m=+0.125613385 container init 2e4b9aa03f61b6547271efbb4f49ca3ed56f6495247ff4adbe6ad3f8b1fa5e58 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=determined_carson, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, GIT_CLEAN=True, version=7, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, build-date=2025-11-26T19:44:28Z, RELEASE=main, 
GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True) Dec 6 03:31:24 localhost podman[71095]: 2025-12-06 08:31:24.064226704 +0000 UTC m=+0.135092725 container start 2e4b9aa03f61b6547271efbb4f49ca3ed56f6495247ff4adbe6ad3f8b1fa5e58 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=determined_carson, RELEASE=main, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, vcs-type=git, maintainer=Guillaume Abrioux , version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z) Dec 6 03:31:24 localhost podman[71095]: 2025-12-06 08:31:24.064395979 +0000 UTC m=+0.135262050 container attach 2e4b9aa03f61b6547271efbb4f49ca3ed56f6495247ff4adbe6ad3f8b1fa5e58 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=determined_carson, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, ceph=True, release=1763362218, architecture=x86_64, 
vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 6 03:31:24 localhost podman[71095]: 2025-12-06 08:31:23.970032348 +0000 UTC m=+0.040898419 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 6 03:31:24 localhost systemd[1]: var-lib-containers-storage-overlay-ae677c7704d873443cf126f0407d68f81442eed3a10be4e81453f59c6f47f9d4-merged.mount: Deactivated successfully. 
Dec 6 03:31:24 localhost determined_carson[71110]: [ Dec 6 03:31:24 localhost determined_carson[71110]: { Dec 6 03:31:24 localhost determined_carson[71110]: "available": false, Dec 6 03:31:24 localhost determined_carson[71110]: "ceph_device": false, Dec 6 03:31:24 localhost determined_carson[71110]: "device_id": "QEMU_DVD-ROM_QM00001", Dec 6 03:31:24 localhost determined_carson[71110]: "lsm_data": {}, Dec 6 03:31:24 localhost determined_carson[71110]: "lvs": [], Dec 6 03:31:24 localhost determined_carson[71110]: "path": "/dev/sr0", Dec 6 03:31:24 localhost determined_carson[71110]: "rejected_reasons": [ Dec 6 03:31:24 localhost determined_carson[71110]: "Insufficient space (<5GB)", Dec 6 03:31:24 localhost determined_carson[71110]: "Has a FileSystem" Dec 6 03:31:24 localhost determined_carson[71110]: ], Dec 6 03:31:24 localhost determined_carson[71110]: "sys_api": { Dec 6 03:31:24 localhost determined_carson[71110]: "actuators": null, Dec 6 03:31:24 localhost determined_carson[71110]: "device_nodes": "sr0", Dec 6 03:31:24 localhost determined_carson[71110]: "human_readable_size": "482.00 KB", Dec 6 03:31:24 localhost determined_carson[71110]: "id_bus": "ata", Dec 6 03:31:24 localhost determined_carson[71110]: "model": "QEMU DVD-ROM", Dec 6 03:31:24 localhost determined_carson[71110]: "nr_requests": "2", Dec 6 03:31:24 localhost determined_carson[71110]: "partitions": {}, Dec 6 03:31:24 localhost determined_carson[71110]: "path": "/dev/sr0", Dec 6 03:31:24 localhost determined_carson[71110]: "removable": "1", Dec 6 03:31:24 localhost determined_carson[71110]: "rev": "2.5+", Dec 6 03:31:24 localhost determined_carson[71110]: "ro": "0", Dec 6 03:31:24 localhost determined_carson[71110]: "rotational": "1", Dec 6 03:31:24 localhost determined_carson[71110]: "sas_address": "", Dec 6 03:31:24 localhost determined_carson[71110]: "sas_device_handle": "", Dec 6 03:31:24 localhost determined_carson[71110]: "scheduler_mode": "mq-deadline", Dec 6 03:31:24 localhost 
determined_carson[71110]: "sectors": 0, Dec 6 03:31:24 localhost determined_carson[71110]: "sectorsize": "2048", Dec 6 03:31:24 localhost determined_carson[71110]: "size": 493568.0, Dec 6 03:31:24 localhost determined_carson[71110]: "support_discard": "0", Dec 6 03:31:24 localhost determined_carson[71110]: "type": "disk", Dec 6 03:31:24 localhost determined_carson[71110]: "vendor": "QEMU" Dec 6 03:31:24 localhost determined_carson[71110]: } Dec 6 03:31:24 localhost determined_carson[71110]: } Dec 6 03:31:24 localhost determined_carson[71110]: ] Dec 6 03:31:25 localhost systemd[1]: libpod-2e4b9aa03f61b6547271efbb4f49ca3ed56f6495247ff4adbe6ad3f8b1fa5e58.scope: Deactivated successfully. Dec 6 03:31:25 localhost podman[73127]: 2025-12-06 08:31:25.083989593 +0000 UTC m=+0.048181441 container died 2e4b9aa03f61b6547271efbb4f49ca3ed56f6495247ff4adbe6ad3f8b1fa5e58 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=determined_carson, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, GIT_BRANCH=main, ceph=True, io.openshift.tags=rhceph ceph, version=7, distribution-scope=public, architecture=x86_64, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, 
maintainer=Guillaume Abrioux ) Dec 6 03:31:25 localhost systemd[1]: var-lib-containers-storage-overlay-c85c724648715eadbccbb0bc5f6d6728d35df2e52420deaaae74774b2b67e410-merged.mount: Deactivated successfully. Dec 6 03:31:25 localhost podman[73127]: 2025-12-06 08:31:25.123066106 +0000 UTC m=+0.087257934 container remove 2e4b9aa03f61b6547271efbb4f49ca3ed56f6495247ff4adbe6ad3f8b1fa5e58 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=determined_carson, vcs-type=git, GIT_BRANCH=main, architecture=x86_64, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, name=rhceph, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, release=1763362218, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , ceph=True, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 6 03:31:25 localhost systemd[1]: libpod-conmon-2e4b9aa03f61b6547271efbb4f49ca3ed56f6495247ff4adbe6ad3f8b1fa5e58.scope: Deactivated successfully. Dec 6 03:31:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc. Dec 6 03:31:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. 
Dec 6 03:31:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6. Dec 6 03:31:32 localhost podman[73159]: 2025-12-06 08:31:32.923471891 +0000 UTC m=+0.082981734 container health_status b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, version=17.1.12, vendor=Red Hat, Inc., release=1761123044, io.buildah.version=1.41.4, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z) Dec 6 03:31:32 localhost podman[73159]: 2025-12-06 08:31:32.953076925 +0000 UTC m=+0.112586798 container exec_died b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.buildah.version=1.41.4, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com) Dec 6 03:31:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9. 
Dec 6 03:31:32 localhost podman[73157]: 2025-12-06 08:31:32.972988193 +0000 UTC m=+0.133645001 container health_status 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, version=17.1.12, tcib_managed=true, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, vcs-type=git, url=https://www.redhat.com, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 cron, 
io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, config_id=tripleo_step4, release=1761123044, build-date=2025-11-18T22:49:32Z, distribution-scope=public, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron) Dec 6 03:31:32 localhost systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: Deactivated successfully. Dec 6 03:31:32 localhost podman[73157]: 2025-12-06 08:31:32.98011982 +0000 UTC m=+0.140776638 container exec_died 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1761123044, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:32Z, vendor=Red Hat, Inc., container_name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, distribution-scope=public, url=https://www.redhat.com, architecture=x86_64, io.buildah.version=1.41.4) Dec 6 03:31:32 localhost systemd[1]: 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.service: Deactivated successfully. Dec 6 03:31:33 localhost systemd[1]: tmp-crun.qmMigW.mount: Deactivated successfully. 
Dec 6 03:31:33 localhost podman[73208]: 2025-12-06 08:31:33.052299123 +0000 UTC m=+0.075145595 container health_status a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, version=17.1.12, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, tcib_managed=true, url=https://www.redhat.com, managed_by=tripleo_ansible, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, distribution-scope=public) Dec 6 03:31:33 localhost podman[73208]: 2025-12-06 08:31:33.079960028 +0000 UTC m=+0.102806470 container exec_died a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-type=git, io.buildah.version=1.41.4, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, build-date=2025-11-19T00:11:48Z, vendor=Red Hat, Inc., io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, release=1761123044, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, batch=17.1_20251118.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team) Dec 6 03:31:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. Dec 6 03:31:33 localhost systemd[1]: a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.service: Deactivated successfully. 
Dec 6 03:31:33 localhost podman[73158]: 2025-12-06 08:31:33.032375085 +0000 UTC m=+0.193284371 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_id=tripleo_step3, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp17/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-collectd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
release=1761123044, build-date=2025-11-18T22:51:28Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.openshift.expose-services=, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, distribution-scope=public, tcib_managed=true, url=https://www.redhat.com) Dec 6 03:31:33 localhost podman[73158]: 2025-12-06 08:31:33.16945153 +0000 UTC m=+0.330360846 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, vcs-type=git, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, tcib_managed=true, io.buildah.version=1.41.4, architecture=x86_64, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 
'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, managed_by=tripleo_ansible, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:51:28Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 6 03:31:33 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully. 
Dec 6 03:31:33 localhost podman[73245]: 2025-12-06 08:31:33.187003115 +0000 UTC m=+0.084202631 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, distribution-scope=public, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, release=1761123044, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, vcs-type=git, url=https://www.redhat.com, io.buildah.version=1.41.4, tcib_managed=true, managed_by=tripleo_ansible, vendor=Red Hat, Inc., container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid) Dec 6 03:31:33 localhost podman[73245]: 2025-12-06 08:31:33.19533537 +0000 UTC m=+0.092534856 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, config_id=tripleo_step3, io.buildah.version=1.41.4, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, distribution-scope=public, version=17.1.12, build-date=2025-11-18T23:44:13Z, container_name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-iscsid, managed_by=tripleo_ansible, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64) Dec 6 03:31:33 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully. Dec 6 03:31:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b. 
Dec 6 03:31:33 localhost podman[73268]: 2025-12-06 08:31:33.906891841 +0000 UTC m=+0.069733199 container health_status 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, version=17.1.12, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vcs-type=git, io.buildah.version=1.41.4, tcib_managed=true, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1) Dec 6 03:31:34 localhost podman[73268]: 2025-12-06 08:31:34.286249602 +0000 UTC m=+0.449090960 container exec_died 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, tcib_managed=true, name=rhosp17/openstack-nova-compute, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, config_id=tripleo_step4, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, container_name=nova_migration_target, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, managed_by=tripleo_ansible) Dec 6 03:31:34 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: Deactivated successfully. Dec 6 03:31:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. Dec 6 03:31:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. Dec 6 03:31:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. 
Dec 6 03:31:37 localhost podman[73293]: 2025-12-06 08:31:37.934119586 +0000 UTC m=+0.090399770 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, io.buildah.version=1.41.4, io.openshift.expose-services=, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 6 03:31:37 localhost podman[73292]: 2025-12-06 08:31:37.913048632 +0000 UTC m=+0.074459403 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1761123044, vendor=Red Hat, Inc., vcs-type=git, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, tcib_managed=true, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step1, io.openshift.expose-services=, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd) Dec 6 03:31:37 localhost podman[73291]: 2025-12-06 08:31:37.97324212 +0000 UTC m=+0.137098406 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, 
health_status=healthy, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, vcs-type=git, container_name=ovn_controller, config_id=tripleo_step4, architecture=x86_64, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, name=rhosp17/openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, build-date=2025-11-18T23:34:05Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, io.openshift.expose-services=, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272) Dec 6 03:31:37 localhost podman[73293]: 2025-12-06 08:31:37.990077574 +0000 UTC 
m=+0.146357788 container exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, distribution-scope=public, managed_by=tripleo_ansible, vcs-type=git, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, container_name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 6 03:31:37 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Deactivated successfully. 
Dec 6 03:31:38 localhost podman[73291]: 2025-12-06 08:31:38.019391159 +0000 UTC m=+0.183247395 container exec_died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, tcib_managed=true, url=https://www.redhat.com, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, build-date=2025-11-18T23:34:05Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, release=1761123044, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, batch=17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 6 03:31:38 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Deactivated successfully. Dec 6 03:31:38 localhost podman[73292]: 2025-12-06 08:31:38.095136821 +0000 UTC m=+0.256547612 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, version=17.1.12, name=rhosp17/openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, distribution-scope=public, tcib_managed=true, architecture=x86_64, managed_by=tripleo_ansible, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.openshift.expose-services=, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044) Dec 6 03:31:38 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 03:32:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc. Dec 6 03:32:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. Dec 6 03:32:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9. Dec 6 03:32:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. Dec 6 03:32:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6. Dec 6 03:32:03 localhost systemd[1]: tmp-crun.JGxccm.mount: Deactivated successfully. 
Dec 6 03:32:03 localhost podman[73367]: 2025-12-06 08:32:03.897821284 +0000 UTC m=+0.061790927 container health_status 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, release=1761123044, distribution-scope=public, architecture=x86_64, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, build-date=2025-11-18T22:49:32Z, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.4, container_name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, managed_by=tripleo_ansible, url=https://www.redhat.com, name=rhosp17/openstack-cron, version=17.1.12, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc.) Dec 6 03:32:03 localhost podman[73373]: 2025-12-06 08:32:03.973294108 +0000 UTC m=+0.126317167 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, release=1761123044, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:44:13Z, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', 
'/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.openshift.expose-services=, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., tcib_managed=true, batch=17.1_20251118.1, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid) Dec 6 03:32:04 localhost podman[73368]: 2025-12-06 08:32:04.008520914 +0000 UTC m=+0.170558408 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:51:28Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.component=openstack-collectd-container, release=1761123044, distribution-scope=public, name=rhosp17/openstack-collectd, 
vcs-type=git, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, architecture=x86_64, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, tcib_managed=true, container_name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 6 03:32:04 localhost podman[73379]: 
2025-12-06 08:32:03.960928741 +0000 UTC m=+0.116014723 container health_status b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, version=17.1.12, batch=17.1_20251118.1, vcs-type=git, distribution-scope=public, config_id=tripleo_step4, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.openshift.expose-services=, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:12:45Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, url=https://www.redhat.com) Dec 6 03:32:04 localhost podman[73368]: 2025-12-06 08:32:04.019995654 +0000 UTC m=+0.182033158 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, version=17.1.12, url=https://www.redhat.com, vcs-type=git, com.redhat.component=openstack-collectd-container, container_name=collectd, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1761123044) Dec 6 03:32:04 localhost podman[73369]: 2025-12-06 08:32:03.929325026 +0000 UTC m=+0.085578163 container health_status a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-compute, architecture=x86_64, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, release=1761123044, vendor=Red Hat, Inc., tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, 
com.redhat.component=openstack-ceilometer-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, batch=17.1_20251118.1) Dec 6 03:32:04 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully. Dec 6 03:32:04 localhost podman[73367]: 2025-12-06 08:32:04.032714642 +0000 UTC m=+0.196684285 container exec_died 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-cron, build-date=2025-11-18T22:49:32Z, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, batch=17.1_20251118.1, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, name=rhosp17/openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.openshift.expose-services=, distribution-scope=public, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 6 03:32:04 localhost podman[73379]: 2025-12-06 08:32:04.040061076 +0000 UTC m=+0.195147028 container exec_died b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, distribution-scope=public, build-date=2025-11-19T00:12:45Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_ipmi, architecture=x86_64, url=https://www.redhat.com, managed_by=tripleo_ansible, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, batch=17.1_20251118.1, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676) Dec 6 03:32:04 localhost systemd[1]: 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.service: Deactivated successfully. 
Dec 6 03:32:04 localhost podman[73369]: 2025-12-06 08:32:04.061081408 +0000 UTC m=+0.217334565 container exec_died a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, version=17.1.12, release=1761123044, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, architecture=x86_64, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 6 03:32:04 localhost systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: Deactivated successfully. Dec 6 03:32:04 localhost systemd[1]: a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.service: Deactivated successfully. 
Dec 6 03:32:04 localhost podman[73373]: 2025-12-06 08:32:04.083546143 +0000 UTC m=+0.236569222 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, vendor=Red Hat, Inc., url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, io.openshift.expose-services=, version=17.1.12, build-date=2025-11-18T23:44:13Z, vcs-type=git, architecture=x86_64, 
vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, container_name=iscsid, com.redhat.component=openstack-iscsid-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-iscsid, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4) Dec 6 03:32:04 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully. Dec 6 03:32:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b. Dec 6 03:32:04 localhost podman[73476]: 2025-12-06 08:32:04.913501919 +0000 UTC m=+0.075041102 container health_status 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, version=17.1.12, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, 
org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., io.openshift.expose-services=, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, distribution-scope=public, tcib_managed=true) Dec 6 03:32:05 localhost podman[73476]: 2025-12-06 08:32:05.338192453 +0000 UTC m=+0.499731616 container exec_died 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, tcib_managed=true, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, release=1761123044, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, version=17.1.12, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4) Dec 6 03:32:05 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: Deactivated successfully. Dec 6 03:32:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. Dec 6 03:32:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. Dec 6 03:32:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. Dec 6 03:32:08 localhost systemd[1]: tmp-crun.xPJYBR.mount: Deactivated successfully. Dec 6 03:32:08 localhost podman[73501]: 2025-12-06 08:32:08.923550169 +0000 UTC m=+0.082422776 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, distribution-scope=public, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:46Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, config_id=tripleo_step1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 6 03:32:08 localhost podman[73500]: 2025-12-06 08:32:08.968710218 +0000 UTC m=+0.131402273 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, container_name=ovn_controller, batch=17.1_20251118.1, distribution-scope=public, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, name=rhosp17/openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 6 03:32:09 localhost podman[73502]: 2025-12-06 08:32:09.018350963 +0000 UTC m=+0.174330612 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 
(image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, build-date=2025-11-19T00:14:25Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, vendor=Red Hat, Inc., release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-neutron-metadata-agent-ovn, architecture=x86_64, io.buildah.version=1.41.4, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, version=17.1.12, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, io.openshift.expose-services=, managed_by=tripleo_ansible) Dec 6 03:32:09 localhost podman[73500]: 2025-12-06 08:32:09.044413928 +0000 UTC m=+0.207105993 container exec_died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, url=https://www.redhat.com, build-date=2025-11-18T23:34:05Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 
'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, tcib_managed=true, version=17.1.12, vendor=Red Hat, Inc., io.openshift.expose-services=, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 6 03:32:09 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Deactivated successfully. 
Dec 6 03:32:09 localhost podman[73502]: 2025-12-06 08:32:09.075234979 +0000 UTC m=+0.231214628 container exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.4, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, vcs-type=git, batch=17.1_20251118.1, 
tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, io.openshift.expose-services=) Dec 6 03:32:09 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Deactivated successfully. 
Dec 6 03:32:09 localhost podman[73501]: 2025-12-06 08:32:09.118987265 +0000 UTC m=+0.277859822 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, name=rhosp17/openstack-qdrouterd, architecture=x86_64, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, build-date=2025-11-18T22:49:46Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.openshift.expose-services=, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, version=17.1.12, vendor=Red Hat, Inc., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.4, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 6 03:32:09 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 03:32:09 localhost systemd[1]: tmp-crun.JuIaC0.mount: Deactivated successfully. Dec 6 03:32:15 localhost sshd[73576]: main: sshd: ssh-rsa algorithm is disabled Dec 6 03:32:21 localhost sshd[73578]: main: sshd: ssh-rsa algorithm is disabled Dec 6 03:32:23 localhost sshd[73580]: main: sshd: ssh-rsa algorithm is disabled Dec 6 03:32:26 localhost sshd[73694]: main: sshd: ssh-rsa algorithm is disabled Dec 6 03:32:26 localhost podman[73683]: 2025-12-06 08:32:26.690669676 +0000 UTC m=+0.144561323 container exec ecefb99ff092b207f760d60e8fd97152c624df0e1016549024435ae641ee2e06 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548789, io.openshift.tags=rhceph ceph, release=1763362218, io.openshift.expose-services=, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , 
vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, build-date=2025-11-26T19:44:28Z, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, GIT_BRANCH=main, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., RELEASE=main, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, architecture=x86_64, version=7, url=https://catalog.redhat.com/en/search?searchType=containers) Dec 6 03:32:26 localhost podman[73683]: 2025-12-06 08:32:26.793460104 +0000 UTC m=+0.247351771 container exec_died ecefb99ff092b207f760d60e8fd97152c624df0e1016549024435ae641ee2e06 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548789, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , version=7, io.openshift.expose-services=, name=rhceph, vcs-type=git, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, release=1763362218) Dec 6 
03:32:28 localhost sshd[73825]: main: sshd: ssh-rsa algorithm is disabled Dec 6 03:32:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc. Dec 6 03:32:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. Dec 6 03:32:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9. Dec 6 03:32:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. Dec 6 03:32:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6. Dec 6 03:32:34 localhost systemd[1]: tmp-crun.HcpkEd.mount: Deactivated successfully. Dec 6 03:32:34 localhost podman[73830]: 2025-12-06 08:32:34.929416852 +0000 UTC m=+0.087696738 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, name=rhosp17/openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:44:13Z, vcs-type=git, io.buildah.version=1.41.4, managed_by=tripleo_ansible, batch=17.1_20251118.1, version=17.1.12, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, distribution-scope=public, release=1761123044, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid) Dec 6 03:32:34 localhost podman[73828]: 2025-12-06 08:32:34.974960452 +0000 UTC m=+0.132577058 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, build-date=2025-11-18T22:51:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, url=https://www.redhat.com, io.openshift.expose-services=, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, 
io.buildah.version=1.41.4, container_name=collectd, architecture=x86_64, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, com.redhat.component=openstack-collectd-container) Dec 6 03:32:34 localhost podman[73830]: 2025-12-06 08:32:34.994441076 +0000 UTC m=+0.152721012 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, distribution-scope=public, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, container_name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.12, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.buildah.version=1.41.4, name=rhosp17/openstack-iscsid, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 
'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1761123044, maintainer=OpenStack TripleO Team) Dec 6 03:32:35 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully. 
Dec 6 03:32:35 localhost podman[73828]: 2025-12-06 08:32:35.008516486 +0000 UTC m=+0.166133112 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, build-date=2025-11-18T22:51:28Z, distribution-scope=public, name=rhosp17/openstack-collectd, maintainer=OpenStack TripleO Team, tcib_managed=true, io.buildah.version=1.41.4, container_name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, konflux.additional-tags=17.1.12 
17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, architecture=x86_64, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, version=17.1.12, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd) Dec 6 03:32:35 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully. Dec 6 03:32:35 localhost podman[73829]: 2025-12-06 08:32:35.069679733 +0000 UTC m=+0.227653189 container health_status a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, name=rhosp17/openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, config_id=tripleo_step4, managed_by=tripleo_ansible, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, batch=17.1_20251118.1, io.buildah.version=1.41.4, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': 
['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-compute-container) Dec 6 03:32:35 localhost podman[73829]: 2025-12-06 08:32:35.121281479 +0000 UTC m=+0.279254925 container exec_died a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, distribution-scope=public, 
vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-compute, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, architecture=x86_64, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12) Dec 6 03:32:35 localhost systemd[1]: a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.service: Deactivated successfully. Dec 6 03:32:35 localhost podman[73831]: 2025-12-06 08:32:35.135259756 +0000 UTC m=+0.288474937 container health_status b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_ipmi, build-date=2025-11-19T00:12:45Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, distribution-scope=public, release=1761123044, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, batch=17.1_20251118.1, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 6 03:32:35 localhost podman[73831]: 2025-12-06 08:32:35.163975722 +0000 UTC m=+0.317190883 container exec_died b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-type=git, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, version=17.1.12, build-date=2025-11-19T00:12:45Z, tcib_managed=true, config_id=tripleo_step4, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 6 03:32:35 localhost systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: Deactivated successfully. 
Dec 6 03:32:35 localhost podman[73827]: 2025-12-06 08:32:35.235546726 +0000 UTC m=+0.392653936 container health_status 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, managed_by=tripleo_ansible, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-cron, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-cron-container, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, vcs-type=git, container_name=logrotate_crond, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4) Dec 6 03:32:35 localhost podman[73827]: 2025-12-06 08:32:35.267914105 +0000 UTC m=+0.425021285 container exec_died 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, 
summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, release=1761123044, version=17.1.12, name=rhosp17/openstack-cron, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, com.redhat.component=openstack-cron-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:32Z, container_name=logrotate_crond, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 6 03:32:35 localhost systemd[1]: 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.service: Deactivated successfully. Dec 6 03:32:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b. 
Dec 6 03:32:35 localhost podman[73937]: 2025-12-06 08:32:35.916879436 +0000 UTC m=+0.077854648 container health_status 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., batch=17.1_20251118.1, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, distribution-scope=public, vcs-type=git, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container) Dec 6 03:32:36 localhost podman[73937]: 2025-12-06 08:32:36.286282542 +0000 UTC m=+0.447257804 container exec_died 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vendor=Red Hat, Inc., config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, vcs-type=git, name=rhosp17/openstack-nova-compute, release=1761123044, container_name=nova_migration_target, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 
nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z) Dec 6 03:32:36 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: Deactivated successfully. Dec 6 03:32:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. Dec 6 03:32:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. Dec 6 03:32:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. 
Dec 6 03:32:39 localhost podman[73961]: 2025-12-06 08:32:39.893018151 +0000 UTC m=+0.058582900 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://www.redhat.com, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, io.buildah.version=1.41.4, config_id=tripleo_step4, managed_by=tripleo_ansible, tcib_managed=true, build-date=2025-11-18T23:34:05Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, name=rhosp17/openstack-ovn-controller, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, release=1761123044, container_name=ovn_controller) Dec 6 03:32:39 localhost podman[73961]: 2025-12-06 08:32:39.935090845 +0000 UTC m=+0.100655604 container exec_died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-type=git, container_name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.buildah.version=1.41.4, tcib_managed=true, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, version=17.1.12, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, 
org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ovn-controller, maintainer=OpenStack TripleO Team, config_id=tripleo_step4) Dec 6 03:32:39 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Deactivated successfully. Dec 6 03:32:39 localhost podman[73968]: 2025-12-06 08:32:39.947202344 +0000 UTC m=+0.101595372 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, release=1761123044, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, version=17.1.12, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, url=https://www.redhat.com, vendor=Red Hat, Inc., io.openshift.expose-services=) Dec 6 03:32:40 localhost podman[73968]: 2025-12-06 08:32:40.008165785 +0000 UTC m=+0.162558823 container exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 
(image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, batch=17.1_20251118.1, config_id=tripleo_step4, io.buildah.version=1.41.4, io.openshift.expose-services=, tcib_managed=true, build-date=2025-11-19T00:14:25Z, url=https://www.redhat.com, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc.) Dec 6 03:32:40 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Deactivated successfully. Dec 6 03:32:40 localhost podman[73962]: 2025-12-06 08:32:40.025417912 +0000 UTC m=+0.187442393 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1761123044, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, config_id=tripleo_step1, distribution-scope=public, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, url=https://www.redhat.com, io.buildah.version=1.41.4, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd) Dec 6 03:32:40 localhost podman[73962]: 2025-12-06 08:32:40.219206747 +0000 UTC m=+0.381231268 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, 
name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.12, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, io.openshift.expose-services=, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, release=1761123044, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc.) Dec 6 03:32:40 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 03:32:40 localhost systemd[1]: tmp-crun.uGMz2E.mount: Deactivated successfully. Dec 6 03:33:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc. Dec 6 03:33:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. Dec 6 03:33:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9. Dec 6 03:33:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. Dec 6 03:33:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6. Dec 6 03:33:05 localhost systemd[1]: tmp-crun.IjPAEY.mount: Deactivated successfully. 
Dec 6 03:33:05 localhost podman[74037]: 2025-12-06 08:33:05.912729327 +0000 UTC m=+0.074438472 container health_status 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, batch=17.1_20251118.1, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, vcs-type=git, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
maintainer=OpenStack TripleO Team, release=1761123044, build-date=2025-11-18T22:49:32Z, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, name=rhosp17/openstack-cron, version=17.1.12, container_name=logrotate_crond, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.expose-services=) Dec 6 03:33:05 localhost podman[74038]: 2025-12-06 08:33:05.96750809 +0000 UTC m=+0.125557324 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=1761123044, architecture=x86_64, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, build-date=2025-11-18T22:51:28Z, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 6 03:33:05 localhost podman[74038]: 2025-12-06 08:33:05.973515883 +0000 UTC m=+0.131565107 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, build-date=2025-11-18T22:51:28Z, 
com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, url=https://www.redhat.com, io.buildah.version=1.41.4, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, name=rhosp17/openstack-collectd, io.openshift.expose-services=, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, 
config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 6 03:33:05 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully. Dec 6 03:33:05 localhost podman[74037]: 2025-12-06 08:33:05.997238627 +0000 UTC m=+0.158947752 container exec_died 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, managed_by=tripleo_ansible, architecture=x86_64, vendor=Red Hat, Inc., config_id=tripleo_step4, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, tcib_managed=true, com.redhat.component=openstack-cron-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-cron, url=https://www.redhat.com, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron) Dec 6 03:33:06 localhost systemd[1]: 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.service: Deactivated successfully. Dec 6 03:33:06 localhost podman[74040]: 2025-12-06 08:33:06.077442316 +0000 UTC m=+0.231392925 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.4, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., version=17.1.12, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, managed_by=tripleo_ansible, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d) Dec 6 03:33:06 localhost podman[74041]: 2025-12-06 08:33:05.949893461 +0000 UTC m=+0.108383118 container health_status b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
release=1761123044, distribution-scope=public, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, build-date=2025-11-19T00:12:45Z, name=rhosp17/openstack-ceilometer-ipmi, url=https://www.redhat.com, tcib_managed=true, vendor=Red Hat, Inc., 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, architecture=x86_64) Dec 6 03:33:06 localhost podman[74040]: 2025-12-06 08:33:06.111050541 +0000 UTC m=+0.265001150 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, build-date=2025-11-18T23:44:13Z, url=https://www.redhat.com, tcib_managed=true, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.4, architecture=x86_64, com.redhat.component=openstack-iscsid-container, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, config_id=tripleo_step3, managed_by=tripleo_ansible, vendor=Red Hat, Inc., distribution-scope=public, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, container_name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d) Dec 6 03:33:06 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully. Dec 6 03:33:06 localhost podman[74041]: 2025-12-06 08:33:06.13427791 +0000 UTC m=+0.292767587 container exec_died b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_ipmi, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-ipmi, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 
'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, url=https://www.redhat.com, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, build-date=2025-11-19T00:12:45Z, vcs-type=git, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676) Dec 6 03:33:06 localhost systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: Deactivated successfully. 
Dec 6 03:33:06 localhost podman[74039]: 2025-12-06 08:33:05.900238956 +0000 UTC m=+0.063083527 container health_status a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., architecture=x86_64, vcs-type=git, config_id=tripleo_step4, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-compute, batch=17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, build-date=2025-11-19T00:11:48Z, distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute) Dec 6 03:33:06 localhost podman[74039]: 2025-12-06 08:33:06.182538693 +0000 UTC m=+0.345383314 container exec_died a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vendor=Red Hat, Inc., managed_by=tripleo_ansible, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, architecture=x86_64, url=https://www.redhat.com, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, version=17.1.12) Dec 6 03:33:06 localhost systemd[1]: a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.service: Deactivated successfully. Dec 6 03:33:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b. Dec 6 03:33:06 localhost systemd[1]: tmp-crun.kLeSDz.mount: Deactivated successfully. 
Dec 6 03:33:06 localhost podman[74142]: 2025-12-06 08:33:06.953455227 +0000 UTC m=+0.110067351 container health_status 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, container_name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, version=17.1.12, vcs-type=git, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}) Dec 6 03:33:07 localhost podman[74142]: 2025-12-06 08:33:07.306808783 +0000 UTC m=+0.463420937 container exec_died 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, managed_by=tripleo_ansible, tcib_managed=true, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, com.redhat.component=openstack-nova-compute-container, vcs-type=git, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, vendor=Red Hat, Inc., config_id=tripleo_step4) Dec 6 03:33:07 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: Deactivated successfully. Dec 6 03:33:07 localhost sshd[74165]: main: sshd: ssh-rsa algorithm is disabled Dec 6 03:33:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. Dec 6 03:33:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. Dec 6 03:33:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. 
Dec 6 03:33:10 localhost podman[74167]: 2025-12-06 08:33:10.93180075 +0000 UTC m=+0.094148476 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, url=https://www.redhat.com, version=17.1.12, name=rhosp17/openstack-ovn-controller, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:34:05Z, io.buildah.version=1.41.4, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, distribution-scope=public, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
release=1761123044, container_name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true) Dec 6 03:33:10 localhost podman[74167]: 2025-12-06 08:33:10.958113693 +0000 UTC m=+0.120461449 container exec_died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, build-date=2025-11-18T23:34:05Z, architecture=x86_64, version=17.1.12, io.openshift.expose-services=, vcs-type=git, tcib_managed=true, url=https://www.redhat.com, com.redhat.component=openstack-ovn-controller-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, 
batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 6 03:33:10 localhost systemd[1]: tmp-crun.KZaxrX.mount: Deactivated successfully. Dec 6 03:33:10 localhost podman[74168]: 2025-12-06 08:33:10.979428234 +0000 UTC m=+0.138167450 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, url=https://www.redhat.com, architecture=x86_64, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, name=rhosp17/openstack-qdrouterd, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 6 03:33:10 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Deactivated successfully. 
Dec 6 03:33:11 localhost podman[74169]: 2025-12-06 08:33:11.032516274 +0000 UTC m=+0.187823574 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, batch=17.1_20251118.1, config_id=tripleo_step4, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, container_name=ovn_metadata_agent, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64) Dec 6 03:33:11 localhost podman[74169]: 2025-12-06 08:33:11.07462773 +0000 UTC m=+0.229934990 container exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, release=1761123044, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, config_id=tripleo_step4, tcib_managed=true, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, batch=17.1_20251118.1) Dec 6 03:33:11 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Deactivated successfully. Dec 6 03:33:11 localhost podman[74168]: 2025-12-06 08:33:11.157626103 +0000 UTC m=+0.316365349 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, build-date=2025-11-18T22:49:46Z, vcs-type=git, architecture=x86_64, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, 
container_name=metrics_qdr, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, version=17.1.12, release=1761123044, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 6 03:33:11 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 03:33:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc. Dec 6 03:33:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. Dec 6 03:33:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9. Dec 6 03:33:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. Dec 6 03:33:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6. Dec 6 03:33:36 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... 
Dec 6 03:33:36 localhost recover_tripleo_nova_virtqemud[74349]: 61814 Dec 6 03:33:36 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 6 03:33:36 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 6 03:33:36 localhost systemd[1]: tmp-crun.6LEVvV.mount: Deactivated successfully. Dec 6 03:33:36 localhost podman[74322]: 2025-12-06 08:33:36.950904537 +0000 UTC m=+0.105292405 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', 
'/sys/fs/cgroup:/sys/fs/cgroup:ro']}, url=https://www.redhat.com, io.openshift.expose-services=, vendor=Red Hat, Inc., config_id=tripleo_step3, version=17.1.12, batch=17.1_20251118.1, name=rhosp17/openstack-collectd, container_name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, build-date=2025-11-18T22:51:28Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible) Dec 6 03:33:36 localhost systemd[1]: tmp-crun.e1OaA0.mount: Deactivated successfully. 
Dec 6 03:33:36 localhost podman[74321]: 2025-12-06 08:33:36.992264029 +0000 UTC m=+0.146400409 container health_status 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, distribution-scope=public, architecture=x86_64, io.openshift.expose-services=, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, build-date=2025-11-18T22:49:32Z, name=rhosp17/openstack-cron, vendor=Red Hat, Inc., release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, vcs-type=git, managed_by=tripleo_ansible, container_name=logrotate_crond, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1) Dec 6 03:33:37 localhost podman[74321]: 2025-12-06 08:33:37.030120525 +0000 UTC m=+0.184256895 container exec_died 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, build-date=2025-11-18T22:49:32Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, batch=17.1_20251118.1, name=rhosp17/openstack-cron, vendor=Red Hat, Inc., vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=logrotate_crond, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, tcib_managed=true, io.buildah.version=1.41.4, com.redhat.component=openstack-cron-container, release=1761123044, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible) Dec 6 03:33:37 localhost systemd[1]: 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.service: Deactivated successfully. 
Dec 6 03:33:37 localhost podman[74323]: 2025-12-06 08:33:37.045546916 +0000 UTC m=+0.197567132 container health_status a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, managed_by=tripleo_ansible, 
com.redhat.component=openstack-ceilometer-compute-container, build-date=2025-11-19T00:11:48Z, tcib_managed=true, name=rhosp17/openstack-ceilometer-compute, io.buildah.version=1.41.4, url=https://www.redhat.com, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, io.openshift.expose-services=, release=1761123044, container_name=ceilometer_agent_compute, batch=17.1_20251118.1, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 6 03:33:37 localhost podman[74322]: 2025-12-06 08:33:37.085335211 +0000 UTC m=+0.239723049 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.12, io.buildah.version=1.41.4, vcs-type=git, com.redhat.component=openstack-collectd-container, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1761123044, name=rhosp17/openstack-collectd, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, container_name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vendor=Red Hat, Inc., tcib_managed=true, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:51:28Z, distribution-scope=public) Dec 6 03:33:37 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully. 
Dec 6 03:33:37 localhost podman[74323]: 2025-12-06 08:33:37.150448038 +0000 UTC m=+0.302468204 container exec_died a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, release=1761123044, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, batch=17.1_20251118.1, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, version=17.1.12, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, architecture=x86_64, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:11:48Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676) Dec 6 03:33:37 localhost systemd[1]: a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.service: Deactivated successfully. Dec 6 03:33:37 localhost podman[74324]: 2025-12-06 08:33:37.130104358 +0000 UTC m=+0.278090051 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, release=1761123044, batch=17.1_20251118.1, container_name=iscsid, io.buildah.version=1.41.4, version=17.1.12, name=rhosp17/openstack-iscsid, io.openshift.expose-services=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:44:13Z, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, distribution-scope=public, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container) Dec 6 03:33:37 localhost podman[74327]: 2025-12-06 08:33:37.151002495 +0000 UTC m=+0.288506108 container health_status b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, 
com.redhat.component=openstack-ceilometer-ipmi-container, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, name=rhosp17/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, distribution-scope=public, vendor=Red Hat, Inc., 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676) Dec 6 03:33:37 localhost podman[74324]: 2025-12-06 08:33:37.211048269 +0000 UTC m=+0.359033952 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_id=tripleo_step3, build-date=2025-11-18T23:44:13Z, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, vendor=Red Hat, Inc., batch=17.1_20251118.1, name=rhosp17/openstack-iscsid, distribution-scope=public, tcib_managed=true, architecture=x86_64, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, com.redhat.component=openstack-iscsid-container, vcs-type=git, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid) Dec 6 03:33:37 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully. Dec 6 03:33:37 localhost podman[74327]: 2025-12-06 08:33:37.23467838 +0000 UTC m=+0.372182033 container exec_died b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, config_id=tripleo_step4, tcib_managed=true, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12) Dec 6 03:33:37 localhost systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: Deactivated successfully. Dec 6 03:33:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b. 
Dec 6 03:33:37 localhost podman[74438]: 2025-12-06 08:33:37.925429226 +0000 UTC m=+0.081098066 container health_status 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, version=17.1.12, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, batch=17.1_20251118.1, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, 
com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, name=rhosp17/openstack-nova-compute, tcib_managed=true, release=1761123044, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible) Dec 6 03:33:38 localhost podman[74438]: 2025-12-06 08:33:38.331735879 +0000 UTC m=+0.487404699 container exec_died 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, url=https://www.redhat.com, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, distribution-scope=public, io.buildah.version=1.41.4, config_id=tripleo_step4, batch=17.1_20251118.1, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_migration_target, vcs-type=git, vendor=Red Hat, Inc., tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute) Dec 6 03:33:38 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: Deactivated successfully. Dec 6 03:33:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. Dec 6 03:33:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. Dec 6 03:33:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. 
Dec 6 03:33:41 localhost podman[74462]: 2025-12-06 08:33:41.930617878 +0000 UTC m=+0.089979938 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, build-date=2025-11-18T22:49:46Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., url=https://www.redhat.com, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, tcib_managed=true, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, config_id=tripleo_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 6 03:33:41 localhost systemd[1]: tmp-crun.80YkBg.mount: Deactivated successfully.
Dec 6 03:33:41 localhost podman[74463]: 2025-12-06 08:33:41.97589092 +0000 UTC m=+0.134358323 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, distribution-scope=public, managed_by=tripleo_ansible, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, architecture=x86_64, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, release=1761123044, build-date=2025-11-19T00:14:25Z, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., version=17.1.12)
Dec 6 03:33:42 localhost podman[74461]: 2025-12-06 08:33:42.037369636 +0000 UTC m=+0.199632844 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, container_name=ovn_controller, release=1761123044, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, io.buildah.version=1.41.4, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, version=17.1.12, vcs-type=git)
Dec 6 03:33:42 localhost podman[74461]: 2025-12-06 08:33:42.086393883 +0000 UTC m=+0.248657091 container exec_died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, url=https://www.redhat.com, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, tcib_managed=true, config_id=tripleo_step4, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller, container_name=ovn_controller, release=1761123044, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, io.buildah.version=1.41.4, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team)
Dec 6 03:33:42 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Deactivated successfully.
Dec 6 03:33:42 localhost podman[74463]: 2025-12-06 08:33:42.140840645 +0000 UTC m=+0.299308038 container exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z, io.openshift.expose-services=, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, version=17.1.12, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent, vendor=Red Hat, Inc., org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true)
Dec 6 03:33:42 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Deactivated successfully.
Dec 6 03:33:42 localhost podman[74462]: 2025-12-06 08:33:42.197176384 +0000 UTC m=+0.356538384 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:46Z, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, vcs-type=git, container_name=metrics_qdr, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.openshift.expose-services=, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1)
Dec 6 03:33:42 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully.
Dec 6 03:33:42 localhost systemd[1]: tmp-crun.V4Edor.mount: Deactivated successfully.
Dec 6 03:33:48 localhost python3[74585]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/config_step.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 6 03:33:48 localhost python3[74630]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/config_step.json force=True mode=0600 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765010027.6962512-113371-280628189266036/source _original_basename=tmpc84d5vut follow=False checksum=039e0b234f00fbd1242930f0d5dc67e8b4c067fe backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 03:33:49 localhost python3[74660]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_5 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 6 03:33:51 localhost ansible-async_wrapper.py[74832]: Invoked with 923341096259 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1765010030.7180123-113613-108250585370319/AnsiballZ_command.py _
Dec 6 03:33:51 localhost ansible-async_wrapper.py[74835]: Starting module and watcher
Dec 6 03:33:51 localhost ansible-async_wrapper.py[74835]: Start watching 74836 (3600)
Dec 6 03:33:51 localhost ansible-async_wrapper.py[74836]: Start module (74836)
Dec 6 03:33:51 localhost ansible-async_wrapper.py[74832]: Return async_wrapper task started.
Dec 6 03:33:51 localhost python3[74856]: ansible-ansible.legacy.async_status Invoked with jid=923341096259.74832 mode=status _async_dir=/tmp/.ansible_async
Dec 6 03:33:55 localhost puppet-user[74855]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Dec 6 03:33:55 localhost puppet-user[74855]: (file: /etc/puppet/hiera.yaml)
Dec 6 03:33:55 localhost puppet-user[74855]: Warning: Undefined variable '::deploy_config_name';
Dec 6 03:33:55 localhost puppet-user[74855]: (file & line not available)
Dec 6 03:33:55 localhost puppet-user[74855]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Dec 6 03:33:55 localhost puppet-user[74855]: (file & line not available)
Dec 6 03:33:55 localhost puppet-user[74855]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, line: 89, column: 8)
Dec 6 03:33:55 localhost puppet-user[74855]: Warning: This method is deprecated, please use match expressions with Stdlib::Compat::String instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at ["/etc/puppet/modules/snmp/manifests/params.pp", 310]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Dec 6 03:33:55 localhost puppet-user[74855]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Dec 6 03:33:55 localhost puppet-user[74855]: Warning: This method is deprecated, please use the stdlib validate_legacy function,
Dec 6 03:33:55 localhost puppet-user[74855]: with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 358]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Dec 6 03:33:55 localhost puppet-user[74855]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Dec 6 03:33:55 localhost puppet-user[74855]: Warning: This method is deprecated, please use the stdlib validate_legacy function,
Dec 6 03:33:55 localhost puppet-user[74855]: with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 367]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Dec 6 03:33:55 localhost puppet-user[74855]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Dec 6 03:33:55 localhost puppet-user[74855]: Warning: This method is deprecated, please use the stdlib validate_legacy function,
Dec 6 03:33:55 localhost puppet-user[74855]: with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 382]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Dec 6 03:33:55 localhost puppet-user[74855]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Dec 6 03:33:55 localhost puppet-user[74855]: Warning: This method is deprecated, please use the stdlib validate_legacy function,
Dec 6 03:33:55 localhost puppet-user[74855]: with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 388]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Dec 6 03:33:55 localhost puppet-user[74855]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Dec 6 03:33:55 localhost puppet-user[74855]: Warning: This method is deprecated, please use the stdlib validate_legacy function,
Dec 6 03:33:55 localhost puppet-user[74855]: with Pattern[]. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 393]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Dec 6 03:33:55 localhost puppet-user[74855]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Dec 6 03:33:55 localhost puppet-user[74855]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/packages.pp, line: 39, column: 69)
Dec 6 03:33:55 localhost puppet-user[74855]: Notice: Compiled catalog for np0005548789.localdomain in environment production in 0.22 seconds
Dec 6 03:33:55 localhost puppet-user[74855]: Notice: Applied catalog in 0.31 seconds
Dec 6 03:33:55 localhost puppet-user[74855]: Application:
Dec 6 03:33:55 localhost puppet-user[74855]: Initial environment: production
Dec 6 03:33:55 localhost puppet-user[74855]: Converged environment: production
Dec 6 03:33:55 localhost puppet-user[74855]: Run mode: user
Dec 6 03:33:55 localhost puppet-user[74855]: Changes:
Dec 6 03:33:55 localhost puppet-user[74855]: Events:
Dec 6 03:33:55 localhost puppet-user[74855]: Resources:
Dec 6 03:33:55 localhost puppet-user[74855]: Total: 19
Dec 6 03:33:55 localhost puppet-user[74855]: Time:
Dec 6 03:33:55 localhost puppet-user[74855]: Filebucket: 0.00
Dec 6 03:33:55 localhost puppet-user[74855]: Package: 0.00
Dec 6 03:33:55 localhost puppet-user[74855]: Schedule: 0.00
Dec 6 03:33:55 localhost puppet-user[74855]: Exec: 0.01
Dec 6 03:33:55 localhost puppet-user[74855]: Augeas: 0.01
Dec 6 03:33:55 localhost puppet-user[74855]: File: 0.02
Dec 6 03:33:55 localhost puppet-user[74855]: Service: 0.06
Dec 6 03:33:55 localhost puppet-user[74855]: Config retrieval: 0.29
Dec 6 03:33:55 localhost puppet-user[74855]: Transaction evaluation: 0.30
Dec 6 03:33:55 localhost puppet-user[74855]: Catalog application: 0.31
Dec 6 03:33:55 localhost puppet-user[74855]: Last run: 1765010035
Dec 6 03:33:55 localhost puppet-user[74855]: Total: 0.32
Dec 6 03:33:55 localhost puppet-user[74855]: Version:
Dec 6 03:33:55 localhost puppet-user[74855]: Config: 1765010035
Dec 6 03:33:55 localhost puppet-user[74855]: Puppet: 7.10.0
Dec 6 03:33:55 localhost ansible-async_wrapper.py[74836]: Module complete (74836)
Dec 6 03:33:56 localhost ansible-async_wrapper.py[74835]: Done in kid B.
Dec 6 03:34:01 localhost python3[74994]: ansible-ansible.legacy.async_status Invoked with jid=923341096259.74832 mode=status _async_dir=/tmp/.ansible_async
Dec 6 03:34:02 localhost python3[75010]: ansible-file Invoked with path=/var/lib/container-puppet/puppetlabs state=directory setype=svirt_sandbox_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Dec 6 03:34:02 localhost python3[75026]: ansible-stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 6 03:34:03 localhost python3[75076]: ansible-ansible.legacy.stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 6 03:34:03 localhost python3[75094]: ansible-ansible.legacy.file Invoked with setype=svirt_sandbox_file_t selevel=s0 dest=/var/lib/container-puppet/puppetlabs/facter.conf _original_basename=tmpke4vg_sj recurse=False state=file path=/var/lib/container-puppet/puppetlabs/facter.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Dec 6 03:34:04 localhost python3[75124]: ansible-file Invoked with path=/opt/puppetlabs/facter state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 03:34:05 localhost python3[75229]: ansible-ansible.posix.synchronize Invoked with src=/opt/puppetlabs/ dest=/var/lib/container-puppet/puppetlabs/ _local_rsync_path=rsync _local_rsync_password=NOT_LOGGING_PARAMETER rsync_path=None delete=False _substitute_controller=False archive=True checksum=False compress=True existing_only=False dirs=False copy_links=False set_remote_user=True rsync_timeout=0 rsync_opts=[] ssh_connection_multiplexing=False partial=False verify_host=False mode=push dest_port=None private_key=None recursive=None links=None perms=None times=None owner=None group=None ssh_args=None link_dest=None
Dec 6 03:34:05 localhost python3[75248]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 03:34:06 localhost python3[75280]: ansible-stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 6 03:34:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.
Dec 6 03:34:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.
Dec 6 03:34:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.
Dec 6 03:34:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.
Dec 6 03:34:07 localhost systemd[1]: tmp-crun.PfyWMK.mount: Deactivated successfully.
Dec 6 03:34:07 localhost podman[75332]: 2025-12-06 08:34:07.366638338 +0000 UTC m=+0.109906116 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, batch=17.1_20251118.1, io.buildah.version=1.41.4, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, io.openshift.expose-services=, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:51:28Z, version=17.1.12, vendor=Red Hat, Inc., config_id=tripleo_step3, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, name=rhosp17/openstack-collectd)
Dec 6 03:34:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.
Dec 6 03:34:07 localhost python3[75330]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-container-shutdown follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 6 03:34:07 localhost podman[75331]: 2025-12-06 08:34:07.393009382 +0000 UTC m=+0.134672042 container health_status 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_id=tripleo_step4, architecture=x86_64, vcs-type=git, name=rhosp17/openstack-cron, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.buildah.version=1.41.4)
Dec 6 03:34:07 localhost podman[75332]: 2025-12-06 08:34:07.48005915 +0000 UTC m=+0.223326918 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, build-date=2025-11-18T22:51:28Z, vendor=Red Hat, Inc., url=https://www.redhat.com, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, release=1761123044, tcib_managed=true, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container)
Dec 6 03:34:07 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully.
Dec 6 03:34:07 localhost podman[75333]: 2025-12-06 08:34:07.454630603 +0000 UTC m=+0.193668332 container health_status a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, distribution-scope=public, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, architecture=x86_64, url=https://www.redhat.com, container_name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, release=1761123044, com.redhat.component=openstack-ceilometer-compute-container, build-date=2025-11-19T00:11:48Z, managed_by=tripleo_ansible, vendor=Red Hat, Inc.)
Dec 6 03:34:07 localhost podman[75334]: 2025-12-06 08:34:07.470943561 +0000 UTC m=+0.205026919 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, url=https://www.redhat.com, build-date=2025-11-18T23:44:13Z, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, config_id=tripleo_step3, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, container_name=iscsid, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp17/openstack-iscsid, distribution-scope=public, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid) Dec 6 03:34:07 localhost podman[75385]: 2025-12-06 08:34:07.554932695 +0000 UTC m=+0.170592918 container health_status b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_ipmi, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, batch=17.1_20251118.1, managed_by=tripleo_ansible, config_id=tripleo_step4, distribution-scope=public, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 6 03:34:07 localhost podman[75334]: 2025-12-06 08:34:07.557834664 +0000 UTC m=+0.291918052 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, managed_by=tripleo_ansible, url=https://www.redhat.com, distribution-scope=public, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:44:13Z, io.openshift.expose-services=, architecture=x86_64, name=rhosp17/openstack-iscsid) Dec 6 03:34:07 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully. 
Dec 6 03:34:07 localhost podman[75331]: 2025-12-06 08:34:07.583315252 +0000 UTC m=+0.324977912 container exec_died 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, container_name=logrotate_crond, tcib_managed=true, vcs-type=git, url=https://www.redhat.com, io.buildah.version=1.41.4, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, name=rhosp17/openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:32Z, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.12) Dec 6 03:34:07 localhost systemd[1]: 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.service: Deactivated successfully. Dec 6 03:34:07 localhost podman[75385]: 2025-12-06 08:34:07.604579301 +0000 UTC m=+0.220239544 container exec_died b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., io.buildah.version=1.41.4, build-date=2025-11-19T00:12:45Z, io.openshift.expose-services=, container_name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20251118.1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, vcs-type=git, maintainer=OpenStack TripleO Team, architecture=x86_64, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container) Dec 6 03:34:07 localhost systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: Deactivated successfully. 
Dec 6 03:34:07 localhost python3[75439]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-container-shutdown _original_basename=tripleo-container-shutdown recurse=False state=file path=/usr/libexec/tripleo-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 03:34:07 localhost podman[75333]: 2025-12-06 08:34:07.635279008 +0000 UTC m=+0.374316737 container exec_died a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, build-date=2025-11-19T00:11:48Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_compute, config_id=tripleo_step4, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1761123044, vendor=Red Hat, Inc.) Dec 6 03:34:07 localhost systemd[1]: a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.service: Deactivated successfully. Dec 6 03:34:08 localhost python3[75521]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-start-podman-container follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 6 03:34:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b. 
Dec 6 03:34:08 localhost python3[75539]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-start-podman-container _original_basename=tripleo-start-podman-container recurse=False state=file path=/usr/libexec/tripleo-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 03:34:08 localhost podman[75540]: 2025-12-06 08:34:08.46183677 +0000 UTC m=+0.071422131 container health_status 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red 
Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, vcs-type=git, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, release=1761123044, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_migration_target) Dec 6 03:34:08 localhost podman[75540]: 2025-12-06 08:34:08.876357753 +0000 UTC m=+0.485943174 container exec_died 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, name=rhosp17/openstack-nova-compute, container_name=nova_migration_target, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, batch=17.1_20251118.1, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, release=1761123044, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc., config_id=tripleo_step4, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com) Dec 6 03:34:08 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: Deactivated 
successfully. Dec 6 03:34:09 localhost python3[75624]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/tripleo-container-shutdown.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 6 03:34:09 localhost python3[75642]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/tripleo-container-shutdown.service _original_basename=tripleo-container-shutdown-service recurse=False state=file path=/usr/lib/systemd/system/tripleo-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 03:34:09 localhost python3[75704]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 6 03:34:10 localhost python3[75722]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset _original_basename=91-tripleo-container-shutdown-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 03:34:10 localhost python3[75752]: ansible-systemd Invoked with name=tripleo-container-shutdown state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Dec 6 03:34:10 localhost systemd[1]: Reloading. 
Dec 6 03:34:10 localhost systemd-rc-local-generator[75775]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 6 03:34:10 localhost systemd-sysv-generator[75779]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 6 03:34:10 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 6 03:34:11 localhost python3[75838]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/netns-placeholder.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 6 03:34:11 localhost python3[75856]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/usr/lib/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 03:34:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. Dec 6 03:34:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. Dec 6 03:34:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. 
Dec 6 03:34:12 localhost podman[75920]: 2025-12-06 08:34:12.296623841 +0000 UTC m=+0.092301289 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1761123044, io.openshift.expose-services=, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, tcib_managed=true, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, build-date=2025-11-19T00:14:25Z, managed_by=tripleo_ansible) Dec 6 03:34:12 localhost python3[75918]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Dec 6 03:34:12 localhost podman[75919]: 2025-12-06 08:34:12.344809621 +0000 UTC m=+0.139930422 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vcs-type=git, container_name=ovn_controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, config_id=tripleo_step4, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, url=https://www.redhat.com, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:34:05Z, batch=17.1_20251118.1, distribution-scope=public) Dec 6 03:34:12 localhost podman[75919]: 2025-12-06 08:34:12.379354266 +0000 UTC m=+0.174475047 container exec_died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, io.openshift.expose-services=, managed_by=tripleo_ansible, url=https://www.redhat.com, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, tcib_managed=true, architecture=x86_64, io.buildah.version=1.41.4, name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, build-date=2025-11-18T23:34:05Z, vendor=Red Hat, Inc., release=1761123044, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 6 03:34:12 localhost systemd[1]: tmp-crun.FmSKgv.mount: Deactivated successfully. Dec 6 03:34:12 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Deactivated successfully. 
Dec 6 03:34:12 localhost podman[75950]: 2025-12-06 08:34:12.40205761 +0000 UTC m=+0.099324454 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, distribution-scope=public, name=rhosp17/openstack-qdrouterd, vcs-type=git, io.openshift.expose-services=, container_name=metrics_qdr, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1761123044) Dec 6 03:34:12 localhost podman[75920]: 2025-12-06 08:34:12.425499895 +0000 UTC m=+0.221177403 container exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:14:25Z, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, tcib_managed=true, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, batch=17.1_20251118.1, url=https://www.redhat.com, distribution-scope=public, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cgroupns': 'host', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1761123044, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 6 03:34:12 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Deactivated successfully. 
Dec 6 03:34:12 localhost python3[76012]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 03:34:12 localhost podman[75950]: 2025-12-06 08:34:12.597101043 +0000 UTC m=+0.294367927 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, config_id=tripleo_step1, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, name=rhosp17/openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:46Z, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 6 03:34:12 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 03:34:13 localhost python3[76042]: ansible-systemd Invoked with name=netns-placeholder state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Dec 6 03:34:13 localhost systemd[1]: Reloading. Dec 6 03:34:13 localhost systemd-rc-local-generator[76064]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 6 03:34:13 localhost systemd-sysv-generator[76070]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Dec 6 03:34:13 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 6 03:34:13 localhost systemd[1]: Starting Create netns directory... Dec 6 03:34:13 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. Dec 6 03:34:13 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Dec 6 03:34:13 localhost systemd[1]: Finished Create netns directory. Dec 6 03:34:14 localhost python3[76099]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6 Dec 6 03:34:16 localhost python3[76157]: ansible-tripleo_container_manage Invoked with config_id=tripleo_step5 config_dir=/var/lib/tripleo-config/container-startup-config/step_5 config_patterns=*.json config_overrides={} concurrency=5 log_base_path=/var/log/containers/stdouts debug=False Dec 6 03:34:16 localhost podman[76196]: 2025-12-06 08:34:16.479312631 +0000 UTC m=+0.087398358 container create 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, batch=17.1_20251118.1, io.buildah.version=1.41.4, container_name=nova_compute, name=rhosp17/openstack-nova-compute, version=17.1.12, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, 
konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., release=1761123044, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', 
'/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, tcib_managed=true, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute) Dec 6 03:34:16 localhost systemd[1]: Started libpod-conmon-41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.scope. Dec 6 03:34:16 localhost podman[76196]: 2025-12-06 08:34:16.431069529 +0000 UTC m=+0.039155286 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 Dec 6 03:34:16 localhost systemd[1]: Started libcrun container. Dec 6 03:34:16 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fa26257b93de11d5a1a515bc1294a83a2d0558581107701a6e94f33d44fcb5e/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff) Dec 6 03:34:16 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fa26257b93de11d5a1a515bc1294a83a2d0558581107701a6e94f33d44fcb5e/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Dec 6 03:34:16 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fa26257b93de11d5a1a515bc1294a83a2d0558581107701a6e94f33d44fcb5e/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Dec 6 03:34:16 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fa26257b93de11d5a1a515bc1294a83a2d0558581107701a6e94f33d44fcb5e/merged/var/log/nova supports timestamps until 2038 (0x7fffffff) Dec 6 03:34:16 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fa26257b93de11d5a1a515bc1294a83a2d0558581107701a6e94f33d44fcb5e/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff) Dec 6 03:34:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007. 
Dec 6 03:34:16 localhost podman[76196]: 2025-12-06 08:34:16.595422205 +0000 UTC m=+0.203507942 container init 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, vendor=Red Hat, Inc., io.buildah.version=1.41.4, release=1761123044, maintainer=OpenStack TripleO Team, version=17.1.12, managed_by=tripleo_ansible, container_name=nova_compute) Dec 6 03:34:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007. Dec 6 03:34:16 localhost systemd-logind[766]: Existing logind session ID 28 used by new audit session, ignoring. 
Dec 6 03:34:16 localhost podman[76196]: 2025-12-06 08:34:16.643647208 +0000 UTC m=+0.251732935 container start 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, tcib_managed=true, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.openshift.expose-services=, version=17.1.12, release=1761123044, maintainer=OpenStack TripleO Team, config_id=tripleo_step5, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 6 03:34:16 localhost systemd[1]: Created slice User Slice of UID 0. Dec 6 03:34:16 localhost systemd[1]: Starting User Runtime Directory /run/user/0... 
Dec 6 03:34:16 localhost python3[76157]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_compute --conmon-pidfile /run/nova_compute.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env LIBGUESTFS_BACKEND=direct --env TRIPLEO_CONFIG_HASH=18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c --healthcheck-command /openstack/healthcheck 5672 --ipc host --label config_id=tripleo_step5 --label container_name=nova_compute --label managed_by=tripleo_ansible --label config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_compute.log --network host --privileged=True --ulimit nofile=131072 --ulimit memlock=67108864 --user nova --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/nova:/var/log/nova --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /dev:/dev --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /run/nova:/run/nova:z --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /sys/class/net:/sys/class/net --volume /sys/bus/pci:/sys/bus/pci --volume /boot:/boot:ro --volume /var/lib/nova:/var/lib/nova:shared registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 Dec 6 03:34:16 localhost systemd[1]: Finished User Runtime Directory /run/user/0. Dec 6 03:34:16 localhost systemd[1]: Starting User Manager for UID 0... 
Dec 6 03:34:16 localhost podman[76217]: 2025-12-06 08:34:16.753411659 +0000 UTC m=+0.104791241 container health_status 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=starting, io.buildah.version=1.41.4, url=https://www.redhat.com, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', 
'/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container) Dec 6 03:34:16 localhost podman[76217]: 2025-12-06 08:34:16.810074398 +0000 UTC m=+0.161453920 container exec_died 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 
'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step5, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, container_name=nova_compute, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, 
description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1) Dec 6 03:34:16 localhost podman[76217]: unhealthy Dec 6 03:34:16 localhost systemd[76225]: Queued start job for default target Main User Target. Dec 6 03:34:16 localhost systemd[76225]: Created slice User Application Slice. Dec 6 03:34:16 localhost systemd[76225]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system). Dec 6 03:34:16 localhost systemd[76225]: Started Daily Cleanup of User's Temporary Directories. Dec 6 03:34:16 localhost systemd[76225]: Reached target Paths. Dec 6 03:34:16 localhost systemd[76225]: Reached target Timers. Dec 6 03:34:16 localhost systemd[76225]: Starting D-Bus User Message Bus Socket... Dec 6 03:34:16 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Main process exited, code=exited, status=1/FAILURE Dec 6 03:34:16 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Failed with result 'exit-code'. Dec 6 03:34:16 localhost systemd[76225]: Starting Create User's Volatile Files and Directories... Dec 6 03:34:16 localhost systemd[76225]: Listening on D-Bus User Message Bus Socket. Dec 6 03:34:16 localhost systemd[76225]: Reached target Sockets. Dec 6 03:34:16 localhost systemd[76225]: Finished Create User's Volatile Files and Directories. Dec 6 03:34:16 localhost systemd[76225]: Reached target Basic System. 
Dec 6 03:34:16 localhost systemd[76225]: Reached target Main User Target. Dec 6 03:34:16 localhost systemd[76225]: Startup finished in 138ms. Dec 6 03:34:16 localhost systemd[1]: Started User Manager for UID 0. Dec 6 03:34:16 localhost systemd[1]: Started Session c10 of User root. Dec 6 03:34:16 localhost systemd[1]: session-c10.scope: Deactivated successfully. Dec 6 03:34:17 localhost podman[76317]: 2025-12-06 08:34:17.201351862 +0000 UTC m=+0.105407169 container create b526ff5c18e8d1e347c23ef14b01140c0e908977002aa81cbae031036fc67931 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, vendor=Red Hat, Inc., version=17.1.12, managed_by=tripleo_ansible, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, 
org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_wait_for_compute_service, batch=17.1_20251118.1, architecture=x86_64, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step5, build-date=2025-11-19T00:36:58Z, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team) Dec 6 03:34:17 localhost systemd[1]: Started libpod-conmon-b526ff5c18e8d1e347c23ef14b01140c0e908977002aa81cbae031036fc67931.scope. Dec 6 03:34:17 localhost podman[76317]: 2025-12-06 08:34:17.150561262 +0000 UTC m=+0.054616579 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 Dec 6 03:34:17 localhost systemd[1]: Started libcrun container. 
Dec 6 03:34:17 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55528a480405885214081d53ccb18f2317128ddef63aa2fac6c624569bda37fd/merged/container-config-scripts supports timestamps until 2038 (0x7fffffff) Dec 6 03:34:17 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55528a480405885214081d53ccb18f2317128ddef63aa2fac6c624569bda37fd/merged/var/log/nova supports timestamps until 2038 (0x7fffffff) Dec 6 03:34:17 localhost podman[76317]: 2025-12-06 08:34:17.278101505 +0000 UTC m=+0.182156782 container init b526ff5c18e8d1e347c23ef14b01140c0e908977002aa81cbae031036fc67931 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, release=1761123044, io.buildah.version=1.41.4, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, distribution-scope=public, config_id=tripleo_step5, batch=17.1_20251118.1, container_name=nova_wait_for_compute_service, 
architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 6 03:34:17 localhost podman[76317]: 2025-12-06 08:34:17.288161822 +0000 UTC m=+0.192217069 container start b526ff5c18e8d1e347c23ef14b01140c0e908977002aa81cbae031036fc67931 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, io.openshift.expose-services=, release=1761123044, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.buildah.version=1.41.4, tcib_managed=true, container_name=nova_wait_for_compute_service, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, 
org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, version=17.1.12, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible) Dec 6 03:34:17 localhost podman[76317]: 2025-12-06 08:34:17.288464692 +0000 UTC m=+0.192519949 container attach b526ff5c18e8d1e347c23ef14b01140c0e908977002aa81cbae031036fc67931 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, 
container_name=nova_wait_for_compute_service, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, batch=17.1_20251118.1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, io.buildah.version=1.41.4, io.openshift.expose-services=, managed_by=tripleo_ansible, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public) Dec 6 03:34:27 localhost systemd[1]: Stopping User Manager for UID 0... Dec 6 03:34:27 localhost systemd[76225]: Activating special unit Exit the Session... Dec 6 03:34:27 localhost systemd[76225]: Stopped target Main User Target. Dec 6 03:34:27 localhost systemd[76225]: Stopped target Basic System. Dec 6 03:34:27 localhost systemd[76225]: Stopped target Paths. Dec 6 03:34:27 localhost systemd[76225]: Stopped target Sockets. Dec 6 03:34:27 localhost systemd[76225]: Stopped target Timers. Dec 6 03:34:27 localhost systemd[76225]: Stopped Daily Cleanup of User's Temporary Directories. Dec 6 03:34:27 localhost systemd[76225]: Closed D-Bus User Message Bus Socket. Dec 6 03:34:27 localhost systemd[76225]: Stopped Create User's Volatile Files and Directories. Dec 6 03:34:27 localhost systemd[76225]: Removed slice User Application Slice. Dec 6 03:34:27 localhost systemd[76225]: Reached target Shutdown. Dec 6 03:34:27 localhost systemd[76225]: Finished Exit the Session. Dec 6 03:34:27 localhost systemd[76225]: Reached target Exit the Session. Dec 6 03:34:27 localhost systemd[1]: user@0.service: Deactivated successfully. Dec 6 03:34:27 localhost systemd[1]: Stopped User Manager for UID 0. Dec 6 03:34:27 localhost systemd[1]: Stopping User Runtime Directory /run/user/0... Dec 6 03:34:27 localhost systemd[1]: run-user-0.mount: Deactivated successfully. Dec 6 03:34:27 localhost systemd[1]: user-runtime-dir@0.service: Deactivated successfully. Dec 6 03:34:27 localhost systemd[1]: Stopped User Runtime Directory /run/user/0. 
Dec 6 03:34:27 localhost systemd[1]: Removed slice User Slice of UID 0. Dec 6 03:34:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc. Dec 6 03:34:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. Dec 6 03:34:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9. Dec 6 03:34:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. Dec 6 03:34:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6. Dec 6 03:34:37 localhost podman[76420]: 2025-12-06 08:34:37.956700492 +0000 UTC m=+0.106077329 container health_status a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, batch=17.1_20251118.1, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, vcs-type=git, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc., io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-compute) Dec 6 03:34:37 localhost systemd[1]: tmp-crun.68PmCD.mount: Deactivated successfully. 
Dec 6 03:34:38 localhost podman[76421]: 2025-12-06 08:34:38.003337715 +0000 UTC m=+0.153696833 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, tcib_managed=true, maintainer=OpenStack TripleO Team, release=1761123044, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, version=17.1.12, 
io.openshift.expose-services=, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, architecture=x86_64, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, io.buildah.version=1.41.4) Dec 6 03:34:38 localhost podman[76420]: 2025-12-06 08:34:38.01231427 +0000 UTC m=+0.161691167 container exec_died a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-type=git, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, url=https://www.redhat.com, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:11:48Z, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.openshift.expose-services=, release=1761123044) Dec 6 03:34:38 localhost systemd[1]: a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.service: Deactivated successfully. 
Dec 6 03:34:38 localhost podman[76421]: 2025-12-06 08:34:38.047178094 +0000 UTC m=+0.197537202 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, maintainer=OpenStack TripleO Team, tcib_managed=true, build-date=2025-11-18T23:44:13Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-type=git, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, version=17.1.12, io.openshift.expose-services=, release=1761123044) Dec 6 03:34:38 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully. Dec 6 03:34:38 localhost podman[76429]: 2025-12-06 08:34:38.100861762 +0000 UTC m=+0.242336888 container health_status b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, release=1761123044, name=rhosp17/openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., version=17.1.12, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, batch=17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676) Dec 6 03:34:38 localhost podman[76419]: 2025-12-06 08:34:38.152906541 +0000 UTC m=+0.305578139 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, 
vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, release=1761123044, io.buildah.version=1.41.4, tcib_managed=true, build-date=2025-11-18T22:51:28Z, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, config_id=tripleo_step3, container_name=collectd, name=rhosp17/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com) Dec 6 03:34:38 localhost podman[76419]: 2025-12-06 08:34:38.168060564 +0000 UTC m=+0.320732132 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red 
Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, name=rhosp17/openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., version=17.1.12, io.buildah.version=1.41.4, build-date=2025-11-18T22:51:28Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, batch=17.1_20251118.1, container_name=collectd, managed_by=tripleo_ansible) Dec 6 03:34:38 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully. 
Dec 6 03:34:38 localhost podman[76429]: 2025-12-06 08:34:38.184126904 +0000 UTC m=+0.325602090 container exec_died b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12, vcs-type=git, maintainer=OpenStack TripleO Team, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, io.buildah.version=1.41.4, url=https://www.redhat.com, batch=17.1_20251118.1, architecture=x86_64, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:12:45Z, distribution-scope=public, container_name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-ipmi) Dec 6 03:34:38 localhost systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: Deactivated successfully. Dec 6 03:34:38 localhost podman[76418]: 2025-12-06 08:34:38.25738294 +0000 UTC m=+0.412046969 container health_status 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, name=rhosp17/openstack-cron, config_id=tripleo_step4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.4, managed_by=tripleo_ansible, architecture=x86_64, release=1761123044, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, build-date=2025-11-18T22:49:32Z, description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, container_name=logrotate_crond, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 6 03:34:38 localhost podman[76418]: 2025-12-06 08:34:38.294182704 +0000 UTC m=+0.448846803 container exec_died 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, release=1761123044, description=Red Hat OpenStack Platform 17.1 cron, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_id=tripleo_step4, distribution-scope=public, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, container_name=logrotate_crond, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 6 03:34:38 localhost systemd[1]: 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.service: Deactivated successfully. 
Dec 6 03:34:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b. Dec 6 03:34:39 localhost podman[76530]: 2025-12-06 08:34:39.05604498 +0000 UTC m=+0.083102758 container health_status 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, distribution-scope=public, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, container_name=nova_migration_target, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, vcs-type=git, version=17.1.12, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.buildah.version=1.41.4) Dec 6 03:34:39 localhost podman[76530]: 2025-12-06 08:34:39.432152331 +0000 UTC m=+0.459210089 container exec_died 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, container_name=nova_migration_target, batch=17.1_20251118.1, architecture=x86_64, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, managed_by=tripleo_ansible, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, distribution-scope=public, io.buildah.version=1.41.4, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, tcib_managed=true, vendor=Red Hat, 
Inc., url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 6 03:34:39 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: Deactivated successfully. Dec 6 03:34:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. Dec 6 03:34:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. 
Dec 6 03:34:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. Dec 6 03:34:42 localhost systemd[1]: tmp-crun.FPuwRg.mount: Deactivated successfully. Dec 6 03:34:42 localhost podman[76555]: 2025-12-06 08:34:42.928587013 +0000 UTC m=+0.090744820 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.openshift.expose-services=, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.12, architecture=x86_64, vcs-type=git, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, distribution-scope=public, build-date=2025-11-19T00:14:25Z, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 6 03:34:42 localhost podman[76554]: 2025-12-06 08:34:42.970832034 +0000 UTC m=+0.132880608 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, config_id=tripleo_step1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.12, batch=17.1_20251118.1, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:46Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, container_name=metrics_qdr, tcib_managed=true) Dec 6 03:34:42 localhost podman[76555]: 2025-12-06 08:34:42.975164536 +0000 UTC m=+0.137322333 container 
exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, distribution-scope=public, release=1761123044, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, batch=17.1_20251118.1, vendor=Red Hat, Inc., 
tcib_managed=true, version=17.1.12, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git) Dec 6 03:34:42 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Deactivated successfully. 
Dec 6 03:34:43 localhost podman[76553]: 2025-12-06 08:34:43.059470929 +0000 UTC m=+0.224228435 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, com.redhat.component=openstack-ovn-controller-container, version=17.1.12, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, build-date=2025-11-18T23:34:05Z) Dec 6 03:34:43 localhost podman[76553]: 2025-12-06 08:34:43.111120916 +0000 UTC m=+0.275878412 container exec_died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.buildah.version=1.41.4, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp17/openstack-ovn-controller, version=17.1.12, 
io.openshift.expose-services=, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, build-date=2025-11-18T23:34:05Z, release=1761123044, vcs-type=git, architecture=x86_64) Dec 6 03:34:43 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Deactivated successfully. Dec 6 03:34:43 localhost podman[76554]: 2025-12-06 08:34:43.212318045 +0000 UTC m=+0.374366569 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, url=https://www.redhat.com, vcs-type=git, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:46Z, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, io.buildah.version=1.41.4, config_id=tripleo_step1, architecture=x86_64, version=17.1.12, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 6 03:34:43 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 03:34:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007. 
Dec 6 03:34:47 localhost podman[76630]: 2025-12-06 08:34:47.890480121 +0000 UTC m=+0.050229904 container health_status 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=starting, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, version=17.1.12, vcs-type=git, build-date=2025-11-19T00:36:58Z, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, config_id=tripleo_step5, container_name=nova_compute, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute) Dec 6 03:34:47 localhost podman[76630]: 2025-12-06 08:34:47.923796938 +0000 UTC m=+0.083546741 container exec_died 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red 
Hat, Inc., com.redhat.component=openstack-nova-compute-container, vcs-type=git, io.buildah.version=1.41.4, managed_by=tripleo_ansible, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat 
OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z) Dec 6 03:34:47 localhost podman[76630]: unhealthy Dec 6 03:34:47 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Main process exited, code=exited, status=1/FAILURE Dec 6 03:34:47 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Failed with result 'exit-code'. Dec 6 03:35:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc. Dec 6 03:35:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. Dec 6 03:35:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9. Dec 6 03:35:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. Dec 6 03:35:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6. Dec 6 03:35:08 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 6 03:35:08 localhost recover_tripleo_nova_virtqemud[76683]: 61814 Dec 6 03:35:08 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 6 03:35:08 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. 
Dec 6 03:35:08 localhost podman[76663]: 2025-12-06 08:35:08.955479871 +0000 UTC m=+0.096122645 container health_status b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, name=rhosp17/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:12:45Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, tcib_managed=true, architecture=x86_64, release=1761123044, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 6 03:35:08 localhost systemd[1]: tmp-crun.RFfAvf.mount: Deactivated successfully. Dec 6 03:35:09 localhost podman[76651]: 2025-12-06 08:35:09.00458582 +0000 UTC m=+0.160368686 container health_status 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, name=rhosp17/openstack-cron, vcs-type=git, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, release=1761123044, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, version=17.1.12, com.redhat.component=openstack-cron-container, architecture=x86_64, tcib_managed=true, config_id=tripleo_step4, container_name=logrotate_crond, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 6 03:35:09 localhost podman[76663]: 2025-12-06 08:35:09.010297555 +0000 UTC m=+0.150940389 container exec_died b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, config_id=tripleo_step4, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, batch=17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-ipmi-container, konflux.additional-tags=17.1.12 17.1_20251118.1, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-ipmi, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, release=1761123044, build-date=2025-11-19T00:12:45Z, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, managed_by=tripleo_ansible, version=17.1.12) Dec 6 03:35:09 localhost systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: Deactivated 
successfully. Dec 6 03:35:09 localhost podman[76662]: 2025-12-06 08:35:09.05372639 +0000 UTC m=+0.198618784 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.component=openstack-iscsid-container, release=1761123044, architecture=x86_64, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, batch=17.1_20251118.1, url=https://www.redhat.com, io.buildah.version=1.41.4, vendor=Red Hat, Inc.) Dec 6 03:35:09 localhost podman[76662]: 2025-12-06 08:35:09.063653313 +0000 UTC m=+0.208545697 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, managed_by=tripleo_ansible, release=1761123044, container_name=iscsid, io.buildah.version=1.41.4, com.redhat.component=openstack-iscsid-container, build-date=2025-11-18T23:44:13Z, architecture=x86_64, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-iscsid, io.openshift.expose-services=, version=17.1.12, config_id=tripleo_step3, distribution-scope=public) Dec 6 03:35:09 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully. 
Dec 6 03:35:09 localhost podman[76653]: 2025-12-06 08:35:09.109308097 +0000 UTC m=+0.257058528 container health_status a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp17/openstack-ceilometer-compute, version=17.1.12, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.buildah.version=1.41.4, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, url=https://www.redhat.com, vendor=Red Hat, Inc., vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, build-date=2025-11-19T00:11:48Z, distribution-scope=public, managed_by=tripleo_ansible) Dec 6 03:35:09 localhost podman[76651]: 2025-12-06 08:35:09.114875647 +0000 UTC m=+0.270658513 container exec_died 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, vcs-type=git, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, architecture=x86_64, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, config_id=tripleo_step4, com.redhat.component=openstack-cron-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-cron, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron) Dec 6 03:35:09 localhost systemd[1]: 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.service: Deactivated successfully. 
Dec 6 03:35:09 localhost podman[76652]: 2025-12-06 08:35:09.156735715 +0000 UTC m=+0.302574107 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vendor=Red Hat, Inc., name=rhosp17/openstack-collectd, io.buildah.version=1.41.4, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:51:28Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, architecture=x86_64, com.redhat.component=openstack-collectd-container, vcs-type=git, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, io.openshift.expose-services=) Dec 6 03:35:09 localhost podman[76653]: 2025-12-06 08:35:09.163189011 +0000 UTC m=+0.310939512 container exec_died a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, release=1761123044, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, config_id=tripleo_step4, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., version=17.1.12, container_name=ceilometer_agent_compute, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com) Dec 6 03:35:09 localhost podman[76652]: 2025-12-06 08:35:09.169163394 +0000 UTC m=+0.315001786 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, container_name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, version=17.1.12, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, vendor=Red Hat, Inc., build-date=2025-11-18T22:51:28Z, name=rhosp17/openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, io.openshift.expose-services=, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1) Dec 6 03:35:09 localhost systemd[1]: a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.service: Deactivated successfully. Dec 6 03:35:09 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully. Dec 6 03:35:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b. Dec 6 03:35:09 localhost podman[76760]: 2025-12-06 08:35:09.916647391 +0000 UTC m=+0.079596390 container health_status 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step4, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 
'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., container_name=nova_migration_target, architecture=x86_64, url=https://www.redhat.com, maintainer=OpenStack TripleO Team) Dec 6 03:35:10 localhost podman[76760]: 2025-12-06 08:35:10.285207402 +0000 UTC m=+0.448156441 container exec_died 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., architecture=x86_64, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, url=https://www.redhat.com, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, vcs-type=git, 
io.openshift.expose-services=, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute) Dec 6 03:35:10 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: Deactivated successfully. Dec 6 03:35:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. Dec 6 03:35:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. Dec 6 03:35:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. Dec 6 03:35:13 localhost podman[76785]: 2025-12-06 08:35:13.927833574 +0000 UTC m=+0.081526948 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.4, version=17.1.12, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, tcib_managed=true, container_name=ovn_metadata_agent, distribution-scope=public, vendor=Red Hat, Inc., managed_by=tripleo_ansible) Dec 6 03:35:13 localhost podman[76785]: 2025-12-06 08:35:13.984096707 +0000 UTC m=+0.137790011 container exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 
(image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_metadata_agent, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, managed_by=tripleo_ansible, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1761123044, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.12) Dec 6 03:35:13 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Deactivated successfully. Dec 6 03:35:14 localhost systemd[1]: tmp-crun.bcrFLn.mount: Deactivated successfully. 
Dec 6 03:35:14 localhost podman[76783]: 2025-12-06 08:35:14.033129589 +0000 UTC m=+0.190655910 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:34:05Z, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, url=https://www.redhat.com, vendor=Red Hat, Inc., container_name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, version=17.1.12, config_id=tripleo_step4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
io.openshift.expose-services=, tcib_managed=true, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container) Dec 6 03:35:14 localhost podman[76784]: 2025-12-06 08:35:13.986354866 +0000 UTC m=+0.139452752 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, batch=17.1_20251118.1, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., tcib_managed=true, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, io.openshift.expose-services=, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, url=https://www.redhat.com, build-date=2025-11-18T22:49:46Z) Dec 6 03:35:14 localhost podman[76783]: 2025-12-06 08:35:14.092682553 +0000 UTC m=+0.250208874 container exec_died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, tcib_managed=true, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, 
config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp17/openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, vcs-type=git, version=17.1.12, build-date=2025-11-18T23:34:05Z, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 6 03:35:14 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Deactivated successfully. 
Dec 6 03:35:14 localhost podman[76784]: 2025-12-06 08:35:14.202840236 +0000 UTC m=+0.355938122 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, release=1761123044, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:46Z, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible) Dec 6 03:35:14 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 03:35:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007. Dec 6 03:35:18 localhost podman[76860]: 2025-12-06 08:35:18.920576636 +0000 UTC m=+0.081622211 container health_status 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=starting, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, distribution-scope=public, url=https://www.redhat.com, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, 
org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', 
'/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 6 03:35:18 localhost podman[76860]: 2025-12-06 08:35:18.97816856 +0000 UTC m=+0.139214105 container exec_died 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, architecture=x86_64, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, io.openshift.expose-services=, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, config_id=tripleo_step5, version=17.1.12, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc.) Dec 6 03:35:18 localhost podman[76860]: unhealthy Dec 6 03:35:18 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Main process exited, code=exited, status=1/FAILURE Dec 6 03:35:18 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Failed with result 'exit-code'. Dec 6 03:35:19 localhost systemd[1]: session-27.scope: Deactivated successfully. Dec 6 03:35:19 localhost systemd[1]: session-27.scope: Consumed 3.071s CPU time. 
Dec 6 03:35:19 localhost systemd-logind[766]: Session 27 logged out. Waiting for processes to exit. Dec 6 03:35:19 localhost systemd-logind[766]: Removed session 27. Dec 6 03:35:21 localhost sshd[76883]: main: sshd: ssh-rsa algorithm is disabled Dec 6 03:35:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc. Dec 6 03:35:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. Dec 6 03:35:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9. Dec 6 03:35:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. Dec 6 03:35:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6. Dec 6 03:35:39 localhost systemd[1]: tmp-crun.OJKSYn.mount: Deactivated successfully. 
Dec 6 03:35:39 localhost podman[76961]: 2025-12-06 08:35:39.928200603 +0000 UTC m=+0.086828920 container health_status 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, name=rhosp17/openstack-cron, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, managed_by=tripleo_ansible, config_id=tripleo_step4, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, vendor=Red Hat, Inc., version=17.1.12, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, summary=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true) Dec 6 03:35:39 localhost podman[76963]: 2025-12-06 08:35:39.937299812 +0000 UTC m=+0.086674945 container health_status a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, managed_by=tripleo_ansible, tcib_managed=true, distribution-scope=public, io.buildah.version=1.41.4, vcs-type=git, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, build-date=2025-11-19T00:11:48Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute) Dec 6 03:35:39 localhost systemd[1]: tmp-crun.CBAl2S.mount: Deactivated successfully. 
Dec 6 03:35:39 localhost podman[76961]: 2025-12-06 08:35:39.9431228 +0000 UTC m=+0.101751127 container exec_died 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, tcib_managed=true, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, build-date=2025-11-18T22:49:32Z, batch=17.1_20251118.1, managed_by=tripleo_ansible, release=1761123044, io.buildah.version=1.41.4, vendor=Red Hat, Inc., container_name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, 
summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, vcs-type=git, name=rhosp17/openstack-cron) Dec 6 03:35:39 localhost systemd[1]: 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.service: Deactivated successfully. Dec 6 03:35:40 localhost podman[76962]: 2025-12-06 08:35:40.002668144 +0000 UTC m=+0.157873856 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=collectd, url=https://www.redhat.com, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-collectd-container, tcib_managed=true, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, release=1761123044, version=17.1.12, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 6 03:35:40 localhost podman[76962]: 2025-12-06 08:35:40.013394162 +0000 UTC m=+0.168599904 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=1761123044, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-collectd-container, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:51:28Z, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, url=https://www.redhat.com, io.buildah.version=1.41.4, 
vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, architecture=x86_64, container_name=collectd, version=17.1.12, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd) Dec 6 03:35:40 localhost podman[76963]: 2025-12-06 08:35:40.013806425 +0000 UTC m=+0.163181608 container exec_died a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, batch=17.1_20251118.1, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, vendor=Red Hat, Inc., build-date=2025-11-19T00:11:48Z, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, distribution-scope=public, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true) Dec 6 03:35:40 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully. Dec 6 03:35:40 localhost systemd[1]: a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.service: Deactivated successfully. 
Dec 6 03:35:40 localhost podman[76964]: 2025-12-06 08:35:40.067390596 +0000 UTC m=+0.219136122 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, architecture=x86_64, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, container_name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, 
distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, version=17.1.12, url=https://www.redhat.com, tcib_managed=true, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2025-11-18T23:44:13Z, com.redhat.component=openstack-iscsid-container, release=1761123044) Dec 6 03:35:40 localhost podman[76970]: 2025-12-06 08:35:40.141862327 +0000 UTC m=+0.287874527 container health_status b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, container_name=ceilometer_agent_ipmi, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, build-date=2025-11-19T00:12:45Z, version=17.1.12, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-ipmi, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 6 03:35:40 localhost podman[76964]: 2025-12-06 08:35:40.202142302 +0000 UTC m=+0.353887848 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-type=git, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, release=1761123044, url=https://www.redhat.com, build-date=2025-11-18T23:44:13Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, io.openshift.expose-services=, container_name=iscsid, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, io.buildah.version=1.41.4, name=rhosp17/openstack-iscsid, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-iscsid-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 6 03:35:40 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully. 
Dec 6 03:35:40 localhost podman[76970]: 2025-12-06 08:35:40.223206918 +0000 UTC m=+0.369219118 container exec_died b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., tcib_managed=true, vcs-type=git, version=17.1.12, architecture=x86_64, distribution-scope=public, batch=17.1_20251118.1, build-date=2025-11-19T00:12:45Z, io.openshift.expose-services=, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com) Dec 6 03:35:40 localhost systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: Deactivated successfully. Dec 6 03:35:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b. Dec 6 03:35:40 localhost podman[77069]: 2025-12-06 08:35:40.911352582 +0000 UTC m=+0.077895127 container health_status 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.openshift.expose-services=, container_name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, vcs-type=git, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, vendor=Red Hat, Inc., architecture=x86_64, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, version=17.1.12) Dec 6 03:35:41 localhost podman[77069]: 2025-12-06 08:35:41.31815871 +0000 UTC m=+0.484701215 container exec_died 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, container_name=nova_migration_target, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, 
name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, distribution-scope=public, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, release=1761123044, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, architecture=x86_64, managed_by=tripleo_ansible, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute) Dec 6 03:35:41 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: Deactivated successfully. Dec 6 03:35:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. Dec 6 03:35:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. Dec 6 03:35:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. Dec 6 03:35:44 localhost podman[77092]: 2025-12-06 08:35:44.919032587 +0000 UTC m=+0.083505338 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vendor=Red Hat, Inc., tcib_managed=true, config_id=tripleo_step4, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, architecture=x86_64, name=rhosp17/openstack-ovn-controller, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, version=17.1.12, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 
6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, release=1761123044, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 6 03:35:44 localhost podman[77092]: 2025-12-06 08:35:44.963673574 +0000 UTC m=+0.128146275 container exec_died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat 
OpenStack Platform 17.1 ovn-controller, vcs-type=git, managed_by=tripleo_ansible, tcib_managed=true, config_id=tripleo_step4, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, distribution-scope=public, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vendor=Red Hat, Inc., url=https://www.redhat.com, build-date=2025-11-18T23:34:05Z, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 6 03:35:44 localhost systemd[1]: tmp-crun.itCvUa.mount: Deactivated successfully. Dec 6 03:35:44 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Deactivated successfully. 
Dec 6 03:35:44 localhost podman[77093]: 2025-12-06 08:35:44.993544968 +0000 UTC m=+0.154319847 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, vendor=Red Hat, Inc., io.buildah.version=1.41.4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-qdrouterd, tcib_managed=true, config_id=tripleo_step1, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044) Dec 6 03:35:45 localhost podman[77094]: 2025-12-06 08:35:45.034814793 +0000 UTC m=+0.190611749 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, url=https://www.redhat.com, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 
'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, container_name=ovn_metadata_agent, config_id=tripleo_step4, build-date=2025-11-19T00:14:25Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, architecture=x86_64) Dec 6 03:35:45 localhost podman[77094]: 2025-12-06 08:35:45.091055825 +0000 UTC m=+0.246852751 container exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, tcib_managed=true, 
architecture=x86_64, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, build-date=2025-11-19T00:14:25Z, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20251118.1, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, io.openshift.expose-services=, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12) Dec 6 03:35:45 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Deactivated successfully. Dec 6 03:35:45 localhost podman[77093]: 2025-12-06 08:35:45.217292661 +0000 UTC m=+0.378067620 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-qdrouterd, release=1761123044, url=https://www.redhat.com, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, vcs-type=git) Dec 6 03:35:45 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 03:35:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007. Dec 6 03:35:49 localhost systemd[1]: tmp-crun.wzsXga.mount: Deactivated successfully. 
Dec 6 03:35:49 localhost podman[77168]: 2025-12-06 08:35:49.91634894 +0000 UTC m=+0.073522384 container health_status 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, vcs-type=git, release=1761123044, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, container_name=nova_compute, distribution-scope=public, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 
'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute) Dec 6 03:35:49 localhost podman[77168]: 2025-12-06 08:35:49.99603529 +0000 UTC m=+0.153208724 container exec_died 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, version=17.1.12, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp 
openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', 
'/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step5, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, release=1761123044, vendor=Red Hat, Inc.) Dec 6 03:35:50 localhost podman[77168]: unhealthy Dec 6 03:35:50 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Main process exited, code=exited, status=1/FAILURE Dec 6 03:35:50 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Failed with result 'exit-code'. Dec 6 03:36:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc. Dec 6 03:36:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. Dec 6 03:36:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9. Dec 6 03:36:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. Dec 6 03:36:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6. 
Dec 6 03:36:10 localhost podman[77192]: 2025-12-06 08:36:10.911171534 +0000 UTC m=+0.073595795 container health_status a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.41.4, vcs-type=git, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:11:48Z, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, version=17.1.12, architecture=x86_64, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc.) Dec 6 03:36:10 localhost podman[77191]: 2025-12-06 08:36:10.923925995 +0000 UTC m=+0.085894093 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.component=openstack-collectd-container, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc., io.openshift.expose-services=, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, managed_by=tripleo_ansible, architecture=x86_64, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 6 03:36:10 localhost podman[77191]: 2025-12-06 08:36:10.928410722 +0000 UTC m=+0.090378860 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, 
name=rhosp17/openstack-collectd, vcs-type=git, version=17.1.12, build-date=2025-11-18T22:51:28Z, io.buildah.version=1.41.4, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', 
'/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=1761123044, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, managed_by=tripleo_ansible) Dec 6 03:36:10 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully. Dec 6 03:36:10 localhost podman[77190]: 2025-12-06 08:36:10.975115073 +0000 UTC m=+0.137669378 container health_status 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, architecture=x86_64, config_id=tripleo_step4, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, tcib_managed=true, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20251118.1, io.buildah.version=1.41.4, vendor=Red Hat, Inc., distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, com.redhat.component=openstack-cron-container) Dec 6 03:36:10 localhost podman[77192]: 2025-12-06 08:36:10.982970204 +0000 UTC m=+0.145394435 container exec_died a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-compute-container, architecture=x86_64, name=rhosp17/openstack-ceilometer-compute, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, vendor=Red Hat, Inc., managed_by=tripleo_ansible, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, batch=17.1_20251118.1, build-date=2025-11-19T00:11:48Z) Dec 6 03:36:11 localhost systemd[1]: 
a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.service: Deactivated successfully. Dec 6 03:36:11 localhost podman[77190]: 2025-12-06 08:36:11.05469943 +0000 UTC m=+0.217253765 container exec_died 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, 
com.redhat.component=openstack-cron-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, version=17.1.12, io.buildah.version=1.41.4, io.openshift.expose-services=, build-date=2025-11-18T22:49:32Z, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, architecture=x86_64, release=1761123044, container_name=logrotate_crond) Dec 6 03:36:11 localhost systemd[1]: 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.service: Deactivated successfully. Dec 6 03:36:11 localhost podman[77194]: 2025-12-06 08:36:11.037501643 +0000 UTC m=+0.193010363 container health_status b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, version=17.1.12, distribution-scope=public, architecture=x86_64, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, container_name=ceilometer_agent_ipmi) Dec 6 03:36:11 localhost podman[77193]: 2025-12-06 08:36:11.05893533 +0000 UTC m=+0.218553265 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, 
architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid, tcib_managed=true, io.buildah.version=1.41.4, managed_by=tripleo_ansible, version=17.1.12, config_id=tripleo_step3, distribution-scope=public, build-date=2025-11-18T23:44:13Z, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://www.redhat.com) Dec 6 03:36:11 localhost 
podman[77194]: 2025-12-06 08:36:11.117944737 +0000 UTC m=+0.273453477 container exec_died b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:12:45Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.openshift.expose-services=, tcib_managed=true, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp 
openstack osp-17.1 openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, distribution-scope=public, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044) Dec 6 03:36:11 localhost systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: Deactivated successfully. Dec 6 03:36:11 localhost podman[77193]: 2025-12-06 08:36:11.138205107 +0000 UTC m=+0.297823042 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-iscsid-container, container_name=iscsid, vcs-type=git, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vendor=Red Hat, Inc., name=rhosp17/openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, maintainer=OpenStack TripleO Team, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, managed_by=tripleo_ansible, version=17.1.12, url=https://www.redhat.com, distribution-scope=public, release=1761123044, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 iscsid) Dec 6 03:36:11 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully. Dec 6 03:36:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b. 
Dec 6 03:36:11 localhost podman[77301]: 2025-12-06 08:36:11.895616673 +0000 UTC m=+0.060760492 container health_status 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, summary=Red Hat 
OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, release=1761123044, distribution-scope=public, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, io.buildah.version=1.41.4, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute) Dec 6 03:36:11 localhost systemd[1]: tmp-crun.wZl3yy.mount: Deactivated successfully. Dec 6 03:36:12 localhost podman[77301]: 2025-12-06 08:36:12.277154418 +0000 UTC m=+0.442298247 container exec_died 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 
'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, version=17.1.12, io.openshift.expose-services=, tcib_managed=true, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., vcs-type=git) Dec 6 03:36:12 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: Deactivated successfully. Dec 6 03:36:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. Dec 6 03:36:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. Dec 6 03:36:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. 
Dec 6 03:36:15 localhost podman[77325]: 2025-12-06 08:36:15.924514307 +0000 UTC m=+0.080446476 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., distribution-scope=public, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, managed_by=tripleo_ansible, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, version=17.1.12, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, tcib_managed=true, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, build-date=2025-11-18T22:49:46Z, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 6 03:36:15 localhost systemd[1]: tmp-crun.wtiIF8.mount: Deactivated successfully. Dec 6 03:36:15 localhost podman[77324]: 2025-12-06 08:36:15.985727472 +0000 UTC m=+0.146348895 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, release=1761123044, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, version=17.1.12, url=https://www.redhat.com, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.openshift.expose-services=, build-date=2025-11-18T23:34:05Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, tcib_managed=true, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272) Dec 6 03:36:16 localhost podman[77324]: 2025-12-06 08:36:16.023136388 +0000 UTC m=+0.183757861 container exec_died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-type=git, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, build-date=2025-11-18T23:34:05Z, managed_by=tripleo_ansible, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, vendor=Red Hat, Inc., distribution-scope=public, config_data={'depends_on': 
['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, release=1761123044, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team) Dec 6 03:36:16 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Deactivated successfully. 
Dec 6 03:36:16 localhost podman[77326]: 2025-12-06 08:36:16.07283566 +0000 UTC m=+0.225445917 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, vcs-type=git, tcib_managed=true, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:14:25Z, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 
'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}) Dec 6 03:36:16 localhost podman[77325]: 2025-12-06 08:36:16.106005926 +0000 UTC m=+0.261938015 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, config_id=tripleo_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 
'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, version=17.1.12, tcib_managed=true, managed_by=tripleo_ansible, container_name=metrics_qdr, io.buildah.version=1.41.4, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, build-date=2025-11-18T22:49:46Z, vendor=Red Hat, Inc., release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 6 03:36:16 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. 
Dec 6 03:36:16 localhost podman[77326]: 2025-12-06 08:36:16.127081321 +0000 UTC m=+0.279691538 container exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, managed_by=tripleo_ansible, build-date=2025-11-19T00:14:25Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, batch=17.1_20251118.1, distribution-scope=public, version=17.1.12) Dec 6 03:36:16 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Deactivated successfully. Dec 6 03:36:16 localhost systemd[1]: tmp-crun.UcU9Gg.mount: Deactivated successfully. Dec 6 03:36:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007. 
Dec 6 03:36:20 localhost podman[77397]: 2025-12-06 08:36:20.909583513 +0000 UTC m=+0.075968268 container health_status 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, vcs-type=git, managed_by=tripleo_ansible, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_compute, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=) Dec 6 03:36:20 localhost podman[77397]: 2025-12-06 08:36:20.973221943 +0000 UTC m=+0.139606728 container exec_died 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, container_name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vendor=Red Hat, Inc., distribution-scope=public, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, 
maintainer=OpenStack TripleO Team, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, 
org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4) Dec 6 03:36:20 localhost podman[77397]: unhealthy Dec 6 03:36:20 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Main process exited, code=exited, status=1/FAILURE Dec 6 03:36:20 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Failed with result 'exit-code'. Dec 6 03:36:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc. Dec 6 03:36:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. Dec 6 03:36:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9. Dec 6 03:36:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. Dec 6 03:36:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6. 
Dec 6 03:36:41 localhost podman[77497]: 2025-12-06 08:36:41.94260488 +0000 UTC m=+0.097005122 container health_status 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, container_name=logrotate_crond, tcib_managed=true, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, architecture=x86_64, version=17.1.12, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, config_id=tripleo_step4, vendor=Red Hat, Inc., name=rhosp17/openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-cron-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z) Dec 6 03:36:41 localhost podman[77499]: 2025-12-06 08:36:41.991626862 +0000 UTC m=+0.141633760 container health_status a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:11:48Z, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, container_name=ceilometer_agent_compute, distribution-scope=public, com.redhat.component=openstack-ceilometer-compute-container, batch=17.1_20251118.1, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, release=1761123044, url=https://www.redhat.com, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676) Dec 6 03:36:42 localhost podman[77499]: 2025-12-06 08:36:42.044219552 +0000 UTC m=+0.194226450 container exec_died a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 
'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, build-date=2025-11-19T00:11:48Z, 
release=1761123044) Dec 6 03:36:42 localhost systemd[1]: a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.service: Deactivated successfully. Dec 6 03:36:42 localhost podman[77498]: 2025-12-06 08:36:42.049577296 +0000 UTC m=+0.203591377 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-collectd, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, architecture=x86_64, batch=17.1_20251118.1, config_id=tripleo_step3, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., url=https://www.redhat.com) Dec 6 03:36:42 localhost podman[77511]: 2025-12-06 08:36:42.100318401 +0000 UTC m=+0.243056896 container health_status b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, name=rhosp17/openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, distribution-scope=public, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, container_name=ceilometer_agent_ipmi, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
build-date=2025-11-19T00:12:45Z, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, version=17.1.12, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, architecture=x86_64, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 6 03:36:42 localhost podman[77500]: 2025-12-06 08:36:42.146598157 +0000 UTC m=+0.291905381 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, 
container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, url=https://www.redhat.com, architecture=x86_64, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, distribution-scope=public, io.buildah.version=1.41.4, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, 
vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, vendor=Red Hat, Inc., org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z) Dec 6 03:36:42 localhost podman[77500]: 2025-12-06 08:36:42.154239572 +0000 UTC m=+0.299546866 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, name=rhosp17/openstack-iscsid, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, tcib_managed=true, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', 
'/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.12, managed_by=tripleo_ansible, distribution-scope=public, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:44:13Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.buildah.version=1.41.4, io.openshift.expose-services=, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 6 03:36:42 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully. Dec 6 03:36:42 localhost podman[77497]: 2025-12-06 08:36:42.175403289 +0000 UTC m=+0.329803541 container exec_died 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=logrotate_crond, build-date=2025-11-18T22:49:32Z, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.openshift.expose-services=, release=1761123044, batch=17.1_20251118.1, io.buildah.version=1.41.4, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, architecture=x86_64, version=17.1.12, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 cron) Dec 6 03:36:42 localhost podman[77498]: 2025-12-06 08:36:42.184087695 +0000 UTC m=+0.338101756 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2025-11-18T22:51:28Z, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, vcs-type=git, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, tcib_managed=true, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container) Dec 6 03:36:42 localhost systemd[1]: 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.service: Deactivated successfully. Dec 6 03:36:42 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully. Dec 6 03:36:42 localhost podman[77511]: 2025-12-06 08:36:42.205734628 +0000 UTC m=+0.348473083 container exec_died b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:12:45Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, version=17.1.12, io.openshift.expose-services=, url=https://www.redhat.com, batch=17.1_20251118.1, distribution-scope=public, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.buildah.version=1.41.4, managed_by=tripleo_ansible, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, tcib_managed=true) Dec 6 03:36:42 localhost systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: Deactivated successfully. Dec 6 03:36:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b. 
Dec 6 03:36:42 localhost podman[77612]: 2025-12-06 08:36:42.918367973 +0000 UTC m=+0.079704652 container health_status 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, release=1761123044, io.openshift.expose-services=, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, version=17.1.12, url=https://www.redhat.com, architecture=x86_64, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute) Dec 6 03:36:43 localhost podman[77612]: 2025-12-06 08:36:43.278197612 +0000 UTC m=+0.439534231 container exec_died 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., distribution-scope=public, maintainer=OpenStack TripleO Team, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, container_name=nova_migration_target, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, io.openshift.expose-services=) Dec 6 03:36:43 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: Deactivated successfully. Dec 6 03:36:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. Dec 6 03:36:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. Dec 6 03:36:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. Dec 6 03:36:46 localhost systemd[1]: tmp-crun.wz9oN7.mount: Deactivated successfully. 
Dec 6 03:36:46 localhost podman[77638]: 2025-12-06 08:36:46.918102153 +0000 UTC m=+0.073592534 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, managed_by=tripleo_ansible, architecture=x86_64, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://www.redhat.com, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4) Dec 6 03:36:46 localhost systemd[1]: tmp-crun.hhWT3d.mount: Deactivated successfully. 
Dec 6 03:36:46 localhost podman[77638]: 2025-12-06 08:36:46.975397778 +0000 UTC m=+0.130888159 container exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.expose-services=, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, tcib_managed=true, container_name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, managed_by=tripleo_ansible) Dec 6 03:36:46 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Deactivated successfully. 
Dec 6 03:36:47 localhost podman[77636]: 2025-12-06 08:36:47.022710617 +0000 UTC m=+0.183359036 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, distribution-scope=public, io.buildah.version=1.41.4, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ovn-controller, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, 
build-date=2025-11-18T23:34:05Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044) Dec 6 03:36:47 localhost podman[77637]: 2025-12-06 08:36:46.97772398 +0000 UTC m=+0.134477440 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, architecture=x86_64, io.buildah.version=1.41.4, container_name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, build-date=2025-11-18T22:49:46Z, managed_by=tripleo_ansible, config_id=tripleo_step1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, distribution-scope=public, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd) Dec 6 03:36:47 localhost podman[77636]: 2025-12-06 08:36:47.067477478 +0000 UTC m=+0.228125867 container exec_died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-type=git, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.openshift.expose-services=, version=17.1.12, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vendor=Red Hat, Inc., distribution-scope=public, build-date=2025-11-18T23:34:05Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, architecture=x86_64, config_id=tripleo_step4, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-ovn-controller, name=rhosp17/openstack-ovn-controller, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 6 03:36:47 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Deactivated successfully. 
Dec 6 03:36:47 localhost podman[77637]: 2025-12-06 08:36:47.186171073 +0000 UTC m=+0.342924543 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, config_id=tripleo_step1, tcib_managed=true, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.12, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, summary=Red Hat OpenStack 
Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, architecture=x86_64, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 6 03:36:47 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 03:36:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007. Dec 6 03:36:51 localhost podman[77711]: 2025-12-06 08:36:51.906087741 +0000 UTC m=+0.068579552 container health_status 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, 
distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, io.buildah.version=1.41.4, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, container_name=nova_compute, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible) Dec 6 03:36:51 localhost podman[77711]: 2025-12-06 08:36:51.975103575 +0000 UTC m=+0.137595396 container exec_died 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com) Dec 6 03:36:51 localhost podman[77711]: unhealthy Dec 6 03:36:51 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Main process exited, code=exited, status=1/FAILURE Dec 6 03:36:51 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Failed with result 'exit-code'. Dec 6 03:37:06 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... 
Dec 6 03:37:06 localhost recover_tripleo_nova_virtqemud[77735]: 61814 Dec 6 03:37:06 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 6 03:37:06 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 6 03:37:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc. Dec 6 03:37:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. Dec 6 03:37:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9. Dec 6 03:37:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. Dec 6 03:37:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6. Dec 6 03:37:12 localhost podman[77738]: 2025-12-06 08:37:12.929331235 +0000 UTC m=+0.082118336 container health_status a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, tcib_managed=true, container_name=ceilometer_agent_compute, config_id=tripleo_step4, 
batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, version=17.1.12, vendor=Red Hat, Inc., vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, release=1761123044, url=https://www.redhat.com, build-date=2025-11-19T00:11:48Z, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 6 03:37:12 localhost podman[77738]: 2025-12-06 08:37:12.952619379 +0000 UTC m=+0.105406490 container exec_died a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, 
name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, maintainer=OpenStack TripleO Team, architecture=x86_64, name=rhosp17/openstack-ceilometer-compute, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, container_name=ceilometer_agent_compute, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, version=17.1.12, managed_by=tripleo_ansible, build-date=2025-11-19T00:11:48Z, io.buildah.version=1.41.4, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container) Dec 6 03:37:12 localhost podman[77746]: 2025-12-06 08:37:12.99348244 +0000 UTC m=+0.138598385 container health_status b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.4, architecture=x86_64, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, distribution-scope=public, build-date=2025-11-19T00:12:45Z, managed_by=tripleo_ansible, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, tcib_managed=true, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com) Dec 6 03:37:13 localhost systemd[1]: a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.service: Deactivated successfully. 
Dec 6 03:37:13 localhost podman[77746]: 2025-12-06 08:37:13.054920061 +0000 UTC m=+0.200035996 container exec_died b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, version=17.1.12, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, vcs-type=git, url=https://www.redhat.com, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, distribution-scope=public, build-date=2025-11-19T00:12:45Z) Dec 6 03:37:13 localhost systemd[1]: tmp-crun.8w1b5r.mount: Deactivated successfully. Dec 6 03:37:13 localhost systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: Deactivated successfully. Dec 6 03:37:13 localhost podman[77739]: 2025-12-06 08:37:13.085639662 +0000 UTC m=+0.234828453 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.buildah.version=1.41.4, version=17.1.12, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, build-date=2025-11-18T23:44:13Z, io.openshift.expose-services=, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, batch=17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, container_name=iscsid, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, name=rhosp17/openstack-iscsid, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 6 03:37:13 localhost podman[77739]: 2025-12-06 08:37:13.139071509 +0000 UTC m=+0.288260380 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, release=1761123044, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, architecture=x86_64, distribution-scope=public, batch=17.1_20251118.1, com.redhat.component=openstack-iscsid-container, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp17/openstack-iscsid, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, io.buildah.version=1.41.4, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:44:13Z, 
url=https://www.redhat.com) Dec 6 03:37:13 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully. Dec 6 03:37:13 localhost podman[77737]: 2025-12-06 08:37:13.066845907 +0000 UTC m=+0.222263438 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, build-date=2025-11-18T22:51:28Z, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, container_name=collectd, distribution-scope=public, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, release=1761123044, version=17.1.12, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 
'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}) Dec 6 03:37:13 localhost podman[77736]: 2025-12-06 08:37:13.142331008 +0000 UTC m=+0.298833122 container health_status 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, build-date=2025-11-18T22:49:32Z, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, release=1761123044, vcs-type=git, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, container_name=logrotate_crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.buildah.version=1.41.4, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, architecture=x86_64, url=https://www.redhat.com, batch=17.1_20251118.1, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}) Dec 6 03:37:13 localhost podman[77737]: 2025-12-06 08:37:13.200105348 +0000 UTC m=+0.355522879 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vendor=Red Hat, Inc., tcib_managed=true, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.41.4, build-date=2025-11-18T22:51:28Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, release=1761123044, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, 
com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd) Dec 6 03:37:13 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully. Dec 6 03:37:13 localhost podman[77736]: 2025-12-06 08:37:13.226352331 +0000 UTC m=+0.382854425 container exec_died 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, vcs-type=git, tcib_managed=true, version=17.1.12, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, distribution-scope=public, name=rhosp17/openstack-cron, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, batch=17.1_20251118.1, io.openshift.expose-services=, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': 
'/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, config_id=tripleo_step4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 6 03:37:13 localhost systemd[1]: 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.service: Deactivated successfully. Dec 6 03:37:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b. Dec 6 03:37:13 localhost systemd[1]: tmp-crun.nRe4UZ.mount: Deactivated successfully. 
Dec 6 03:37:13 localhost podman[77847]: 2025-12-06 08:37:13.952967305 +0000 UTC m=+0.078906088 container health_status 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, io.openshift.expose-services=, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., container_name=nova_migration_target, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}) Dec 6 03:37:14 localhost podman[77847]: 2025-12-06 08:37:14.36294675 +0000 UTC m=+0.488885533 container exec_died 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_migration_target, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, release=1761123044, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z) Dec 6 03:37:14 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: Deactivated successfully. Dec 6 03:37:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. Dec 6 03:37:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. Dec 6 03:37:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. 
Dec 6 03:37:17 localhost podman[77872]: 2025-12-06 08:37:17.908473062 +0000 UTC m=+0.068963254 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, name=rhosp17/openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, 
org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, tcib_managed=true, url=https://www.redhat.com, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, architecture=x86_64, managed_by=tripleo_ansible, batch=17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent, distribution-scope=public, config_id=tripleo_step4) Dec 6 03:37:17 localhost podman[77871]: 2025-12-06 08:37:17.975501835 +0000 UTC m=+0.136231864 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, url=https://www.redhat.com, build-date=2025-11-18T22:49:46Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 
'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1761123044, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, distribution-scope=public, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.expose-services=, io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, container_name=metrics_qdr) Dec 6 03:37:17 localhost podman[77872]: 2025-12-06 08:37:17.980225369 +0000 UTC m=+0.140715581 container exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, config_id=tripleo_step4, vcs-type=git, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z) Dec 6 03:37:17 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Deactivated successfully. Dec 6 03:37:18 localhost podman[77870]: 2025-12-06 08:37:18.035663337 +0000 UTC m=+0.198649335 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, managed_by=tripleo_ansible, version=17.1.12, url=https://www.redhat.com, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, com.redhat.component=openstack-ovn-controller-container, build-date=2025-11-18T23:34:05Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, container_name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 
'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.4, architecture=x86_64, io.openshift.expose-services=, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_id=tripleo_step4, tcib_managed=true, vcs-type=git) Dec 6 03:37:18 localhost podman[77870]: 2025-12-06 08:37:18.058829377 +0000 UTC m=+0.221815445 container exec_died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-type=git, build-date=2025-11-18T23:34:05Z, io.openshift.expose-services=, distribution-scope=public, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, release=1761123044, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, version=17.1.12) Dec 6 03:37:18 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Deactivated successfully. 
Dec 6 03:37:18 localhost podman[77871]: 2025-12-06 08:37:18.157163658 +0000 UTC m=+0.317893747 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:46Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, version=17.1.12, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, architecture=x86_64, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, io.openshift.expose-services=, release=1761123044, name=rhosp17/openstack-qdrouterd, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public) Dec 6 03:37:18 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 03:37:18 localhost systemd[1]: tmp-crun.U44YQA.mount: Deactivated successfully. Dec 6 03:37:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007. 
Dec 6 03:37:22 localhost podman[78035]: 2025-12-06 08:37:22.909079945 +0000 UTC m=+0.075759701 container health_status 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, batch=17.1_20251118.1, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, architecture=x86_64, distribution-scope=public, vcs-type=git, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, tcib_managed=true, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.buildah.version=1.41.4, managed_by=tripleo_ansible, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Dec 6 03:37:22 localhost podman[78035]: 2025-12-06 08:37:22.962469 +0000 UTC m=+0.129148776 container exec_died 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.buildah.version=1.41.4, container_name=nova_compute, io.openshift.expose-services=, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, 
architecture=x86_64, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, distribution-scope=public, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 6 03:37:22 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Deactivated successfully. Dec 6 03:37:29 localhost systemd[1]: libpod-b526ff5c18e8d1e347c23ef14b01140c0e908977002aa81cbae031036fc67931.scope: Deactivated successfully. Dec 6 03:37:29 localhost podman[76317]: 2025-12-06 08:37:29.894614364 +0000 UTC m=+192.798669611 container died b526ff5c18e8d1e347c23ef14b01140c0e908977002aa81cbae031036fc67931 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, distribution-scope=public, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, release=1761123044, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, 
org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, container_name=nova_wait_for_compute_service, vcs-type=git) Dec 6 03:37:29 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b526ff5c18e8d1e347c23ef14b01140c0e908977002aa81cbae031036fc67931-userdata-shm.mount: Deactivated successfully. Dec 6 03:37:29 localhost systemd[1]: var-lib-containers-storage-overlay-55528a480405885214081d53ccb18f2317128ddef63aa2fac6c624569bda37fd-merged.mount: Deactivated successfully. 
Dec 6 03:37:30 localhost podman[78061]: 2025-12-06 08:37:29.997656151 +0000 UTC m=+0.088294675 container cleanup b526ff5c18e8d1e347c23ef14b01140c0e908977002aa81cbae031036fc67931 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., vcs-type=git, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, container_name=nova_wait_for_compute_service, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, url=https://www.redhat.com, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, config_id=tripleo_step5, version=17.1.12, architecture=x86_64, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Dec 6 03:37:30 localhost systemd[1]: libpod-conmon-b526ff5c18e8d1e347c23ef14b01140c0e908977002aa81cbae031036fc67931.scope: Deactivated successfully. Dec 6 03:37:30 localhost python3[76157]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_wait_for_compute_service --conmon-pidfile /run/nova_wait_for_compute_service.pid --detach=False --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env __OS_DEBUG=true --env TRIPLEO_CONFIG_HASH=179caa3982511c1fd3314b961771f96c --label config_id=tripleo_step5 --label container_name=nova_wait_for_compute_service --label managed_by=tripleo_ansible --label config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_wait_for_compute_service.log --network host --user nova --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/log/containers/nova:/var/log/nova --volume /var/lib/container-config-scripts:/container-config-scripts registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 Dec 6 03:37:30 localhost python3[78116]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 03:37:30 localhost python3[78132]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_compute_healthcheck.timer follow=False get_md5=False get_checksum=True 
get_mime=True get_attributes=True checksum_algorithm=sha1 Dec 6 03:37:31 localhost python3[78193]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765010250.996636-118350-4166422878226/source dest=/etc/systemd/system/tripleo_nova_compute.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 03:37:32 localhost python3[78209]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Dec 6 03:37:32 localhost systemd[1]: Reloading. Dec 6 03:37:32 localhost systemd-sysv-generator[78241]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 6 03:37:32 localhost systemd-rc-local-generator[78238]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 6 03:37:32 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 6 03:37:33 localhost python3[78262]: ansible-systemd Invoked with state=restarted name=tripleo_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 6 03:37:33 localhost systemd[1]: Reloading. Dec 6 03:37:33 localhost systemd-rc-local-generator[78289]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 6 03:37:33 localhost systemd-sysv-generator[78294]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. 
Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 6 03:37:33 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 6 03:37:33 localhost systemd[1]: Starting nova_compute container... Dec 6 03:37:33 localhost tripleo-start-podman-container[78302]: Creating additional drop-in dependency for "nova_compute" (41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007) Dec 6 03:37:33 localhost systemd[1]: Reloading. Dec 6 03:37:33 localhost systemd-rc-local-generator[78362]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 6 03:37:33 localhost systemd-sysv-generator[78365]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 6 03:37:33 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 6 03:37:33 localhost systemd[1]: Started nova_compute container. 
Dec 6 03:37:34 localhost python3[78400]: ansible-file Invoked with path=/var/lib/container-puppet/container-puppet-tasks5.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 03:37:35 localhost python3[78568]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=True puppet_config=/var/lib/container-puppet/container-puppet-tasks5.json short_hostname=np0005548789 step=5 update_config_hash_only=False Dec 6 03:37:36 localhost python3[78598]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 03:37:36 localhost python3[78629]: ansible-container_config_data Invoked with config_path=/var/lib/tripleo-config/container-puppet-config/step_5 config_pattern=container-puppet-*.json config_overrides={} debug=True Dec 6 03:37:37 localhost sshd[78630]: main: sshd: ssh-rsa algorithm is disabled Dec 6 03:37:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc. Dec 6 03:37:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. Dec 6 03:37:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9. 
Dec 6 03:37:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. Dec 6 03:37:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6. Dec 6 03:37:43 localhost podman[78632]: 2025-12-06 08:37:43.910526393 +0000 UTC m=+0.068683475 container health_status 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, architecture=x86_64, release=1761123044, config_id=tripleo_step4, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, batch=17.1_20251118.1, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:32Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron) Dec 6 03:37:43 localhost podman[78632]: 2025-12-06 08:37:43.923611613 +0000 UTC m=+0.081768685 container exec_died 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:32Z, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, release=1761123044, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, vcs-type=git, vendor=Red Hat, Inc., config_id=tripleo_step4, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}) Dec 6 03:37:43 localhost podman[78634]: 2025-12-06 08:37:43.925664606 +0000 UTC m=+0.079579857 container health_status a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack TripleO Team, architecture=x86_64, io.buildah.version=1.41.4, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, io.openshift.expose-services=, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:11:48Z, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, container_name=ceilometer_agent_compute, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 6 03:37:43 localhost systemd[1]: tmp-crun.BtMOLN.mount: Deactivated successfully. Dec 6 03:37:43 localhost podman[78634]: 2025-12-06 08:37:43.980286669 +0000 UTC m=+0.134201951 container exec_died a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, io.openshift.expose-services=, container_name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.buildah.version=1.41.4, version=17.1.12, build-date=2025-11-19T00:11:48Z, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1) Dec 6 03:37:43 localhost podman[78635]: 2025-12-06 08:37:43.980381122 +0000 UTC m=+0.130196068 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.component=openstack-iscsid-container, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, vendor=Red Hat, Inc., url=https://www.redhat.com, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, batch=17.1_20251118.1, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, build-date=2025-11-18T23:44:13Z, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-iscsid, version=17.1.12) Dec 6 03:37:43 localhost systemd[1]: 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.service: Deactivated successfully. Dec 6 03:37:43 localhost systemd[1]: a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.service: Deactivated successfully. 
Dec 6 03:37:44 localhost podman[78633]: 2025-12-06 08:37:44.015712304 +0000 UTC m=+0.169719708 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20251118.1, com.redhat.component=openstack-collectd-container, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, container_name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, distribution-scope=public, name=rhosp17/openstack-collectd, config_id=tripleo_step3, managed_by=tripleo_ansible, build-date=2025-11-18T22:51:28Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, url=https://www.redhat.com) Dec 6 03:37:44 localhost podman[78635]: 2025-12-06 08:37:44.078211789 +0000 UTC m=+0.228026715 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, distribution-scope=public, build-date=2025-11-18T23:44:13Z, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, maintainer=OpenStack TripleO Team, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, io.buildah.version=1.41.4, name=rhosp17/openstack-iscsid, batch=17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=openstack-iscsid-container, version=17.1.12, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 6 03:37:44 localhost podman[78641]: 2025-12-06 08:37:44.078869658 +0000 UTC m=+0.226057184 container health_status b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.expose-services=, release=1761123044, vcs-type=git, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, distribution-scope=public) Dec 6 03:37:44 localhost podman[78633]: 2025-12-06 08:37:44.078615291 +0000 UTC m=+0.232622645 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, architecture=x86_64, vendor=Red Hat, Inc., config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1761123044, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.buildah.version=1.41.4, io.openshift.expose-services=, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:51:28Z, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, vcs-type=git) Dec 6 03:37:44 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully. Dec 6 03:37:44 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully. 
Dec 6 03:37:44 localhost podman[78641]: 2025-12-06 08:37:44.213483431 +0000 UTC m=+0.360670957 container exec_died b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, release=1761123044, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, build-date=2025-11-19T00:12:45Z, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, architecture=x86_64, version=17.1.12, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 6 03:37:44 localhost systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: Deactivated successfully. Dec 6 03:37:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b. Dec 6 03:37:44 localhost podman[78745]: 2025-12-06 08:37:44.915107688 +0000 UTC m=+0.078713322 container health_status 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, 
container_name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, vcs-type=git, batch=17.1_20251118.1, architecture=x86_64) Dec 6 03:37:45 localhost podman[78745]: 2025-12-06 08:37:45.280047725 +0000 UTC m=+0.443653329 container exec_died 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, release=1761123044, distribution-scope=public, 
config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, container_name=nova_migration_target, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible) Dec 6 03:37:45 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: Deactivated successfully. Dec 6 03:37:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. Dec 6 03:37:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. Dec 6 03:37:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. Dec 6 03:37:48 localhost podman[78770]: 2025-12-06 08:37:48.921455882 +0000 UTC m=+0.082543199 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, container_name=metrics_qdr, release=1761123044, vendor=Red Hat, Inc., batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, name=rhosp17/openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, tcib_managed=true, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=) Dec 6 03:37:48 localhost systemd[1]: tmp-crun.XVCYYi.mount: Deactivated successfully. 
Dec 6 03:37:48 localhost podman[78769]: 2025-12-06 08:37:48.978068327 +0000 UTC m=+0.142375293 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, managed_by=tripleo_ansible, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:34:05Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, release=1761123044, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, io.openshift.expose-services=, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_id=tripleo_step4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', 
'/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, vcs-type=git, name=rhosp17/openstack-ovn-controller, batch=17.1_20251118.1) Dec 6 03:37:49 localhost podman[78771]: 2025-12-06 08:37:49.02064243 +0000 UTC m=+0.178575301 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, vcs-type=git, version=17.1.12, architecture=x86_64, io.buildah.version=1.41.4, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1761123044, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, container_name=ovn_metadata_agent, batch=17.1_20251118.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn) Dec 6 03:37:49 localhost podman[78769]: 2025-12-06 08:37:49.031160643 +0000 UTC m=+0.195467599 container exec_died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, vcs-type=git, version=17.1.12, io.buildah.version=1.41.4, batch=17.1_20251118.1, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, 
vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:34:05Z, release=1761123044, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container) Dec 6 03:37:49 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Deactivated successfully. 
Dec 6 03:37:49 localhost podman[78771]: 2025-12-06 08:37:49.068143695 +0000 UTC m=+0.226076546 container exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, architecture=x86_64, vcs-type=git, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, config_id=tripleo_step4, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, version=17.1.12, batch=17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, release=1761123044, container_name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 6 03:37:49 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Deactivated successfully. 
Dec 6 03:37:49 localhost podman[78770]: 2025-12-06 08:37:49.137151278 +0000 UTC m=+0.298238595 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, architecture=x86_64, container_name=metrics_qdr, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp 
openstack osp-17.1 openstack-qdrouterd, release=1761123044, tcib_managed=true, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, vendor=Red Hat, Inc., batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.openshift.expose-services=, managed_by=tripleo_ansible) Dec 6 03:37:49 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 03:37:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007. Dec 6 03:37:53 localhost podman[78842]: 2025-12-06 08:37:53.913257926 +0000 UTC m=+0.069529991 container health_status 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, config_id=tripleo_step5, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, vcs-type=git, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, distribution-scope=public, url=https://www.redhat.com, container_name=nova_compute, vendor=Red Hat, Inc., 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', 
'/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Dec 6 03:37:53 localhost podman[78842]: 2025-12-06 08:37:53.945138303 +0000 UTC m=+0.101410428 container exec_died 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, release=1761123044, tcib_managed=true, config_id=tripleo_step5, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, io.openshift.expose-services=, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 6 03:37:53 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Deactivated successfully. Dec 6 03:38:01 localhost sshd[78868]: main: sshd: ssh-rsa algorithm is disabled Dec 6 03:38:02 localhost systemd-logind[766]: New session 33 of user zuul. Dec 6 03:38:02 localhost systemd[1]: Started Session 33 of User zuul. 
Dec 6 03:38:02 localhost python3[78977]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Dec 6 03:38:10 localhost python3[79240]: ansible-ansible.legacy.dnf Invoked with name=['iptables'] allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None state=None Dec 6 03:38:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc. Dec 6 03:38:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. Dec 6 03:38:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9. Dec 6 03:38:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. Dec 6 03:38:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6. Dec 6 03:38:14 localhost systemd[1]: tmp-crun.5MW6bi.mount: Deactivated successfully. 
Dec 6 03:38:14 localhost podman[79258]: 2025-12-06 08:38:14.943706651 +0000 UTC m=+0.100184648 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, build-date=2025-11-18T22:51:28Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.12, config_id=tripleo_step3, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
tcib_managed=true, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-collectd, vcs-type=git, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container) Dec 6 03:38:14 localhost podman[79258]: 2025-12-06 08:38:14.957124792 +0000 UTC m=+0.113602789 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, io.buildah.version=1.41.4, config_data={'cap_add': ['IPC_LOCK'], 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., managed_by=tripleo_ansible, release=1761123044, batch=17.1_20251118.1, io.openshift.expose-services=, build-date=2025-11-18T22:51:28Z) Dec 6 03:38:14 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully. Dec 6 03:38:15 localhost systemd[1]: tmp-crun.dcmf7l.mount: Deactivated successfully. 
Dec 6 03:38:15 localhost podman[79259]: 2025-12-06 08:38:15.04650357 +0000 UTC m=+0.201224304 container health_status a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, release=1761123044, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, build-date=2025-11-19T00:11:48Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., distribution-scope=public, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-compute-container) Dec 6 03:38:15 localhost podman[79260]: 2025-12-06 08:38:15.091123336 +0000 UTC m=+0.242716274 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, container_name=iscsid, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:44:13Z, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20251118.1, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, release=1761123044, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid) Dec 6 03:38:15 localhost podman[79259]: 2025-12-06 08:38:15.102117623 +0000 UTC m=+0.256838397 container exec_died a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, release=1761123044, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, tcib_managed=true, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, 
build-date=2025-11-19T00:11:48Z, container_name=ceilometer_agent_compute, version=17.1.12, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack 
Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible) Dec 6 03:38:15 localhost podman[79257]: 2025-12-06 08:38:15.011203279 +0000 UTC m=+0.167456939 container health_status 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, architecture=x86_64, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, name=rhosp17/openstack-cron, summary=Red Hat 
OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.openshift.expose-services=, container_name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, vcs-type=git, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044) Dec 6 03:38:15 localhost systemd[1]: a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.service: Deactivated successfully. Dec 6 03:38:15 localhost podman[79260]: 2025-12-06 08:38:15.128200571 +0000 UTC m=+0.279793529 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_id=tripleo_step3, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, tcib_managed=true, name=rhosp17/openstack-iscsid, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=iscsid, version=17.1.12, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, batch=17.1_20251118.1, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:44:13Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, io.buildah.version=1.41.4, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 6 03:38:15 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully. 
Dec 6 03:38:15 localhost podman[79257]: 2025-12-06 08:38:15.148523174 +0000 UTC m=+0.304776794 container exec_died 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, architecture=x86_64, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, vendor=Red Hat, Inc., io.buildah.version=1.41.4, batch=17.1_20251118.1, release=1761123044, url=https://www.redhat.com, name=rhosp17/openstack-cron, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 
cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-cron-container) Dec 6 03:38:15 localhost systemd[1]: 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.service: Deactivated successfully. Dec 6 03:38:15 localhost podman[79261]: 2025-12-06 08:38:15.204932332 +0000 UTC m=+0.351005561 container health_status b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, architecture=x86_64, build-date=2025-11-19T00:12:45Z, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, release=1761123044, vendor=Red Hat, Inc., io.buildah.version=1.41.4, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, url=https://www.redhat.com, vcs-type=git, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 6 03:38:15 localhost podman[79261]: 2025-12-06 08:38:15.242166612 +0000 UTC m=+0.388239801 container exec_died b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-type=git, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, release=1761123044, container_name=ceilometer_agent_ipmi) Dec 6 03:38:15 localhost systemd[1]: 
b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: Deactivated successfully. Dec 6 03:38:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b. Dec 6 03:38:15 localhost podman[79364]: 2025-12-06 08:38:15.919130884 +0000 UTC m=+0.076839054 container health_status 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, batch=17.1_20251118.1, release=1761123044, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, tcib_managed=true, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, container_name=nova_migration_target, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, vcs-type=git) Dec 6 03:38:16 localhost podman[79364]: 2025-12-06 08:38:16.285853845 +0000 UTC m=+0.443562075 container exec_died 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, distribution-scope=public, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, maintainer=OpenStack TripleO Team, version=17.1.12, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, architecture=x86_64, 
url=https://www.redhat.com, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target) Dec 6 03:38:16 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: Deactivated successfully. 
Dec 6 03:38:17 localhost python3[79462]: ansible-ansible.builtin.iptables Invoked with action=insert chain=INPUT comment=allow ssh access for zuul executor in_interface=eth0 jump=ACCEPT protocol=tcp source=38.102.83.114 table=filter state=present ip_version=ipv4 match=[] destination_ports=[] ctstate=[] syn=ignore flush=False chain_management=False numeric=False rule_num=None wait=None to_source=None destination=None to_destination=None tcp_flags=None gateway=None log_prefix=None log_level=None goto=None out_interface=None fragment=None set_counters=None source_port=None destination_port=None to_ports=None set_dscp_mark=None set_dscp_mark_class=None src_range=None dst_range=None match_set=None match_set_flags=None limit=None limit_burst=None uid_owner=None gid_owner=None reject_with=None icmp_type=None policy=None Dec 6 03:38:17 localhost kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled Dec 6 03:38:17 localhost systemd-journald[47810]: Field hash table of /run/log/journal/4b30904fc4748c16d0c72dbebcabab49/system.journal has a fill level at 81.1 (270 of 333 items), suggesting rotation. Dec 6 03:38:17 localhost systemd-journald[47810]: /run/log/journal/4b30904fc4748c16d0c72dbebcabab49/system.journal: Journal header limits reached or header out-of-date, rotating. Dec 6 03:38:17 localhost rsyslogd[760]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Dec 6 03:38:17 localhost rsyslogd[760]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Dec 6 03:38:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. Dec 6 03:38:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. 
Dec 6 03:38:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. Dec 6 03:38:19 localhost systemd[1]: tmp-crun.odyxZe.mount: Deactivated successfully. Dec 6 03:38:19 localhost podman[79509]: 2025-12-06 08:38:19.948832643 +0000 UTC m=+0.099701445 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, io.buildah.version=1.41.4, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, batch=17.1_20251118.1, io.openshift.expose-services=, build-date=2025-11-18T22:49:46Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 6 03:38:20 localhost podman[79508]: 2025-12-06 08:38:20.002265879 +0000 UTC m=+0.151873992 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vcs-type=git, version=17.1.12, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', 
'/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, tcib_managed=true, build-date=2025-11-18T23:34:05Z, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, container_name=ovn_controller, name=rhosp17/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, distribution-scope=public) Dec 6 03:38:20 localhost podman[79510]: 2025-12-06 08:38:20.055158809 +0000 UTC m=+0.205934537 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, url=https://www.redhat.com, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, 
managed_by=tripleo_ansible, container_name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, io.buildah.version=1.41.4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, architecture=x86_64, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public) Dec 6 03:38:20 localhost podman[79508]: 2025-12-06 08:38:20.083630531 +0000 UTC m=+0.233238644 container exec_died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, architecture=x86_64, batch=17.1_20251118.1, container_name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, url=https://www.redhat.com, tcib_managed=true, build-date=2025-11-18T23:34:05Z, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, 
name=rhosp17/openstack-ovn-controller, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, vendor=Red Hat, Inc.) Dec 6 03:38:20 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Deactivated successfully. Dec 6 03:38:20 localhost podman[79510]: 2025-12-06 08:38:20.144342541 +0000 UTC m=+0.295118269 container exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_metadata_agent, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 
'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, build-date=2025-11-19T00:14:25Z, vcs-type=git, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 6 03:38:20 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Deactivated successfully. 
Dec 6 03:38:20 localhost podman[79509]: 2025-12-06 08:38:20.182546501 +0000 UTC m=+0.333415303 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, distribution-scope=public, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:46Z, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, tcib_managed=true, name=rhosp17/openstack-qdrouterd, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 6 03:38:20 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 03:38:20 localhost systemd[1]: tmp-crun.6yFv94.mount: Deactivated successfully. Dec 6 03:38:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007. Dec 6 03:38:24 localhost podman[79606]: 2025-12-06 08:38:24.933856059 +0000 UTC m=+0.090873505 container health_status 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, version=17.1.12, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step5, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack 
Platform 17.1 nova-compute, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git) Dec 6 03:38:24 localhost podman[79606]: 2025-12-06 08:38:24.965874779 +0000 UTC m=+0.122892215 container exec_died 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, container_name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, com.redhat.component=openstack-nova-compute-container, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step5, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, managed_by=tripleo_ansible, url=https://www.redhat.com, version=17.1.12, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Dec 6 03:38:24 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Deactivated successfully. Dec 6 03:38:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc. Dec 6 03:38:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. 
Dec 6 03:38:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9. Dec 6 03:38:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. Dec 6 03:38:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6. Dec 6 03:38:45 localhost podman[79710]: 2025-12-06 08:38:45.962338465 +0000 UTC m=+0.106505443 container health_status 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, managed_by=tripleo_ansible, vendor=Red Hat, Inc., url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-type=git, config_id=tripleo_step4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, architecture=x86_64, build-date=2025-11-18T22:49:32Z, com.redhat.component=openstack-cron-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, name=rhosp17/openstack-cron) Dec 6 03:38:45 localhost podman[79714]: 2025-12-06 08:38:45.99877171 +0000 UTC m=+0.137677937 container health_status b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, distribution-scope=public, architecture=x86_64, batch=17.1_20251118.1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., build-date=2025-11-19T00:12:45Z, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, tcib_managed=true, config_id=tripleo_step4, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 6 03:38:46 localhost podman[79710]: 2025-12-06 08:38:46.045350097 +0000 UTC m=+0.189517105 container exec_died 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, managed_by=tripleo_ansible, url=https://www.redhat.com, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, vendor=Red Hat, Inc., release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, distribution-scope=public, com.redhat.component=openstack-cron-container, tcib_managed=true, 
name=rhosp17/openstack-cron, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron) Dec 6 03:38:46 localhost systemd[1]: 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.service: Deactivated successfully. Dec 6 03:38:46 localhost podman[79711]: 2025-12-06 08:38:46.067667671 +0000 UTC m=+0.210377615 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, url=https://www.redhat.com, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp17/openstack-collectd, managed_by=tripleo_ansible, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', 
'/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, tcib_managed=true, build-date=2025-11-18T22:51:28Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, container_name=collectd, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, distribution-scope=public, com.redhat.component=openstack-collectd-container) Dec 6 03:38:46 localhost podman[79714]: 2025-12-06 08:38:46.082437873 +0000 UTC m=+0.221344120 container exec_died b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, build-date=2025-11-19T00:12:45Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, managed_by=tripleo_ansible, release=1761123044, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi) Dec 6 03:38:46 localhost systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: Deactivated successfully. 
Dec 6 03:38:46 localhost podman[79712]: 2025-12-06 08:38:46.103064754 +0000 UTC m=+0.247279194 container health_status a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-compute, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, 
io.buildah.version=1.41.4, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, maintainer=OpenStack TripleO Team, tcib_managed=true, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, version=17.1.12, config_id=tripleo_step4, distribution-scope=public) Dec 6 03:38:46 localhost podman[79712]: 2025-12-06 08:38:46.162189355 +0000 UTC m=+0.306403795 container exec_died a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, io.buildah.version=1.41.4, build-date=2025-11-19T00:11:48Z, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, io.openshift.expose-services=, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-compute, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, 
konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, release=1761123044, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 6 03:38:46 localhost systemd[1]: a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.service: Deactivated successfully. 
Dec 6 03:38:46 localhost podman[79711]: 2025-12-06 08:38:46.178598017 +0000 UTC m=+0.321307961 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, version=17.1.12, io.buildah.version=1.41.4, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, 
konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, vendor=Red Hat, Inc., name=rhosp17/openstack-collectd, build-date=2025-11-18T22:51:28Z, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, container_name=collectd, distribution-scope=public) Dec 6 03:38:46 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully. Dec 6 03:38:46 localhost podman[79713]: 2025-12-06 08:38:46.166299691 +0000 UTC m=+0.305833557 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vendor=Red Hat, Inc., release=1761123044, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, architecture=x86_64, url=https://www.redhat.com, distribution-scope=public, config_id=tripleo_step3, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, 
org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, name=rhosp17/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, container_name=iscsid, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid) Dec 6 03:38:46 localhost podman[79713]: 2025-12-06 08:38:46.2466084 +0000 UTC m=+0.386142206 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., name=rhosp17/openstack-iscsid, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, distribution-scope=public, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, container_name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, architecture=x86_64, maintainer=OpenStack TripleO 
Team, tcib_managed=true, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, url=https://www.redhat.com) Dec 6 03:38:46 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully. Dec 6 03:38:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b. Dec 6 03:38:46 localhost podman[79822]: 2025-12-06 08:38:46.915928488 +0000 UTC m=+0.080920400 container health_status 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, version=17.1.12, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step4, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, managed_by=tripleo_ansible, vendor=Red Hat, Inc., architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, distribution-scope=public, tcib_managed=true, batch=17.1_20251118.1) Dec 6 03:38:46 localhost systemd[1]: tmp-crun.pv4GIQ.mount: Deactivated successfully. 
Dec 6 03:38:47 localhost podman[79822]: 2025-12-06 08:38:47.29347553 +0000 UTC m=+0.458467422 container exec_died 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, version=17.1.12, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, config_id=tripleo_step4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git) Dec 6 03:38:47 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: Deactivated successfully. Dec 6 03:38:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. Dec 6 03:38:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. Dec 6 03:38:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. 
Dec 6 03:38:50 localhost podman[79843]: 2025-12-06 08:38:50.910189182 +0000 UTC m=+0.075822884 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, batch=17.1_20251118.1, build-date=2025-11-18T23:34:05Z, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, vcs-type=git, managed_by=tripleo_ansible, release=1761123044, version=17.1.12, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, config_id=tripleo_step4, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://www.redhat.com) Dec 6 03:38:50 localhost podman[79843]: 2025-12-06 08:38:50.960270996 +0000 UTC m=+0.125904698 container exec_died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_controller, io.buildah.version=1.41.4, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, name=rhosp17/openstack-ovn-controller, managed_by=tripleo_ansible, version=17.1.12, io.openshift.expose-services=, batch=17.1_20251118.1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 
'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2025-11-18T23:34:05Z, config_id=tripleo_step4) Dec 6 03:38:50 localhost systemd[1]: tmp-crun.Pon1pI.mount: Deactivated successfully. Dec 6 03:38:50 localhost podman[79845]: 2025-12-06 08:38:50.978983829 +0000 UTC m=+0.135702568 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.4, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, tcib_managed=true, architecture=x86_64, config_id=tripleo_step4, vcs-type=git, container_name=ovn_metadata_agent, url=https://www.redhat.com, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, batch=17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, 
vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12) Dec 6 03:38:50 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Deactivated successfully. 
Dec 6 03:38:51 localhost podman[79844]: 2025-12-06 08:38:51.039232274 +0000 UTC m=+0.198934884 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_id=tripleo_step1, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, url=https://www.redhat.com, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, container_name=metrics_qdr, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, 
vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, tcib_managed=true, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 6 03:38:51 localhost podman[79845]: 2025-12-06 08:38:51.049193719 +0000 UTC m=+0.205912498 container exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, architecture=x86_64, build-date=2025-11-19T00:14:25Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, vendor=Red Hat, Inc., version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent) Dec 6 03:38:51 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Deactivated successfully. 
Dec 6 03:38:51 localhost podman[79844]: 2025-12-06 08:38:51.263057628 +0000 UTC m=+0.422760208 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, vcs-type=git, batch=17.1_20251118.1, io.openshift.expose-services=, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, tcib_managed=true, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z) Dec 6 03:38:51 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 03:38:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007. Dec 6 03:38:55 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 6 03:38:55 localhost recover_tripleo_nova_virtqemud[79922]: 61814 Dec 6 03:38:55 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 6 03:38:55 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 6 03:38:55 localhost systemd[1]: tmp-crun.fBfC7z.mount: Deactivated successfully. 
Dec 6 03:38:55 localhost podman[79920]: 2025-12-06 08:38:55.916050196 +0000 UTC m=+0.082088055 container health_status 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, architecture=x86_64, io.buildah.version=1.41.4, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, distribution-scope=public, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 6 03:38:55 localhost podman[79920]: 2025-12-06 08:38:55.946748926 +0000 UTC m=+0.112786765 container exec_died 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, architecture=x86_64, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, maintainer=OpenStack TripleO Team, version=17.1.12, vendor=Red Hat, Inc., 
build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
com.redhat.component=openstack-nova-compute-container, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, container_name=nova_compute, config_id=tripleo_step5, distribution-scope=public, managed_by=tripleo_ansible, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute) Dec 6 03:38:55 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Deactivated successfully. Dec 6 03:39:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc. Dec 6 03:39:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. Dec 6 03:39:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9. Dec 6 03:39:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. Dec 6 03:39:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6. Dec 6 03:39:16 localhost systemd[1]: tmp-crun.LE9irJ.mount: Deactivated successfully. 
Dec 6 03:39:16 localhost podman[79949]: 2025-12-06 08:39:16.927676604 +0000 UTC m=+0.088310016 container health_status 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, build-date=2025-11-18T22:49:32Z, distribution-scope=public, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, vcs-type=git, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., tcib_managed=true, version=17.1.12, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.openshift.expose-services=, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron) Dec 6 03:39:16 localhost systemd[1]: tmp-crun.bfnr6s.mount: Deactivated successfully. Dec 6 03:39:16 localhost podman[79949]: 2025-12-06 08:39:16.940152745 +0000 UTC m=+0.100786107 container exec_died 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, tcib_managed=true, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:32Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, distribution-scope=public, architecture=x86_64, name=rhosp17/openstack-cron, release=1761123044, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1) Dec 6 03:39:16 localhost systemd[1]: 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.service: Deactivated successfully. 
Dec 6 03:39:16 localhost podman[79950]: 2025-12-06 08:39:16.979675186 +0000 UTC m=+0.131750146 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, architecture=x86_64, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.buildah.version=1.41.4, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', 
'/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, build-date=2025-11-18T22:51:28Z, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd) Dec 6 03:39:16 localhost podman[79950]: 2025-12-06 08:39:16.990002812 +0000 UTC m=+0.142077762 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, batch=17.1_20251118.1, config_id=tripleo_step3, distribution-scope=public, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, build-date=2025-11-18T22:51:28Z, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-collectd, version=17.1.12) Dec 6 03:39:17 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully. 
Dec 6 03:39:17 localhost podman[79957]: 2025-12-06 08:39:17.034618289 +0000 UTC m=+0.183715288 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.component=openstack-iscsid-container, architecture=x86_64, build-date=2025-11-18T23:44:13Z, url=https://www.redhat.com, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, distribution-scope=public, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, release=1761123044, name=rhosp17/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 6 03:39:17 localhost podman[79958]: 2025-12-06 08:39:16.941648892 +0000 UTC m=+0.083928902 container health_status b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, config_id=tripleo_step4, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, build-date=2025-11-19T00:12:45Z, url=https://www.redhat.com, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-ipmi) Dec 6 03:39:17 localhost podman[79957]: 2025-12-06 08:39:17.071861349 +0000 UTC m=+0.220958318 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, architecture=x86_64, build-date=2025-11-18T23:44:13Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 
'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1761123044, vendor=Red Hat, Inc., vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, name=rhosp17/openstack-iscsid, io.openshift.expose-services=, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, config_id=tripleo_step3, com.redhat.component=openstack-iscsid-container, container_name=iscsid, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 6 03:39:17 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully. 
Dec 6 03:39:17 localhost podman[79951]: 2025-12-06 08:39:17.085126035 +0000 UTC m=+0.236983448 container health_status a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, url=https://www.redhat.com, architecture=x86_64, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_compute, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, build-date=2025-11-19T00:11:48Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, io.openshift.expose-services=, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 6 03:39:17 localhost podman[79958]: 2025-12-06 08:39:17.123079267 +0000 UTC m=+0.265359257 container exec_died b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, build-date=2025-11-19T00:12:45Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, tcib_managed=true, distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi, architecture=x86_64, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-ipmi-container, version=17.1.12, config_id=tripleo_step4) Dec 6 03:39:17 localhost systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: Deactivated successfully. 
Dec 6 03:39:17 localhost podman[79951]: 2025-12-06 08:39:17.139093758 +0000 UTC m=+0.290951171 container exec_died a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, name=rhosp17/openstack-ceilometer-compute, io.buildah.version=1.41.4, version=17.1.12, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, io.openshift.expose-services=, url=https://www.redhat.com) Dec 6 03:39:17 localhost systemd[1]: a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.service: Deactivated successfully. Dec 6 03:39:17 localhost systemd[1]: session-33.scope: Deactivated successfully. Dec 6 03:39:17 localhost systemd[1]: session-33.scope: Consumed 5.637s CPU time. Dec 6 03:39:17 localhost systemd-logind[766]: Session 33 logged out. Waiting for processes to exit. Dec 6 03:39:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b. Dec 6 03:39:17 localhost systemd-logind[766]: Removed session 33. 
Dec 6 03:39:17 localhost podman[80062]: 2025-12-06 08:39:17.546455003 +0000 UTC m=+0.086772659 container health_status 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, vendor=Red Hat, Inc., tcib_managed=true, version=17.1.12, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, name=rhosp17/openstack-nova-compute) Dec 6 03:39:17 localhost podman[80062]: 2025-12-06 08:39:17.901586439 +0000 UTC m=+0.441904045 container exec_died 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_id=tripleo_step4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, distribution-scope=public, maintainer=OpenStack TripleO Team, tcib_managed=true, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc.) Dec 6 03:39:17 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: Deactivated successfully. Dec 6 03:39:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. Dec 6 03:39:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. Dec 6 03:39:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. 
Dec 6 03:39:21 localhost podman[80131]: 2025-12-06 08:39:21.913896566 +0000 UTC m=+0.076775442 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, name=rhosp17/openstack-qdrouterd, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, managed_by=tripleo_ansible, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=) Dec 6 03:39:21 localhost systemd[1]: tmp-crun.K2gCo9.mount: Deactivated successfully. Dec 6 03:39:21 localhost podman[80130]: 2025-12-06 08:39:21.969012274 +0000 UTC m=+0.132261951 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, architecture=x86_64, container_name=ovn_controller, tcib_managed=true, vcs-type=git, name=rhosp17/openstack-ovn-controller, url=https://www.redhat.com, version=17.1.12, config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.openshift.expose-services=, distribution-scope=public, 
config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 6 03:39:22 localhost podman[80130]: 2025-12-06 08:39:22.023189983 +0000 UTC m=+0.186439670 container exec_died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, container_name=ovn_controller, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, release=1761123044, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, vcs-type=git, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, architecture=x86_64, name=rhosp17/openstack-ovn-controller, config_id=tripleo_step4, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, build-date=2025-11-18T23:34:05Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com) Dec 6 03:39:22 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Deactivated successfully. 
Dec 6 03:39:22 localhost podman[80132]: 2025-12-06 08:39:22.023551095 +0000 UTC m=+0.184170712 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, 
vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, io.openshift.expose-services=, config_id=tripleo_step4, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, version=17.1.12, container_name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, build-date=2025-11-19T00:14:25Z) Dec 6 03:39:22 localhost podman[80131]: 2025-12-06 08:39:22.102130681 +0000 UTC m=+0.265009547 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, release=1761123044, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, managed_by=tripleo_ansible, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 6 03:39:22 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. 
Dec 6 03:39:22 localhost podman[80132]: 2025-12-06 08:39:22.160740466 +0000 UTC m=+0.321360063 container exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, managed_by=tripleo_ansible, container_name=ovn_metadata_agent, distribution-scope=public, release=1761123044, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, io.openshift.expose-services=, tcib_managed=true) Dec 6 03:39:22 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Deactivated successfully. Dec 6 03:39:22 localhost systemd[1]: tmp-crun.OL1NaM.mount: Deactivated successfully. Dec 6 03:39:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007. 
Dec 6 03:39:26 localhost podman[80203]: 2025-12-06 08:39:26.915683566 +0000 UTC m=+0.080947471 container health_status 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.openshift.expose-services=, managed_by=tripleo_ansible, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', 
'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, container_name=nova_compute, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, architecture=x86_64, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, distribution-scope=public) Dec 6 03:39:26 localhost podman[80203]: 2025-12-06 08:39:26.94425154 +0000 UTC m=+0.109515415 container exec_died 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 
'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.openshift.expose-services=, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, container_name=nova_compute, 
name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, tcib_managed=true, batch=17.1_20251118.1, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, summary=Red Hat OpenStack Platform 17.1 nova-compute) Dec 6 03:39:26 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Deactivated successfully. Dec 6 03:39:29 localhost sshd[80229]: main: sshd: ssh-rsa algorithm is disabled Dec 6 03:39:29 localhost systemd-logind[766]: New session 34 of user zuul. Dec 6 03:39:29 localhost systemd[1]: Started Session 34 of User zuul. Dec 6 03:39:30 localhost python3[80248]: ansible-ansible.legacy.dnf Invoked with name=['systemd-container'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None Dec 6 03:39:40 localhost sshd[80312]: main: sshd: ssh-rsa algorithm is disabled Dec 6 03:39:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc. Dec 6 03:39:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. Dec 6 03:39:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9. 
Dec 6 03:39:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. Dec 6 03:39:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6. Dec 6 03:39:47 localhost podman[80331]: 2025-12-06 08:39:47.925234011 +0000 UTC m=+0.081629717 container health_status a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, container_name=ceilometer_agent_compute, release=1761123044, vcs-type=git, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:11:48Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, tcib_managed=true, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, version=17.1.12, config_id=tripleo_step4, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 6 03:39:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b. 
Dec 6 03:39:47 localhost podman[80331]: 2025-12-06 08:39:47.960116078 +0000 UTC m=+0.116511814 container exec_died a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.expose-services=, batch=17.1_20251118.1, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute, architecture=x86_64, url=https://www.redhat.com, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, vendor=Red Hat, Inc., build-date=2025-11-19T00:11:48Z) Dec 6 03:39:47 localhost systemd[1]: a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.service: Deactivated successfully. Dec 6 03:39:47 localhost podman[80333]: 2025-12-06 08:39:47.980989981 +0000 UTC m=+0.133474348 container health_status b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, batch=17.1_20251118.1, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, release=1761123044, vcs-type=git, container_name=ceilometer_agent_ipmi, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., version=17.1.12, tcib_managed=true, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.expose-services=, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, build-date=2025-11-19T00:12:45Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Dec 6 03:39:48 localhost podman[80329]: 2025-12-06 08:39:48.041073074 +0000 UTC m=+0.197219212 container health_status 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, container_name=logrotate_crond, release=1761123044, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, version=17.1.12, description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, build-date=2025-11-18T22:49:32Z, distribution-scope=public, name=rhosp17/openstack-cron, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, io.openshift.expose-services=) Dec 6 03:39:48 localhost podman[80333]: 2025-12-06 08:39:48.044116776 +0000 UTC m=+0.196601193 container exec_died b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, build-date=2025-11-19T00:12:45Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat 
OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, tcib_managed=true, name=rhosp17/openstack-ceilometer-ipmi, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676) Dec 6 03:39:48 localhost systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: Deactivated successfully. 
Dec 6 03:39:48 localhost podman[80329]: 2025-12-06 08:39:48.078084626 +0000 UTC m=+0.234230754 container exec_died 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.component=openstack-cron-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, io.openshift.expose-services=, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, name=rhosp17/openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, container_name=logrotate_crond, build-date=2025-11-18T22:49:32Z, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-type=git, batch=17.1_20251118.1) Dec 6 03:39:48 localhost systemd[1]: 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.service: Deactivated successfully. Dec 6 03:39:48 localhost podman[80388]: 2025-12-06 08:39:48.090151481 +0000 UTC m=+0.141713988 container health_status 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, url=https://www.redhat.com, distribution-scope=public, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, tcib_managed=true, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute) Dec 6 03:39:48 localhost podman[80330]: 2025-12-06 08:39:48.135207029 +0000 UTC m=+0.291597315 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, version=17.1.12, name=rhosp17/openstack-collectd, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2025-11-18T22:51:28Z, batch=17.1_20251118.1, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, container_name=collectd, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat 
OpenStack Platform 17.1 collectd, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd) Dec 6 03:39:48 localhost podman[80332]: 2025-12-06 08:39:48.205928703 +0000 UTC m=+0.357551114 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, tcib_managed=true, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, architecture=x86_64, config_id=tripleo_step3, managed_by=tripleo_ansible, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, release=1761123044, io.buildah.version=1.41.4, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid) Dec 6 03:39:48 localhost podman[80332]: 2025-12-06 08:39:48.21803938 +0000 UTC m=+0.369661811 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, io.buildah.version=1.41.4, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, tcib_managed=true, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, container_name=iscsid, name=rhosp17/openstack-iscsid, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 6 03:39:48 localhost podman[80330]: 2025-12-06 08:39:48.227814457 +0000 UTC m=+0.384204793 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, container_name=collectd, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible, io.buildah.version=1.41.4, release=1761123044, summary=Red Hat OpenStack Platform 17.1 collectd, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, url=https://www.redhat.com, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, 
build-date=2025-11-18T22:51:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3) Dec 6 03:39:48 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully. Dec 6 03:39:48 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully. Dec 6 03:39:48 localhost podman[80388]: 2025-12-06 08:39:48.477793277 +0000 UTC m=+0.529355804 container exec_died 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, release=1761123044, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, vendor=Red Hat, Inc., config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, container_name=nova_migration_target, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, tcib_managed=true, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 6 03:39:48 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: Deactivated successfully. Dec 6 03:39:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. Dec 6 03:39:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. Dec 6 03:39:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. Dec 6 03:39:52 localhost systemd[1]: tmp-crun.Ilslfu.mount: Deactivated successfully. 
Dec 6 03:39:52 localhost podman[80463]: 2025-12-06 08:39:52.932159418 +0000 UTC m=+0.096591841 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, name=rhosp17/openstack-ovn-controller, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., managed_by=tripleo_ansible, version=17.1.12, io.openshift.expose-services=, build-date=2025-11-18T23:34:05Z, vcs-type=git, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', 
'/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, batch=17.1_20251118.1, release=1761123044) Dec 6 03:39:52 localhost podman[80463]: 2025-12-06 08:39:52.959173948 +0000 UTC m=+0.123606371 container exec_died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, architecture=x86_64, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, build-date=2025-11-18T23:34:05Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, io.openshift.expose-services=, tcib_managed=true, url=https://www.redhat.com, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, container_name=ovn_controller) Dec 6 03:39:52 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Deactivated successfully. Dec 6 03:39:53 localhost systemd[1]: tmp-crun.UhnBlX.mount: Deactivated successfully. Dec 6 03:39:53 localhost podman[80464]: 2025-12-06 08:39:53.021214349 +0000 UTC m=+0.182288829 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, container_name=metrics_qdr, architecture=x86_64, version=17.1.12, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.41.4, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 6 03:39:53 localhost podman[80465]: 2025-12-06 08:39:53.079741904 +0000 UTC m=+0.237685099 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, batch=17.1_20251118.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, version=17.1.12, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, url=https://www.redhat.com, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044) Dec 6 03:39:53 localhost podman[80465]: 2025-12-06 08:39:53.123071417 +0000 UTC m=+0.281014572 container exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, batch=17.1_20251118.1, version=17.1.12, build-date=2025-11-19T00:14:25Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, distribution-scope=public, container_name=ovn_metadata_agent, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git) Dec 6 03:39:53 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Deactivated successfully. 
Dec 6 03:39:53 localhost podman[80464]: 2025-12-06 08:39:53.249058028 +0000 UTC m=+0.410132438 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, distribution-scope=public, vcs-type=git, architecture=x86_64, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:46Z, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, io.buildah.version=1.41.4, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, release=1761123044) Dec 6 03:39:53 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 03:39:57 localhost python3[80553]: ansible-ansible.legacy.dnf Invoked with name=['sos'] state=latest allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None Dec 6 03:39:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007. 
Dec 6 03:39:57 localhost podman[80555]: 2025-12-06 08:39:57.895130213 +0000 UTC m=+0.062433134 container health_status 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, managed_by=tripleo_ansible, config_id=tripleo_step5, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12) Dec 6 03:39:57 localhost podman[80555]: 2025-12-06 08:39:57.923099081 +0000 UTC m=+0.090402022 container exec_died 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.openshift.expose-services=, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=Red Hat OpenStack 
Platform 17.1 nova-compute, url=https://www.redhat.com, version=17.1.12, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, config_id=tripleo_step5, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', 
'/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, architecture=x86_64) Dec 6 03:39:57 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Deactivated successfully. Dec 6 03:40:00 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Dec 6 03:40:00 localhost systemd[1]: Starting man-db-cache-update.service... Dec 6 03:40:00 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Dec 6 03:40:01 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully. Dec 6 03:40:01 localhost systemd[1]: Finished man-db-cache-update.service. Dec 6 03:40:01 localhost systemd[1]: run-rea2e02d7346144b99da475d0d939a2a4.service: Deactivated successfully. Dec 6 03:40:01 localhost systemd[1]: run-reab5424f6eeb43719b36a37c932c6b3a.service: Deactivated successfully. 
Dec 6 03:40:09 localhost ceph-osd[31726]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Dec 6 03:40:09 localhost ceph-osd[31726]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.1 total, 600.0 interval#012Cumulative writes: 5168 writes, 22K keys, 5168 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 5168 writes, 575 syncs, 8.99 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Dec 6 03:40:12 localhost ceph-osd[32665]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Dec 6 03:40:12 localhost ceph-osd[32665]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.2 total, 600.0 interval#012Cumulative writes: 4467 writes, 20K keys, 4467 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 4467 writes, 521 syncs, 8.57 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Dec 6 03:40:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc. Dec 6 03:40:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. Dec 6 03:40:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b. 
Dec 6 03:40:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9. Dec 6 03:40:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. Dec 6 03:40:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6. Dec 6 03:40:18 localhost podman[80732]: 2025-12-06 08:40:18.953137169 +0000 UTC m=+0.106949815 container health_status 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, vendor=Red Hat, Inc., version=17.1.12, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, container_name=nova_migration_target, config_id=tripleo_step4, vcs-type=git, io.openshift.expose-services=) Dec 6 03:40:18 localhost systemd[1]: tmp-crun.QvOZzq.mount: Deactivated successfully. 
Dec 6 03:40:18 localhost podman[80730]: 2025-12-06 08:40:18.991924295 +0000 UTC m=+0.149122103 container health_status 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.buildah.version=1.41.4, vcs-type=git, build-date=2025-11-18T22:49:32Z, description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, batch=17.1_20251118.1, architecture=x86_64, distribution-scope=public, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, config_id=tripleo_step4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 6 03:40:19 localhost systemd[1]: tmp-crun.UwHNXJ.mount: Deactivated successfully. Dec 6 03:40:19 localhost podman[80731]: 2025-12-06 08:40:19.048034627 +0000 UTC m=+0.203545633 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20251118.1, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, config_id=tripleo_step3, com.redhat.component=openstack-collectd-container, release=1761123044, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, build-date=2025-11-18T22:51:28Z, version=17.1.12, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd) Dec 6 03:40:19 localhost podman[80731]: 2025-12-06 08:40:19.056475093 +0000 UTC m=+0.211986099 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, 
com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:51:28Z, container_name=collectd, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-type=git) Dec 6 03:40:19 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully. Dec 6 03:40:19 localhost podman[80730]: 2025-12-06 08:40:19.075915192 +0000 UTC m=+0.233113010 container exec_died 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, container_name=logrotate_crond, build-date=2025-11-18T22:49:32Z, name=rhosp17/openstack-cron, release=1761123044, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, version=17.1.12, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, com.redhat.component=openstack-cron-container, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 6 03:40:19 localhost systemd[1]: 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.service: Deactivated successfully. 
Dec 6 03:40:19 localhost podman[80739]: 2025-12-06 08:40:19.139271274 +0000 UTC m=+0.285029055 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, version=17.1.12, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-iscsid, tcib_managed=true, architecture=x86_64, vendor=Red Hat, Inc., batch=17.1_20251118.1, vcs-type=git, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, io.buildah.version=1.41.4, com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com, container_name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, release=1761123044) Dec 6 03:40:19 localhost podman[80739]: 2025-12-06 08:40:19.171343376 +0000 UTC m=+0.317101137 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.component=openstack-iscsid-container, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, tcib_managed=true) Dec 6 03:40:19 localhost podman[80733]: 2025-12-06 08:40:19.197441318 +0000 UTC m=+0.347797978 container health_status a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, managed_by=tripleo_ansible, vendor=Red Hat, Inc., distribution-scope=public, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, batch=17.1_20251118.1, release=1761123044, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, url=https://www.redhat.com, vcs-type=git, build-date=2025-11-19T00:11:48Z, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, container_name=ceilometer_agent_compute, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-compute) Dec 6 03:40:19 
localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully. Dec 6 03:40:19 localhost podman[80749]: 2025-12-06 08:40:19.241875786 +0000 UTC m=+0.384223804 container health_status b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, build-date=2025-11-19T00:12:45Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, release=1761123044, tcib_managed=true, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, architecture=x86_64, vcs-type=git, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 6 03:40:19 localhost podman[80749]: 2025-12-06 08:40:19.267259455 +0000 UTC m=+0.409607483 container exec_died b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, architecture=x86_64, name=rhosp17/openstack-ceilometer-ipmi, release=1761123044, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 6 03:40:19 localhost podman[80733]: 2025-12-06 08:40:19.276448004 +0000 UTC m=+0.426804634 container exec_died a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_id=tripleo_step4, version=17.1.12, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp17/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, url=https://www.redhat.com, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, vcs-type=git, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, distribution-scope=public, maintainer=OpenStack TripleO Team, architecture=x86_64, container_name=ceilometer_agent_compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, 
io.openshift.expose-services=, release=1761123044) Dec 6 03:40:19 localhost systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: Deactivated successfully. Dec 6 03:40:19 localhost systemd[1]: a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.service: Deactivated successfully. Dec 6 03:40:19 localhost podman[80732]: 2025-12-06 08:40:19.32579987 +0000 UTC m=+0.479612446 container exec_died 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, distribution-scope=public, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target) Dec 6 03:40:19 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: Deactivated successfully. Dec 6 03:40:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. Dec 6 03:40:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. Dec 6 03:40:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. Dec 6 03:40:23 localhost systemd[1]: tmp-crun.5vnnM5.mount: Deactivated successfully. 
Dec 6 03:40:23 localhost podman[80905]: 2025-12-06 08:40:23.917894169 +0000 UTC m=+0.074554002 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, url=https://www.redhat.com, vendor=Red Hat, Inc., architecture=x86_64, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, managed_by=tripleo_ansible, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, name=rhosp17/openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2025-11-18T23:34:05Z) Dec 6 03:40:23 localhost podman[80905]: 2025-12-06 08:40:23.935031479 +0000 UTC m=+0.091691342 container exec_died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, managed_by=tripleo_ansible, release=1761123044, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.openshift.expose-services=, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:34:05Z, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., url=https://www.redhat.com, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', 
'/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4) Dec 6 03:40:23 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Deactivated successfully. Dec 6 03:40:23 localhost podman[80906]: 2025-12-06 08:40:23.982240881 +0000 UTC m=+0.138540163 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, config_id=tripleo_step1, vendor=Red Hat, Inc., architecture=x86_64, maintainer=OpenStack TripleO Team, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, version=17.1.12, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, url=https://www.redhat.com, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4) Dec 6 03:40:24 localhost podman[80907]: 2025-12-06 08:40:24.032273078 +0000 UTC m=+0.186473166 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, tcib_managed=true, io.openshift.expose-services=, container_name=ovn_metadata_agent, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, architecture=x86_64, 
release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, version=17.1.12, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4) Dec 6 03:40:24 localhost podman[80907]: 2025-12-06 08:40:24.097099314 +0000 UTC m=+0.251299392 container exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, url=https://www.redhat.com, vcs-type=git, build-date=2025-11-19T00:14:25Z, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, io.buildah.version=1.41.4, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, config_id=tripleo_step4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}) Dec 6 03:40:24 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Deactivated successfully. 
Dec 6 03:40:24 localhost podman[80906]: 2025-12-06 08:40:24.185476193 +0000 UTC m=+0.341775435 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, vcs-type=git, batch=17.1_20251118.1, version=17.1.12, io.openshift.expose-services=, container_name=metrics_qdr, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., 
tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:46Z) Dec 6 03:40:24 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 03:40:26 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 6 03:40:26 localhost recover_tripleo_nova_virtqemud[80981]: 61814 Dec 6 03:40:26 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 6 03:40:26 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 6 03:40:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007. 
Dec 6 03:40:28 localhost podman[80982]: 2025-12-06 08:40:28.908485571 +0000 UTC m=+0.070590151 container health_status 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, vcs-type=git, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, managed_by=tripleo_ansible, version=17.1.12, architecture=x86_64, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5) Dec 6 03:40:28 localhost podman[80982]: 2025-12-06 08:40:28.961294653 +0000 UTC m=+0.123399333 container exec_died 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, version=17.1.12, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack 
TripleO Team, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step5, vendor=Red Hat, Inc.) Dec 6 03:40:28 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Deactivated successfully. Dec 6 03:40:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc. Dec 6 03:40:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. Dec 6 03:40:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b. Dec 6 03:40:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9. Dec 6 03:40:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. Dec 6 03:40:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6. 
Dec 6 03:40:49 localhost podman[81154]: 2025-12-06 08:40:49.95900606 +0000 UTC m=+0.101220150 container health_status b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, name=rhosp17/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, container_name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:12:45Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
com.redhat.component=openstack-ceilometer-ipmi-container, tcib_managed=true, config_id=tripleo_step4, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, version=17.1.12, url=https://www.redhat.com, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 6 03:40:49 localhost systemd[1]: tmp-crun.ce7y2a.mount: Deactivated successfully. Dec 6 03:40:49 localhost podman[81140]: 2025-12-06 08:40:49.979067319 +0000 UTC m=+0.135789139 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:51:28Z, managed_by=tripleo_ansible, batch=17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.component=openstack-collectd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, container_name=collectd, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, maintainer=OpenStack TripleO Team, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.buildah.version=1.41.4, name=rhosp17/openstack-collectd) Dec 6 03:40:50 localhost podman[81142]: 2025-12-06 08:40:50.01801435 +0000 UTC m=+0.173679788 container health_status a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, version=17.1.12, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, container_name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, architecture=x86_64, vendor=Red Hat, Inc., batch=17.1_20251118.1, build-date=2025-11-19T00:11:48Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, release=1761123044, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676) Dec 6 03:40:50 localhost podman[81139]: 2025-12-06 08:40:50.024461686 +0000 UTC m=+0.180624259 container health_status 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, version=17.1.12, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:32Z, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, config_id=tripleo_step4, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, tcib_managed=true, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, name=rhosp17/openstack-cron) Dec 6 03:40:50 localhost podman[81139]: 2025-12-06 08:40:50.033989244 +0000 UTC m=+0.190151827 container exec_died 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, container_name=logrotate_crond, io.buildah.version=1.41.4, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, vcs-type=git, distribution-scope=public, build-date=2025-11-18T22:49:32Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, url=https://www.redhat.com, release=1761123044, version=17.1.12, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cron) Dec 6 03:40:50 localhost podman[81154]: 2025-12-06 08:40:50.036959934 +0000 UTC m=+0.179174044 container exec_died b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, 
managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-ipmi, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, build-date=2025-11-19T00:12:45Z, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, version=17.1.12, distribution-scope=public, 
io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 6 03:40:50 localhost podman[81142]: 2025-12-06 08:40:50.047329929 +0000 UTC m=+0.202995317 container exec_died a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 6 03:40:50 localhost systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: Deactivated successfully. Dec 6 03:40:50 localhost systemd[1]: a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.service: Deactivated successfully. 
Dec 6 03:40:50 localhost podman[81140]: 2025-12-06 08:40:50.09153896 +0000 UTC m=+0.248260760 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, architecture=x86_64, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, release=1761123044, build-date=2025-11-18T22:51:28Z, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc., container_name=collectd, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, distribution-scope=public, com.redhat.component=openstack-collectd-container) Dec 6 03:40:50 localhost systemd[1]: 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.service: Deactivated successfully. Dec 6 03:40:50 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully. 
Dec 6 03:40:50 localhost podman[81143]: 2025-12-06 08:40:50.124976634 +0000 UTC m=+0.276776955 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.12, tcib_managed=true, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, 
Inc., url=https://www.redhat.com, vcs-type=git, build-date=2025-11-18T23:44:13Z, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp17/openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public) Dec 6 03:40:50 localhost podman[81143]: 2025-12-06 08:40:50.137264866 +0000 UTC m=+0.289065197 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true, 
build-date=2025-11-18T23:44:13Z, io.openshift.expose-services=, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, distribution-scope=public, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc.) Dec 6 03:40:50 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully. 
Dec 6 03:40:50 localhost podman[81141]: 2025-12-06 08:40:49.93821582 +0000 UTC m=+0.091695402 container health_status 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.buildah.version=1.41.4, vendor=Red Hat, Inc., batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, release=1761123044, distribution-scope=public, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64) Dec 6 03:40:50 localhost podman[81141]: 2025-12-06 08:40:50.293303538 +0000 UTC m=+0.446783120 container exec_died 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vendor=Red Hat, Inc., release=1761123044, name=rhosp17/openstack-nova-compute, tcib_managed=true, io.buildah.version=1.41.4, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git) Dec 6 03:40:50 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: Deactivated successfully. 
Dec 6 03:40:52 localhost python3[81286]: ansible-ansible.legacy.command Invoked with _raw_params=subscription-manager repos --disable rhel-9-for-x86_64-baseos-eus-rpms --disable rhel-9-for-x86_64-appstream-eus-rpms --disable rhel-9-for-x86_64-highavailability-eus-rpms --disable openstack-17.1-for-rhel-9-x86_64-rpms --disable fast-datapath-for-rhel-9-x86_64-rpms _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 03:40:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. Dec 6 03:40:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. Dec 6 03:40:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. Dec 6 03:40:54 localhost systemd[1]: tmp-crun.67iZVj.mount: Deactivated successfully. 
Dec 6 03:40:55 localhost podman[81291]: 2025-12-06 08:40:55.000922559 +0000 UTC m=+0.153660121 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:46Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, 
tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, distribution-scope=public, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, release=1761123044, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, container_name=metrics_qdr) Dec 6 03:40:55 localhost podman[81290]: 2025-12-06 08:40:54.955865852 +0000 UTC m=+0.111077318 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.buildah.version=1.41.4, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, version=17.1.12, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:34:05Z, managed_by=tripleo_ansible, container_name=ovn_controller, io.openshift.expose-services=, distribution-scope=public, name=rhosp17/openstack-ovn-controller, config_id=tripleo_step4, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, release=1761123044, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 6 03:40:55 localhost podman[81290]: 2025-12-06 08:40:55.039425087 +0000 UTC m=+0.194636563 container exec_died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, distribution-scope=public, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, architecture=x86_64, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:34:05Z, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, maintainer=OpenStack TripleO Team, release=1761123044, container_name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red 
Hat, Inc., vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 6 03:40:55 localhost podman[81292]: 2025-12-06 08:40:55.053364419 +0000 UTC m=+0.198193211 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, batch=17.1_20251118.1, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 
1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, io.openshift.expose-services=, build-date=2025-11-19T00:14:25Z, 
container_name=ovn_metadata_agent) Dec 6 03:40:55 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Deactivated successfully. Dec 6 03:40:55 localhost podman[81292]: 2025-12-06 08:40:55.126385614 +0000 UTC m=+0.271214416 container exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, release=1761123044, distribution-scope=public, url=https://www.redhat.com, batch=17.1_20251118.1, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, io.openshift.expose-services=, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, architecture=x86_64, vendor=Red Hat, Inc.) Dec 6 03:40:55 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Deactivated successfully. 
Dec 6 03:40:55 localhost podman[81291]: 2025-12-06 08:40:55.214421633 +0000 UTC m=+0.367159185 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., distribution-scope=public, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, config_id=tripleo_step1, 
batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.4, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 6 03:40:55 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 03:40:56 localhost rhsm-service[6617]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. Dec 6 03:40:56 localhost rhsm-service[6617]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. Dec 6 03:40:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007. 
Dec 6 03:40:59 localhost podman[81492]: 2025-12-06 08:40:59.908892555 +0000 UTC m=+0.068994473 container health_status 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, distribution-scope=public, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, architecture=x86_64, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.openshift.expose-services=, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, version=17.1.12, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 6 03:40:59 localhost podman[81492]: 2025-12-06 08:40:59.941103192 +0000 UTC m=+0.101205110 container exec_died 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 
'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, batch=17.1_20251118.1, release=1761123044, container_name=nova_compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, description=Red Hat OpenStack 
Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, version=17.1.12, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z) Dec 6 03:40:59 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Deactivated successfully. Dec 6 03:41:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc. Dec 6 03:41:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. Dec 6 03:41:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b. Dec 6 03:41:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9. Dec 6 03:41:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. Dec 6 03:41:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6. Dec 6 03:41:20 localhost systemd[1]: tmp-crun.8OjNL9.mount: Deactivated successfully. 
Dec 6 03:41:20 localhost podman[81577]: 2025-12-06 08:41:20.947247016 +0000 UTC m=+0.103269703 container health_status 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, tcib_managed=true, config_id=tripleo_step4, name=rhosp17/openstack-cron, architecture=x86_64, io.openshift.expose-services=, managed_by=tripleo_ansible, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, com.redhat.component=openstack-cron-container, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 6 03:41:20 localhost podman[81577]: 2025-12-06 08:41:20.985103764 +0000 UTC m=+0.141126441 container exec_died 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.buildah.version=1.41.4, config_id=tripleo_step4, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, version=17.1.12, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, managed_by=tripleo_ansible, url=https://www.redhat.com, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, distribution-scope=public, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:32Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 6 03:41:20 localhost systemd[1]: tmp-crun.cgMslG.mount: Deactivated successfully. Dec 6 03:41:20 localhost systemd[1]: 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.service: Deactivated successfully. 
Dec 6 03:41:21 localhost podman[81579]: 2025-12-06 08:41:21.001353797 +0000 UTC m=+0.153780694 container health_status 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, io.openshift.expose-services=, url=https://www.redhat.com, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, architecture=x86_64, io.buildah.version=1.41.4, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target) Dec 6 03:41:21 localhost podman[81578]: 2025-12-06 08:41:21.049089985 +0000 UTC m=+0.203105451 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vendor=Red Hat, Inc., name=rhosp17/openstack-collectd, vcs-type=git, build-date=2025-11-18T22:51:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, architecture=x86_64, config_id=tripleo_step3, io.openshift.expose-services=, url=https://www.redhat.com, version=17.1.12, description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 6 03:41:21 localhost podman[81578]: 2025-12-06 08:41:21.090970584 +0000 UTC m=+0.244986070 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp17/openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, vendor=Red Hat, Inc., architecture=x86_64, config_id=tripleo_step3, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, url=https://www.redhat.com, distribution-scope=public, version=17.1.12, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, tcib_managed=true) Dec 6 03:41:21 localhost podman[81580]: 2025-12-06 08:41:21.101503203 +0000 UTC m=+0.247783484 container health_status a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, release=1761123044, config_id=tripleo_step4, url=https://www.redhat.com, distribution-scope=public, batch=17.1_20251118.1, architecture=x86_64, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, container_name=ceilometer_agent_compute, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z) Dec 6 03:41:21 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully. 
Dec 6 03:41:21 localhost podman[81586]: 2025-12-06 08:41:21.159617776 +0000 UTC m=+0.304418303 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-type=git, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, release=1761123044, url=https://www.redhat.com, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, distribution-scope=public, tcib_managed=true, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.openshift.expose-services=, container_name=iscsid, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container) Dec 6 03:41:21 localhost podman[81586]: 2025-12-06 08:41:21.173075904 +0000 UTC m=+0.317876451 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, release=1761123044, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, build-date=2025-11-18T23:44:13Z, url=https://www.redhat.com, container_name=iscsid, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-iscsid, vcs-type=git, com.redhat.component=openstack-iscsid-container) Dec 6 03:41:21 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully. 
Dec 6 03:41:21 localhost podman[81580]: 2025-12-06 08:41:21.189228834 +0000 UTC m=+0.335509125 container exec_died a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, container_name=ceilometer_agent_compute, config_id=tripleo_step4, io.buildah.version=1.41.4, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, release=1761123044, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, managed_by=tripleo_ansible) Dec 6 03:41:21 localhost systemd[1]: a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.service: Deactivated successfully. Dec 6 03:41:21 localhost podman[81599]: 2025-12-06 08:41:21.25769095 +0000 UTC m=+0.398331470 container health_status b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, 
config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-ipmi-container, version=17.1.12, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible, tcib_managed=true, build-date=2025-11-19T00:12:45Z) Dec 6 03:41:21 localhost podman[81599]: 2025-12-06 08:41:21.290935188 +0000 UTC m=+0.431575708 container exec_died b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, version=17.1.12, io.buildah.version=1.41.4, io.openshift.expose-services=, tcib_managed=true, 
org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, release=1761123044, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 6 03:41:21 localhost systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: Deactivated successfully. Dec 6 03:41:21 localhost podman[81579]: 2025-12-06 08:41:21.359152917 +0000 UTC m=+0.511579864 container exec_died 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, build-date=2025-11-19T00:36:58Z, architecture=x86_64, distribution-scope=public, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, container_name=nova_migration_target, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible) Dec 6 03:41:21 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: Deactivated successfully. Dec 6 03:41:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. Dec 6 03:41:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. Dec 6 03:41:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. 
Dec 6 03:41:25 localhost podman[81751]: 2025-12-06 08:41:25.920619047 +0000 UTC m=+0.083516584 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, tcib_managed=true, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, config_id=tripleo_step4, architecture=x86_64, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', 
'/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2025-11-18T23:34:05Z, version=17.1.12, managed_by=tripleo_ansible, io.openshift.expose-services=, distribution-scope=public, maintainer=OpenStack TripleO Team) Dec 6 03:41:25 localhost podman[81751]: 2025-12-06 08:41:25.975702836 +0000 UTC m=+0.138600373 container exec_died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vcs-type=git, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, build-date=2025-11-18T23:34:05Z, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, container_name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public) Dec 6 03:41:25 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Deactivated successfully. Dec 6 03:41:26 localhost podman[81753]: 2025-12-06 08:41:25.976709488 +0000 UTC m=+0.133824340 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, config_id=tripleo_step4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, vcs-type=git, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, io.buildah.version=1.41.4, batch=17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64, build-date=2025-11-19T00:14:25Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12) Dec 6 03:41:26 localhost podman[81752]: 2025-12-06 08:41:26.033881181 +0000 UTC m=+0.193914271 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, container_name=metrics_qdr, version=17.1.12, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:46Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
release=1761123044, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd) Dec 6 03:41:26 localhost podman[81753]: 2025-12-06 08:41:26.056113305 +0000 UTC m=+0.213228077 container exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:14:25Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1761123044, managed_by=tripleo_ansible, container_name=ovn_metadata_agent, config_id=tripleo_step4, version=17.1.12) Dec 6 03:41:26 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Deactivated successfully. 
Dec 6 03:41:26 localhost podman[81752]: 2025-12-06 08:41:26.235111014 +0000 UTC m=+0.395144114 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., tcib_managed=true, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, config_id=tripleo_step1, build-date=2025-11-18T22:49:46Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, batch=17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, maintainer=OpenStack TripleO Team) Dec 6 03:41:26 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 03:41:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007. Dec 6 03:41:30 localhost podman[81823]: 2025-12-06 08:41:30.915248651 +0000 UTC m=+0.077272244 container health_status 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, release=1761123044, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 
'18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, tcib_managed=true, vcs-type=git, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, 
name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team) Dec 6 03:41:31 localhost podman[81823]: 2025-12-06 08:41:31.016202262 +0000 UTC m=+0.178225805 container exec_died 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_compute, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, distribution-scope=public, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, release=1761123044) Dec 6 03:41:31 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Deactivated successfully. 
Dec 6 03:41:44 localhost sshd[81879]: main: sshd: ssh-rsa algorithm is disabled Dec 6 03:41:45 localhost sshd[81914]: main: sshd: ssh-rsa algorithm is disabled Dec 6 03:41:48 localhost sshd[81931]: main: sshd: ssh-rsa algorithm is disabled Dec 6 03:41:49 localhost sshd[81933]: main: sshd: ssh-rsa algorithm is disabled Dec 6 03:41:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc. Dec 6 03:41:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. Dec 6 03:41:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b. Dec 6 03:41:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9. Dec 6 03:41:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. Dec 6 03:41:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6. Dec 6 03:41:51 localhost systemd[1]: tmp-crun.vAIxSi.mount: Deactivated successfully. 
Dec 6 03:41:51 localhost podman[81937]: 2025-12-06 08:41:51.753038358 +0000 UTC m=+0.124134206 container health_status 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_id=tripleo_step4, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, version=17.1.12, vcs-type=git, io.openshift.expose-services=, 
maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 6 03:41:51 localhost podman[81936]: 2025-12-06 08:41:51.766017261 +0000 UTC m=+0.139231462 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, architecture=x86_64, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp17/openstack-collectd, url=https://www.redhat.com, vcs-type=git, com.redhat.component=openstack-collectd-container, release=1761123044, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.openshift.expose-services=, version=17.1.12, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}) Dec 6 03:41:51 localhost podman[81938]: 2025-12-06 08:41:51.809901132 +0000 UTC m=+0.171642255 container health_status a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, vendor=Red Hat, Inc., url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:11:48Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
io.openshift.expose-services=, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1761123044, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team) Dec 6 03:41:51 localhost podman[81936]: 2025-12-06 08:41:51.824073172 +0000 UTC m=+0.197287393 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, name=rhosp17/openstack-collectd, architecture=x86_64, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, managed_by=tripleo_ansible, url=https://www.redhat.com, distribution-scope=public, config_id=tripleo_step3, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, build-date=2025-11-18T22:51:28Z, tcib_managed=true) Dec 6 03:41:51 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully. 
Dec 6 03:41:51 localhost podman[81945]: 2025-12-06 08:41:51.873040938 +0000 UTC m=+0.232183632 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, container_name=iscsid, name=rhosp17/openstack-iscsid, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, release=1761123044, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_id=tripleo_step3, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., distribution-scope=public, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.4, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12) Dec 6 03:41:51 localhost podman[81945]: 2025-12-06 08:41:51.880750701 +0000 UTC m=+0.239893375 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=iscsid, distribution-scope=public, build-date=2025-11-18T23:44:13Z, com.redhat.component=openstack-iscsid-container, version=17.1.12, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, name=rhosp17/openstack-iscsid, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1761123044, tcib_managed=true, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com) Dec 6 03:41:51 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully. 
Dec 6 03:41:51 localhost podman[81955]: 2025-12-06 08:41:51.936643406 +0000 UTC m=+0.246122565 container health_status b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, name=rhosp17/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, io.openshift.expose-services=, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, container_name=ceilometer_agent_ipmi, vcs-type=git, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, version=17.1.12) Dec 6 03:41:51 localhost podman[81938]: 2025-12-06 08:41:51.945267917 +0000 UTC m=+0.307009030 container exec_died a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, version=17.1.12, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, container_name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-type=git) Dec 6 03:41:51 localhost systemd[1]: a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.service: Deactivated successfully. 
Dec 6 03:41:51 localhost podman[81935]: 2025-12-06 08:41:51.85628376 +0000 UTC m=+0.228414569 container health_status 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, distribution-scope=public, architecture=x86_64, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, build-date=2025-11-18T22:49:32Z, name=rhosp17/openstack-cron, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, container_name=logrotate_crond, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, vcs-type=git, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron) Dec 6 03:41:51 localhost podman[81935]: 2025-12-06 08:41:51.989434867 +0000 UTC m=+0.361565706 container exec_died 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, build-date=2025-11-18T22:49:32Z, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., distribution-scope=public, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, release=1761123044, url=https://www.redhat.com, architecture=x86_64, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 6 03:41:52 localhost systemd[1]: 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.service: Deactivated successfully. 
Dec 6 03:41:52 localhost podman[81955]: 2025-12-06 08:41:52.04560934 +0000 UTC m=+0.355088549 container exec_died b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, url=https://www.redhat.com, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:12:45Z, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.41.4, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, tcib_managed=true, release=1761123044, io.openshift.expose-services=, config_id=tripleo_step4, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 6 03:41:52 localhost systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: Deactivated successfully. Dec 6 03:41:52 localhost podman[81937]: 2025-12-06 08:41:52.110276261 +0000 UTC m=+0.481372159 container exec_died 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, container_name=nova_migration_target, build-date=2025-11-19T00:36:58Z, vcs-type=git, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, batch=17.1_20251118.1, tcib_managed=true, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public) Dec 6 03:41:52 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: Deactivated successfully. Dec 6 03:41:52 localhost sshd[82066]: main: sshd: ssh-rsa algorithm is disabled Dec 6 03:41:54 localhost sshd[82069]: main: sshd: ssh-rsa algorithm is disabled Dec 6 03:41:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. Dec 6 03:41:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. 
Dec 6 03:41:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. Dec 6 03:41:56 localhost systemd[1]: tmp-crun.8zuOwL.mount: Deactivated successfully. Dec 6 03:41:56 localhost podman[82071]: 2025-12-06 08:41:56.944319727 +0000 UTC m=+0.101078957 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:34:05Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, vcs-type=git, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, container_name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, architecture=x86_64, config_id=tripleo_step4, distribution-scope=public) Dec 6 03:41:56 localhost podman[82071]: 2025-12-06 08:41:56.990328712 +0000 UTC m=+0.147087912 container exec_died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, maintainer=OpenStack TripleO Team, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, container_name=ovn_controller, io.buildah.version=1.41.4, vcs-type=git, 
summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ovn-controller, url=https://www.redhat.com, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, batch=17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=) Dec 6 03:41:56 localhost systemd[1]: tmp-crun.KuPQqh.mount: Deactivated successfully. Dec 6 03:41:57 localhost podman[82072]: 2025-12-06 08:41:56.999667185 +0000 UTC m=+0.154792956 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, distribution-scope=public, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, architecture=x86_64, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, version=17.1.12, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true) Dec 6 03:41:57 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Deactivated successfully. 
Dec 6 03:41:57 localhost podman[82073]: 2025-12-06 08:41:57.052977612 +0000 UTC m=+0.204542164 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, release=1761123044, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://www.redhat.com, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, io.buildah.version=1.41.4, name=rhosp17/openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn) Dec 6 03:41:57 localhost podman[82073]: 2025-12-06 08:41:57.106125314 +0000 UTC m=+0.257689846 container exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_id=tripleo_step4, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, vendor=Red Hat, Inc., org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, batch=17.1_20251118.1, release=1761123044, maintainer=OpenStack TripleO Team, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, 
com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, build-date=2025-11-19T00:14:25Z, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, distribution-scope=public, managed_by=tripleo_ansible) Dec 6 03:41:57 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Deactivated successfully. Dec 6 03:41:57 localhost podman[82072]: 2025-12-06 08:41:57.222162202 +0000 UTC m=+0.377288023 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.12, batch=17.1_20251118.1, container_name=metrics_qdr, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, build-date=2025-11-18T22:49:46Z, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.41.4, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 6 03:41:57 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 03:42:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007. 
Dec 6 03:42:02 localhost podman[82148]: 2025-12-06 08:42:02.045137031 +0000 UTC m=+0.051017059 container health_status 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, build-date=2025-11-19T00:36:58Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', 
'/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, container_name=nova_compute, release=1761123044, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, architecture=x86_64, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team) Dec 6 03:42:02 localhost podman[82148]: 2025-12-06 08:42:02.069050046 +0000 UTC m=+0.074930074 container exec_died 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, vcs-type=git, com.redhat.component=openstack-nova-compute-container, 
io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, io.buildah.version=1.41.4, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, batch=17.1_20251118.1) Dec 6 03:42:02 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Deactivated successfully. Dec 6 03:42:02 localhost systemd[1]: session-34.scope: Deactivated successfully. Dec 6 03:42:02 localhost systemd[1]: session-34.scope: Consumed 13.946s CPU time. Dec 6 03:42:02 localhost systemd-logind[766]: Session 34 logged out. Waiting for processes to exit. Dec 6 03:42:02 localhost systemd-logind[766]: Removed session 34. Dec 6 03:42:04 localhost sshd[82174]: main: sshd: ssh-rsa algorithm is disabled Dec 6 03:42:04 localhost systemd-logind[766]: New session 35 of user zuul. Dec 6 03:42:04 localhost systemd[1]: Started Session 35 of User zuul. Dec 6 03:42:04 localhost python3[82193]: ansible-ansible.legacy.command Invoked with _raw_params=subscription-manager repos --disable rhceph-7-tools-for-rhel-9-x86_64-rpms _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 03:42:08 localhost rhsm-service[6617]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. Dec 6 03:42:08 localhost rhsm-service[6617]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. 
Dec 6 03:42:16 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 6 03:42:16 localhost recover_tripleo_nova_virtqemud[82382]: 61814 Dec 6 03:42:16 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 6 03:42:16 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 6 03:42:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc. Dec 6 03:42:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. Dec 6 03:42:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b. Dec 6 03:42:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9. Dec 6 03:42:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. Dec 6 03:42:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6. 
Dec 6 03:42:22 localhost podman[82406]: 2025-12-06 08:42:22.929547802 +0000 UTC m=+0.090593779 container health_status 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, build-date=2025-11-18T22:49:32Z, release=1761123044, vcs-type=git, tcib_managed=true, io.buildah.version=1.41.4, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond, name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp 
openstack osp-17.1 openstack-cron, distribution-scope=public, batch=17.1_20251118.1, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 cron) Dec 6 03:42:22 localhost podman[82406]: 2025-12-06 08:42:22.942252136 +0000 UTC m=+0.103298093 container exec_died 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, managed_by=tripleo_ansible, tcib_managed=true, container_name=logrotate_crond, distribution-scope=public, architecture=x86_64, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, batch=17.1_20251118.1, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.4, name=rhosp17/openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1761123044, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com) Dec 6 03:42:22 localhost systemd[1]: tmp-crun.osZj4M.mount: Deactivated successfully. Dec 6 03:42:22 localhost systemd[1]: 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.service: Deactivated successfully. 
Dec 6 03:42:22 localhost podman[82426]: 2025-12-06 08:42:22.955378075 +0000 UTC m=+0.098025664 container health_status b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.buildah.version=1.41.4, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, vcs-type=git, name=rhosp17/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044) Dec 6 03:42:23 localhost podman[82408]: 2025-12-06 08:42:22.999425581 +0000 UTC m=+0.155196438 container health_status 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, io.openshift.expose-services=, version=17.1.12, io.buildah.version=1.41.4, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 6 03:42:23 localhost podman[82409]: 2025-12-06 08:42:23.038450414 +0000 UTC m=+0.189958041 container health_status a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, vcs-type=git, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, build-date=2025-11-19T00:11:48Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, distribution-scope=public) Dec 6 03:42:23 localhost podman[82407]: 2025-12-06 08:42:23.048864959 +0000 UTC m=+0.206184003 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, release=1761123044, config_id=tripleo_step3, io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, architecture=x86_64, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, container_name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd) Dec 6 03:42:23 localhost podman[82426]: 2025-12-06 08:42:23.080265663 +0000 UTC m=+0.222913282 container exec_died b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, batch=17.1_20251118.1, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, container_name=ceilometer_agent_ipmi) Dec 6 03:42:23 localhost podman[82409]: 2025-12-06 08:42:23.089451681 +0000 UTC m=+0.240959328 container exec_died a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:11:48Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, managed_by=tripleo_ansible, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, batch=17.1_20251118.1, container_name=ceilometer_agent_compute, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, io.openshift.expose-services=, vcs-type=git) Dec 6 03:42:23 localhost systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: Deactivated successfully. Dec 6 03:42:23 localhost systemd[1]: a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.service: Deactivated successfully. Dec 6 03:42:23 localhost podman[82407]: 2025-12-06 08:42:23.136216448 +0000 UTC m=+0.293535512 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, version=17.1.12, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, container_name=collectd, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1761123044, build-date=2025-11-18T22:51:28Z, tcib_managed=true, batch=17.1_20251118.1, com.redhat.component=openstack-collectd-container, vcs-type=git, vendor=Red Hat, Inc., config_id=tripleo_step3, 
io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public) Dec 6 03:42:23 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully. 
Dec 6 03:42:23 localhost podman[82415]: 2025-12-06 08:42:23.091347298 +0000 UTC m=+0.238203165 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, version=17.1.12, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, vendor=Red Hat, Inc., config_id=tripleo_step3, vcs-type=git, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 
iscsid, distribution-scope=public, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, io.buildah.version=1.41.4, com.redhat.component=openstack-iscsid-container, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid) Dec 6 03:42:23 localhost podman[82415]: 2025-12-06 08:42:23.220929118 +0000 UTC m=+0.367784935 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, name=rhosp17/openstack-iscsid, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, vendor=Red Hat, Inc., managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, vcs-type=git, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, url=https://www.redhat.com, tcib_managed=true, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, release=1761123044) Dec 6 03:42:23 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully. 
Dec 6 03:42:23 localhost podman[82408]: 2025-12-06 08:42:23.338544845 +0000 UTC m=+0.494315712 container exec_died 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vcs-type=git, url=https://www.redhat.com, io.openshift.expose-services=, container_name=nova_migration_target, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, vendor=Red Hat, Inc., version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step4, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, release=1761123044) Dec 6 03:42:23 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: Deactivated successfully. Dec 6 03:42:23 localhost systemd[1]: tmp-crun.glsk2O.mount: Deactivated successfully. Dec 6 03:42:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. Dec 6 03:42:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. Dec 6 03:42:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. 
Dec 6 03:42:27 localhost podman[82562]: 2025-12-06 08:42:27.916872714 +0000 UTC m=+0.077180642 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, 
io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, vcs-type=git, version=17.1.12, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.buildah.version=1.41.4) Dec 6 03:42:27 localhost podman[82561]: 2025-12-06 08:42:27.980116282 +0000 UTC m=+0.141319267 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, version=17.1.12, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, vendor=Red Hat, Inc., vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, tcib_managed=true, batch=17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.openshift.expose-services=, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272) Dec 6 03:42:28 localhost podman[82561]: 2025-12-06 08:42:28.001062497 +0000 UTC m=+0.162265462 container exec_died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, architecture=x86_64, name=rhosp17/openstack-ovn-controller, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., vcs-type=git, tcib_managed=true, release=1761123044, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:34:05Z, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, container_name=ovn_controller) Dec 6 03:42:28 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Deactivated successfully. 
Dec 6 03:42:28 localhost podman[82563]: 2025-12-06 08:42:28.09611309 +0000 UTC m=+0.250434546 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, 
release=1761123044, io.openshift.expose-services=, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, architecture=x86_64, config_id=tripleo_step4, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:14:25Z, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 6 03:42:28 localhost podman[82562]: 2025-12-06 08:42:28.11031755 +0000 UTC m=+0.270625508 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, batch=17.1_20251118.1, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-qdrouterd, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, url=https://www.redhat.com) Dec 6 03:42:28 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. 
Dec 6 03:42:28 localhost podman[82563]: 2025-12-06 08:42:28.178311412 +0000 UTC m=+0.332632808 container exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, version=17.1.12, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, architecture=x86_64, io.buildah.version=1.41.4, release=1761123044, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, tcib_managed=true, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2025-11-19T00:14:25Z) Dec 6 03:42:28 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Deactivated successfully. Dec 6 03:42:28 localhost systemd[1]: tmp-crun.r660r9.mount: Deactivated successfully. Dec 6 03:42:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007. 
Dec 6 03:42:32 localhost podman[82634]: 2025-12-06 08:42:32.921632736 +0000 UTC m=+0.082637158 container health_status 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, container_name=nova_compute, vcs-type=git, version=17.1.12, name=rhosp17/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', 
'/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, io.buildah.version=1.41.4, config_id=tripleo_step5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, release=1761123044) Dec 6 03:42:32 localhost podman[82634]: 2025-12-06 08:42:32.975260162 +0000 UTC m=+0.136264514 container exec_died 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step5, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, io.buildah.version=1.41.4, vendor=Red Hat, Inc., 
build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', 
'/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, version=17.1.12, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, container_name=nova_compute) Dec 6 03:42:32 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Deactivated successfully. Dec 6 03:42:33 localhost python3[82673]: ansible-ansible.builtin.slurp Invoked with path=/home/zuul/ansible_hostname src=/home/zuul/ansible_hostname Dec 6 03:42:46 localhost podman[82777]: 2025-12-06 08:42:46.62804494 +0000 UTC m=+0.116337628 container exec ecefb99ff092b207f760d60e8fd97152c624df0e1016549024435ae641ee2e06 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548789, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, com.redhat.component=rhceph-container, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, vcs-type=git, io.buildah.version=1.41.4, GIT_BRANCH=main, name=rhceph, vendor=Red Hat, Inc., distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, CEPH_POINT_RELEASE=, summary=Provides the latest 
Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, io.openshift.expose-services=, RELEASE=main, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git) Dec 6 03:42:46 localhost podman[82777]: 2025-12-06 08:42:46.759130466 +0000 UTC m=+0.247423174 container exec_died ecefb99ff092b207f760d60e8fd97152c624df0e1016549024435ae641ee2e06 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548789, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, io.buildah.version=1.41.4, name=rhceph, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, vcs-type=git, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, release=1763362218, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Dec 6 03:42:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc. Dec 6 03:42:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. 
Dec 6 03:42:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b. Dec 6 03:42:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9. Dec 6 03:42:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. Dec 6 03:42:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6. Dec 6 03:42:53 localhost podman[82924]: 2025-12-06 08:42:53.959347586 +0000 UTC m=+0.101532370 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-iscsid, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.41.4, managed_by=tripleo_ansible, architecture=x86_64, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, build-date=2025-11-18T23:44:13Z, version=17.1.12, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, com.redhat.component=openstack-iscsid-container, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid) Dec 6 03:42:53 localhost podman[82924]: 2025-12-06 08:42:53.997162713 +0000 UTC m=+0.139347547 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, 
managed_by=tripleo_ansible, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20251118.1, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 6 03:42:54 localhost podman[82922]: 2025-12-06 08:42:54.00528771 +0000 UTC m=+0.158980863 container health_status 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, 
name=nova_migration_target, health_status=healthy, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, release=1761123044, vcs-type=git, config_id=tripleo_step4, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, io.k8s.description=Red 
Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, batch=17.1_20251118.1, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, container_name=nova_migration_target) Dec 6 03:42:54 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully. Dec 6 03:42:54 localhost podman[82920]: 2025-12-06 08:42:53.941879916 +0000 UTC m=+0.099153817 container health_status 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, config_id=tripleo_step4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:32Z, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=logrotate_crond, name=rhosp17/openstack-cron, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': 
{'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, architecture=x86_64, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 6 03:42:54 localhost podman[82923]: 2025-12-06 08:42:54.083230862 +0000 UTC m=+0.234571363 container health_status a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, container_name=ceilometer_agent_compute, build-date=2025-11-19T00:11:48Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp17/openstack-ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, version=17.1.12, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, config_id=tripleo_step4, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 6 03:42:54 localhost podman[82936]: 2025-12-06 08:42:54.108465818 +0000 UTC m=+0.249758335 container health_status b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 
(image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., io.buildah.version=1.41.4, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, vcs-type=git, release=1761123044, architecture=x86_64, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, 
version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-ipmi-container) Dec 6 03:42:54 localhost podman[82936]: 2025-12-06 08:42:54.192362782 +0000 UTC m=+0.333655319 container exec_died b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, name=rhosp17/openstack-ceilometer-ipmi, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, version=17.1.12, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.4, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 6 03:42:54 localhost systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: Deactivated successfully. 
Dec 6 03:42:54 localhost podman[82921]: 2025-12-06 08:42:54.161869518 +0000 UTC m=+0.319689537 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, batch=17.1_20251118.1, build-date=2025-11-18T22:51:28Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible, io.openshift.expose-services=, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, io.buildah.version=1.41.4, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, name=rhosp17/openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=collectd, maintainer=OpenStack TripleO Team) Dec 6 03:42:54 localhost podman[82923]: 2025-12-06 08:42:54.217233116 +0000 UTC m=+0.368573587 container exec_died a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, release=1761123044, url=https://www.redhat.com, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:11:48Z, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, managed_by=tripleo_ansible, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1) Dec 6 03:42:54 localhost systemd[1]: a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.service: Deactivated successfully. 
Dec 6 03:42:54 localhost podman[82920]: 2025-12-06 08:42:54.239176042 +0000 UTC m=+0.396449953 container exec_died 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp17/openstack-cron, version=17.1.12, tcib_managed=true, container_name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, maintainer=OpenStack TripleO Team, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, vcs-type=git, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, distribution-scope=public, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=) Dec 6 03:42:54 localhost systemd[1]: 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.service: Deactivated successfully. Dec 6 03:42:54 localhost podman[82921]: 2025-12-06 08:42:54.292693185 +0000 UTC m=+0.450513234 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, vcs-type=git, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20251118.1, com.redhat.component=openstack-collectd-container, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, url=https://www.redhat.com, build-date=2025-11-18T22:51:28Z, distribution-scope=public, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64) Dec 6 03:42:54 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully. 
Dec 6 03:42:54 localhost podman[82922]: 2025-12-06 08:42:54.406231858 +0000 UTC m=+0.559925041 container exec_died 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, vcs-type=git, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, version=17.1.12, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1) Dec 6 03:42:54 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: Deactivated successfully. Dec 6 03:42:54 localhost systemd[1]: tmp-crun.2RJIjD.mount: Deactivated successfully. Dec 6 03:42:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. Dec 6 03:42:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. Dec 6 03:42:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. Dec 6 03:42:58 localhost systemd[1]: tmp-crun.aRP2ZR.mount: Deactivated successfully. 
Dec 6 03:42:58 localhost podman[83053]: 2025-12-06 08:42:58.985615461 +0000 UTC m=+0.142970627 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., version=17.1.12, config_id=tripleo_step4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, build-date=2025-11-18T23:34:05Z, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, name=rhosp17/openstack-ovn-controller, url=https://www.redhat.com, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, vcs-type=git, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 6 03:42:59 localhost podman[83053]: 2025-12-06 08:42:59.041230928 +0000 UTC m=+0.198586124 container exec_died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, container_name=ovn_controller, url=https://www.redhat.com, tcib_managed=true, build-date=2025-11-18T23:34:05Z, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, io.buildah.version=1.41.4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, release=1761123044, managed_by=tripleo_ansible, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64) Dec 6 03:42:59 localhost podman[83054]: 2025-12-06 08:42:58.951460705 +0000 UTC m=+0.103503840 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, io.buildah.version=1.41.4, architecture=x86_64, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, release=1761123044, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 
'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 6 03:42:59 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Deactivated successfully. 
Dec 6 03:42:59 localhost podman[83055]: 2025-12-06 08:42:59.0452449 +0000 UTC m=+0.194086418 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, version=17.1.12, vendor=Red Hat, Inc., release=1761123044, vcs-type=git, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp17/openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, io.buildah.version=1.41.4, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4) Dec 6 03:42:59 localhost podman[83055]: 2025-12-06 08:42:59.126535125 +0000 UTC m=+0.275376573 container exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, config_id=tripleo_step4, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.4, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-type=git, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., distribution-scope=public, managed_by=tripleo_ansible, batch=17.1_20251118.1) Dec 6 03:42:59 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Deactivated successfully. Dec 6 03:42:59 localhost podman[83054]: 2025-12-06 08:42:59.151220304 +0000 UTC m=+0.303263439 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, vendor=Red Hat, Inc., io.openshift.expose-services=, managed_by=tripleo_ansible, container_name=metrics_qdr, batch=17.1_20251118.1, vcs-type=git, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, architecture=x86_64, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 
'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 6 03:42:59 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 03:43:00 localhost sshd[83127]: main: sshd: ssh-rsa algorithm is disabled Dec 6 03:43:00 localhost sshd[83129]: main: sshd: ssh-rsa algorithm is disabled Dec 6 03:43:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007. 
Dec 6 03:43:03 localhost podman[83130]: 2025-12-06 08:43:03.90781954 +0000 UTC m=+0.069480268 container health_status 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', 
'/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-type=git, container_name=nova_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, release=1761123044, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute) Dec 6 03:43:03 localhost podman[83130]: 2025-12-06 08:43:03.954373111 +0000 UTC m=+0.116033819 container exec_died 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 
'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_compute, managed_by=tripleo_ansible, config_id=tripleo_step5, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.openshift.expose-services=, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, vcs-type=git, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 6 03:43:03 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Deactivated successfully. Dec 6 03:43:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc. Dec 6 03:43:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. Dec 6 03:43:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b. Dec 6 03:43:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9. Dec 6 03:43:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. Dec 6 03:43:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6. Dec 6 03:43:24 localhost systemd[1]: tmp-crun.PNoK57.mount: Deactivated successfully. 
Dec 6 03:43:24 localhost podman[83202]: 2025-12-06 08:43:24.927463995 +0000 UTC m=+0.086847425 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, 
container_name=collectd, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., vcs-type=git, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, config_id=tripleo_step3, build-date=2025-11-18T22:51:28Z, tcib_managed=true) Dec 6 03:43:24 localhost podman[83203]: 2025-12-06 08:43:24.986071022 +0000 UTC m=+0.138582984 container health_status 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, managed_by=tripleo_ansible, version=17.1.12, release=1761123044, vcs-type=git, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, io.openshift.expose-services=, url=https://www.redhat.com, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute) Dec 6 03:43:25 localhost podman[83202]: 2025-12-06 08:43:25.010405389 +0000 UTC m=+0.169788829 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': 
['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, name=rhosp17/openstack-collectd, release=1761123044, build-date=2025-11-18T22:51:28Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, io.buildah.version=1.41.4, url=https://www.redhat.com, tcib_managed=true, version=17.1.12, vendor=Red Hat, Inc., 
batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, config_id=tripleo_step3, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 6 03:43:25 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully. Dec 6 03:43:25 localhost podman[83204]: 2025-12-06 08:43:25.103066229 +0000 UTC m=+0.255840799 container health_status a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-type=git, build-date=2025-11-19T00:11:48Z, io.openshift.expose-services=, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, container_name=ceilometer_agent_compute, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, vendor=Red Hat, Inc., release=1761123044) Dec 6 03:43:25 localhost podman[83204]: 2025-12-06 08:43:25.133986838 +0000 UTC m=+0.286761438 container exec_died a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vcs-type=git, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-compute-container, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vendor=Red Hat, Inc., io.buildah.version=1.41.4, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, config_id=tripleo_step4, build-date=2025-11-19T00:11:48Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1) Dec 6 03:43:25 localhost systemd[1]: 
a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.service: Deactivated successfully. Dec 6 03:43:25 localhost podman[83217]: 2025-12-06 08:43:25.148524669 +0000 UTC m=+0.293095660 container health_status b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, name=rhosp17/openstack-ceilometer-ipmi, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, vendor=Red Hat, Inc., vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, 
org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-type=git, container_name=ceilometer_agent_ipmi, url=https://www.redhat.com, io.openshift.expose-services=, io.buildah.version=1.41.4, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4) Dec 6 03:43:25 localhost podman[83201]: 2025-12-06 08:43:25.19901489 +0000 UTC m=+0.360144873 container health_status 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, version=17.1.12, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.expose-services=, release=1761123044, description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-cron-container, name=rhosp17/openstack-cron, container_name=logrotate_crond, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_id=tripleo_step4, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 6 03:43:25 localhost podman[83211]: 2025-12-06 08:43:24.965656102 +0000 UTC m=+0.111384558 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, name=rhosp17/openstack-iscsid, build-date=2025-11-18T23:44:13Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, com.redhat.component=openstack-iscsid-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, managed_by=tripleo_ansible, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, container_name=iscsid, vendor=Red Hat, Inc., batch=17.1_20251118.1, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 6 03:43:25 localhost 
podman[83217]: 2025-12-06 08:43:25.225504313 +0000 UTC m=+0.370075304 container exec_died b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, release=1761123044, com.redhat.component=openstack-ceilometer-ipmi-container, tcib_managed=true, managed_by=tripleo_ansible, vendor=Red Hat, Inc., distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:12:45Z, url=https://www.redhat.com, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676) Dec 6 03:43:25 localhost podman[83201]: 2025-12-06 08:43:25.23433658 +0000 UTC m=+0.395466633 container exec_died 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, container_name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_id=tripleo_step4, name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, build-date=2025-11-18T22:49:32Z, distribution-scope=public, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': 
True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, managed_by=tripleo_ansible, io.buildah.version=1.41.4, release=1761123044, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 6 03:43:25 localhost systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: Deactivated successfully. Dec 6 03:43:25 localhost systemd[1]: 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.service: Deactivated successfully. 
Dec 6 03:43:25 localhost podman[83211]: 2025-12-06 08:43:25.25213149 +0000 UTC m=+0.397859876 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, batch=17.1_20251118.1, vcs-type=git, distribution-scope=public, config_id=tripleo_step3, io.buildah.version=1.41.4, managed_by=tripleo_ansible, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://www.redhat.com, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, build-date=2025-11-18T23:44:13Z, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, release=1761123044) Dec 6 03:43:25 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully. Dec 6 03:43:25 localhost podman[83203]: 2025-12-06 08:43:25.376074049 +0000 UTC m=+0.528586011 container exec_died 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-type=git, managed_by=tripleo_ansible, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, distribution-scope=public, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com) Dec 6 03:43:25 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: Deactivated successfully. Dec 6 03:43:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. Dec 6 03:43:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. Dec 6 03:43:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. Dec 6 03:43:29 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... 
Dec 6 03:43:29 localhost recover_tripleo_nova_virtqemud[83342]: 61814 Dec 6 03:43:29 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 6 03:43:29 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 6 03:43:29 localhost podman[83329]: 2025-12-06 08:43:29.928224417 +0000 UTC m=+0.085540195 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=metrics_qdr, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:46Z, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, io.buildah.version=1.41.4, version=17.1.12, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd) Dec 6 03:43:29 localhost podman[83328]: 2025-12-06 08:43:29.989278519 +0000 UTC m=+0.148719651 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, version=17.1.12, url=https://www.redhat.com, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, build-date=2025-11-18T23:34:05Z, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ovn-controller, release=1761123044, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, 
maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, config_id=tripleo_step4, batch=17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272) Dec 6 03:43:30 localhost podman[83328]: 2025-12-06 08:43:30.021124155 +0000 UTC m=+0.180565267 container exec_died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, managed_by=tripleo_ansible, io.buildah.version=1.41.4, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp17/openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, build-date=2025-11-18T23:34:05Z, url=https://www.redhat.com, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, config_id=tripleo_step4, container_name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272) Dec 6 03:43:30 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Deactivated successfully. 
Dec 6 03:43:30 localhost podman[83330]: 2025-12-06 08:43:30.092880881 +0000 UTC m=+0.245210938 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, version=17.1.12, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, url=https://www.redhat.com, config_id=tripleo_step4, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, distribution-scope=public, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 6 03:43:30 localhost podman[83330]: 2025-12-06 08:43:30.120139117 +0000 UTC m=+0.272469184 container exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, version=17.1.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.openshift.expose-services=, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, config_id=tripleo_step4, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, 
name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64) Dec 6 03:43:30 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Deactivated successfully. Dec 6 03:43:30 localhost podman[83329]: 2025-12-06 08:43:30.155221741 +0000 UTC m=+0.312537439 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp17/openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, vcs-type=git, config_id=tripleo_step1, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, batch=17.1_20251118.1, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 
'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc.) Dec 6 03:43:30 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 03:43:33 localhost systemd[1]: session-35.scope: Deactivated successfully. Dec 6 03:43:33 localhost systemd[1]: session-35.scope: Consumed 5.876s CPU time. Dec 6 03:43:33 localhost systemd-logind[766]: Session 35 logged out. Waiting for processes to exit. Dec 6 03:43:33 localhost systemd-logind[766]: Removed session 35. Dec 6 03:43:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007. 
Dec 6 03:43:34 localhost podman[83404]: 2025-12-06 08:43:34.913391285 +0000 UTC m=+0.075191921 container health_status 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, container_name=nova_compute, managed_by=tripleo_ansible, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, io.buildah.version=1.41.4) Dec 6 03:43:34 localhost podman[83404]: 2025-12-06 08:43:34.944187009 +0000 UTC m=+0.105987715 container exec_died 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, distribution-scope=public, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, vcs-type=git, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 
17.1 nova-compute, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, architecture=x86_64, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, version=17.1.12) Dec 6 03:43:34 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Deactivated successfully. Dec 6 03:43:50 localhost sshd[83508]: main: sshd: ssh-rsa algorithm is disabled Dec 6 03:43:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc. Dec 6 03:43:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. Dec 6 03:43:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b. Dec 6 03:43:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9. Dec 6 03:43:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. Dec 6 03:43:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6. Dec 6 03:43:55 localhost systemd[1]: tmp-crun.JtGThZ.mount: Deactivated successfully. 
Dec 6 03:43:55 localhost podman[83531]: 2025-12-06 08:43:55.986295755 +0000 UTC m=+0.104614440 container health_status b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, container_name=ceilometer_agent_ipmi, vcs-type=git, io.buildah.version=1.41.4, 
managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-ipmi, version=17.1.12, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044) Dec 6 03:43:56 localhost podman[83524]: 2025-12-06 08:43:56.020998036 +0000 UTC m=+0.142619032 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, config_id=tripleo_step3, release=1761123044, tcib_managed=true, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, vendor=Red Hat, Inc., build-date=2025-11-18T23:44:13Z, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com) Dec 6 03:43:56 localhost podman[83524]: 2025-12-06 08:43:56.055402418 +0000 UTC m=+0.177023444 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, url=https://www.redhat.com, io.openshift.expose-services=, vendor=Red Hat, Inc., io.buildah.version=1.41.4, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, com.redhat.component=openstack-iscsid-container, version=17.1.12, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, name=rhosp17/openstack-iscsid, distribution-scope=public, build-date=2025-11-18T23:44:13Z, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, config_id=tripleo_step3, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 6 03:43:56 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully. 
Dec 6 03:43:56 localhost podman[83531]: 2025-12-06 08:43:56.065983752 +0000 UTC m=+0.184302427 container exec_died b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vcs-type=git, architecture=x86_64, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.openshift.expose-services=, container_name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 6 03:43:56 localhost systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: Deactivated successfully. Dec 6 03:43:56 localhost podman[83510]: 2025-12-06 08:43:56.072409268 +0000 UTC m=+0.206704151 container health_status 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1761123044, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, version=17.1.12, managed_by=tripleo_ansible, io.openshift.expose-services=, name=rhosp17/openstack-cron, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=logrotate_crond, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:32Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20251118.1) Dec 6 03:43:56 localhost podman[83511]: 2025-12-06 08:43:56.1309997 +0000 UTC m=+0.263155679 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, io.buildah.version=1.41.4, managed_by=tripleo_ansible, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, name=rhosp17/openstack-collectd, io.k8s.description=Red Hat OpenStack 
Platform 17.1 collectd, tcib_managed=true, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:51:28Z, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, container_name=collectd, io.openshift.expose-services=, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 6 03:43:56 localhost podman[83511]: 2025-12-06 08:43:56.139326545 +0000 UTC m=+0.271482574 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, name=rhosp17/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
distribution-scope=public, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, vendor=Red Hat, Inc., io.buildah.version=1.41.4, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, version=17.1.12, build-date=2025-11-18T22:51:28Z, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com) Dec 6 03:43:56 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully. 
Dec 6 03:43:56 localhost podman[83510]: 2025-12-06 08:43:56.203004422 +0000 UTC m=+0.337299335 container exec_died 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.12, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
name=rhosp17/openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., container_name=logrotate_crond, build-date=2025-11-18T22:49:32Z, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1761123044) Dec 6 03:43:56 localhost systemd[1]: 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.service: Deactivated successfully. Dec 6 03:43:56 localhost podman[83513]: 2025-12-06 08:43:56.193619335 +0000 UTC m=+0.318932984 container health_status a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:11:48Z, version=17.1.12, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-type=git, container_name=ceilometer_agent_compute, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 
'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 6 03:43:56 localhost podman[83512]: 2025-12-06 08:43:56.274794438 +0000 UTC m=+0.404638166 container health_status 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, 
description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, container_name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, url=https://www.redhat.com, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, release=1761123044, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, vendor=Red Hat, Inc., version=17.1.12, io.buildah.version=1.41.4, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat 
OpenStack Platform 17.1 nova-compute) Dec 6 03:43:56 localhost podman[83513]: 2025-12-06 08:43:56.327557751 +0000 UTC m=+0.452871440 container exec_died a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, container_name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, build-date=2025-11-19T00:11:48Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, managed_by=tripleo_ansible, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, distribution-scope=public) Dec 6 03:43:56 localhost systemd[1]: a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.service: Deactivated successfully. Dec 6 03:43:56 localhost podman[83512]: 2025-12-06 08:43:56.652175977 +0000 UTC m=+0.782019715 container exec_died 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, vcs-type=git, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 
'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, url=https://www.redhat.com, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_migration_target, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute) Dec 6 03:43:56 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: Deactivated successfully. Dec 6 03:43:56 localhost systemd[1]: tmp-crun.nqTMms.mount: Deactivated successfully. Dec 6 03:44:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. 
Dec 6 03:44:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. Dec 6 03:44:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. Dec 6 03:44:00 localhost systemd[1]: tmp-crun.dz3wuw.mount: Deactivated successfully. Dec 6 03:44:00 localhost podman[83645]: 2025-12-06 08:44:00.928534156 +0000 UTC m=+0.089530039 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, vendor=Red Hat, Inc., version=17.1.12, config_id=tripleo_step4, io.buildah.version=1.41.4, release=1761123044, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, batch=17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 6 03:44:00 localhost podman[83644]: 2025-12-06 08:44:00.986349554 +0000 UTC m=+0.148680108 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, release=1761123044, maintainer=OpenStack TripleO Team, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=metrics_qdr, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.openshift.expose-services=, build-date=2025-11-18T22:49:46Z, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, 
version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 6 03:44:01 localhost podman[83645]: 2025-12-06 08:44:01.016057852 +0000 UTC m=+0.177053725 container exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, managed_by=tripleo_ansible, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.openshift.expose-services=, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, version=17.1.12, distribution-scope=public, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 6 03:44:01 localhost podman[83643]: 2025-12-06 08:44:00.965700882 +0000 UTC m=+0.129155700 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, vcs-type=git, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, io.buildah.version=1.41.4, container_name=ovn_controller, architecture=x86_64) Dec 6 03:44:01 localhost podman[83643]: 2025-12-06 08:44:01.048155124 +0000 UTC m=+0.211609892 container exec_died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, version=17.1.12, io.buildah.version=1.41.4, url=https://www.redhat.com, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, container_name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z) Dec 6 03:44:01 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Deactivated successfully. Dec 6 03:44:01 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Deactivated successfully. 
Dec 6 03:44:01 localhost podman[83644]: 2025-12-06 08:44:01.247249522 +0000 UTC m=+0.409580106 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, distribution-scope=public, container_name=metrics_qdr, version=17.1.12, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:46Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}) Dec 6 03:44:01 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 03:44:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007. Dec 6 03:44:05 localhost podman[83720]: 2025-12-06 08:44:05.945713198 +0000 UTC m=+0.108740447 container health_status 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 
'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, version=17.1.12, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 6 03:44:05 localhost podman[83720]: 2025-12-06 08:44:05.971326111 +0000 UTC m=+0.134353400 container exec_died 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, url=https://www.redhat.com, io.buildah.version=1.41.4, version=17.1.12, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_id=tripleo_step5, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, vcs-type=git, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': 
True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute) Dec 6 03:44:05 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Deactivated successfully. Dec 6 03:44:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc. Dec 6 03:44:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. Dec 6 03:44:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b. 
Dec 6 03:44:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9. Dec 6 03:44:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. Dec 6 03:44:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6. Dec 6 03:44:26 localhost podman[83805]: 2025-12-06 08:44:26.932533883 +0000 UTC m=+0.081640417 container health_status b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, url=https://www.redhat.com, architecture=x86_64, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, config_id=tripleo_step4, io.openshift.expose-services=, release=1761123044, build-date=2025-11-19T00:12:45Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676) Dec 6 03:44:26 localhost podman[83791]: 2025-12-06 08:44:26.97721368 +0000 UTC m=+0.133405180 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, release=1761123044, vcs-type=git, name=rhosp17/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, distribution-scope=public, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:51:28Z, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, config_id=tripleo_step3, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64) Dec 6 03:44:26 localhost podman[83791]: 2025-12-06 08:44:26.993942041 +0000 UTC m=+0.150133531 container exec_died 
2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, architecture=x86_64, distribution-scope=public, managed_by=tripleo_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, url=https://www.redhat.com, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.4, name=rhosp17/openstack-collectd) Dec 6 03:44:27 localhost podman[83797]: 2025-12-06 08:44:27.003089452 +0000 UTC m=+0.155770115 container health_status 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, container_name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., distribution-scope=public, vcs-type=git, version=17.1.12, batch=17.1_20251118.1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 
'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true) Dec 6 03:44:27 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully. 
Dec 6 03:44:27 localhost podman[83805]: 2025-12-06 08:44:27.011998814 +0000 UTC m=+0.161105418 container exec_died b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, batch=17.1_20251118.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, 
com.redhat.component=openstack-ceilometer-ipmi-container, build-date=2025-11-19T00:12:45Z, io.buildah.version=1.41.4, io.openshift.expose-services=, version=17.1.12, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676) Dec 6 03:44:27 localhost systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: Deactivated successfully. Dec 6 03:44:27 localhost podman[83799]: 2025-12-06 08:44:27.098936143 +0000 UTC m=+0.246484089 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, container_name=iscsid, tcib_managed=true, distribution-scope=public, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, build-date=2025-11-18T23:44:13Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, vcs-type=git, config_id=tripleo_step3, io.openshift.expose-services=, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044) Dec 6 03:44:27 localhost podman[83790]: 2025-12-06 08:44:27.076981911 +0000 UTC m=+0.240469275 container health_status 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, build-date=2025-11-18T22:49:32Z, summary=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, 
vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., io.openshift.expose-services=, version=17.1.12, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_id=tripleo_step4, architecture=x86_64, release=1761123044, container_name=logrotate_crond, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, vcs-type=git, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.4, url=https://www.redhat.com, name=rhosp17/openstack-cron) Dec 6 03:44:27 localhost podman[83799]: 2025-12-06 08:44:27.138478802 +0000 UTC m=+0.286026808 container 
exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, container_name=iscsid, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.component=openstack-iscsid-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, config_id=tripleo_step3, tcib_managed=true, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d) Dec 6 03:44:27 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully. Dec 6 03:44:27 localhost podman[83798]: 2025-12-06 08:44:27.189302115 +0000 UTC m=+0.337867143 container health_status a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, architecture=x86_64, vendor=Red Hat, Inc., tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, io.openshift.expose-services=, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-compute-container, managed_by=tripleo_ansible, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack 
TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, build-date=2025-11-19T00:11:48Z, config_id=tripleo_step4) Dec 6 03:44:27 localhost podman[83790]: 2025-12-06 08:44:27.210664099 +0000 UTC m=+0.374151503 container exec_died 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, 
name=rhosp17/openstack-cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:32Z, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, version=17.1.12, distribution-scope=public, managed_by=tripleo_ansible, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, url=https://www.redhat.com) Dec 6 03:44:27 localhost podman[83798]: 2025-12-06 08:44:27.223251783 +0000 UTC m=+0.371816891 container exec_died 
a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, tcib_managed=true, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-compute, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, vcs-type=git, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, container_name=ceilometer_agent_compute, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, build-date=2025-11-19T00:11:48Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676) Dec 6 03:44:27 localhost systemd[1]: 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.service: Deactivated successfully. Dec 6 03:44:27 localhost systemd[1]: a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.service: Deactivated successfully. Dec 6 03:44:27 localhost podman[83797]: 2025-12-06 08:44:27.382111041 +0000 UTC m=+0.534791614 container exec_died 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, version=17.1.12, build-date=2025-11-19T00:36:58Z, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_migration_target, io.buildah.version=1.41.4, config_id=tripleo_step4, vendor=Red Hat, Inc., release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, vcs-type=git, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible) Dec 6 03:44:27 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: Deactivated successfully. Dec 6 03:44:27 localhost systemd[1]: tmp-crun.Vc2xLx.mount: Deactivated successfully. Dec 6 03:44:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. 
Dec 6 03:44:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. Dec 6 03:44:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. Dec 6 03:44:31 localhost podman[83921]: 2025-12-06 08:44:31.908471925 +0000 UTC m=+0.076823751 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., tcib_managed=true, container_name=ovn_controller, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:34:05Z, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20251118.1, 
config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, version=17.1.12, url=https://www.redhat.com) Dec 6 03:44:31 localhost podman[83928]: 2025-12-06 08:44:31.917995526 +0000 UTC m=+0.076182811 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, name=rhosp17/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:14:25Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, vcs-type=git, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, distribution-scope=public, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c) Dec 6 03:44:31 localhost systemd[1]: tmp-crun.cso6LJ.mount: Deactivated successfully. 
Dec 6 03:44:31 localhost podman[83921]: 2025-12-06 08:44:31.966970004 +0000 UTC m=+0.135321810 container exec_died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, container_name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:34:05Z, config_id=tripleo_step4, tcib_managed=true, release=1761123044, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, name=rhosp17/openstack-ovn-controller, 
description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, distribution-scope=public, io.openshift.expose-services=) Dec 6 03:44:31 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Deactivated successfully. Dec 6 03:44:31 localhost podman[83928]: 2025-12-06 08:44:31.998646062 +0000 UTC m=+0.156833317 container exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, managed_by=tripleo_ansible, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, url=https://www.redhat.com, architecture=x86_64, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container) Dec 6 03:44:32 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Deactivated successfully. 
Dec 6 03:44:32 localhost podman[83922]: 2025-12-06 08:44:31.968321885 +0000 UTC m=+0.129623635 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, 
version=17.1.12, build-date=2025-11-18T22:49:46Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, vcs-type=git) Dec 6 03:44:32 localhost podman[83922]: 2025-12-06 08:44:32.154979913 +0000 UTC m=+0.316281713 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, release=1761123044, vendor=Red Hat, Inc., architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, version=17.1.12) Dec 6 03:44:32 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 03:44:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007. Dec 6 03:44:36 localhost systemd[1]: tmp-crun.NtiYjc.mount: Deactivated successfully. 
Dec 6 03:44:36 localhost podman[84105]: 2025-12-06 08:44:36.936899481 +0000 UTC m=+0.094861742 container health_status 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, architecture=x86_64, vcs-type=git, name=rhosp17/openstack-nova-compute, config_id=tripleo_step5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, container_name=nova_compute, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Dec 6 03:44:36 localhost podman[84105]: 2025-12-06 08:44:36.961493392 +0000 UTC m=+0.119455703 container exec_died 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, container_name=nova_compute, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, tcib_managed=true, 
release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, io.openshift.expose-services=, batch=17.1_20251118.1, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', 
'/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 6 03:44:36 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Deactivated successfully. Dec 6 03:44:40 localhost systemd-logind[766]: Existing logind session ID 28 used by new audit session, ignoring. Dec 6 03:44:40 localhost systemd[1]: Created slice User Slice of UID 0. Dec 6 03:44:41 localhost systemd[1]: Starting User Runtime Directory /run/user/0... Dec 6 03:44:41 localhost systemd[1]: Finished User Runtime Directory /run/user/0. Dec 6 03:44:41 localhost systemd[1]: Starting User Manager for UID 0... Dec 6 03:44:41 localhost systemd[84400]: Queued start job for default target Main User Target. Dec 6 03:44:41 localhost systemd[84400]: Created slice User Application Slice. Dec 6 03:44:41 localhost systemd[84400]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system). Dec 6 03:44:41 localhost systemd[84400]: Started Daily Cleanup of User's Temporary Directories. Dec 6 03:44:41 localhost systemd[84400]: Reached target Paths. Dec 6 03:44:41 localhost systemd[84400]: Reached target Timers. Dec 6 03:44:41 localhost systemd[84400]: Starting D-Bus User Message Bus Socket... Dec 6 03:44:41 localhost systemd[84400]: Starting Create User's Volatile Files and Directories... Dec 6 03:44:41 localhost systemd[84400]: Listening on D-Bus User Message Bus Socket. Dec 6 03:44:41 localhost systemd[84400]: Reached target Sockets. 
Dec 6 03:44:41 localhost systemd[84400]: Finished Create User's Volatile Files and Directories. Dec 6 03:44:41 localhost systemd[84400]: Reached target Basic System. Dec 6 03:44:41 localhost systemd[84400]: Reached target Main User Target. Dec 6 03:44:41 localhost systemd[84400]: Startup finished in 155ms. Dec 6 03:44:41 localhost systemd[1]: Started User Manager for UID 0. Dec 6 03:44:41 localhost systemd[1]: Started Session c11 of User root. Dec 6 03:44:42 localhost kernel: tun: Universal TUN/TAP device driver, 1.6 Dec 6 03:44:42 localhost kernel: device tap86fc0b7a-fb entered promiscuous mode Dec 6 03:44:42 localhost NetworkManager[5973]: [1765010682.3705] manager: (tap86fc0b7a-fb): new Tun device (/org/freedesktop/NetworkManager/Devices/13) Dec 6 03:44:42 localhost systemd-udevd[84436]: Network interface NamePolicy= disabled on kernel command line. Dec 6 03:44:42 localhost NetworkManager[5973]: [1765010682.3850] device (tap86fc0b7a-fb): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external') Dec 6 03:44:42 localhost NetworkManager[5973]: [1765010682.3859] device (tap86fc0b7a-fb): state change: unavailable -> disconnected (reason 'none', sys-iface-state: 'external') Dec 6 03:44:42 localhost systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 6 03:44:42 localhost systemd[1]: Starting Virtual Machine and Container Registration Service... Dec 6 03:44:42 localhost systemd[1]: Started Virtual Machine and Container Registration Service. Dec 6 03:44:42 localhost systemd-machined[84444]: New machine qemu-1-instance-00000002. Dec 6 03:44:42 localhost systemd[1]: Started Virtual Machine qemu-1-instance-00000002. 
Dec 6 03:44:42 localhost NetworkManager[5973]: [1765010682.6084] manager: (tap652b6bdc-40): new Veth device (/org/freedesktop/NetworkManager/Devices/14) Dec 6 03:44:42 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap652b6bdc-41: link becomes ready Dec 6 03:44:42 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap652b6bdc-40: link becomes ready Dec 6 03:44:42 localhost NetworkManager[5973]: [1765010682.6733] device (tap652b6bdc-40): carrier: link connected Dec 6 03:44:42 localhost kernel: device tap652b6bdc-40 entered promiscuous mode Dec 6 03:44:44 localhost systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs... Dec 6 03:44:44 localhost systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs. Dec 6 03:44:44 localhost systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged. Dec 6 03:44:44 localhost systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service. Dec 6 03:44:45 localhost podman[84581]: 2025-12-06 08:44:45.034213222 +0000 UTC m=+0.096904354 container create 12ba6c9101b1d507aed708fcb0f1f5958064f89edf86d2af4b3dde5856898445 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=neutron-haproxy-ovnmeta-652b6bdc-40ce-45b7-8aa5-3bca79987993, batch=17.1_20251118.1, io.openshift.expose-services=, tcib_managed=true, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, architecture=x86_64, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red 
Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 6 03:44:45 localhost podman[84581]: 2025-12-06 08:44:44.989467494 +0000 UTC m=+0.052158666 image pull registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 Dec 6 03:44:45 localhost systemd[1]: Started libpod-conmon-12ba6c9101b1d507aed708fcb0f1f5958064f89edf86d2af4b3dde5856898445.scope. Dec 6 03:44:45 localhost systemd[1]: tmp-crun.tFTTaw.mount: Deactivated successfully. Dec 6 03:44:45 localhost systemd[1]: Started libcrun container. 
Dec 6 03:44:45 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31fbdb956fdb20faf0121dfd2c519c9e748cc292d5fc54ebad7f5d80f477ded1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 6 03:44:45 localhost podman[84581]: 2025-12-06 08:44:45.159293967 +0000 UTC m=+0.221985109 container init 12ba6c9101b1d507aed708fcb0f1f5958064f89edf86d2af4b3dde5856898445 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=neutron-haproxy-ovnmeta-652b6bdc-40ce-45b7-8aa5-3bca79987993, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:14:25Z, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, io.openshift.expose-services=, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64) Dec 6 03:44:45 localhost podman[84581]: 2025-12-06 08:44:45.176365619 +0000 UTC m=+0.239056761 container start 12ba6c9101b1d507aed708fcb0f1f5958064f89edf86d2af4b3dde5856898445 
(image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=neutron-haproxy-ovnmeta-652b6bdc-40ce-45b7-8aa5-3bca79987993, distribution-scope=public, release=1761123044, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, build-date=2025-11-19T00:14:25Z, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container) Dec 6 03:44:45 localhost setroubleshoot[84539]: SELinux is preventing /usr/libexec/qemu-kvm from read access on the file max_map_count. 
For complete SELinux messages run: sealert -l 58e2bb45-d8cf-42a0-b321-404a4f96b4c3 Dec 6 03:44:45 localhost setroubleshoot[84539]: SELinux is preventing /usr/libexec/qemu-kvm from read access on the file max_map_count.

***** Plugin qemu_file_image (98.8 confidence) suggests *******************

If max_map_count is a virtualization target
Then you need to change the label on max_map_count'
Do
# semanage fcontext -a -t virt_image_t 'max_map_count'
# restorecon -v 'max_map_count'

***** Plugin catchall (2.13 confidence) suggests **************************

If you believe that qemu-kvm should be allowed read access on the max_map_count file by default.
Then you should report this as a bug.
You can generate a local policy module to allow this access.
Do
allow this access for now by executing:
# ausearch -c 'qemu-kvm' --raw | audit2allow -M my-qemukvm
# semodule -X 300 -i my-qemukvm.pp

Dec 6 03:44:54 localhost systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully. Dec 6 03:44:55 localhost systemd[1]: setroubleshootd.service: Deactivated successfully. Dec 6 03:44:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc. Dec 6 03:44:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. Dec 6 03:44:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b. Dec 6 03:44:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9. Dec 6 03:44:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. 
Dec 6 03:44:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6. Dec 6 03:44:57 localhost systemd[1]: tmp-crun.rtckzU.mount: Deactivated successfully. Dec 6 03:44:57 localhost podman[84686]: 2025-12-06 08:44:57.94903043 +0000 UTC m=+0.105634440 container health_status 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vendor=Red Hat, Inc., url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, name=rhosp17/openstack-cron, tcib_managed=true, architecture=x86_64, build-date=2025-11-18T22:49:32Z, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, distribution-scope=public, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 
'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron) Dec 6 03:44:58 localhost podman[84700]: 2025-12-06 08:44:57.98597217 +0000 UTC m=+0.130704768 container health_status b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, batch=17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, architecture=x86_64, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., managed_by=tripleo_ansible, build-date=2025-11-19T00:12:45Z, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, url=https://www.redhat.com, io.buildah.version=1.41.4, container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi) Dec 6 03:44:58 localhost podman[84687]: 2025-12-06 08:44:58.053544047 +0000 UTC m=+0.210408206 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, 
batch=17.1_20251118.1, container_name=collectd, name=rhosp17/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2025-11-18T22:51:28Z, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, com.redhat.component=openstack-collectd-container, architecture=x86_64, io.buildah.version=1.41.4, managed_by=tripleo_ansible, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public) Dec 6 03:44:58 localhost podman[84700]: 2025-12-06 08:44:58.06805636 +0000 UTC m=+0.212788968 container exec_died b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, url=https://www.redhat.com, distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, architecture=x86_64, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, build-date=2025-11-19T00:12:45Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, batch=17.1_20251118.1, config_id=tripleo_step4, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 6 03:44:58 localhost systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: Deactivated successfully. Dec 6 03:44:58 localhost podman[84688]: 2025-12-06 08:44:58.026608273 +0000 UTC m=+0.181298886 container health_status 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, url=https://www.redhat.com, batch=17.1_20251118.1, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute) Dec 6 03:44:58 localhost podman[84686]: 2025-12-06 08:44:58.088457654 +0000 UTC m=+0.245061644 container exec_died 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, 
release=1761123044, tcib_managed=true, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, com.redhat.component=openstack-cron-container, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, 
maintainer=OpenStack TripleO Team, config_id=tripleo_step4, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible) Dec 6 03:44:58 localhost systemd[1]: 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.service: Deactivated successfully. Dec 6 03:44:58 localhost podman[84687]: 2025-12-06 08:44:58.143496958 +0000 UTC m=+0.300361087 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.buildah.version=1.41.4, version=17.1.12, build-date=2025-11-18T22:51:28Z, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, vcs-type=git, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, architecture=x86_64, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-collectd, io.openshift.expose-services=, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 6 03:44:58 localhost podman[84689]: 2025-12-06 08:44:58.150934474 +0000 UTC m=+0.299915332 container health_status a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, release=1761123044, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, config_id=tripleo_step4, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, batch=17.1_20251118.1, build-date=2025-11-19T00:11:48Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, url=https://www.redhat.com, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., distribution-scope=public, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 6 03:44:58 localhost systemd[1]: 
2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully. Dec 6 03:44:58 localhost podman[84690]: 2025-12-06 08:44:58.205913916 +0000 UTC m=+0.354895074 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-11-18T23:44:13Z, name=rhosp17/openstack-iscsid, tcib_managed=true, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, release=1761123044, url=https://www.redhat.com, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, batch=17.1_20251118.1, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, io.buildah.version=1.41.4, architecture=x86_64, com.redhat.component=openstack-iscsid-container) Dec 6 03:44:58 localhost podman[84689]: 2025-12-06 08:44:58.211164487 +0000 UTC m=+0.360145315 container exec_died a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, batch=17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, architecture=x86_64, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, vendor=Red Hat, Inc., managed_by=tripleo_ansible, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 6 03:44:58 localhost systemd[1]: a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.service: Deactivated successfully. 
Dec 6 03:44:58 localhost podman[84690]: 2025-12-06 08:44:58.241017459 +0000 UTC m=+0.389998577 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., io.buildah.version=1.41.4, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, url=https://www.redhat.com, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.openshift.expose-services=, release=1761123044, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:44:13Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, name=rhosp17/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, config_id=tripleo_step3, distribution-scope=public) Dec 6 03:44:58 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully. Dec 6 03:44:58 localhost podman[84688]: 2025-12-06 08:44:58.356689927 +0000 UTC m=+0.511380460 container exec_died 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, distribution-scope=public, maintainer=OpenStack TripleO Team, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, release=1761123044, vendor=Red Hat, Inc., 
version=17.1.12, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, io.buildah.version=1.41.4) Dec 6 03:44:58 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: Deactivated successfully. Dec 6 03:44:58 localhost systemd[1]: tmp-crun.swmenC.mount: Deactivated successfully. 
Dec 6 03:45:01 localhost haproxy-metadata-proxy-652b6bdc-40ce-45b7-8aa5-3bca79987993[84602]: 192.168.0.162:33064 [06/Dec/2025:08:45:00.308] listener listener/metadata 0/0/0/1660/1660 200 130 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/instance-id HTTP/1.1" Dec 6 03:45:02 localhost haproxy-metadata-proxy-652b6bdc-40ce-45b7-8aa5-3bca79987993[84602]: 192.168.0.162:33068 [06/Dec/2025:08:45:02.067] listener listener/metadata 0/0/0/14/14 404 281 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/public-keys HTTP/1.1" Dec 6 03:45:02 localhost haproxy-metadata-proxy-652b6bdc-40ce-45b7-8aa5-3bca79987993[84602]: 192.168.0.162:33080 [06/Dec/2025:08:45:02.678] listener listener/metadata 0/0/0/12/12 200 130 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/instance-id HTTP/1.1" Dec 6 03:45:02 localhost haproxy-metadata-proxy-652b6bdc-40ce-45b7-8aa5-3bca79987993[84602]: 192.168.0.162:33088 [06/Dec/2025:08:45:02.762] listener listener/metadata 0/0/0/12/12 200 120 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/ami-launch-index HTTP/1.1" Dec 6 03:45:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. Dec 6 03:45:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. Dec 6 03:45:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. 
Dec 6 03:45:02 localhost haproxy-metadata-proxy-652b6bdc-40ce-45b7-8aa5-3bca79987993[84602]: 192.168.0.162:33098 [06/Dec/2025:08:45:02.829] listener listener/metadata 0/0/0/13/13 200 127 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/instance-type HTTP/1.1" Dec 6 03:45:02 localhost haproxy-metadata-proxy-652b6bdc-40ce-45b7-8aa5-3bca79987993[84602]: 192.168.0.162:33110 [06/Dec/2025:08:45:02.883] listener listener/metadata 0/0/0/17/17 200 133 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/local-ipv4 HTTP/1.1" Dec 6 03:45:02 localhost podman[84820]: 2025-12-06 08:45:02.907168585 +0000 UTC m=+0.074124067 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vendor=Red Hat, Inc., config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, vcs-type=git, io.buildah.version=1.41.4, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, maintainer=OpenStack TripleO Team, container_name=ovn_controller, io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20251118.1, managed_by=tripleo_ansible, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:34:05Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, distribution-scope=public, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller) Dec 6 03:45:02 localhost haproxy-metadata-proxy-652b6bdc-40ce-45b7-8aa5-3bca79987993[84602]: 192.168.0.162:33118 [06/Dec/2025:08:45:02.940] listener listener/metadata 0/0/0/13/13 200 134 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/public-ipv4 HTTP/1.1" Dec 6 03:45:02 localhost systemd[1]: tmp-crun.ry8JGC.mount: Deactivated successfully. 
Dec 6 03:45:02 localhost podman[84822]: 2025-12-06 08:45:02.963653532 +0000 UTC m=+0.124062614 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1761123044, container_name=ovn_metadata_agent, batch=17.1_20251118.1, config_id=tripleo_step4, io.buildah.version=1.41.4, build-date=2025-11-19T00:14:25Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, version=17.1.12, vcs-type=git, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 6 03:45:02 localhost podman[84820]: 2025-12-06 08:45:02.984942324 +0000 UTC m=+0.151897836 container exec_died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, url=https://www.redhat.com, io.buildah.version=1.41.4, name=rhosp17/openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, 
architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, distribution-scope=public, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc.) Dec 6 03:45:02 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Deactivated successfully. 
Dec 6 03:45:03 localhost haproxy-metadata-proxy-652b6bdc-40ce-45b7-8aa5-3bca79987993[84602]: 192.168.0.162:33128 [06/Dec/2025:08:45:02.999] listener listener/metadata 0/0/0/14/14 200 123 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/hostname HTTP/1.1" Dec 6 03:45:03 localhost podman[84822]: 2025-12-06 08:45:03.013722453 +0000 UTC m=+0.174131546 container exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, container_name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1761123044, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, version=17.1.12, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, build-date=2025-11-19T00:14:25Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn) Dec 6 03:45:03 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Deactivated successfully. 
Dec 6 03:45:03 localhost haproxy-metadata-proxy-652b6bdc-40ce-45b7-8aa5-3bca79987993[84602]: 192.168.0.162:33130 [06/Dec/2025:08:45:03.052] listener listener/metadata 0/0/0/13/13 200 123 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/local-hostname HTTP/1.1" Dec 6 03:45:03 localhost podman[84821]: 2025-12-06 08:45:03.072994846 +0000 UTC m=+0.234356867 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, architecture=x86_64, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z) Dec 6 03:45:03 localhost haproxy-metadata-proxy-652b6bdc-40ce-45b7-8aa5-3bca79987993[84602]: 192.168.0.162:33140 [06/Dec/2025:08:45:03.130] listener listener/metadata 0/0/0/9/9 404 281 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/user-data HTTP/1.1" Dec 6 03:45:03 localhost haproxy-metadata-proxy-652b6bdc-40ce-45b7-8aa5-3bca79987993[84602]: 192.168.0.162:33146 [06/Dec/2025:08:45:03.192] listener listener/metadata 0/0/0/13/13 200 139 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/block-device-mapping HTTP/1.1" Dec 6 03:45:03 localhost haproxy-metadata-proxy-652b6bdc-40ce-45b7-8aa5-3bca79987993[84602]: 192.168.0.162:33148 [06/Dec/2025:08:45:03.234] listener listener/metadata 0/0/0/13/13 200 122 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/block-device-mapping/ami HTTP/1.1" Dec 6 03:45:03 localhost podman[84821]: 2025-12-06 08:45:03.284174674 +0000 UTC m=+0.445536745 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, release=1761123044, distribution-scope=public, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, architecture=x86_64, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, build-date=2025-11-18T22:49:46Z, 
konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=metrics_qdr) Dec 6 03:45:03 localhost haproxy-metadata-proxy-652b6bdc-40ce-45b7-8aa5-3bca79987993[84602]: 192.168.0.162:33152 [06/Dec/2025:08:45:03.281] listener listener/metadata 0/0/0/12/12 200 127 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/block-device-mapping/ephemeral0 HTTP/1.1" Dec 6 03:45:03 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 03:45:03 localhost haproxy-metadata-proxy-652b6bdc-40ce-45b7-8aa5-3bca79987993[84602]: 192.168.0.162:33158 [06/Dec/2025:08:45:03.331] listener listener/metadata 0/0/0/14/14 200 127 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/block-device-mapping/root HTTP/1.1" Dec 6 03:45:03 localhost haproxy-metadata-proxy-652b6bdc-40ce-45b7-8aa5-3bca79987993[84602]: 192.168.0.162:33160 [06/Dec/2025:08:45:03.385] listener listener/metadata 0/0/0/15/15 200 123 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/public-hostname HTTP/1.1" Dec 6 03:45:03 localhost haproxy-metadata-proxy-652b6bdc-40ce-45b7-8aa5-3bca79987993[84602]: 192.168.0.162:33170 [06/Dec/2025:08:45:03.440] listener listener/metadata 0/0/0/10/10 200 123 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/placement/availability-zone HTTP/1.1" Dec 6 03:45:06 localhost ceph-osd[31726]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0. Dec 6 03:45:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007. 
Dec 6 03:45:07 localhost podman[84896]: 2025-12-06 08:45:07.922190483 +0000 UTC m=+0.079559243 container health_status 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, container_name=nova_compute, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, release=1761123044, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, build-date=2025-11-19T00:36:58Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, url=https://www.redhat.com, managed_by=tripleo_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Dec 6 03:45:07 localhost podman[84896]: 2025-12-06 08:45:07.984260161 +0000 UTC m=+0.141628911 container exec_died 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, version=17.1.12, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, 
vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, container_name=nova_compute, io.buildah.version=1.41.4, vcs-type=git, tcib_managed=true, architecture=x86_64, config_id=tripleo_step5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', 
'/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 6 03:45:07 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Deactivated successfully. Dec 6 03:45:14 localhost snmpd[67279]: empty variable list in _query Dec 6 03:45:14 localhost snmpd[67279]: empty variable list in _query Dec 6 03:45:26 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 6 03:45:26 localhost recover_tripleo_nova_virtqemud[84969]: 61814 Dec 6 03:45:26 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 6 03:45:26 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 6 03:45:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc. Dec 6 03:45:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. Dec 6 03:45:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b. Dec 6 03:45:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9. Dec 6 03:45:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. 
Dec 6 03:45:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6. Dec 6 03:45:28 localhost systemd[1]: tmp-crun.pJDxPC.mount: Deactivated successfully. Dec 6 03:45:28 localhost podman[84972]: 2025-12-06 08:45:28.967039284 +0000 UTC m=+0.116965048 container health_status 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, batch=17.1_20251118.1, config_id=tripleo_step4, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, version=17.1.12, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, container_name=nova_migration_target, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git) Dec 6 03:45:29 localhost podman[84986]: 2025-12-06 08:45:29.031731112 +0000 UTC m=+0.166530784 container health_status b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, build-date=2025-11-19T00:12:45Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, release=1761123044, io.buildah.version=1.41.4, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, config_id=tripleo_step4) Dec 6 03:45:29 localhost podman[84970]: 2025-12-06 08:45:29.073240192 +0000 UTC m=+0.227008684 container health_status 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': 
'/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, architecture=x86_64, managed_by=tripleo_ansible, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-cron, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, container_name=logrotate_crond, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.buildah.version=1.41.4, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:32Z, 
description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, distribution-scope=public) Dec 6 03:45:29 localhost podman[84986]: 2025-12-06 08:45:29.087068453 +0000 UTC m=+0.221868095 container exec_died b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, url=https://www.redhat.com, distribution-scope=public, build-date=2025-11-19T00:12:45Z, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.openshift.expose-services=, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 6 03:45:29 localhost podman[84971]: 2025-12-06 08:45:29.108148339 +0000 UTC m=+0.258250598 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, container_name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, tcib_managed=true, config_id=tripleo_step3, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.component=openstack-collectd-container, version=17.1.12, architecture=x86_64, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, build-date=2025-11-18T22:51:28Z, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4) Dec 6 03:45:29 localhost podman[84971]: 2025-12-06 08:45:29.122160437 +0000 UTC m=+0.272262666 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, build-date=2025-11-18T22:51:28Z, maintainer=OpenStack TripleO Team, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, architecture=x86_64, distribution-scope=public, 
vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, container_name=collectd, managed_by=tripleo_ansible, batch=17.1_20251118.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 6 03:45:29 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully. Dec 6 03:45:29 localhost systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: Deactivated successfully. Dec 6 03:45:29 localhost podman[84970]: 2025-12-06 08:45:29.161060657 +0000 UTC m=+0.314829159 container exec_died 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_id=tripleo_step4, batch=17.1_20251118.1, url=https://www.redhat.com, container_name=logrotate_crond, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, managed_by=tripleo_ansible, vcs-type=git, tcib_managed=true, vendor=Red Hat, Inc., distribution-scope=public, name=rhosp17/openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, build-date=2025-11-18T22:49:32Z) Dec 6 03:45:29 localhost systemd[1]: 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.service: Deactivated successfully. 
Dec 6 03:45:29 localhost podman[84979]: 2025-12-06 08:45:29.123188078 +0000 UTC m=+0.262329222 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, distribution-scope=public, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, managed_by=tripleo_ansible, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, release=1761123044, tcib_managed=true, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, config_id=tripleo_step3, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp17/openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d) Dec 6 03:45:29 localhost podman[84979]: 2025-12-06 08:45:29.207268339 +0000 UTC m=+0.346409533 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid, vcs-type=git, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.4, distribution-scope=public, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.openshift.expose-services=, managed_by=tripleo_ansible, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=iscsid, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 6 03:45:29 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully. 
Dec 6 03:45:29 localhost podman[84973]: 2025-12-06 08:45:29.111040487 +0000 UTC m=+0.255344659 container health_status a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, release=1761123044, tcib_managed=true, container_name=ceilometer_agent_compute, distribution-scope=public, managed_by=tripleo_ansible, batch=17.1_20251118.1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, 
org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-compute, version=17.1.12, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:11:48Z, url=https://www.redhat.com, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=) Dec 6 03:45:29 localhost podman[84973]: 2025-12-06 08:45:29.291273308 +0000 UTC m=+0.435577470 container exec_died a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, vendor=Red Hat, Inc., vcs-type=git, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:11:48Z, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.openshift.expose-services=) Dec 6 03:45:29 localhost systemd[1]: a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.service: Deactivated successfully. 
Dec 6 03:45:29 localhost podman[84972]: 2025-12-06 08:45:29.339410081 +0000 UTC m=+0.489335825 container exec_died 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, vendor=Red Hat, Inc., version=17.1.12, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, batch=17.1_20251118.1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, managed_by=tripleo_ansible, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Dec 6 03:45:29 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: Deactivated successfully. Dec 6 03:45:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. Dec 6 03:45:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. Dec 6 03:45:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. 
Dec 6 03:45:33 localhost podman[85101]: 2025-12-06 08:45:33.924272122 +0000 UTC m=+0.082662798 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:34:05Z, url=https://www.redhat.com, vcs-type=git, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., tcib_managed=true, architecture=x86_64, config_id=tripleo_step4, io.buildah.version=1.41.4, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, name=rhosp17/openstack-ovn-controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, version=17.1.12, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible) Dec 6 03:45:33 localhost podman[85103]: 2025-12-06 08:45:33.968143674 +0000 UTC m=+0.124611462 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, architecture=x86_64, release=1761123044, distribution-scope=public, batch=17.1_20251118.1, vcs-type=git, build-date=2025-11-19T00:14:25Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4) Dec 6 03:45:33 localhost podman[85101]: 2025-12-06 08:45:33.971214947 +0000 UTC m=+0.129605653 container exec_died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, version=17.1.12, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 
6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, tcib_managed=true, vcs-type=git, batch=17.1_20251118.1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.openshift.expose-services=, release=1761123044, build-date=2025-11-18T23:34:05Z, container_name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272) Dec 6 03:45:33 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Deactivated successfully. 
Dec 6 03:45:34 localhost podman[85103]: 2025-12-06 08:45:34.026838778 +0000 UTC m=+0.183306516 container exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 
17.1_20251118.1, vcs-type=git, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., architecture=x86_64, batch=17.1_20251118.1, url=https://www.redhat.com, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_metadata_agent, distribution-scope=public, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true) Dec 6 03:45:34 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Deactivated successfully. 
Dec 6 03:45:34 localhost podman[85102]: 2025-12-06 08:45:34.027741586 +0000 UTC m=+0.181482541 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, container_name=metrics_qdr, build-date=2025-11-18T22:49:46Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, io.buildah.version=1.41.4, batch=17.1_20251118.1, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, distribution-scope=public, architecture=x86_64, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 6 03:45:34 localhost podman[85102]: 2025-12-06 08:45:34.311216244 +0000 UTC m=+0.464957189 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, distribution-scope=public, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, build-date=2025-11-18T22:49:46Z, name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible, config_id=tripleo_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 6 03:45:34 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 03:45:34 localhost systemd[1]: tmp-crun.M1ph9e.mount: Deactivated successfully. Dec 6 03:45:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007. Dec 6 03:45:38 localhost systemd[1]: tmp-crun.Q2WV0o.mount: Deactivated successfully. 
Dec 6 03:45:38 localhost podman[85175]: 2025-12-06 08:45:38.945036413 +0000 UTC m=+0.099592055 container health_status 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, distribution-scope=public, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, vcs-type=git, container_name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044) Dec 6 03:45:38 localhost podman[85175]: 2025-12-06 08:45:38.979330043 +0000 UTC m=+0.133885685 container exec_died 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, vcs-type=git, managed_by=tripleo_ansible, container_name=nova_compute, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.41.4, config_id=tripleo_step5, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, distribution-scope=public, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, 
config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
batch=17.1_20251118.1, tcib_managed=true, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute) Dec 6 03:45:38 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Deactivated successfully. Dec 6 03:45:53 localhost sshd[85279]: main: sshd: ssh-rsa algorithm is disabled Dec 6 03:45:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc. Dec 6 03:45:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. Dec 6 03:45:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b. Dec 6 03:45:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9. Dec 6 03:45:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. Dec 6 03:45:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6. 
Dec 6 03:45:59 localhost podman[85282]: 2025-12-06 08:45:59.944137221 +0000 UTC m=+0.098787852 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, architecture=x86_64, distribution-scope=public, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, name=rhosp17/openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:51:28Z, managed_by=tripleo_ansible) Dec 6 03:45:59 localhost podman[85297]: 2025-12-06 08:45:59.987447676 +0000 UTC m=+0.131440531 container health_status b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, vcs-type=git, architecture=x86_64, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, managed_by=tripleo_ansible, build-date=2025-11-19T00:12:45Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, version=17.1.12, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=) Dec 6 03:46:00 localhost podman[85282]: 2025-12-06 08:46:00.057843108 +0000 UTC m=+0.212493729 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.expose-services=, distribution-scope=public, url=https://www.redhat.com, io.buildah.version=1.41.4, release=1761123044, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., build-date=2025-11-18T22:51:28Z, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, maintainer=OpenStack TripleO Team, version=17.1.12, com.redhat.component=openstack-collectd-container, batch=17.1_20251118.1) Dec 6 03:46:00 localhost podman[85297]: 2025-12-06 08:46:00.06801673 +0000 UTC m=+0.212009625 container exec_died b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, vendor=Red Hat, Inc., architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, io.buildah.version=1.41.4, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12) Dec 6 03:46:00 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully. 
Dec 6 03:46:00 localhost podman[85285]: 2025-12-06 08:46:00.074837288 +0000 UTC m=+0.220715251 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, build-date=2025-11-18T23:44:13Z, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., tcib_managed=true, version=17.1.12, io.buildah.version=1.41.4, batch=17.1_20251118.1, container_name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', 
'/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, io.openshift.expose-services=, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team) Dec 6 03:46:00 localhost systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: Deactivated successfully. Dec 6 03:46:00 localhost podman[85285]: 2025-12-06 08:46:00.089226458 +0000 UTC m=+0.235104421 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, distribution-scope=public, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-type=git, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, com.redhat.component=openstack-iscsid-container, architecture=x86_64) Dec 6 03:46:00 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully. 
Dec 6 03:46:00 localhost podman[85283]: 2025-12-06 08:46:00.039202828 +0000 UTC m=+0.190977251 container health_status 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., architecture=x86_64, config_id=tripleo_step4, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, managed_by=tripleo_ansible, container_name=nova_migration_target, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com) Dec 6 03:46:00 localhost podman[85281]: 2025-12-06 08:46:00.152776301 +0000 UTC m=+0.307784413 container health_status 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=logrotate_crond, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, architecture=x86_64, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, name=rhosp17/openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 6 03:46:00 localhost podman[85284]: 2025-12-06 08:46:00.199921403 +0000 UTC m=+0.348616862 container health_status a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp17/openstack-ceilometer-compute, url=https://www.redhat.com, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack 
Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, vcs-type=git, batch=17.1_20251118.1, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, distribution-scope=public, vendor=Red Hat, Inc., managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12) Dec 6 03:46:00 localhost 
podman[85281]: 2025-12-06 08:46:00.21387991 +0000 UTC m=+0.368888052 container exec_died 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, architecture=x86_64, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-cron-container, vcs-type=git, config_id=tripleo_step4, release=1761123044, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, batch=17.1_20251118.1, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:32Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, io.buildah.version=1.41.4, name=rhosp17/openstack-cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 6 03:46:00 localhost systemd[1]: 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.service: Deactivated successfully. Dec 6 03:46:00 localhost podman[85284]: 2025-12-06 08:46:00.260202666 +0000 UTC m=+0.408898135 container exec_died a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp17/openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_compute, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, version=17.1.12, io.buildah.version=1.41.4, vendor=Red Hat, Inc., vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=) Dec 6 03:46:00 localhost systemd[1]: a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.service: Deactivated successfully. 
Dec 6 03:46:00 localhost podman[85283]: 2025-12-06 08:46:00.412265736 +0000 UTC m=+0.564040209 container exec_died 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-type=git, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, distribution-scope=public, batch=17.1_20251118.1, release=1761123044) Dec 6 03:46:00 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: Deactivated successfully. Dec 6 03:46:00 localhost systemd[1]: tmp-crun.ywg8bs.mount: Deactivated successfully. Dec 6 03:46:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. Dec 6 03:46:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. Dec 6 03:46:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. Dec 6 03:46:04 localhost systemd[1]: tmp-crun.cI89pB.mount: Deactivated successfully. 
Dec 6 03:46:04 localhost podman[85418]: 2025-12-06 08:46:04.953053703 +0000 UTC m=+0.104399703 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_id=tripleo_step1, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-11-18T22:49:46Z, url=https://www.redhat.com, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, version=17.1.12, io.buildah.version=1.41.4) Dec 6 03:46:04 localhost podman[85419]: 2025-12-06 08:46:04.97648926 +0000 UTC m=+0.123077774 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, batch=17.1_20251118.1, vcs-type=git, release=1761123044, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.openshift.expose-services=, url=https://www.redhat.com) Dec 6 03:46:05 localhost podman[85417]: 2025-12-06 08:46:05.047919444 +0000 UTC m=+0.201982747 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, 
container_name=ovn_controller, io.buildah.version=1.41.4, vcs-type=git, vendor=Red Hat, Inc., config_id=tripleo_step4, batch=17.1_20251118.1, build-date=2025-11-18T23:34:05Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.expose-services=, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, tcib_managed=true, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, version=17.1.12, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, release=1761123044) Dec 6 03:46:05 localhost podman[85419]: 2025-12-06 08:46:05.055127545 +0000 UTC m=+0.201716009 container 
exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, build-date=2025-11-19T00:14:25Z, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, batch=17.1_20251118.1, url=https://www.redhat.com, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.12, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 6 03:46:05 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Deactivated successfully. 
Dec 6 03:46:05 localhost podman[85417]: 2025-12-06 08:46:05.098337266 +0000 UTC m=+0.252400539 container exec_died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, build-date=2025-11-18T23:34:05Z, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, url=https://www.redhat.com, name=rhosp17/openstack-ovn-controller, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, architecture=x86_64, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, vcs-type=git, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, 
io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc.) Dec 6 03:46:05 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Deactivated successfully. Dec 6 03:46:05 localhost podman[85418]: 2025-12-06 08:46:05.152337567 +0000 UTC m=+0.303683577 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., tcib_managed=true, build-date=2025-11-18T22:49:46Z, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step1, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, distribution-scope=public, container_name=metrics_qdr, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, batch=17.1_20251118.1, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, release=1761123044, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 6 03:46:05 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 03:46:05 localhost systemd[1]: tmp-crun.bft6bg.mount: Deactivated successfully. Dec 6 03:46:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007. Dec 6 03:46:09 localhost systemd[1]: tmp-crun.Kd077u.mount: Deactivated successfully. 
Dec 6 03:46:09 localhost podman[85493]: 2025-12-06 08:46:09.936075851 +0000 UTC m=+0.095500821 container health_status 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, container_name=nova_compute, url=https://www.redhat.com, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, batch=17.1_20251118.1, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5) Dec 6 03:46:09 localhost podman[85493]: 2025-12-06 08:46:09.971146864 +0000 UTC m=+0.130571854 container exec_died 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, managed_by=tripleo_ansible, tcib_managed=true, vcs-type=git, vendor=Red Hat, Inc., 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, config_id=tripleo_step5, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 6 03:46:09 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Deactivated successfully. Dec 6 03:46:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc. Dec 6 03:46:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. Dec 6 03:46:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b. Dec 6 03:46:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9. Dec 6 03:46:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. Dec 6 03:46:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6. Dec 6 03:46:30 localhost systemd[1]: tmp-crun.hxLiSj.mount: Deactivated successfully. 
Dec 6 03:46:30 localhost podman[85565]: 2025-12-06 08:46:30.957927297 +0000 UTC m=+0.110732047 container health_status 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, tcib_managed=true, name=rhosp17/openstack-cron, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, architecture=x86_64, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 6 03:46:30 localhost podman[85565]: 2025-12-06 08:46:30.964493368 +0000 UTC m=+0.117298058 container exec_died 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.component=openstack-cron-container, release=1761123044, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, version=17.1.12, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:32Z, managed_by=tripleo_ansible, architecture=x86_64, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}) Dec 6 03:46:30 localhost systemd[1]: 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.service: Deactivated successfully. 
Dec 6 03:46:31 localhost podman[85581]: 2025-12-06 08:46:31.011612788 +0000 UTC m=+0.148051438 container health_status b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, vendor=Red Hat, Inc., tcib_managed=true, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, 
managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, build-date=2025-11-19T00:12:45Z, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 6 03:46:31 localhost podman[85581]: 2025-12-06 08:46:31.045969599 +0000 UTC m=+0.182408299 container exec_died b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-ipmi, io.buildah.version=1.41.4, build-date=2025-11-19T00:12:45Z, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 
'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, architecture=x86_64, version=17.1.12, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=) Dec 6 03:46:31 localhost podman[85579]: 2025-12-06 08:46:31.05550082 +0000 UTC m=+0.193664312 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=iscsid, batch=17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, release=1761123044, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, vcs-type=git, tcib_managed=true, io.buildah.version=1.41.4, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, architecture=x86_64, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z) Dec 6 03:46:31 localhost systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: Deactivated successfully. 
Dec 6 03:46:31 localhost podman[85567]: 2025-12-06 08:46:31.106156089 +0000 UTC m=+0.254237195 container health_status 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_migration_target, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, io.openshift.tags=rhosp osp openstack 
osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, config_id=tripleo_step4, io.openshift.expose-services=) Dec 6 03:46:31 localhost podman[85566]: 2025-12-06 08:46:31.159615684 +0000 UTC m=+0.312069723 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, com.redhat.component=openstack-collectd-container, distribution-scope=public, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:51:28Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, name=rhosp17/openstack-collectd, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, config_id=tripleo_step3) Dec 6 03:46:31 localhost podman[85579]: 2025-12-06 08:46:31.171118406 +0000 UTC m=+0.309281948 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:44:13Z, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, container_name=iscsid, tcib_managed=true, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, distribution-scope=public, batch=17.1_20251118.1, release=1761123044, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, io.buildah.version=1.41.4, 
managed_by=tripleo_ansible, url=https://www.redhat.com) Dec 6 03:46:31 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully. Dec 6 03:46:31 localhost podman[85566]: 2025-12-06 08:46:31.222449026 +0000 UTC m=+0.374903045 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, vendor=Red Hat, Inc., name=rhosp17/openstack-collectd, version=17.1.12, io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container, batch=17.1_20251118.1, container_name=collectd, tcib_managed=true, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:51:28Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4) Dec 6 03:46:31 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully. 
Dec 6 03:46:31 localhost podman[85568]: 2025-12-06 08:46:31.309853339 +0000 UTC m=+0.455112479 container health_status a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp17/openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, architecture=x86_64, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, io.buildah.version=1.41.4, vcs-type=git, build-date=2025-11-19T00:11:48Z) Dec 6 03:46:31 localhost podman[85568]: 2025-12-06 08:46:31.360147537 +0000 UTC m=+0.505406707 container exec_died a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, batch=17.1_20251118.1, vcs-type=git, name=rhosp17/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, distribution-scope=public, version=17.1.12, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.openshift.expose-services=, vendor=Red Hat, Inc.) Dec 6 03:46:31 localhost systemd[1]: a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.service: Deactivated successfully. 
Dec 6 03:46:31 localhost podman[85567]: 2025-12-06 08:46:31.565355972 +0000 UTC m=+0.713437048 container exec_died 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vendor=Red Hat, Inc., architecture=x86_64, container_name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, 
distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, version=17.1.12, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 6 03:46:31 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: Deactivated successfully. Dec 6 03:46:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. Dec 6 03:46:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. Dec 6 03:46:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. Dec 6 03:46:35 localhost systemd[1]: tmp-crun.i4oDh0.mount: Deactivated successfully. 
Dec 6 03:46:35 localhost podman[85698]: 2025-12-06 08:46:35.936106037 +0000 UTC m=+0.092988875 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, distribution-scope=public, container_name=ovn_metadata_agent, vendor=Red Hat, Inc., managed_by=tripleo_ansible, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, vcs-type=git, architecture=x86_64, version=17.1.12, maintainer=OpenStack TripleO Team) Dec 6 03:46:35 localhost podman[85697]: 2025-12-06 08:46:35.983941519 +0000 UTC m=+0.140568659 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, config_id=tripleo_step1, managed_by=tripleo_ansible, io.buildah.version=1.41.4, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.k8s.description=Red 
Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-qdrouterd, distribution-scope=public, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, version=17.1.12, vendor=Red Hat, Inc.) 
Dec 6 03:46:35 localhost podman[85698]: 2025-12-06 08:46:35.990208012 +0000 UTC m=+0.147090840 container exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, architecture=x86_64, build-date=2025-11-19T00:14:25Z, url=https://www.redhat.com, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-neutron-metadata-agent-ovn) Dec 6 03:46:36 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Deactivated successfully. 
Dec 6 03:46:36 localhost podman[85696]: 2025-12-06 08:46:36.070883338 +0000 UTC m=+0.230896221 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, build-date=2025-11-18T23:34:05Z, name=rhosp17/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.buildah.version=1.41.4, vendor=Red Hat, Inc., container_name=ovn_controller, version=17.1.12, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, 
distribution-scope=public, url=https://www.redhat.com, config_id=tripleo_step4, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044) Dec 6 03:46:36 localhost podman[85696]: 2025-12-06 08:46:36.097553854 +0000 UTC m=+0.257566757 container exec_died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.expose-services=, batch=17.1_20251118.1, release=1761123044, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, version=17.1.12, build-date=2025-11-18T23:34:05Z, io.buildah.version=1.41.4, config_id=tripleo_step4, vcs-type=git, 
vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller) Dec 6 03:46:36 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Deactivated successfully. Dec 6 03:46:36 localhost podman[85697]: 2025-12-06 08:46:36.205380821 +0000 UTC m=+0.362007981 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-11-18T22:49:46Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, release=1761123044, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, version=17.1.12, container_name=metrics_qdr, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., batch=17.1_20251118.1, distribution-scope=public, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true) Dec 6 03:46:36 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 03:46:36 localhost systemd[1]: tmp-crun.jL8H3R.mount: Deactivated successfully. Dec 6 03:46:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007. 
Dec 6 03:46:40 localhost podman[85772]: 2025-12-06 08:46:40.924026804 +0000 UTC m=+0.083948198 container health_status 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vendor=Red Hat, Inc., architecture=x86_64, config_id=tripleo_step5, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, release=1761123044, managed_by=tripleo_ansible, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, io.openshift.expose-services=, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 6 03:46:40 localhost podman[85772]: 2025-12-06 08:46:40.954197206 +0000 UTC m=+0.114118560 container exec_died 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, tcib_managed=true, container_name=nova_compute, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step5, io.openshift.expose-services=, release=1761123044, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, url=https://www.redhat.com, vcs-type=git, managed_by=tripleo_ansible, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}) Dec 6 03:46:40 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Deactivated successfully. Dec 6 03:47:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc. Dec 6 03:47:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. Dec 6 03:47:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b. Dec 6 03:47:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9. Dec 6 03:47:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. Dec 6 03:47:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6. Dec 6 03:47:01 localhost systemd[1]: tmp-crun.uzvnxT.mount: Deactivated successfully. 
Dec 6 03:47:01 localhost podman[85877]: 2025-12-06 08:47:01.989251935 +0000 UTC m=+0.145325965 container health_status 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, distribution-scope=public, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, url=https://www.redhat.com, release=1761123044, container_name=nova_migration_target, io.openshift.expose-services=, vcs-type=git) Dec 6 03:47:01 localhost podman[85876]: 2025-12-06 08:47:01.939261086 +0000 UTC m=+0.096823851 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, url=https://www.redhat.com, build-date=2025-11-18T22:51:28Z, vcs-type=git, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=collectd, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, vendor=Red Hat, Inc., name=rhosp17/openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd) Dec 6 03:47:02 localhost podman[85896]: 2025-12-06 08:47:01.95540947 +0000 UTC m=+0.100002549 container health_status b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, build-date=2025-11-19T00:12:45Z, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, name=rhosp17/openstack-ceilometer-ipmi, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, managed_by=tripleo_ansible, io.buildah.version=1.41.4, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Dec 6 03:47:02 localhost podman[85876]: 2025-12-06 08:47:02.022159722 +0000 UTC m=+0.179722577 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:51:28Z, name=rhosp17/openstack-collectd, container_name=collectd, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, batch=17.1_20251118.1, config_id=tripleo_step3, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, version=17.1.12, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true) Dec 6 03:47:02 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully. 
Dec 6 03:47:02 localhost podman[85875]: 2025-12-06 08:47:02.035218481 +0000 UTC m=+0.194598792 container health_status 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.component=openstack-cron-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, io.openshift.expose-services=, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, vcs-type=git, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, build-date=2025-11-18T22:49:32Z) Dec 6 03:47:02 localhost podman[85896]: 2025-12-06 08:47:02.03810865 +0000 UTC m=+0.182701669 container exec_died b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.buildah.version=1.41.4, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, tcib_managed=true, architecture=x86_64, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, managed_by=tripleo_ansible, release=1761123044, version=17.1.12, vendor=Red Hat, Inc.) Dec 6 03:47:02 localhost systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: Deactivated successfully. 
Dec 6 03:47:02 localhost podman[85884]: 2025-12-06 08:47:02.100507978 +0000 UTC m=+0.245980314 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, architecture=x86_64, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=iscsid, vendor=Red Hat, Inc., build-date=2025-11-18T23:44:13Z, maintainer=OpenStack TripleO Team, release=1761123044, tcib_managed=true, config_id=tripleo_step3, managed_by=tripleo_ansible, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 6 03:47:02 localhost podman[85875]: 2025-12-06 08:47:02.121984935 +0000 UTC m=+0.281365296 container exec_died 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, managed_by=tripleo_ansible, io.openshift.expose-services=, vcs-type=git, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, config_id=tripleo_step4, container_name=logrotate_crond, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., tcib_managed=true, com.redhat.component=openstack-cron-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, url=https://www.redhat.com, build-date=2025-11-18T22:49:32Z) Dec 6 03:47:02 localhost podman[85878]: 2025-12-06 08:47:01.968616384 +0000 UTC m=+0.117626748 container health_status a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, build-date=2025-11-19T00:11:48Z, architecture=x86_64, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, io.buildah.version=1.41.4, distribution-scope=public, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., managed_by=tripleo_ansible, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 6 03:47:02 localhost systemd[1]: 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.service: Deactivated 
successfully. Dec 6 03:47:02 localhost podman[85884]: 2025-12-06 08:47:02.138040295 +0000 UTC m=+0.283512641 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, release=1761123044, config_id=tripleo_step3, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.4, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, 
distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://www.redhat.com) Dec 6 03:47:02 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully. Dec 6 03:47:02 localhost podman[85878]: 2025-12-06 08:47:02.204337943 +0000 UTC m=+0.353348317 container exec_died a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, vcs-type=git, distribution-scope=public, url=https://www.redhat.com, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, version=17.1.12, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container) Dec 6 03:47:02 localhost systemd[1]: a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.service: Deactivated successfully. 
Dec 6 03:47:02 localhost podman[85877]: 2025-12-06 08:47:02.353196394 +0000 UTC m=+0.509270424 container exec_died 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, io.openshift.expose-services=, url=https://www.redhat.com, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.buildah.version=1.41.4, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, version=17.1.12, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 6 03:47:02 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: Deactivated successfully. Dec 6 03:47:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. Dec 6 03:47:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. Dec 6 03:47:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. 
Dec 6 03:47:06 localhost podman[86004]: 2025-12-06 08:47:06.920465959 +0000 UTC m=+0.080040629 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, build-date=2025-11-18T23:34:05Z, batch=17.1_20251118.1, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, release=1761123044, url=https://www.redhat.com, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, io.buildah.version=1.41.4, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 ovn-controller, distribution-scope=public, version=17.1.12, name=rhosp17/openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller) Dec 6 03:47:06 localhost systemd[1]: tmp-crun.xtWjUi.mount: Deactivated successfully. Dec 6 03:47:06 localhost podman[86004]: 2025-12-06 08:47:06.970059445 +0000 UTC m=+0.129634065 container exec_died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, io.buildah.version=1.41.4, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, tcib_managed=true, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, build-date=2025-11-18T23:34:05Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', 
'/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp17/openstack-ovn-controller, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1) Dec 6 03:47:06 localhost podman[86005]: 2025-12-06 08:47:06.97546032 +0000 UTC m=+0.134678219 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:46Z, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, config_id=tripleo_step1, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.4, url=https://www.redhat.com, tcib_managed=true, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 
1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 6 03:47:06 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Deactivated successfully. 
Dec 6 03:47:07 localhost podman[86006]: 2025-12-06 08:47:07.027954906 +0000 UTC m=+0.182203393 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, config_id=tripleo_step4, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.buildah.version=1.41.4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, managed_by=tripleo_ansible, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, tcib_managed=true, architecture=x86_64, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent) Dec 6 03:47:07 localhost podman[86006]: 2025-12-06 08:47:07.094215372 +0000 UTC m=+0.248463859 container exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_metadata_agent, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, version=17.1.12, url=https://www.redhat.com, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, io.buildah.version=1.41.4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, architecture=x86_64, build-date=2025-11-19T00:14:25Z) Dec 6 03:47:07 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Deactivated successfully. Dec 6 03:47:07 localhost podman[86005]: 2025-12-06 08:47:07.158016842 +0000 UTC m=+0.317234721 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-qdrouterd, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, build-date=2025-11-18T22:49:46Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, distribution-scope=public, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-type=git, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 6 03:47:07 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 03:47:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007. 
Dec 6 03:47:11 localhost podman[86079]: 2025-12-06 08:47:11.914841633 +0000 UTC m=+0.071304861 container health_status 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.buildah.version=1.41.4, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, batch=17.1_20251118.1, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, build-date=2025-11-19T00:36:58Z, container_name=nova_compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step5, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044) Dec 6 03:47:11 localhost podman[86079]: 2025-12-06 08:47:11.974288791 +0000 UTC m=+0.130752039 container exec_died 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, maintainer=OpenStack TripleO Team, release=1761123044, vcs-type=git, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, architecture=x86_64, config_data={'depends_on': 
['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
tcib_managed=true, version=17.1.12, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, io.openshift.expose-services=, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, url=https://www.redhat.com, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 6 03:47:11 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Deactivated successfully. Dec 6 03:47:16 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 6 03:47:16 localhost recover_tripleo_nova_virtqemud[86106]: 61814 Dec 6 03:47:16 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 6 03:47:16 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 6 03:47:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc. Dec 6 03:47:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. Dec 6 03:47:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b. Dec 6 03:47:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9. Dec 6 03:47:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. Dec 6 03:47:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6. 
Dec 6 03:47:32 localhost podman[86153]: 2025-12-06 08:47:32.961227571 +0000 UTC m=+0.107596711 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.buildah.version=1.41.4, tcib_managed=true, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, release=1761123044, url=https://www.redhat.com, name=rhosp17/openstack-collectd, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, vcs-type=git, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, batch=17.1_20251118.1, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd) Dec 6 03:47:33 localhost podman[86154]: 2025-12-06 08:47:33.003443062 +0000 UTC m=+0.143554811 container health_status 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.buildah.version=1.41.4, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, tcib_managed=true, vcs-type=git, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, config_id=tripleo_step4, io.openshift.expose-services=, release=1761123044, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 6 03:47:33 localhost podman[86152]: 2025-12-06 08:47:33.050999646 +0000 UTC m=+0.197462309 container health_status 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-cron, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, 
com.redhat.component=openstack-cron-container, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, batch=17.1_20251118.1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, container_name=logrotate_crond, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, build-date=2025-11-18T22:49:32Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 
17.1_20251118.1) Dec 6 03:47:33 localhost podman[86152]: 2025-12-06 08:47:33.063093826 +0000 UTC m=+0.209556529 container exec_died 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, io.openshift.expose-services=, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, architecture=x86_64, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-cron, distribution-scope=public, build-date=2025-11-18T22:49:32Z, version=17.1.12, io.buildah.version=1.41.4) Dec 6 03:47:33 localhost systemd[1]: 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.service: Deactivated successfully. Dec 6 03:47:33 localhost podman[86165]: 2025-12-06 08:47:33.112736774 +0000 UTC m=+0.244084325 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.buildah.version=1.41.4, batch=17.1_20251118.1, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp17/openstack-iscsid, distribution-scope=public, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, config_id=tripleo_step3, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-iscsid-container, version=17.1.12, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044) Dec 6 03:47:33 localhost podman[86165]: 2025-12-06 08:47:33.122664588 +0000 UTC m=+0.254012129 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.component=openstack-iscsid-container, architecture=x86_64, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, 
container_name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, managed_by=tripleo_ansible, vcs-type=git, version=17.1.12, distribution-scope=public, config_id=tripleo_step3, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, build-date=2025-11-18T23:44:13Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, batch=17.1_20251118.1) Dec 6 03:47:33 localhost podman[86153]: 2025-12-06 08:47:33.142002749 +0000 UTC m=+0.288371899 
container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, name=rhosp17/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, container_name=collectd, batch=17.1_20251118.1, url=https://www.redhat.com, io.openshift.expose-services=, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, config_id=tripleo_step3, vcs-type=git) Dec 6 03:47:33 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully. Dec 6 03:47:33 localhost podman[86155]: 2025-12-06 08:47:33.162492196 +0000 UTC m=+0.301089478 container health_status a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:11:48Z, managed_by=tripleo_ansible, architecture=x86_64, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., config_id=tripleo_step4, io.buildah.version=1.41.4, distribution-scope=public, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, url=https://www.redhat.com) Dec 6 03:47:33 localhost podman[86155]: 2025-12-06 08:47:33.202446827 +0000 UTC m=+0.341044159 container exec_died a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, managed_by=tripleo_ansible, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, batch=17.1_20251118.1, architecture=x86_64, config_id=tripleo_step4, version=17.1.12, release=1761123044, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:11:48Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.41.4, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676) Dec 6 03:47:33 localhost systemd[1]: a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.service: Deactivated successfully. Dec 6 03:47:33 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully. Dec 6 03:47:33 localhost podman[86170]: 2025-12-06 08:47:33.125327789 +0000 UTC m=+0.250526312 container health_status b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, url=https://www.redhat.com, distribution-scope=public, release=1761123044, build-date=2025-11-19T00:12:45Z, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.expose-services=, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, version=17.1.12, container_name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi) Dec 6 03:47:33 localhost podman[86170]: 2025-12-06 08:47:33.308203901 +0000 UTC m=+0.433402434 container exec_died b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, batch=17.1_20251118.1, distribution-scope=public, version=17.1.12, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-ipmi-container, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack 
osp-17.1 openstack-ceilometer-ipmi, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z) Dec 6 03:47:33 localhost systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: Deactivated 
successfully. Dec 6 03:47:33 localhost podman[86154]: 2025-12-06 08:47:33.35918257 +0000 UTC m=+0.499294249 container exec_died 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, version=17.1.12, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., url=https://www.redhat.com, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, tcib_managed=true) Dec 6 03:47:33 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: Deactivated successfully. Dec 6 03:47:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. Dec 6 03:47:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. Dec 6 03:47:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. Dec 6 03:47:37 localhost systemd[1]: tmp-crun.Vya2b5.mount: Deactivated successfully. 
Dec 6 03:47:37 localhost podman[86285]: 2025-12-06 08:47:37.922302126 +0000 UTC m=+0.084402461 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, release=1761123044, config_id=tripleo_step4, version=17.1.12, tcib_managed=true, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:14:25Z, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, url=https://www.redhat.com) Dec 6 03:47:37 localhost systemd[1]: tmp-crun.z7fXYI.mount: Deactivated successfully. 
Dec 6 03:47:37 localhost podman[86283]: 2025-12-06 08:47:37.944605758 +0000 UTC m=+0.104430634 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, architecture=x86_64, distribution-scope=public, build-date=2025-11-18T23:34:05Z, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., vcs-type=git, maintainer=OpenStack TripleO Team, container_name=ovn_controller, config_id=tripleo_step4, managed_by=tripleo_ansible, version=17.1.12, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, release=1761123044, name=rhosp17/openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 6 03:47:37 localhost podman[86285]: 2025-12-06 08:47:37.99435426 +0000 UTC m=+0.156454535 container exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, tcib_managed=true, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, managed_by=tripleo_ansible, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4) Dec 6 03:47:38 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Deactivated successfully. 
Dec 6 03:47:38 localhost podman[86284]: 2025-12-06 08:47:38.008522973 +0000 UTC m=+0.171170285 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-type=git, tcib_managed=true, config_id=tripleo_step1, build-date=2025-11-18T22:49:46Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, version=17.1.12) Dec 6 03:47:38 localhost podman[86283]: 2025-12-06 08:47:38.045329549 +0000 UTC m=+0.205154435 container exec_died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
build-date=2025-11-18T23:34:05Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, vcs-type=git, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, config_id=tripleo_step4, version=17.1.12, container_name=ovn_controller, io.buildah.version=1.41.4, architecture=x86_64, release=1761123044, url=https://www.redhat.com) Dec 6 03:47:38 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Deactivated successfully. Dec 6 03:47:38 localhost podman[86284]: 2025-12-06 08:47:38.215135562 +0000 UTC m=+0.377782814 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, tcib_managed=true, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, url=https://www.redhat.com, release=1761123044, container_name=metrics_qdr, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, config_id=tripleo_step1, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=) Dec 6 03:47:38 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 03:47:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007. 
Dec 6 03:47:42 localhost podman[86356]: 2025-12-06 08:47:42.919786457 +0000 UTC m=+0.081714330 container health_status 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, build-date=2025-11-19T00:36:58Z, container_name=nova_compute, vcs-type=git, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step5, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, distribution-scope=public, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 
'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}) Dec 6 03:47:42 localhost podman[86356]: 2025-12-06 08:47:42.974999405 +0000 UTC m=+0.136927258 container exec_died 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_compute, tcib_managed=true, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, config_id=tripleo_step5, maintainer=OpenStack 
TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, release=1761123044, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Dec 6 03:47:42 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Deactivated successfully. Dec 6 03:48:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc. Dec 6 03:48:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. Dec 6 03:48:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b. Dec 6 03:48:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9. Dec 6 03:48:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. Dec 6 03:48:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6. 
Dec 6 03:48:03 localhost podman[86460]: 2025-12-06 08:48:03.948721119 +0000 UTC m=+0.096720079 container health_status 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, managed_by=tripleo_ansible, batch=17.1_20251118.1, tcib_managed=true, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, container_name=logrotate_crond, release=1761123044, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
cron, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, vendor=Red Hat, Inc., io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, url=https://www.redhat.com, vcs-type=git, com.redhat.component=openstack-cron-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 6 03:48:03 localhost podman[86460]: 2025-12-06 08:48:03.985155413 +0000 UTC m=+0.133154333 container exec_died 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, name=rhosp17/openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.openshift.expose-services=, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 
'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, url=https://www.redhat.com, vcs-type=git, vendor=Red Hat, Inc., batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_id=tripleo_step4, build-date=2025-11-18T22:49:32Z, managed_by=tripleo_ansible, tcib_managed=true) Dec 6 03:48:04 localhost systemd[1]: tmp-crun.GyAkv1.mount: Deactivated successfully. Dec 6 03:48:04 localhost systemd[1]: 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.service: Deactivated successfully. 
Dec 6 03:48:04 localhost podman[86462]: 2025-12-06 08:48:04.003209365 +0000 UTC m=+0.148323627 container health_status 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, distribution-scope=public, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, build-date=2025-11-19T00:36:58Z, architecture=x86_64, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible) Dec 6 03:48:04 localhost podman[86461]: 2025-12-06 08:48:04.050305645 +0000 UTC m=+0.198282214 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=collectd, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, vcs-type=git, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp17/openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, build-date=2025-11-18T22:51:28Z, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, version=17.1.12, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, release=1761123044, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com) Dec 6 03:48:04 localhost podman[86461]: 2025-12-06 08:48:04.060039523 +0000 UTC m=+0.208016032 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 
'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, tcib_managed=true, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, container_name=collectd, vcs-type=git, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:51:28Z, 
url=https://www.redhat.com, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 6 03:48:04 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully. Dec 6 03:48:04 localhost podman[86480]: 2025-12-06 08:48:04.101773089 +0000 UTC m=+0.239128503 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com, io.buildah.version=1.41.4, tcib_managed=true, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', 
'/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-iscsid, build-date=2025-11-18T23:44:13Z, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, io.openshift.expose-services=, vendor=Red Hat, Inc., release=1761123044, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.12, architecture=x86_64, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team) Dec 6 03:48:04 localhost podman[86480]: 2025-12-06 08:48:04.109714422 +0000 UTC m=+0.247069826 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, 
org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, vendor=Red Hat, Inc., version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:44:13Z, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, container_name=iscsid, config_id=tripleo_step3, managed_by=tripleo_ansible, release=1761123044, com.redhat.component=openstack-iscsid-container) Dec 6 03:48:04 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully. 
Dec 6 03:48:04 localhost podman[86465]: 2025-12-06 08:48:04.159827485 +0000 UTC m=+0.294794626 container health_status a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.4, url=https://www.redhat.com, distribution-scope=public, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, managed_by=tripleo_ansible, version=17.1.12, build-date=2025-11-19T00:11:48Z, vcs-type=git, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack 
Platform 17.1 ceilometer-compute, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, com.redhat.component=openstack-ceilometer-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 6 03:48:04 localhost podman[86465]: 2025-12-06 08:48:04.200117786 +0000 UTC m=+0.335084867 container exec_died a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, url=https://www.redhat.com, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, build-date=2025-11-19T00:11:48Z, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4) Dec 6 03:48:04 localhost systemd[1]: a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.service: Deactivated successfully. 
Dec 6 03:48:04 localhost podman[86481]: 2025-12-06 08:48:04.216405734 +0000 UTC m=+0.343436792 container health_status b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, tcib_managed=true, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, config_id=tripleo_step4, build-date=2025-11-19T00:12:45Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64) Dec 6 03:48:04 localhost podman[86481]: 2025-12-06 08:48:04.275274445 +0000 UTC m=+0.402305453 container exec_died b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 
'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, architecture=x86_64, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.4, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team) Dec 6 03:48:04 localhost systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: Deactivated successfully. 
Dec 6 03:48:04 localhost podman[86462]: 2025-12-06 08:48:04.367825995 +0000 UTC m=+0.512940217 container exec_died 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, architecture=x86_64, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, version=17.1.12, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.openshift.expose-services=, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, container_name=nova_migration_target) Dec 6 03:48:04 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: Deactivated successfully. Dec 6 03:48:04 localhost sshd[86591]: main: sshd: ssh-rsa algorithm is disabled Dec 6 03:48:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. Dec 6 03:48:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. Dec 6 03:48:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. 
Dec 6 03:48:08 localhost podman[86595]: 2025-12-06 08:48:08.951742378 +0000 UTC m=+0.089333893 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.4, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1761123044, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:14:25Z, container_name=ovn_metadata_agent, url=https://www.redhat.com, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 6 03:48:08 localhost podman[86595]: 2025-12-06 08:48:08.988484672 +0000 UTC m=+0.126076207 container exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, release=1761123044, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, vendor=Red Hat, Inc., 
vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, url=https://www.redhat.com, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, version=17.1.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, distribution-scope=public, build-date=2025-11-19T00:14:25Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, 
io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team) Dec 6 03:48:09 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Deactivated successfully. Dec 6 03:48:09 localhost podman[86594]: 2025-12-06 08:48:09.010182515 +0000 UTC m=+0.150037019 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, release=1761123044, distribution-scope=public, io.buildah.version=1.41.4, vcs-type=git, io.openshift.expose-services=, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 6 03:48:09 localhost podman[86593]: 2025-12-06 08:48:08.985070437 +0000 UTC m=+0.130115440 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, config_id=tripleo_step4, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, batch=17.1_20251118.1, io.openshift.expose-services=, release=1761123044, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, container_name=ovn_controller, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-ovn-controller, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:34:05Z, url=https://www.redhat.com, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4) Dec 6 03:48:09 localhost podman[86593]: 2025-12-06 08:48:09.063830536 +0000 UTC m=+0.208875509 container exec_died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, batch=17.1_20251118.1, vendor=Red Hat, Inc., vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 
'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, container_name=ovn_controller, io.openshift.expose-services=, build-date=2025-11-18T23:34:05Z, name=rhosp17/openstack-ovn-controller, release=1761123044, version=17.1.12, architecture=x86_64) Dec 6 03:48:09 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Deactivated successfully. 
Dec 6 03:48:09 localhost podman[86594]: 2025-12-06 08:48:09.204199898 +0000 UTC m=+0.344054322 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, container_name=metrics_qdr, distribution-scope=public, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:46Z, vcs-type=git, architecture=x86_64, description=Red Hat OpenStack 
Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd) Dec 6 03:48:09 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 03:48:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007. Dec 6 03:48:13 localhost podman[86670]: 2025-12-06 08:48:13.969572312 +0000 UTC m=+0.085303500 container health_status 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 
'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, vendor=Red Hat, Inc., io.openshift.expose-services=, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, architecture=x86_64) Dec 6 03:48:14 localhost podman[86670]: 2025-12-06 08:48:14.027242475 +0000 UTC m=+0.142973663 container exec_died 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vendor=Red Hat, Inc., container_name=nova_compute, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, config_id=tripleo_step5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', 
'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, release=1761123044, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 6 03:48:14 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Deactivated successfully. Dec 6 03:48:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc. Dec 6 03:48:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. Dec 6 03:48:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b. 
Dec 6 03:48:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9. Dec 6 03:48:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. Dec 6 03:48:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6. Dec 6 03:48:34 localhost systemd[1]: tmp-crun.6xdcGY.mount: Deactivated successfully. Dec 6 03:48:34 localhost podman[86757]: 2025-12-06 08:48:34.944421 +0000 UTC m=+0.083208896 container health_status b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-ipmi, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, url=https://www.redhat.com, tcib_managed=true, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676) Dec 6 03:48:35 localhost podman[86757]: 2025-12-06 08:48:35.0062249 +0000 UTC m=+0.145012796 container exec_died b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_id=tripleo_step4, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, build-date=2025-11-19T00:12:45Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64) Dec 6 03:48:35 localhost systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: Deactivated successfully. 
Dec 6 03:48:35 localhost podman[86743]: 2025-12-06 08:48:35.047396109 +0000 UTC m=+0.197225043 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.expose-services=, tcib_managed=true, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, batch=17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.4, config_id=tripleo_step3, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, container_name=collectd) Dec 6 03:48:35 localhost podman[86751]: 2025-12-06 08:48:35.007459518 +0000 UTC m=+0.147109500 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:44:13Z, architecture=x86_64, url=https://www.redhat.com, io.buildah.version=1.41.4, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1761123044, tcib_managed=true, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid) Dec 6 03:48:35 localhost podman[86751]: 2025-12-06 08:48:35.08767702 +0000 UTC m=+0.227326962 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, architecture=x86_64, container_name=iscsid, build-date=2025-11-18T23:44:13Z, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, batch=17.1_20251118.1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, release=1761123044, version=17.1.12, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 6 03:48:35 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully. Dec 6 03:48:35 localhost podman[86744]: 2025-12-06 08:48:35.108471827 +0000 UTC m=+0.254185804 container health_status 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, version=17.1.12, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, release=1761123044, config_id=tripleo_step4, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Dec 6 03:48:35 localhost podman[86743]: 2025-12-06 08:48:35.136541515 +0000 UTC m=+0.286370439 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2025-11-18T22:51:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.buildah.version=1.41.4, version=17.1.12, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, com.redhat.component=openstack-collectd-container, container_name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible) Dec 6 03:48:35 localhost podman[86742]: 2025-12-06 08:48:35.147477609 +0000 UTC m=+0.300376047 container health_status 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, batch=17.1_20251118.1, url=https://www.redhat.com, name=rhosp17/openstack-cron, container_name=logrotate_crond, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, version=17.1.12, 
io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:32Z, maintainer=OpenStack TripleO Team, architecture=x86_64, managed_by=tripleo_ansible) Dec 6 03:48:35 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully. Dec 6 03:48:35 localhost podman[86742]: 2025-12-06 08:48:35.182672585 +0000 UTC m=+0.335570993 container exec_died 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, name=rhosp17/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, tcib_managed=true, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, summary=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, url=https://www.redhat.com, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, build-date=2025-11-18T22:49:32Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 6 03:48:35 localhost systemd[1]: 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.service: Deactivated successfully. 
Dec 6 03:48:35 localhost podman[86746]: 2025-12-06 08:48:35.198149509 +0000 UTC m=+0.339609247 container health_status a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, managed_by=tripleo_ansible, io.buildah.version=1.41.4, build-date=2025-11-19T00:11:48Z, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, batch=17.1_20251118.1, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, container_name=ceilometer_agent_compute, release=1761123044, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container) Dec 6 03:48:35 localhost podman[86746]: 2025-12-06 08:48:35.253184921 +0000 UTC m=+0.394644649 container exec_died a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., architecture=x86_64, name=rhosp17/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, config_id=tripleo_step4, tcib_managed=true, batch=17.1_20251118.1, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, version=17.1.12, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, 
distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, release=1761123044, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676) Dec 6 03:48:35 localhost systemd[1]: a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.service: Deactivated successfully. 
Dec 6 03:48:35 localhost podman[86744]: 2025-12-06 08:48:35.515377149 +0000 UTC m=+0.661091166 container exec_died 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., architecture=x86_64, version=17.1.12, com.redhat.component=openstack-nova-compute-container, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, name=rhosp17/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red 
Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, release=1761123044, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1) Dec 6 03:48:35 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: Deactivated successfully. Dec 6 03:48:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. Dec 6 03:48:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. Dec 6 03:48:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. Dec 6 03:48:39 localhost systemd[1]: tmp-crun.jifAer.mount: Deactivated successfully. 
Dec 6 03:48:39 localhost podman[86876]: 2025-12-06 08:48:39.935629588 +0000 UTC m=+0.096063490 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vcs-type=git, build-date=2025-11-18T23:34:05Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, io.buildah.version=1.41.4, name=rhosp17/openstack-ovn-controller, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, io.openshift.expose-services=, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
release=1761123044, tcib_managed=true, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4) Dec 6 03:48:39 localhost podman[86876]: 2025-12-06 08:48:39.987180584 +0000 UTC m=+0.147614456 container exec_died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, release=1761123044, vendor=Red Hat, Inc., tcib_managed=true, url=https://www.redhat.com, config_id=tripleo_step4, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack 
Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64) Dec 6 03:48:40 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Deactivated successfully. Dec 6 03:48:40 localhost podman[86877]: 2025-12-06 08:48:39.98704699 +0000 UTC m=+0.142606722 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, maintainer=OpenStack TripleO Team, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, config_id=tripleo_step1, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 
'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:46Z, release=1761123044, architecture=x86_64, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd) Dec 6 03:48:40 localhost podman[86878]: 2025-12-06 08:48:40.08384272 +0000 UTC m=+0.237111022 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.41.4, architecture=x86_64, build-date=2025-11-19T00:14:25Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, release=1761123044, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, config_id=tripleo_step4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}) Dec 6 03:48:40 localhost podman[86878]: 2025-12-06 08:48:40.12507131 +0000 UTC m=+0.278339622 container exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, vcs-type=git, url=https://www.redhat.com, io.openshift.expose-services=, batch=17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, build-date=2025-11-19T00:14:25Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, release=1761123044, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}) Dec 6 03:48:40 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Deactivated successfully. 
Dec 6 03:48:40 localhost podman[86877]: 2025-12-06 08:48:40.177549035 +0000 UTC m=+0.333108757 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, vcs-type=git, architecture=x86_64, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, managed_by=tripleo_ansible, distribution-scope=public, tcib_managed=true, url=https://www.redhat.com, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4) Dec 6 03:48:40 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 03:48:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007. Dec 6 03:48:44 localhost systemd[1]: tmp-crun.kPIxE1.mount: Deactivated successfully. 
Dec 6 03:48:44 localhost podman[86952]: 2025-12-06 08:48:44.946539739 +0000 UTC m=+0.103026840 container health_status 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, container_name=nova_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step5, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, url=https://www.redhat.com, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.openshift.expose-services=, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true) Dec 6 03:48:44 localhost podman[86952]: 2025-12-06 08:48:44.979854518 +0000 UTC m=+0.136341659 container exec_died 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, maintainer=OpenStack TripleO Team, 
vcs-type=git, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, distribution-scope=public, release=1761123044, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com) Dec 6 03:48:44 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Deactivated successfully. Dec 6 03:49:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc. Dec 6 03:49:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. Dec 6 03:49:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b. Dec 6 03:49:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9. Dec 6 03:49:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. Dec 6 03:49:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6. 
Dec 6 03:49:05 localhost podman[87055]: 2025-12-06 08:49:05.958829493 +0000 UTC m=+0.106845268 container health_status 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, managed_by=tripleo_ansible, batch=17.1_20251118.1, name=rhosp17/openstack-cron, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cron, 
io.openshift.expose-services=, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, distribution-scope=public) Dec 6 03:49:05 localhost podman[87058]: 2025-12-06 08:49:05.968933833 +0000 UTC m=+0.094098559 container health_status a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, managed_by=tripleo_ansible, build-date=2025-11-19T00:11:48Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, version=17.1.12, release=1761123044, container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676) Dec 6 03:49:05 localhost podman[87055]: 2025-12-06 08:49:05.998133635 +0000 UTC m=+0.146149400 container exec_died 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, vcs-type=git, release=1761123044, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, url=https://www.redhat.com, container_name=logrotate_crond, build-date=2025-11-18T22:49:32Z, io.buildah.version=1.41.4, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 6 03:49:06 localhost systemd[1]: 
04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.service: Deactivated successfully. Dec 6 03:49:06 localhost podman[87056]: 2025-12-06 08:49:06.017481947 +0000 UTC m=+0.165547714 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', 
'/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2025-11-18T22:51:28Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, release=1761123044, version=17.1.12, name=rhosp17/openstack-collectd, architecture=x86_64, container_name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., vcs-type=git, config_id=tripleo_step3, distribution-scope=public) Dec 6 03:49:06 localhost podman[87058]: 2025-12-06 08:49:06.048269098 +0000 UTC m=+0.173433814 container exec_died a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, distribution-scope=public, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, version=17.1.12, build-date=2025-11-19T00:11:48Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, io.openshift.expose-services=, vcs-type=git, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_compute) Dec 6 03:49:06 localhost podman[87057]: 2025-12-06 08:49:06.054765976 +0000 UTC m=+0.201469791 container health_status 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vendor=Red Hat, Inc., batch=17.1_20251118.1, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, release=1761123044, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, container_name=nova_migration_target, version=17.1.12, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, managed_by=tripleo_ansible) Dec 6 03:49:06 localhost systemd[1]: a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.service: Deactivated successfully. Dec 6 03:49:06 localhost podman[87077]: 2025-12-06 08:49:06.110818971 +0000 UTC m=+0.245403445 container health_status b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, name=rhosp17/openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, io.openshift.expose-services=, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, version=17.1.12, architecture=x86_64, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 6 03:49:06 localhost podman[87056]: 2025-12-06 08:49:06.133410801 +0000 UTC m=+0.281476568 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.component=openstack-collectd-container, distribution-scope=public, name=rhosp17/openstack-collectd, url=https://www.redhat.com, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, io.buildah.version=1.41.4, release=1761123044, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, batch=17.1_20251118.1, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, managed_by=tripleo_ansible, build-date=2025-11-18T22:51:28Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 6 03:49:06 localhost systemd[1]: 
2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully. Dec 6 03:49:06 localhost podman[87077]: 2025-12-06 08:49:06.185175265 +0000 UTC m=+0.319759689 container exec_died b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, url=https://www.redhat.com, batch=17.1_20251118.1, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:12:45Z, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-ipmi, io.buildah.version=1.41.4, release=1761123044, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., vcs-type=git, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 6 03:49:06 localhost systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: Deactivated successfully. Dec 6 03:49:06 localhost podman[87059]: 2025-12-06 08:49:06.269934436 +0000 UTC m=+0.402270521 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:44:13Z, com.redhat.component=openstack-iscsid-container, batch=17.1_20251118.1, name=rhosp17/openstack-iscsid, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, url=https://www.redhat.com, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, 
io.openshift.expose-services=, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, tcib_managed=true, container_name=iscsid, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git) Dec 6 03:49:06 localhost podman[87059]: 2025-12-06 08:49:06.282111438 +0000 UTC m=+0.414447563 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 
2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, url=https://www.redhat.com, io.buildah.version=1.41.4, container_name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, distribution-scope=public, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, version=17.1.12, build-date=2025-11-18T23:44:13Z, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, config_id=tripleo_step3, batch=17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64) Dec 6 
03:49:06 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully. Dec 6 03:49:06 localhost podman[87057]: 2025-12-06 08:49:06.417107957 +0000 UTC m=+0.563811752 container exec_died 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_migration_target, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, name=rhosp17/openstack-nova-compute, tcib_managed=true, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., managed_by=tripleo_ansible, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, version=17.1.12, architecture=x86_64, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 6 03:49:06 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: Deactivated successfully. Dec 6 03:49:06 localhost systemd[1]: tmp-crun.c26Y8T.mount: Deactivated successfully. Dec 6 03:49:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. Dec 6 03:49:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. Dec 6 03:49:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. 
Dec 6 03:49:10 localhost podman[87187]: 2025-12-06 08:49:10.918675014 +0000 UTC m=+0.082191275 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, architecture=x86_64, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:34:05Z, maintainer=OpenStack TripleO Team, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, managed_by=tripleo_ansible, io.openshift.expose-services=, distribution-scope=public, batch=17.1_20251118.1) Dec 6 03:49:10 localhost systemd[1]: tmp-crun.JFHs5J.mount: Deactivated successfully. Dec 6 03:49:10 localhost podman[87189]: 2025-12-06 08:49:10.983015702 +0000 UTC m=+0.140021163 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, config_id=tripleo_step4, io.buildah.version=1.41.4, release=1761123044, tcib_managed=true, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, batch=17.1_20251118.1, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:14:25Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com) Dec 6 03:49:11 localhost podman[87188]: 2025-12-06 08:49:11.034593418 +0000 UTC m=+0.194671634 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, name=rhosp17/openstack-qdrouterd, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, summary=Red Hat OpenStack 
Platform 17.1 qdrouterd, tcib_managed=true, io.buildah.version=1.41.4) Dec 6 03:49:11 localhost podman[87187]: 2025-12-06 08:49:11.04544721 +0000 UTC m=+0.208963471 container exec_died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, batch=17.1_20251118.1, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, build-date=2025-11-18T23:34:05Z, version=17.1.12, container_name=ovn_controller, config_id=tripleo_step4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, vendor=Red Hat, Inc., release=1761123044, name=rhosp17/openstack-ovn-controller, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4) Dec 6 03:49:11 localhost podman[87189]: 2025-12-06 08:49:11.058315543 +0000 UTC m=+0.215320994 container exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, config_id=tripleo_step4, container_name=ovn_metadata_agent, vcs-type=git, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c) Dec 6 03:49:11 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Deactivated successfully. Dec 6 03:49:11 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Deactivated successfully. 
Dec 6 03:49:11 localhost podman[87188]: 2025-12-06 08:49:11.231273223 +0000 UTC m=+0.391351409 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, distribution-scope=public, container_name=metrics_qdr, config_id=tripleo_step1, tcib_managed=true, io.buildah.version=1.41.4, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, 
managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, version=17.1.12) Dec 6 03:49:11 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 03:49:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007. Dec 6 03:49:15 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 6 03:49:15 localhost recover_tripleo_nova_virtqemud[87268]: 61814 Dec 6 03:49:15 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 6 03:49:15 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 6 03:49:15 localhost systemd[1]: tmp-crun.owH1NC.mount: Deactivated successfully. 
Dec 6 03:49:15 localhost podman[87266]: 2025-12-06 08:49:15.920241101 +0000 UTC m=+0.083855305 container health_status 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, architecture=x86_64, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, release=1761123044, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com) Dec 6 03:49:15 localhost podman[87266]: 2025-12-06 08:49:15.976280525 +0000 UTC m=+0.139894769 container exec_died 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, io.buildah.version=1.41.4, managed_by=tripleo_ansible, vendor=Red Hat, Inc., 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, vcs-type=git, url=https://www.redhat.com, tcib_managed=true, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, distribution-scope=public, container_name=nova_compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}) Dec 6 03:49:15 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Deactivated successfully. Dec 6 03:49:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc. Dec 6 03:49:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. Dec 6 03:49:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b. Dec 6 03:49:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9. Dec 6 03:49:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. Dec 6 03:49:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6. Dec 6 03:49:36 localhost systemd[1]: tmp-crun.BTS0Dz.mount: Deactivated successfully. 
Dec 6 03:49:36 localhost podman[87341]: 2025-12-06 08:49:36.942418787 +0000 UTC m=+0.097169993 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, name=rhosp17/openstack-collectd, version=17.1.12, io.openshift.expose-services=, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, container_name=collectd, io.buildah.version=1.41.4, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, tcib_managed=true, config_id=tripleo_step3, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, batch=17.1_20251118.1, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 6 03:49:37 localhost podman[87340]: 2025-12-06 08:49:37.00139076 +0000 UTC m=+0.155886067 container health_status 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-cron, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, version=17.1.12, container_name=logrotate_crond, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vendor=Red Hat, Inc., io.buildah.version=1.41.4, build-date=2025-11-18T22:49:32Z, batch=17.1_20251118.1, 
vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 6 03:49:37 localhost podman[87341]: 2025-12-06 08:49:37.005460395 +0000 UTC m=+0.160211561 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, name=rhosp17/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 
'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=collectd, vcs-type=git, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:51:28Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, io.openshift.expose-services=, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, io.buildah.version=1.41.4, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3) Dec 6 03:49:37 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully. Dec 6 03:49:37 localhost podman[87340]: 2025-12-06 08:49:37.031067588 +0000 UTC m=+0.185562885 container exec_died 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://www.redhat.com, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp17/openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, vcs-type=git, tcib_managed=true, build-date=2025-11-18T22:49:32Z, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, version=17.1.12, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, architecture=x86_64) Dec 6 03:49:37 localhost podman[87357]: 2025-12-06 08:49:37.037217186 +0000 UTC m=+0.172797115 container health_status b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, build-date=2025-11-19T00:12:45Z, version=17.1.12, vendor=Red Hat, Inc., io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, distribution-scope=public, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.4) Dec 6 03:49:37 localhost systemd[1]: 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.service: Deactivated successfully. 
Dec 6 03:49:37 localhost podman[87354]: 2025-12-06 08:49:36.959946793 +0000 UTC m=+0.101426023 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, build-date=2025-11-18T23:44:13Z, batch=17.1_20251118.1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, name=rhosp17/openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, version=17.1.12, container_name=iscsid, distribution-scope=public, url=https://www.redhat.com) Dec 6 03:49:37 localhost podman[87354]: 2025-12-06 08:49:37.089220707 +0000 UTC m=+0.230699967 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, managed_by=tripleo_ansible, version=17.1.12, name=rhosp17/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com, release=1761123044, build-date=2025-11-18T23:44:13Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-type=git, config_id=tripleo_step3, io.openshift.expose-services=, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid) Dec 6 03:49:37 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully. 
Dec 6 03:49:37 localhost podman[87357]: 2025-12-06 08:49:37.113963043 +0000 UTC m=+0.249542992 container exec_died b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, url=https://www.redhat.com, config_id=tripleo_step4, tcib_managed=true, architecture=x86_64, name=rhosp17/openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:12:45Z, version=17.1.12, description=Red Hat OpenStack Platform 17.1 
ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 6 03:49:37 localhost systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: Deactivated successfully. Dec 6 03:49:37 localhost podman[87343]: 2025-12-06 08:49:37.096357955 +0000 UTC m=+0.241997981 container health_status a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, url=https://www.redhat.com, build-date=2025-11-19T00:11:48Z, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, batch=17.1_20251118.1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, distribution-scope=public, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.description=Red Hat 
OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., managed_by=tripleo_ansible, release=1761123044, io.openshift.expose-services=) Dec 6 03:49:37 localhost podman[87342]: 2025-12-06 08:49:37.197094305 +0000 UTC m=+0.345365632 container health_status 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, release=1761123044, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, 
com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, vendor=Red Hat, Inc., architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, distribution-scope=public, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, summary=Red Hat 
OpenStack Platform 17.1 nova-compute, tcib_managed=true, config_id=tripleo_step4) Dec 6 03:49:37 localhost podman[87343]: 2025-12-06 08:49:37.225543005 +0000 UTC m=+0.371183091 container exec_died a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, release=1761123044, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, vcs-type=git, distribution-scope=public, build-date=2025-11-19T00:11:48Z, 
com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 6 03:49:37 localhost systemd[1]: a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.service: Deactivated successfully. Dec 6 03:49:37 localhost podman[87342]: 2025-12-06 08:49:37.568247125 +0000 UTC m=+0.716518462 container exec_died 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, version=17.1.12, release=1761123044, tcib_managed=true, config_id=tripleo_step4, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, distribution-scope=public, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, managed_by=tripleo_ansible) Dec 6 03:49:37 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: Deactivated successfully. Dec 6 03:49:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. Dec 6 03:49:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. 
Dec 6 03:49:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. Dec 6 03:49:41 localhost systemd[84400]: Created slice User Background Tasks Slice. Dec 6 03:49:41 localhost podman[87472]: 2025-12-06 08:49:41.939858578 +0000 UTC m=+0.095628265 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, com.redhat.component=openstack-ovn-controller-container, url=https://www.redhat.com, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, vcs-type=git, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vendor=Red Hat, Inc., org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, config_id=tripleo_step4, io.buildah.version=1.41.4, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 
'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, version=17.1.12, architecture=x86_64, build-date=2025-11-18T23:34:05Z) Dec 6 03:49:41 localhost systemd[84400]: Starting Cleanup of User's Temporary Files and Directories... Dec 6 03:49:41 localhost systemd[84400]: Finished Cleanup of User's Temporary Files and Directories. Dec 6 03:49:42 localhost podman[87472]: 2025-12-06 08:49:41.994189409 +0000 UTC m=+0.149959056 container exec_died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, version=17.1.12, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, tcib_managed=true, config_id=tripleo_step4, url=https://www.redhat.com, batch=17.1_20251118.1, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_controller, vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, release=1761123044, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 6 03:49:42 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Deactivated successfully. Dec 6 03:49:42 localhost podman[87474]: 2025-12-06 08:49:42.053482283 +0000 UTC m=+0.203106723 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, io.buildah.version=1.41.4, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1761123044, url=https://www.redhat.com, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., version=17.1.12, managed_by=tripleo_ansible, architecture=x86_64, tcib_managed=true, build-date=2025-11-19T00:14:25Z, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack 
osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public) Dec 6 03:49:42 localhost podman[87474]: 2025-12-06 08:49:42.097934101 +0000 UTC m=+0.247558591 container exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, tcib_managed=true, release=1761123044, vendor=Red Hat, Inc., io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.12, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, vcs-type=git, distribution-scope=public, managed_by=tripleo_ansible, build-date=2025-11-19T00:14:25Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn) Dec 6 03:49:42 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Deactivated successfully. 
Dec 6 03:49:42 localhost podman[87473]: 2025-12-06 08:49:42.01614324 +0000 UTC m=+0.168155203 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack 
Platform 17.1 qdrouterd, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, version=17.1.12, managed_by=tripleo_ansible, vcs-type=git, config_id=tripleo_step1, container_name=metrics_qdr, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, url=https://www.redhat.com, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 6 03:49:42 localhost podman[87473]: 2025-12-06 08:49:42.22803942 +0000 UTC m=+0.380051343 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, build-date=2025-11-18T22:49:46Z, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, architecture=x86_64, container_name=metrics_qdr, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., release=1761123044, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, 
io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd) Dec 6 03:49:42 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 03:49:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007. Dec 6 03:49:46 localhost systemd[1]: tmp-crun.P7gqiP.mount: Deactivated successfully. 
Dec 6 03:49:46 localhost podman[87547]: 2025-12-06 08:49:46.927702156 +0000 UTC m=+0.089564730 container health_status 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, release=1761123044, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, container_name=nova_compute, maintainer=OpenStack TripleO Team, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 
'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, io.openshift.expose-services=) Dec 6 03:49:46 localhost podman[87547]: 2025-12-06 08:49:46.984375078 +0000 UTC m=+0.146237682 container exec_died 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, config_id=tripleo_step5, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, container_name=nova_compute) Dec 6 03:49:46 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Deactivated successfully. Dec 6 03:50:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc. Dec 6 03:50:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. Dec 6 03:50:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b. Dec 6 03:50:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9. Dec 6 03:50:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. Dec 6 03:50:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6. 
Dec 6 03:50:07 localhost podman[87650]: 2025-12-06 08:50:07.931384564 +0000 UTC m=+0.085575527 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, 
maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, build-date=2025-11-18T22:51:28Z, release=1761123044, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, container_name=collectd, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, batch=17.1_20251118.1) Dec 6 03:50:07 localhost podman[87650]: 2025-12-06 08:50:07.938031097 +0000 UTC m=+0.092222070 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, url=https://www.redhat.com, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:51:28Z, maintainer=OpenStack TripleO Team, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': 
{'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.4, config_id=tripleo_step3, distribution-scope=public, architecture=x86_64, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd) Dec 6 03:50:07 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully. 
Dec 6 03:50:07 localhost podman[87649]: 2025-12-06 08:50:07.946397853 +0000 UTC m=+0.095520002 container health_status 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, distribution-scope=public, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, name=rhosp17/openstack-cron, build-date=2025-11-18T22:49:32Z, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, 
container_name=logrotate_crond, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, com.redhat.component=openstack-cron-container, release=1761123044, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 6 03:50:08 localhost systemd[1]: tmp-crun.TcsEqh.mount: Deactivated successfully. Dec 6 03:50:08 localhost podman[87653]: 2025-12-06 08:50:08.006796231 +0000 UTC m=+0.151672699 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, tcib_managed=true, version=17.1.12, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, name=rhosp17/openstack-iscsid, container_name=iscsid, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, vcs-type=git, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid) Dec 6 03:50:08 localhost podman[87651]: 2025-12-06 08:50:08.010226175 +0000 UTC m=+0.162146989 container health_status 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
nova-compute, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, vendor=Red Hat, Inc., batch=17.1_20251118.1, container_name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 6 03:50:08 localhost podman[87649]: 2025-12-06 08:50:08.030298049 +0000 UTC m=+0.179420198 container 
exec_died 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.12, managed_by=tripleo_ansible, distribution-scope=public, com.redhat.component=openstack-cron-container, release=1761123044, tcib_managed=true, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, name=rhosp17/openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, architecture=x86_64, 
config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:32Z) Dec 6 03:50:08 localhost podman[87653]: 2025-12-06 08:50:08.042556774 +0000 UTC m=+0.187433282 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vendor=Red Hat, Inc., vcs-type=git, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, name=rhosp17/openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, build-date=2025-11-18T23:44:13Z, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, config_id=tripleo_step3, release=1761123044, tcib_managed=true, url=https://www.redhat.com) Dec 6 03:50:08 localhost systemd[1]: 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.service: Deactivated successfully. 
Dec 6 03:50:08 localhost podman[87652]: 2025-12-06 08:50:08.053005764 +0000 UTC m=+0.201628307 container health_status a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, container_name=ceilometer_agent_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, batch=17.1_20251118.1, architecture=x86_64, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, version=17.1.12, build-date=2025-11-19T00:11:48Z, 
org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-compute-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public) Dec 6 03:50:08 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully. Dec 6 03:50:08 localhost podman[87662]: 2025-12-06 08:50:08.105091146 +0000 UTC m=+0.247534770 container health_status b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, architecture=x86_64, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, url=https://www.redhat.com, build-date=2025-11-19T00:12:45Z, batch=17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 
'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, io.buildah.version=1.41.4, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4) Dec 6 03:50:08 localhost podman[87652]: 2025-12-06 08:50:08.111244044 +0000 UTC m=+0.259866557 container exec_died a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, url=https://www.redhat.com, batch=17.1_20251118.1, io.buildah.version=1.41.4, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, config_id=tripleo_step4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-ceilometer-compute, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676) Dec 6 03:50:08 localhost systemd[1]: a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.service: Deactivated successfully. Dec 6 03:50:08 localhost podman[87662]: 2025-12-06 08:50:08.133278378 +0000 UTC m=+0.275721962 container exec_died b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., vcs-type=git, tcib_managed=true, config_id=tripleo_step4, url=https://www.redhat.com, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, build-date=2025-11-19T00:12:45Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=) Dec 6 03:50:08 localhost systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: Deactivated successfully. 
Dec 6 03:50:08 localhost podman[87651]: 2025-12-06 08:50:08.367702227 +0000 UTC m=+0.519623081 container exec_died 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., version=17.1.12, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, 
org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, batch=17.1_20251118.1, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, vcs-type=git) Dec 6 03:50:08 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: Deactivated successfully. Dec 6 03:50:09 localhost ceph-osd[31726]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Dec 6 03:50:09 localhost ceph-osd[31726]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.1 total, 600.0 interval#012Cumulative writes: 5761 writes, 25K keys, 5761 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 5761 writes, 760 syncs, 7.58 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 593 writes, 2365 keys, 593 commit groups, 1.0 writes per commit group, ingest: 3.12 MB, 0.01 MB/s#012Interval WAL: 593 writes, 185 syncs, 3.21 writes per sync, written: 0.00 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Dec 6 03:50:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. Dec 6 03:50:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. 
Dec 6 03:50:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. Dec 6 03:50:12 localhost systemd[1]: tmp-crun.otVyqC.mount: Deactivated successfully. Dec 6 03:50:12 localhost ceph-osd[32665]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Dec 6 03:50:12 localhost ceph-osd[32665]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.2 total, 600.0 interval#012Cumulative writes: 4879 writes, 21K keys, 4879 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 4879 writes, 669 syncs, 7.29 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 412 writes, 1624 keys, 412 commit groups, 1.0 writes per commit group, ingest: 1.78 MB, 0.00 MB/s#012Interval WAL: 412 writes, 148 syncs, 2.78 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Dec 6 03:50:12 localhost podman[87780]: 2025-12-06 08:50:12.9708961 +0000 UTC m=+0.129192442 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 
'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, vendor=Red Hat, Inc., io.buildah.version=1.41.4, url=https://www.redhat.com, distribution-scope=public, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, config_id=tripleo_step4) Dec 6 03:50:12 localhost podman[87779]: 2025-12-06 08:50:12.938111027 +0000 UTC m=+0.099571335 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, distribution-scope=public, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, 
architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible, container_name=metrics_qdr, config_id=tripleo_step1, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=) Dec 6 03:50:13 localhost podman[87780]: 2025-12-06 08:50:13.006124178 +0000 UTC m=+0.164420490 container exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, batch=17.1_20251118.1, io.buildah.version=1.41.4, version=17.1.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, managed_by=tripleo_ansible, architecture=x86_64, distribution-scope=public, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, build-date=2025-11-19T00:14:25Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true) Dec 6 03:50:13 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Deactivated successfully. 
Dec 6 03:50:13 localhost podman[87778]: 2025-12-06 08:50:13.084337069 +0000 UTC m=+0.245416296 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, url=https://www.redhat.com, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.4, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, io.openshift.expose-services=, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, name=rhosp17/openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', 
'/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, tcib_managed=true, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272) Dec 6 03:50:13 localhost podman[87779]: 2025-12-06 08:50:13.1023634 +0000 UTC m=+0.263823748 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, architecture=x86_64, vendor=Red Hat, Inc., distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, release=1761123044, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, version=17.1.12, batch=17.1_20251118.1, container_name=metrics_qdr, io.buildah.version=1.41.4) Dec 6 03:50:13 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 03:50:13 localhost podman[87778]: 2025-12-06 08:50:13.146428468 +0000 UTC m=+0.307507675 container exec_died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', 
'/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, architecture=x86_64, config_id=tripleo_step4, vendor=Red Hat, Inc., tcib_managed=true, release=1761123044, io.buildah.version=1.41.4, url=https://www.redhat.com, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:34:05Z, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ovn-controller, version=17.1.12, managed_by=tripleo_ansible) Dec 6 03:50:13 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Deactivated successfully. Dec 6 03:50:13 localhost systemd[1]: tmp-crun.CfUE3L.mount: Deactivated successfully. Dec 6 03:50:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007. 
Dec 6 03:50:17 localhost podman[87856]: 2025-12-06 08:50:17.918402932 +0000 UTC m=+0.078118320 container health_status 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, batch=17.1_20251118.1, distribution-scope=public, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc.) 
Dec 6 03:50:17 localhost podman[87856]: 2025-12-06 08:50:17.945862062 +0000 UTC m=+0.105577470 container exec_died 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, release=1761123044, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.openshift.expose-services=, config_id=tripleo_step5, version=17.1.12, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 
'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}) Dec 6 03:50:17 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Deactivated successfully. Dec 6 03:50:19 localhost sshd[87882]: main: sshd: ssh-rsa algorithm is disabled Dec 6 03:50:19 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 6 03:50:19 localhost recover_tripleo_nova_virtqemud[87885]: 61814 Dec 6 03:50:19 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 6 03:50:19 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 6 03:50:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc. 
Dec 6 03:50:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. Dec 6 03:50:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b. Dec 6 03:50:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9. Dec 6 03:50:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. Dec 6 03:50:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6. Dec 6 03:50:38 localhost podman[87932]: 2025-12-06 08:50:38.948537273 +0000 UTC m=+0.099664858 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-type=git, name=rhosp17/openstack-collectd, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, distribution-scope=public, build-date=2025-11-18T22:51:28Z, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, version=17.1.12, vendor=Red Hat, Inc., tcib_managed=true) Dec 6 03:50:39 localhost systemd[1]: tmp-crun.4kosVR.mount: Deactivated successfully. 
Dec 6 03:50:39 localhost podman[87934]: 2025-12-06 08:50:39.006250658 +0000 UTC m=+0.152155884 container health_status a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_compute, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, release=1761123044, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, name=rhosp17/openstack-ceilometer-compute, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.openshift.expose-services=, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 6 03:50:39 localhost podman[87934]: 2025-12-06 08:50:39.03804552 +0000 UTC m=+0.183950746 container exec_died a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, batch=17.1_20251118.1, 
io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, tcib_managed=true, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, config_id=tripleo_step4, distribution-scope=public, managed_by=tripleo_ansible) Dec 6 03:50:39 localhost systemd[1]: a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.service: Deactivated successfully. 
Dec 6 03:50:39 localhost podman[87933]: 2025-12-06 08:50:39.053961747 +0000 UTC m=+0.202112561 container health_status 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vendor=Red Hat, Inc., managed_by=tripleo_ansible, release=1761123044, config_id=tripleo_step4, url=https://www.redhat.com, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.expose-services=, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Dec 6 03:50:39 localhost podman[87941]: 2025-12-06 08:50:39.098502089 +0000 UTC m=+0.231769758 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vendor=Red Hat, Inc., build-date=2025-11-18T23:44:13Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, distribution-scope=public, io.buildah.version=1.41.4, managed_by=tripleo_ansible, config_id=tripleo_step3, version=17.1.12, name=rhosp17/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, tcib_managed=true) Dec 6 03:50:39 localhost podman[87941]: 2025-12-06 08:50:39.110138205 +0000 UTC m=+0.243405914 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, batch=17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.12, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, build-date=2025-11-18T23:44:13Z, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, vcs-type=git, io.buildah.version=1.41.4, release=1761123044) Dec 6 03:50:39 localhost podman[87932]: 2025-12-06 08:50:39.119580503 +0000 UTC m=+0.270708138 container exec_died 
2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, architecture=x86_64, container_name=collectd, name=rhosp17/openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', 
'/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, build-date=2025-11-18T22:51:28Z, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, com.redhat.component=openstack-collectd-container) Dec 6 03:50:39 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully. Dec 6 03:50:39 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully. Dec 6 03:50:39 localhost podman[87951]: 2025-12-06 08:50:39.20872882 +0000 UTC m=+0.346329802 container health_status b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, distribution-scope=public, build-date=2025-11-19T00:12:45Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.openshift.expose-services=, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, tcib_managed=true, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 6 03:50:39 localhost podman[87951]: 2025-12-06 08:50:39.2600805 +0000 UTC m=+0.397681452 container exec_died b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, 
org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, version=17.1.12, name=rhosp17/openstack-ceilometer-ipmi, config_id=tripleo_step4, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, tcib_managed=true, architecture=x86_64, batch=17.1_20251118.1, vcs-type=git, distribution-scope=public, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, build-date=2025-11-19T00:12:45Z) Dec 6 03:50:39 localhost systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: Deactivated successfully. Dec 6 03:50:39 localhost podman[87931]: 2025-12-06 08:50:39.263616669 +0000 UTC m=+0.415990373 container health_status 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, distribution-scope=public, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, build-date=2025-11-18T22:49:32Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, url=https://www.redhat.com, tcib_managed=true, io.openshift.expose-services=, managed_by=tripleo_ansible, name=rhosp17/openstack-cron) Dec 6 03:50:39 localhost podman[87931]: 2025-12-06 08:50:39.347281076 +0000 UTC m=+0.499654730 container exec_died 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container, name=rhosp17/openstack-cron, distribution-scope=public, config_id=tripleo_step4, url=https://www.redhat.com, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.buildah.version=1.41.4, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, release=1761123044, io.openshift.expose-services=, build-date=2025-11-18T22:49:32Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, tcib_managed=true, vcs-type=git) Dec 6 03:50:39 localhost systemd[1]: 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.service: Deactivated successfully. 
Dec 6 03:50:39 localhost podman[87933]: 2025-12-06 08:50:39.413171911 +0000 UTC m=+0.561322735 container exec_died 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, managed_by=tripleo_ansible, vendor=Red Hat, Inc., batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, release=1761123044, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, distribution-scope=public) Dec 6 03:50:39 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: Deactivated successfully. Dec 6 03:50:39 localhost systemd[1]: tmp-crun.pOFKGu.mount: Deactivated successfully. Dec 6 03:50:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. Dec 6 03:50:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. Dec 6 03:50:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. Dec 6 03:50:43 localhost systemd[1]: tmp-crun.TOjdyo.mount: Deactivated successfully. 
Dec 6 03:50:43 localhost podman[88067]: 2025-12-06 08:50:43.900098528 +0000 UTC m=+0.061005906 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_id=tripleo_step1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-qdrouterd-container, 
build-date=2025-11-18T22:49:46Z, vcs-type=git, distribution-scope=public, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, managed_by=tripleo_ansible, architecture=x86_64, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 6 03:50:43 localhost systemd[1]: tmp-crun.ZD05SM.mount: Deactivated successfully. Dec 6 03:50:44 localhost podman[88066]: 2025-12-06 08:50:43.998677813 +0000 UTC m=+0.158928982 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, version=17.1.12, config_id=tripleo_step4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20251118.1, url=https://www.redhat.com, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 6 03:50:44 localhost podman[88068]: 2025-12-06 08:50:43.967257392 +0000 UTC m=+0.119656110 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, vendor=Red Hat, Inc., io.buildah.version=1.41.4, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, build-date=2025-11-19T00:14:25Z, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 6 03:50:44 localhost podman[88068]: 
2025-12-06 08:50:44.048102794 +0000 UTC m=+0.200501492 container exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, build-date=2025-11-19T00:14:25Z, version=17.1.12, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, 
config_id=tripleo_step4, io.openshift.expose-services=, distribution-scope=public, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, container_name=ovn_metadata_agent, tcib_managed=true) Dec 6 03:50:44 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Deactivated successfully. 
Dec 6 03:50:44 localhost podman[88066]: 2025-12-06 08:50:44.07217674 +0000 UTC m=+0.232427929 container exec_died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, config_id=tripleo_step4, distribution-scope=public, build-date=2025-11-18T23:34:05Z, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, vcs-type=git, 
name=rhosp17/openstack-ovn-controller, container_name=ovn_controller, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller) Dec 6 03:50:44 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Deactivated successfully. Dec 6 03:50:44 localhost podman[88067]: 2025-12-06 08:50:44.10913653 +0000 UTC m=+0.270043918 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_id=tripleo_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, managed_by=tripleo_ansible, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, vendor=Red Hat, Inc., container_name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container) Dec 6 03:50:44 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 03:50:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007. Dec 6 03:50:48 localhost systemd[1]: tmp-crun.Y0F8aw.mount: Deactivated successfully. 
Dec 6 03:50:48 localhost podman[88142]: 2025-12-06 08:50:48.892221054 +0000 UTC m=+0.061112480 container health_status 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, url=https://www.redhat.com, release=1761123044, distribution-scope=public, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, container_name=nova_compute, vcs-type=git) Dec 6 03:50:48 localhost podman[88142]: 2025-12-06 08:50:48.921091647 +0000 UTC m=+0.089983113 container exec_died 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, release=1761123044, name=rhosp17/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': 
'/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, batch=17.1_20251118.1, tcib_managed=true, config_id=tripleo_step5, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, architecture=x86_64, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute) Dec 6 03:50:48 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Deactivated successfully. Dec 6 03:51:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc. Dec 6 03:51:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. Dec 6 03:51:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b. Dec 6 03:51:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9. Dec 6 03:51:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. Dec 6 03:51:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6. Dec 6 03:51:09 localhost systemd[1]: tmp-crun.CmgBkp.mount: Deactivated successfully. 
Dec 6 03:51:09 localhost podman[88296]: 2025-12-06 08:51:09.943804958 +0000 UTC m=+0.103546207 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, url=https://www.redhat.com, vcs-type=git, name=rhosp17/openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, tcib_managed=true, build-date=2025-11-18T22:51:28Z, container_name=collectd, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, release=1761123044, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=) Dec 6 03:51:09 localhost podman[88310]: 2025-12-06 08:51:09.977884371 +0000 UTC m=+0.121430235 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, container_name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:44:13Z, com.redhat.component=openstack-iscsid-container, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, vendor=Red Hat, Inc., url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, 
version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, distribution-scope=public, batch=17.1_20251118.1, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3) Dec 6 03:51:09 localhost podman[88296]: 2025-12-06 08:51:09.982039128 +0000 UTC m=+0.141780367 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, architecture=x86_64, release=1761123044, com.redhat.component=openstack-collectd-container, 
url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:51:28Z, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, tcib_managed=true, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, version=17.1.12, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-collectd, container_name=collectd, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 6 03:51:09 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully. Dec 6 03:51:10 localhost podman[88310]: 2025-12-06 08:51:10.013983434 +0000 UTC m=+0.157529308 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, batch=17.1_20251118.1, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, io.openshift.expose-services=, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-iscsid, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, release=1761123044, vendor=Red Hat, Inc., build-date=2025-11-18T23:44:13Z, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid) Dec 6 03:51:10 localhost podman[88297]: 2025-12-06 08:51:10.0328161 +0000 UTC m=+0.181482021 container health_status 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, 
tcib_managed=true, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Dec 6 03:51:10 localhost podman[88316]: 2025-12-06 08:51:10.12927153 +0000 UTC m=+0.266843371 container health_status b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:12:45Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, io.buildah.version=1.41.4, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, vcs-type=git, name=rhosp17/openstack-ceilometer-ipmi, architecture=x86_64, batch=17.1_20251118.1, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12) Dec 6 03:51:10 localhost podman[88295]: 2025-12-06 08:51:10.151531861 +0000 UTC m=+0.310126675 container health_status 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:32Z, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, distribution-scope=public, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, batch=17.1_20251118.1, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 6 03:51:10 localhost podman[88295]: 2025-12-06 08:51:10.159109532 +0000 UTC m=+0.317704376 container exec_died 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-cron-container, name=rhosp17/openstack-cron, container_name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, maintainer=OpenStack TripleO Team, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, distribution-scope=public, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_id=tripleo_step4, build-date=2025-11-18T22:49:32Z, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, tcib_managed=true) Dec 6 03:51:10 localhost systemd[1]: 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.service: Deactivated successfully. 
Dec 6 03:51:10 localhost podman[88316]: 2025-12-06 08:51:10.187038996 +0000 UTC m=+0.324610817 container exec_died b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, vcs-type=git, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, tcib_managed=true, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, version=17.1.12, io.openshift.expose-services=, release=1761123044, com.redhat.component=openstack-ceilometer-ipmi-container, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, batch=17.1_20251118.1) Dec 6 03:51:10 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully. Dec 6 03:51:10 localhost systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: Deactivated successfully. Dec 6 03:51:10 localhost podman[88303]: 2025-12-06 08:51:10.211736532 +0000 UTC m=+0.358477453 container health_status a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, release=1761123044, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, build-date=2025-11-19T00:11:48Z, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, batch=17.1_20251118.1, distribution-scope=public, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute) Dec 6 03:51:10 localhost podman[88303]: 2025-12-06 08:51:10.239054127 +0000 UTC m=+0.385795038 container exec_died a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, distribution-scope=public, com.redhat.component=openstack-ceilometer-compute-container, version=17.1.12, io.openshift.tags=rhosp osp 
openstack osp-17.1 openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, batch=17.1_20251118.1, container_name=ceilometer_agent_compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, build-date=2025-11-19T00:11:48Z, release=1761123044, managed_by=tripleo_ansible, architecture=x86_64) Dec 6 03:51:10 localhost systemd[1]: a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.service: Deactivated successfully. Dec 6 03:51:10 localhost podman[88297]: 2025-12-06 08:51:10.331052951 +0000 UTC m=+0.479718882 container exec_died 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, release=1761123044, architecture=x86_64, vcs-type=git, io.buildah.version=1.41.4, io.openshift.expose-services=, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_migration_target, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, vendor=Red Hat, Inc., url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 6 03:51:10 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: Deactivated successfully. Dec 6 03:51:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. Dec 6 03:51:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. Dec 6 03:51:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. 
Dec 6 03:51:14 localhost podman[88427]: 2025-12-06 08:51:14.918570712 +0000 UTC m=+0.082217205 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.expose-services=, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., version=17.1.12, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, vcs-type=git, architecture=x86_64, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 6 03:51:14 localhost systemd[1]: tmp-crun.q5HjcA.mount: Deactivated successfully. Dec 6 03:51:14 localhost podman[88429]: 2025-12-06 08:51:14.975230735 +0000 UTC m=+0.135371621 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, architecture=x86_64, io.buildah.version=1.41.4, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, version=17.1.12, config_id=tripleo_step4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, build-date=2025-11-19T00:14:25Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 6 03:51:15 localhost podman[88429]: 2025-12-06 08:51:15.021111088 +0000 UTC m=+0.181251954 container exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, config_id=tripleo_step4, version=17.1.12, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:14:25Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, release=1761123044, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, url=https://www.redhat.com) Dec 6 03:51:15 localhost systemd[1]: tmp-crun.hQFVcb.mount: Deactivated successfully. Dec 6 03:51:15 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Deactivated successfully. Dec 6 03:51:15 localhost podman[88428]: 2025-12-06 08:51:15.036864729 +0000 UTC m=+0.199161211 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, url=https://www.redhat.com, vcs-type=git, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=metrics_qdr, tcib_managed=true, build-date=2025-11-18T22:49:46Z) Dec 6 03:51:15 localhost podman[88427]: 2025-12-06 08:51:15.042179142 +0000 UTC m=+0.205825585 container exec_died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:34:05Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, config_id=tripleo_step4, vcs-type=git, url=https://www.redhat.com, release=1761123044, tcib_managed=true, io.buildah.version=1.41.4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, architecture=x86_64, distribution-scope=public, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 6 03:51:15 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Deactivated successfully. 
Dec 6 03:51:15 localhost podman[88428]: 2025-12-06 08:51:15.256233728 +0000 UTC m=+0.418530260 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, build-date=2025-11-18T22:49:46Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=metrics_qdr, url=https://www.redhat.com, managed_by=tripleo_ansible, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, vcs-type=git, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}) Dec 6 03:51:15 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 03:51:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007. Dec 6 03:51:19 localhost podman[88504]: 2025-12-06 08:51:19.916888958 +0000 UTC m=+0.072878190 container health_status 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, url=https://www.redhat.com, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step5, distribution-scope=public, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, maintainer=OpenStack TripleO 
Team, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, batch=17.1_20251118.1, io.openshift.expose-services=) Dec 6 03:51:19 localhost podman[88504]: 2025-12-06 08:51:19.945990418 +0000 UTC m=+0.101979590 container exec_died 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, url=https://www.redhat.com, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, architecture=x86_64, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, version=17.1.12) Dec 6 03:51:19 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Deactivated successfully. Dec 6 03:51:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc. Dec 6 03:51:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. Dec 6 03:51:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b. 
Dec 6 03:51:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9. Dec 6 03:51:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. Dec 6 03:51:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6. Dec 6 03:51:40 localhost podman[88577]: 2025-12-06 08:51:40.930584702 +0000 UTC m=+0.086153355 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.component=openstack-collectd-container, container_name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, distribution-scope=public, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-collectd, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, vcs-type=git, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=) Dec 6 03:51:40 localhost podman[88577]: 2025-12-06 08:51:40.93933879 +0000 UTC m=+0.094907463 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, io.openshift.expose-services=, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, tcib_managed=true, container_name=collectd, io.k8s.description=Red Hat OpenStack 
Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, version=17.1.12, build-date=2025-11-18T22:51:28Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, release=1761123044, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', 
'/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, url=https://www.redhat.com) Dec 6 03:51:40 localhost systemd[1]: tmp-crun.BSFwMn.mount: Deactivated successfully. Dec 6 03:51:41 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully. Dec 6 03:51:41 localhost podman[88579]: 2025-12-06 08:51:41.028128704 +0000 UTC m=+0.179312644 container health_status a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_compute, batch=17.1_20251118.1, vcs-type=git, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:11:48Z, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, architecture=x86_64, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, io.openshift.expose-services=, url=https://www.redhat.com, version=17.1.12, vendor=Red Hat, Inc.) 
Dec 6 03:51:41 localhost podman[88578]: 2025-12-06 08:51:40.993831366 +0000 UTC m=+0.142787938 container health_status 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, version=17.1.12, vendor=Red Hat, Inc., config_id=tripleo_step4, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64) Dec 6 03:51:41 localhost podman[88585]: 2025-12-06 08:51:41.083062685 +0000 UTC m=+0.231890253 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, config_id=tripleo_step3, distribution-scope=public, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, tcib_managed=true, url=https://www.redhat.com, 
maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, vcs-type=git, architecture=x86_64, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}) Dec 6 03:51:41 localhost podman[88576]: 2025-12-06 08:51:41.146911607 +0000 UTC m=+0.303449081 container health_status 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 cron, release=1761123044, version=17.1.12, vendor=Red Hat, Inc., name=rhosp17/openstack-cron, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, managed_by=tripleo_ansible, config_id=tripleo_step4, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vcs-type=git, 
com.redhat.component=openstack-cron-container, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:32Z, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 6 03:51:41 localhost podman[88576]: 2025-12-06 08:51:41.156111758 +0000 UTC m=+0.312649262 container exec_died 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc 
(image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, config_id=tripleo_step4, build-date=2025-11-18T22:49:32Z, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
com.redhat.component=openstack-cron-container, io.buildah.version=1.41.4, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, tcib_managed=true, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, architecture=x86_64) Dec 6 03:51:41 localhost podman[88579]: 2025-12-06 08:51:41.166783235 +0000 UTC m=+0.317967155 container exec_died a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, architecture=x86_64, vendor=Red Hat, Inc., version=17.1.12, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-compute, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:11:48Z, batch=17.1_20251118.1, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, distribution-scope=public, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 6 03:51:41 localhost systemd[1]: 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.service: Deactivated successfully. 
Dec 6 03:51:41 localhost podman[88592]: 2025-12-06 08:51:41.119888781 +0000 UTC m=+0.261024093 container health_status b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, url=https://www.redhat.com, vendor=Red Hat, Inc., build-date=2025-11-19T00:12:45Z, version=17.1.12, io.buildah.version=1.41.4, vcs-type=git, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, io.openshift.expose-services=, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, name=rhosp17/openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 6 03:51:41 localhost podman[88592]: 2025-12-06 08:51:41.201210707 +0000 UTC m=+0.342345979 container exec_died b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, url=https://www.redhat.com, vcs-type=git, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, version=17.1.12, distribution-scope=public, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, 
io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=) Dec 6 03:51:41 localhost systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: Deactivated successfully. 
Dec 6 03:51:41 localhost podman[88585]: 2025-12-06 08:51:41.220366443 +0000 UTC m=+0.369194001 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, version=17.1.12, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, architecture=x86_64, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', 
'/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, url=https://www.redhat.com, managed_by=tripleo_ansible, build-date=2025-11-18T23:44:13Z, distribution-scope=public, batch=17.1_20251118.1, io.openshift.expose-services=) Dec 6 03:51:41 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully. Dec 6 03:51:41 localhost systemd[1]: a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.service: Deactivated successfully. Dec 6 03:51:41 localhost podman[88578]: 2025-12-06 08:51:41.363702696 +0000 UTC m=+0.512659268 container exec_died 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, tcib_managed=true, distribution-scope=public, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, release=1761123044, vcs-type=git, build-date=2025-11-19T00:36:58Z, architecture=x86_64) Dec 6 03:51:41 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: Deactivated successfully. Dec 6 03:51:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. Dec 6 03:51:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. 
Dec 6 03:51:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. Dec 6 03:51:45 localhost podman[88704]: 2025-12-06 08:51:45.914431066 +0000 UTC m=+0.075938983 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, build-date=2025-11-18T23:34:05Z, io.openshift.expose-services=, url=https://www.redhat.com, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, container_name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, distribution-scope=public, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller) Dec 6 03:51:45 localhost podman[88704]: 2025-12-06 08:51:45.941285067 +0000 UTC m=+0.102792984 container exec_died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, release=1761123044, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_id=tripleo_step4, distribution-scope=public, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, 
summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, build-date=2025-11-18T23:34:05Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, io.buildah.version=1.41.4, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 6 03:51:45 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Deactivated successfully. Dec 6 03:51:46 localhost podman[88705]: 2025-12-06 08:51:46.021588673 +0000 UTC m=+0.179934713 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:46Z, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 6 03:51:46 localhost podman[88706]: 2025-12-06 08:51:46.073849871 +0000 UTC m=+0.229133598 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.12, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, url=https://www.redhat.com, config_data={'cgroupns': 'host', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, batch=17.1_20251118.1, container_name=ovn_metadata_agent, release=1761123044, 
maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn) Dec 6 03:51:46 localhost podman[88706]: 2025-12-06 08:51:46.119063654 +0000 UTC m=+0.274347391 container exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, 
managed_by=tripleo_ansible, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, batch=17.1_20251118.1, container_name=ovn_metadata_agent, release=1761123044, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., config_id=tripleo_step4, build-date=2025-11-19T00:14:25Z, io.openshift.expose-services=, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com) Dec 6 03:51:46 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Deactivated successfully. 
Dec 6 03:51:46 localhost podman[88705]: 2025-12-06 08:51:46.281162071 +0000 UTC m=+0.439508101 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.expose-services=, release=1761123044, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=metrics_qdr, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.41.4, version=17.1.12, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, config_id=tripleo_step1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:46Z) Dec 6 03:51:46 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 03:51:46 localhost systemd[1]: tmp-crun.yseGHU.mount: Deactivated successfully. Dec 6 03:51:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007. Dec 6 03:51:50 localhost podman[88778]: 2025-12-06 08:51:50.914159284 +0000 UTC m=+0.080369328 container health_status 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, url=https://www.redhat.com, container_name=nova_compute, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': 
['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step5, distribution-scope=public, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, vendor=Red Hat, Inc., tcib_managed=true, vcs-type=git, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 6 03:51:50 localhost podman[88778]: 2025-12-06 08:51:50.933880477 +0000 UTC m=+0.100090561 container exec_died 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, name=rhosp17/openstack-nova-compute, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, batch=17.1_20251118.1, url=https://www.redhat.com, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, vcs-type=git, release=1761123044, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute) Dec 6 03:51:50 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Deactivated successfully. Dec 6 03:52:04 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 6 03:52:04 localhost recover_tripleo_nova_virtqemud[88820]: 61814 Dec 6 03:52:04 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. 
Dec 6 03:52:04 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 6 03:52:10 localhost sshd[88883]: main: sshd: ssh-rsa algorithm is disabled Dec 6 03:52:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc. Dec 6 03:52:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. Dec 6 03:52:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b. Dec 6 03:52:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9. Dec 6 03:52:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. Dec 6 03:52:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6. Dec 6 03:52:11 localhost systemd[1]: tmp-crun.RznnjS.mount: Deactivated successfully. 
Dec 6 03:52:11 localhost podman[88886]: 2025-12-06 08:52:11.930828554 +0000 UTC m=+0.088645002 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, container_name=collectd, io.openshift.expose-services=, tcib_managed=true, batch=17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, url=https://www.redhat.com, config_id=tripleo_step3, distribution-scope=public, com.redhat.component=openstack-collectd-container, release=1761123044, name=rhosp17/openstack-collectd, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 6 03:52:11 localhost podman[88886]: 2025-12-06 08:52:11.940981484 +0000 UTC m=+0.098797952 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., version=17.1.12, architecture=x86_64, name=rhosp17/openstack-collectd, container_name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, url=https://www.redhat.com, build-date=2025-11-18T22:51:28Z, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 
'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, release=1761123044, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 6 03:52:11 localhost systemd[1]: tmp-crun.4n5bBq.mount: Deactivated successfully. 
Dec 6 03:52:11 localhost podman[88906]: 2025-12-06 08:52:11.955819327 +0000 UTC m=+0.093571552 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-iscsid, vcs-type=git, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, 
konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, build-date=2025-11-18T23:44:13Z, managed_by=tripleo_ansible, architecture=x86_64, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, version=17.1.12, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d) Dec 6 03:52:11 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully. Dec 6 03:52:11 localhost podman[88906]: 2025-12-06 08:52:11.96702822 +0000 UTC m=+0.104780415 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, config_id=tripleo_step3, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, version=17.1.12, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': 
{'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, vcs-type=git, container_name=iscsid, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid) Dec 6 03:52:11 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully. 
Dec 6 03:52:11 localhost podman[88887]: 2025-12-06 08:52:11.9833928 +0000 UTC m=+0.136249896 container health_status 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-type=git, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, tcib_managed=true, version=17.1.12, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, maintainer=OpenStack TripleO Team, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, release=1761123044, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com) Dec 6 03:52:12 localhost podman[88885]: 2025-12-06 08:52:12.032531304 +0000 UTC m=+0.191426005 container health_status 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_id=tripleo_step4, url=https://www.redhat.com, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-cron, batch=17.1_20251118.1, io.buildah.version=1.41.4, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-cron-container, vcs-type=git, io.openshift.expose-services=, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:32Z, maintainer=OpenStack TripleO Team, container_name=logrotate_crond, release=1761123044, description=Red Hat OpenStack Platform 17.1 cron) Dec 6 03:52:12 localhost podman[88885]: 2025-12-06 08:52:12.03797432 +0000 UTC m=+0.196869061 container exec_died 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.component=openstack-cron-container, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, architecture=x86_64, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, distribution-scope=public, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:32Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, url=https://www.redhat.com, io.openshift.expose-services=) Dec 6 03:52:12 localhost systemd[1]: 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.service: Deactivated successfully. 
Dec 6 03:52:12 localhost podman[88898]: 2025-12-06 08:52:12.088395032 +0000 UTC m=+0.232572653 container health_status a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vcs-type=git, config_id=tripleo_step4, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., build-date=2025-11-19T00:11:48Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-compute, architecture=x86_64, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12) Dec 6 03:52:12 localhost podman[88913]: 2025-12-06 08:52:12.140160294 +0000 UTC m=+0.275076632 container health_status b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, release=1761123044, url=https://www.redhat.com, build-date=2025-11-19T00:12:45Z, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-ipmi-container, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, vcs-type=git, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, distribution-scope=public, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676) Dec 6 03:52:12 localhost podman[88898]: 2025-12-06 08:52:12.148356205 +0000 UTC m=+0.292533876 container exec_died a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, version=17.1.12, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, batch=17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, build-date=2025-11-19T00:11:48Z, distribution-scope=public, container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-ceilometer-compute, io.openshift.expose-services=) Dec 6 03:52:12 localhost systemd[1]: a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.service: Deactivated successfully. Dec 6 03:52:12 localhost podman[88913]: 2025-12-06 08:52:12.168297855 +0000 UTC m=+0.303214193 container exec_died b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, tcib_managed=true, architecture=x86_64, build-date=2025-11-19T00:12:45Z, config_id=tripleo_step4, managed_by=tripleo_ansible, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, release=1761123044, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 
'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12, container_name=ceilometer_agent_ipmi) Dec 6 03:52:12 localhost systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: Deactivated successfully. Dec 6 03:52:12 localhost podman[88887]: 2025-12-06 08:52:12.329983719 +0000 UTC m=+0.482840815 container exec_died 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vendor=Red Hat, Inc., config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, tcib_managed=true, 
url=https://www.redhat.com, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, architecture=x86_64, name=rhosp17/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target) Dec 6 03:52:12 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: Deactivated successfully. Dec 6 03:52:12 localhost sshd[89009]: main: sshd: ssh-rsa algorithm is disabled Dec 6 03:52:14 localhost sshd[89011]: main: sshd: ssh-rsa algorithm is disabled Dec 6 03:52:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. 
Dec 6 03:52:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. Dec 6 03:52:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. Dec 6 03:52:16 localhost podman[89013]: 2025-12-06 08:52:16.944380945 +0000 UTC m=+0.098852414 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, name=rhosp17/openstack-ovn-controller, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, 
url=https://www.redhat.com, batch=17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., build-date=2025-11-18T23:34:05Z) Dec 6 03:52:16 localhost systemd[1]: tmp-crun.Zf9pxE.mount: Deactivated successfully. Dec 6 03:52:17 localhost podman[89015]: 2025-12-06 08:52:17.000869792 +0000 UTC m=+0.148796631 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, container_name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1761123044, url=https://www.redhat.com, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:14:25Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c) Dec 6 03:52:17 localhost podman[89014]: 2025-12-06 08:52:17.043824826 +0000 UTC m=+0.195506900 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, maintainer=OpenStack TripleO Team, 
vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.buildah.version=1.41.4, batch=17.1_20251118.1, distribution-scope=public, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, io.openshift.expose-services=, container_name=metrics_qdr, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 
qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:46Z, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container) Dec 6 03:52:17 localhost podman[89015]: 2025-12-06 08:52:17.063059264 +0000 UTC m=+0.210986053 container exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, build-date=2025-11-19T00:14:25Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, url=https://www.redhat.com, version=17.1.12, io.openshift.expose-services=, release=1761123044, vendor=Red Hat, Inc., distribution-scope=public) Dec 6 03:52:17 localhost podman[89013]: 2025-12-06 08:52:17.071328767 +0000 UTC m=+0.225800286 container exec_died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.buildah.version=1.41.4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:34:05Z, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, 
konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ovn-controller, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, vendor=Red Hat, Inc., version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, batch=17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, maintainer=OpenStack TripleO Team, release=1761123044) Dec 6 03:52:17 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Deactivated successfully. Dec 6 03:52:17 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Deactivated successfully. 
Dec 6 03:52:17 localhost podman[89014]: 2025-12-06 08:52:17.226378508 +0000 UTC m=+0.378060652 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, release=1761123044, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, container_name=metrics_qdr, vcs-type=git, config_id=tripleo_step1, managed_by=tripleo_ansible, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container) Dec 6 03:52:17 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 03:52:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007. Dec 6 03:52:21 localhost systemd[1]: tmp-crun.aKx9SB.mount: Deactivated successfully. Dec 6 03:52:21 localhost podman[89090]: 2025-12-06 08:52:21.350924705 +0000 UTC m=+0.099854444 container health_status 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vendor=Red Hat, Inc., container_name=nova_compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, vcs-type=git, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.buildah.version=1.41.4, tcib_managed=true, 
org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, version=17.1.12, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Dec 6 03:52:21 localhost podman[89090]: 2025-12-06 08:52:21.384217473 +0000 UTC m=+0.133147162 container exec_died 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, vcs-type=git, managed_by=tripleo_ansible, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, config_id=tripleo_step5) Dec 6 03:52:21 localhost sshd[89115]: main: sshd: ssh-rsa algorithm is disabled Dec 6 03:52:21 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Deactivated successfully. 
Dec 6 03:52:24 localhost sshd[89120]: main: sshd: ssh-rsa algorithm is disabled Dec 6 03:52:27 localhost sshd[89122]: main: sshd: ssh-rsa algorithm is disabled Dec 6 03:52:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc. Dec 6 03:52:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. Dec 6 03:52:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b. Dec 6 03:52:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9. Dec 6 03:52:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. Dec 6 03:52:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6. 
Dec 6 03:52:42 localhost podman[89148]: 2025-12-06 08:52:42.938725576 +0000 UTC m=+0.086637770 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, build-date=2025-11-18T22:51:28Z, com.redhat.component=openstack-collectd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-collectd, url=https://www.redhat.com, vcs-type=git, batch=17.1_20251118.1, release=1761123044, io.buildah.version=1.41.4, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', 
'/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, managed_by=tripleo_ansible, config_id=tripleo_step3, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd) Dec 6 03:52:42 localhost podman[89148]: 2025-12-06 08:52:42.950190276 +0000 UTC m=+0.098102510 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.buildah.version=1.41.4, name=rhosp17/openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, release=1761123044, build-date=2025-11-18T22:51:28Z, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.component=openstack-collectd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, version=17.1.12, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.openshift.expose-services=, managed_by=tripleo_ansible, container_name=collectd) Dec 6 03:52:42 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully. 
Dec 6 03:52:42 localhost podman[89161]: 2025-12-06 08:52:42.992495441 +0000 UTC m=+0.129953385 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, distribution-scope=public, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, managed_by=tripleo_ansible, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, tcib_managed=true, vcs-type=git, build-date=2025-11-18T23:44:13Z, name=rhosp17/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, release=1761123044) Dec 6 03:52:43 localhost podman[89161]: 2025-12-06 08:52:43.00623315 +0000 UTC m=+0.143691144 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, vendor=Red Hat, Inc., url=https://www.redhat.com, tcib_managed=true, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp17/openstack-iscsid, maintainer=OpenStack TripleO Team, container_name=iscsid, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, build-date=2025-11-18T23:44:13Z, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, release=1761123044, summary=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1) Dec 6 03:52:43 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully. 
Dec 6 03:52:43 localhost podman[89162]: 2025-12-06 08:52:43.048276095 +0000 UTC m=+0.179740866 container health_status b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, tcib_managed=true, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, release=1761123044, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, 
container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., batch=17.1_20251118.1, vcs-type=git, build-date=2025-11-19T00:12:45Z, io.openshift.expose-services=, url=https://www.redhat.com, io.buildah.version=1.41.4, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 6 03:52:43 localhost podman[89149]: 2025-12-06 08:52:43.107624631 +0000 UTC m=+0.249057086 container health_status 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, container_name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, release=1761123044, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, tcib_managed=true, vendor=Red Hat, Inc.) 
Dec 6 03:52:43 localhost podman[89162]: 2025-12-06 08:52:43.131972135 +0000 UTC m=+0.263436936 container exec_died b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, architecture=x86_64, container_name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:12:45Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vendor=Red Hat, Inc., batch=17.1_20251118.1, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-ipmi-container, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, version=17.1.12, vcs-type=git) Dec 6 03:52:43 localhost systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: Deactivated successfully. Dec 6 03:52:43 localhost podman[89147]: 2025-12-06 08:52:43.195655772 +0000 UTC m=+0.345072572 container health_status 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, tcib_managed=true, config_id=tripleo_step4, architecture=x86_64, vcs-type=git, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1761123044, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, 
name=rhosp17/openstack-cron, vendor=Red Hat, Inc., batch=17.1_20251118.1, build-date=2025-11-18T22:49:32Z, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond) Dec 6 03:52:43 localhost podman[89150]: 2025-12-06 08:52:43.206291497 +0000 UTC m=+0.342346348 container health_status a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, name=rhosp17/openstack-ceilometer-compute, batch=17.1_20251118.1, tcib_managed=true, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, container_name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, managed_by=tripleo_ansible, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, vendor=Red Hat, Inc., vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, 
architecture=x86_64, release=1761123044) Dec 6 03:52:43 localhost podman[89147]: 2025-12-06 08:52:43.207011629 +0000 UTC m=+0.356428429 container exec_died 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp17/openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, io.buildah.version=1.41.4, config_id=tripleo_step4, batch=17.1_20251118.1, container_name=logrotate_crond, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, release=1761123044, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, vendor=Red Hat, Inc.) Dec 6 03:52:43 localhost systemd[1]: 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.service: Deactivated successfully. Dec 6 03:52:43 localhost podman[89150]: 2025-12-06 08:52:43.31202284 +0000 UTC m=+0.448077701 container exec_died a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., version=17.1.12, batch=17.1_20251118.1, container_name=ceilometer_agent_compute, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, build-date=2025-11-19T00:11:48Z, distribution-scope=public, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676) Dec 6 03:52:43 localhost systemd[1]: a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.service: Deactivated successfully. 
Dec 6 03:52:43 localhost podman[89149]: 2025-12-06 08:52:43.477297954 +0000 UTC m=+0.618730389 container exec_died 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.expose-services=, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, distribution-scope=public, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, version=17.1.12, 
container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, vcs-type=git, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 6 03:52:43 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: Deactivated successfully. Dec 6 03:52:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. Dec 6 03:52:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. Dec 6 03:52:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. 
Dec 6 03:52:47 localhost podman[89281]: 2025-12-06 08:52:47.926801933 +0000 UTC m=+0.082246666 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, architecture=x86_64, io.openshift.expose-services=, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, vcs-type=git, release=1761123044, build-date=2025-11-18T23:34:05Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, config_id=tripleo_step4, maintainer=OpenStack TripleO 
Team, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com) Dec 6 03:52:47 localhost podman[89281]: 2025-12-06 08:52:47.95122666 +0000 UTC m=+0.106671363 container exec_died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, managed_by=tripleo_ansible, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, architecture=x86_64, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, config_id=tripleo_step4, 
name=rhosp17/openstack-ovn-controller, batch=17.1_20251118.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, build-date=2025-11-18T23:34:05Z, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc.) Dec 6 03:52:47 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Deactivated successfully. Dec 6 03:52:48 localhost systemd[1]: tmp-crun.SiD1qJ.mount: Deactivated successfully. Dec 6 03:52:48 localhost podman[89282]: 2025-12-06 08:52:48.037198038 +0000 UTC m=+0.192250129 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, build-date=2025-11-18T22:49:46Z, distribution-scope=public, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible, config_id=tripleo_step1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.openshift.expose-services=) Dec 6 03:52:48 localhost podman[89283]: 2025-12-06 08:52:48.100470343 +0000 UTC m=+0.250098798 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, url=https://www.redhat.com, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, container_name=ovn_metadata_agent, io.openshift.expose-services=, vcs-type=git, 
name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, config_id=tripleo_step4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:14:25Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 6 03:52:48 localhost podman[89283]: 2025-12-06 08:52:48.133030457 +0000 UTC m=+0.282658912 container exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, architecture=x86_64, distribution-scope=public, managed_by=tripleo_ansible, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, build-date=2025-11-19T00:14:25Z, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, vendor=Red Hat, Inc., io.openshift.expose-services=, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, vcs-type=git) Dec 6 03:52:48 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Deactivated successfully. 
Dec 6 03:52:48 localhost podman[89282]: 2025-12-06 08:52:48.26032532 +0000 UTC m=+0.415377461 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, 
maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_id=tripleo_step1, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, version=17.1.12, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, url=https://www.redhat.com, vcs-type=git, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 6 03:52:48 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 03:52:48 localhost systemd[1]: tmp-crun.IzyFP2.mount: Deactivated successfully. Dec 6 03:52:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007. Dec 6 03:52:51 localhost podman[89359]: 2025-12-06 08:52:51.922593667 +0000 UTC m=+0.081032139 container health_status 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, version=17.1.12, release=1761123044, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, container_name=nova_compute, distribution-scope=public, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, vendor=Red Hat, Inc., 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible) Dec 6 03:52:51 localhost podman[89359]: 2025-12-06 08:52:51.955317748 +0000 UTC m=+0.113756240 container exec_died 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_compute, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, tcib_managed=true, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, url=https://www.redhat.com, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 
'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 6 03:52:51 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Deactivated successfully. 
Dec 6 03:53:06 localhost podman[89489]: 2025-12-06 08:53:06.318215245 +0000 UTC m=+0.062394259 container exec ecefb99ff092b207f760d60e8fd97152c624df0e1016549024435ae641ee2e06 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548789, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, io.openshift.expose-services=, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, vcs-type=git, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, architecture=x86_64) Dec 6 03:53:06 localhost podman[89489]: 2025-12-06 08:53:06.410570069 +0000 UTC m=+0.154749113 container exec_died ecefb99ff092b207f760d60e8fd97152c624df0e1016549024435ae641ee2e06 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548789, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, version=7, io.openshift.expose-services=, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, 
distribution-scope=public, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , GIT_BRANCH=main, name=rhceph, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, vcs-type=git, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, vendor=Red Hat, Inc.) Dec 6 03:53:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc. Dec 6 03:53:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. Dec 6 03:53:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b. Dec 6 03:53:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9. Dec 6 03:53:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. Dec 6 03:53:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6. Dec 6 03:53:13 localhost systemd[1]: tmp-crun.QNTgWa.mount: Deactivated successfully. 
Dec 6 03:53:13 localhost podman[89654]: 2025-12-06 08:53:13.958998298 +0000 UTC m=+0.098336917 container health_status b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, io.buildah.version=1.41.4, build-date=2025-11-19T00:12:45Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-ipmi, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, container_name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., vcs-type=git, version=17.1.12) Dec 6 03:53:13 localhost systemd[1]: tmp-crun.rIKUrw.mount: Deactivated successfully. Dec 6 03:53:13 localhost podman[89638]: 2025-12-06 08:53:13.992108991 +0000 UTC m=+0.147706057 container health_status 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, distribution-scope=public, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, managed_by=tripleo_ansible, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.4, architecture=x86_64, vcs-type=git, container_name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, tcib_managed=true, maintainer=OpenStack TripleO Team) Dec 6 03:53:14 localhost podman[89638]: 2025-12-06 08:53:14.004012916 +0000 UTC m=+0.159609972 container exec_died 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.41.4, tcib_managed=true, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:32Z, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, container_name=logrotate_crond, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public) Dec 6 03:53:14 localhost systemd[1]: 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.service: Deactivated successfully. 
Dec 6 03:53:14 localhost podman[89645]: 2025-12-06 08:53:14.044630217 +0000 UTC m=+0.186530975 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, url=https://www.redhat.com, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.openshift.expose-services=, version=17.1.12, architecture=x86_64, vcs-type=git, build-date=2025-11-18T23:44:13Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., release=1761123044, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d) Dec 6 03:53:14 localhost podman[89654]: 2025-12-06 08:53:14.074426228 +0000 UTC m=+0.213764927 container exec_died b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, architecture=x86_64, maintainer=OpenStack TripleO Team, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, url=https://www.redhat.com, container_name=ceilometer_agent_ipmi, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, build-date=2025-11-19T00:12:45Z, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, batch=17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container) Dec 6 03:53:14 localhost podman[89645]: 2025-12-06 08:53:14.081247626 +0000 UTC m=+0.223148434 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:44:13Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20251118.1, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, url=https://www.redhat.com, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, container_name=iscsid, distribution-scope=public, io.buildah.version=1.41.4, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1761123044, architecture=x86_64, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid) Dec 6 03:53:14 localhost systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: Deactivated successfully. 
Dec 6 03:53:14 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully. Dec 6 03:53:14 localhost podman[89640]: 2025-12-06 08:53:14.151996779 +0000 UTC m=+0.302436048 container health_status 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, architecture=x86_64) Dec 6 03:53:14 localhost podman[89639]: 2025-12-06 08:53:14.213482029 +0000 UTC m=+0.368510658 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vendor=Red Hat, Inc., url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-collectd, version=17.1.12, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, architecture=x86_64, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-collectd-container, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=1761123044) Dec 6 03:53:14 localhost podman[89639]: 2025-12-06 08:53:14.223650781 +0000 UTC m=+0.378679420 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, maintainer=OpenStack TripleO Team, 
url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, build-date=2025-11-18T22:51:28Z, release=1761123044, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, vcs-type=git, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp17/openstack-collectd, distribution-scope=public, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-collectd-container) Dec 6 03:53:14 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully. Dec 6 03:53:14 localhost podman[89641]: 2025-12-06 08:53:14.297836669 +0000 UTC m=+0.441596713 container health_status a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:11:48Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, batch=17.1_20251118.1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, tcib_managed=true, managed_by=tripleo_ansible, vcs-type=git, distribution-scope=public, release=1761123044, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute) Dec 6 03:53:14 localhost podman[89641]: 2025-12-06 08:53:14.318069438 +0000 UTC m=+0.461829562 container exec_died a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, tcib_managed=true, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-compute, version=17.1.12, architecture=x86_64, build-date=2025-11-19T00:11:48Z, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, distribution-scope=public, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, managed_by=tripleo_ansible) Dec 6 03:53:14 localhost systemd[1]: a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.service: Deactivated successfully. Dec 6 03:53:14 localhost podman[89640]: 2025-12-06 08:53:14.597200272 +0000 UTC m=+0.747639481 container exec_died 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, architecture=x86_64, io.openshift.expose-services=, tcib_managed=true, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, container_name=nova_migration_target) Dec 6 03:53:14 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: Deactivated successfully. Dec 6 03:53:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. Dec 6 03:53:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. Dec 6 03:53:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. Dec 6 03:53:18 localhost systemd[1]: tmp-crun.yTqNwE.mount: Deactivated successfully. 
Dec 6 03:53:18 localhost podman[89770]: 2025-12-06 08:53:18.939813553 +0000 UTC m=+0.097701637 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.expose-services=, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 6 03:53:18 localhost podman[89769]: 2025-12-06 08:53:18.896466238 +0000 UTC m=+0.059936663 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, tcib_managed=true, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, vendor=Red Hat, Inc., config_id=tripleo_step4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, vcs-type=git, url=https://www.redhat.com, architecture=x86_64, io.buildah.version=1.41.4, batch=17.1_20251118.1, build-date=2025-11-18T23:34:05Z, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, container_name=ovn_controller, release=1761123044, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team) Dec 6 03:53:18 localhost podman[89769]: 2025-12-06 08:53:18.975272218 +0000 UTC m=+0.138742623 container exec_died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://www.redhat.com, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, container_name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, version=17.1.12, vcs-type=git, config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container, name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, architecture=x86_64, release=1761123044, description=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 6 03:53:18 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Deactivated successfully. 
Dec 6 03:53:18 localhost podman[89771]: 2025-12-06 08:53:18.988261665 +0000 UTC m=+0.142110326 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, architecture=x86_64, build-date=2025-11-19T00:14:25Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, url=https://www.redhat.com, io.openshift.expose-services=, vcs-type=git, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc.) 
Dec 6 03:53:19 localhost podman[89771]: 2025-12-06 08:53:19.046088053 +0000 UTC m=+0.199936674 container exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_id=tripleo_step4, version=17.1.12, build-date=2025-11-19T00:14:25Z, architecture=x86_64, url=https://www.redhat.com, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, tcib_managed=true, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.4, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, batch=17.1_20251118.1) Dec 6 03:53:19 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Deactivated successfully. 
Dec 6 03:53:19 localhost podman[89770]: 2025-12-06 08:53:19.118229569 +0000 UTC m=+0.276117703 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, version=17.1.12, container_name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.4, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-qdrouterd, distribution-scope=public, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, managed_by=tripleo_ansible, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, release=1761123044, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 6 03:53:19 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 03:53:19 localhost systemd[1]: tmp-crun.I8bopW.mount: Deactivated successfully. Dec 6 03:53:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007. Dec 6 03:53:22 localhost systemd[1]: tmp-crun.Qet4nb.mount: Deactivated successfully. 
Dec 6 03:53:22 localhost podman[89846]: 2025-12-06 08:53:22.930975457 +0000 UTC m=+0.092321384 container health_status 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, architecture=x86_64, io.openshift.expose-services=, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, release=1761123044, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, version=17.1.12, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, container_name=nova_compute, url=https://www.redhat.com, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 6 03:53:22 localhost podman[89846]: 2025-12-06 08:53:22.987111383 +0000 UTC m=+0.148457260 container exec_died 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step5, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, vcs-type=git, architecture=x86_64, managed_by=tripleo_ansible, batch=17.1_20251118.1, 
org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, distribution-scope=public, io.openshift.expose-services=, release=1761123044, container_name=nova_compute, io.buildah.version=1.41.4, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 6 03:53:23 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Deactivated successfully. Dec 6 03:53:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc. Dec 6 03:53:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. Dec 6 03:53:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b. Dec 6 03:53:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9. Dec 6 03:53:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. Dec 6 03:53:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6. Dec 6 03:53:44 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 6 03:53:44 localhost recover_tripleo_nova_virtqemud[89927]: 61814 Dec 6 03:53:44 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 6 03:53:44 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. 
Dec 6 03:53:44 localhost systemd[1]: tmp-crun.9j0JNa.mount: Deactivated successfully. Dec 6 03:53:44 localhost podman[89895]: 2025-12-06 08:53:44.968722297 +0000 UTC m=+0.118890696 container health_status 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, config_id=tripleo_step4, container_name=logrotate_crond, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:32Z, url=https://www.redhat.com, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, release=1761123044, com.redhat.component=openstack-cron-container, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, name=rhosp17/openstack-cron, io.buildah.version=1.41.4, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible) Dec 6 03:53:44 localhost podman[89904]: 2025-12-06 08:53:44.981294521 +0000 UTC m=+0.116152342 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp17/openstack-iscsid, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.4, vcs-type=git, io.openshift.expose-services=, url=https://www.redhat.com, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., distribution-scope=public, version=17.1.12, build-date=2025-11-18T23:44:13Z, architecture=x86_64) Dec 6 03:53:45 localhost podman[89896]: 2025-12-06 08:53:45.016271291 +0000 UTC m=+0.161547541 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, url=https://www.redhat.com, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, container_name=collectd, vendor=Red Hat, Inc., config_id=tripleo_step3, 
config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-collectd, build-date=2025-11-18T22:51:28Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1) Dec 6 03:53:45 localhost podman[89914]: 2025-12-06 08:53:45.023295025 +0000 UTC m=+0.152222275 container health_status b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, container_name=ceilometer_agent_ipmi, distribution-scope=public, build-date=2025-11-19T00:12:45Z, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., io.buildah.version=1.41.4, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, tcib_managed=true, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-ipmi, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 6 03:53:45 localhost podman[89904]: 2025-12-06 08:53:45.037124258 +0000 UTC m=+0.171982159 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, name=rhosp17/openstack-iscsid, io.buildah.version=1.41.4, io.openshift.expose-services=, version=17.1.12, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, batch=17.1_20251118.1, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, vcs-type=git, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:44:13Z, architecture=x86_64, 
managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 6 03:53:45 localhost podman[89896]: 2025-12-06 08:53:45.048150145 +0000 UTC m=+0.193426335 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, build-date=2025-11-18T22:51:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, 
batch=17.1_20251118.1, version=17.1.12, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.buildah.version=1.41.4, url=https://www.redhat.com, architecture=x86_64, distribution-scope=public, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, name=rhosp17/openstack-collectd, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 collectd, container_name=collectd, tcib_managed=true, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 6 03:53:45 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully. Dec 6 03:53:45 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully. Dec 6 03:53:45 localhost podman[89914]: 2025-12-06 08:53:45.091681817 +0000 UTC m=+0.220609087 container exec_died b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, distribution-scope=public, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., release=1761123044, architecture=x86_64, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_id=tripleo_step4, io.openshift.expose-services=, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1) Dec 6 03:53:45 localhost systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: Deactivated successfully. 
Dec 6 03:53:45 localhost podman[89895]: 2025-12-06 08:53:45.112178033 +0000 UTC m=+0.262346422 container exec_died 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=logrotate_crond, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, batch=17.1_20251118.1, vcs-type=git, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, name=rhosp17/openstack-cron, build-date=2025-11-18T22:49:32Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron) Dec 6 03:53:45 localhost systemd[1]: 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.service: Deactivated successfully. Dec 6 03:53:45 localhost podman[89903]: 2025-12-06 08:53:45.169185626 +0000 UTC m=+0.307278356 container health_status a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack TripleO Team, vcs-type=git, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, io.buildah.version=1.41.4, managed_by=tripleo_ansible, container_name=ceilometer_agent_compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, distribution-scope=public, release=1761123044, vendor=Red Hat, Inc., config_id=tripleo_step4, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-compute-container, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:11:48Z, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 6 03:53:45 localhost podman[89903]: 2025-12-06 08:53:45.193145799 +0000 UTC m=+0.331238539 container exec_died a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, build-date=2025-11-19T00:11:48Z, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
name=rhosp17/openstack-ceilometer-compute, release=1761123044, com.redhat.component=openstack-ceilometer-compute-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, container_name=ceilometer_agent_compute, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, batch=17.1_20251118.1, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute) Dec 6 03:53:45 localhost systemd[1]: a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.service: Deactivated successfully. Dec 6 03:53:45 localhost podman[89897]: 2025-12-06 08:53:45.042807982 +0000 UTC m=+0.185972927 container health_status 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, managed_by=tripleo_ansible, config_id=tripleo_step4, vendor=Red Hat, Inc., release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-type=git, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 6 03:53:45 localhost podman[89897]: 2025-12-06 08:53:45.385855182 +0000 UTC m=+0.529020157 container exec_died 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.expose-services=, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., container_name=nova_migration_target, release=1761123044, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 6 03:53:45 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: Deactivated successfully. Dec 6 03:53:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. 
Dec 6 03:53:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. Dec 6 03:53:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. Dec 6 03:53:49 localhost podman[90029]: 2025-12-06 08:53:49.918640015 +0000 UTC m=+0.071299691 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 
'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, build-date=2025-11-19T00:14:25Z, vendor=Red Hat, Inc., url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, version=17.1.12, io.openshift.expose-services=) Dec 6 03:53:49 localhost podman[90027]: 2025-12-06 08:53:49.973578875 +0000 UTC m=+0.130940114 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, vendor=Red Hat, Inc., summary=Red Hat 
OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:34:05Z, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, io.openshift.expose-services=, release=1761123044, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, name=rhosp17/openstack-ovn-controller, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, config_id=tripleo_step4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team) Dec 6 03:53:49 localhost podman[90029]: 2025-12-06 08:53:49.981125516 +0000 UTC m=+0.133785182 container exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, release=1761123044, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, 
url=https://www.redhat.com, container_name=ovn_metadata_agent, architecture=x86_64, build-date=2025-11-19T00:14:25Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.12, config_id=tripleo_step4, 
batch=17.1_20251118.1, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 6 03:53:49 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Deactivated successfully. Dec 6 03:53:49 localhost podman[90027]: 2025-12-06 08:53:49.997125915 +0000 UTC m=+0.154487204 container exec_died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, version=17.1.12, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.buildah.version=1.41.4, vendor=Red Hat, Inc., managed_by=tripleo_ansible, url=https://www.redhat.com, build-date=2025-11-18T23:34:05Z, release=1761123044, io.openshift.expose-services=, container_name=ovn_controller, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack 
Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4) Dec 6 03:53:50 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Deactivated successfully. 
Dec 6 03:53:50 localhost podman[90028]: 2025-12-06 08:53:50.079618357 +0000 UTC m=+0.235608085 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, release=1761123044, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, io.buildah.version=1.41.4, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:46Z) Dec 6 03:53:50 localhost podman[90028]: 2025-12-06 08:53:50.278062095 +0000 UTC m=+0.434051823 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, version=17.1.12, vcs-type=git, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, container_name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, distribution-scope=public, batch=17.1_20251118.1, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 6 03:53:50 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 03:53:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007. 
Dec 6 03:53:53 localhost podman[90102]: 2025-12-06 08:53:53.939088874 +0000 UTC m=+0.100112112 container health_status 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vcs-type=git, architecture=x86_64, config_id=tripleo_step5, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, version=17.1.12, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, tcib_managed=true, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, vendor=Red Hat, Inc., managed_by=tripleo_ansible) Dec 6 03:53:53 localhost podman[90102]: 2025-12-06 08:53:53.968237835 +0000 UTC m=+0.129261053 container exec_died 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, 
vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', 
'/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, config_id=tripleo_step5, io.buildah.version=1.41.4, managed_by=tripleo_ansible) Dec 6 03:53:53 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Deactivated successfully. Dec 6 03:54:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc. Dec 6 03:54:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. Dec 6 03:54:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b. Dec 6 03:54:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9. Dec 6 03:54:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. Dec 6 03:54:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6. Dec 6 03:54:15 localhost systemd[1]: tmp-crun.4n3QPH.mount: Deactivated successfully. 
Dec 6 03:54:15 localhost podman[90210]: 2025-12-06 08:54:15.930903713 +0000 UTC m=+0.087979830 container health_status 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, config_id=tripleo_step4, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target) Dec 6 03:54:15 localhost podman[90205]: 2025-12-06 08:54:15.97982872 +0000 UTC m=+0.143401666 container health_status 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.buildah.version=1.41.4, container_name=logrotate_crond, build-date=2025-11-18T22:49:32Z, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, architecture=x86_64, name=rhosp17/openstack-cron, tcib_managed=true, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, vcs-type=git, managed_by=tripleo_ansible, url=https://www.redhat.com, version=17.1.12, release=1761123044, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc.) 
Dec 6 03:54:15 localhost podman[90205]: 2025-12-06 08:54:15.987971529 +0000 UTC m=+0.151544495 container exec_died 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, vendor=Red Hat, Inc., url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-type=git, version=17.1.12, io.buildah.version=1.41.4, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, distribution-scope=public, tcib_managed=true, name=rhosp17/openstack-cron, container_name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}) Dec 6 03:54:15 localhost systemd[1]: 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.service: Deactivated successfully. Dec 6 03:54:16 localhost podman[90225]: 2025-12-06 08:54:16.025096324 +0000 UTC m=+0.167805162 container health_status b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.openshift.expose-services=, distribution-scope=public, config_id=tripleo_step4, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:12:45Z, container_name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com) Dec 6 03:54:16 localhost podman[90222]: 2025-12-06 08:54:16.041880287 +0000 UTC m=+0.184264895 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 
'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.41.4, batch=17.1_20251118.1, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, version=17.1.12, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, name=rhosp17/openstack-iscsid, managed_by=tripleo_ansible, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid) Dec 6 03:54:16 localhost podman[90222]: 2025-12-06 08:54:16.05308759 +0000 UTC m=+0.195472208 container 
exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, build-date=2025-11-18T23:44:13Z, com.redhat.component=openstack-iscsid-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vendor=Red Hat, Inc., managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, 
architecture=x86_64, container_name=iscsid, release=1761123044, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_id=tripleo_step3, tcib_managed=true, distribution-scope=public) Dec 6 03:54:16 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully. Dec 6 03:54:16 localhost podman[90225]: 2025-12-06 08:54:16.077086593 +0000 UTC m=+0.219795431 container exec_died b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:12:45Z, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, maintainer=OpenStack TripleO Team, vcs-type=git, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, io.buildah.version=1.41.4, tcib_managed=true, version=17.1.12) Dec 6 03:54:16 localhost systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: Deactivated successfully. 
Dec 6 03:54:16 localhost podman[90213]: 2025-12-06 08:54:16.094042641 +0000 UTC m=+0.244365652 container health_status a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, distribution-scope=public, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, architecture=x86_64, build-date=2025-11-19T00:11:48Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, io.buildah.version=1.41.4, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, vendor=Red Hat, Inc., config_id=tripleo_step4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 6 03:54:16 localhost podman[90213]: 2025-12-06 08:54:16.114140736 +0000 UTC m=+0.264463828 container exec_died a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, batch=17.1_20251118.1, distribution-scope=public, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, managed_by=tripleo_ansible, release=1761123044, build-date=2025-11-19T00:11:48Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, version=17.1.12, vcs-type=git, io.openshift.tags=rhosp 
osp openstack osp-17.1 openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676) Dec 6 03:54:16 localhost systemd[1]: a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.service: Deactivated successfully. 
Dec 6 03:54:16 localhost podman[90206]: 2025-12-06 08:54:16.201131856 +0000 UTC m=+0.358153252 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, container_name=collectd, name=rhosp17/openstack-collectd, io.openshift.expose-services=, tcib_managed=true, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:51:28Z) Dec 6 03:54:16 localhost podman[90206]: 2025-12-06 08:54:16.211040389 +0000 UTC m=+0.368061735 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, release=1761123044, io.openshift.expose-services=, tcib_managed=true, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, 
config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, url=https://www.redhat.com) Dec 6 03:54:16 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully. 
Dec 6 03:54:16 localhost podman[90210]: 2025-12-06 08:54:16.275155519 +0000 UTC m=+0.432231706 container exec_died 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, container_name=nova_migration_target, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, version=17.1.12, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, 
konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.expose-services=, managed_by=tripleo_ansible, batch=17.1_20251118.1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64) Dec 6 03:54:16 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: Deactivated successfully. Dec 6 03:54:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. Dec 6 03:54:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. Dec 6 03:54:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. 
Dec 6 03:54:20 localhost podman[90336]: 2025-12-06 08:54:20.915318457 +0000 UTC m=+0.081003678 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, architecture=x86_64, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-type=git, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, 
release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 6 03:54:20 localhost systemd[1]: tmp-crun.EBOJs9.mount: Deactivated successfully. Dec 6 03:54:20 localhost podman[90335]: 2025-12-06 08:54:20.980746047 +0000 UTC m=+0.145228981 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, batch=17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, 
io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, build-date=2025-11-18T23:34:05Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, config_id=tripleo_step4, url=https://www.redhat.com, container_name=ovn_controller, architecture=x86_64, managed_by=tripleo_ansible, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public) Dec 6 03:54:21 localhost podman[90335]: 2025-12-06 08:54:21.009077604 +0000 UTC m=+0.173560498 container exec_died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container, name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, 
release=1761123044, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:34:05Z, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 6 03:54:21 localhost podman[90337]: 2025-12-06 08:54:21.022712361 +0000 UTC m=+0.180910323 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, distribution-scope=public, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 
1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, name=rhosp17/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, version=17.1.12, io.openshift.expose-services=, managed_by=tripleo_ansible, tcib_managed=true) Dec 6 03:54:21 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Deactivated successfully. Dec 6 03:54:21 localhost podman[90337]: 2025-12-06 08:54:21.096334031 +0000 UTC m=+0.254532023 container exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, io.openshift.expose-services=, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, managed_by=tripleo_ansible, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, config_id=tripleo_step4, vendor=Red Hat, Inc., batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 6 03:54:21 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Deactivated successfully. 
Dec 6 03:54:21 localhost podman[90336]: 2025-12-06 08:54:21.111235887 +0000 UTC m=+0.276921198 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, vcs-type=git, distribution-scope=public, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:46Z, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, release=1761123044, batch=17.1_20251118.1, 
vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, version=17.1.12, url=https://www.redhat.com, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 6 03:54:21 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 03:54:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007. Dec 6 03:54:24 localhost systemd[1]: tmp-crun.OiNcAd.mount: Deactivated successfully. Dec 6 03:54:24 localhost podman[90414]: 2025-12-06 08:54:24.936840017 +0000 UTC m=+0.096840802 container health_status 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, io.openshift.expose-services=, distribution-scope=public, batch=17.1_20251118.1, container_name=nova_compute, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', 
'/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, url=https://www.redhat.com, config_id=tripleo_step5) Dec 6 03:54:24 localhost podman[90414]: 2025-12-06 08:54:24.992236171 +0000 UTC m=+0.152236916 container exec_died 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, architecture=x86_64, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, io.buildah.version=1.41.4, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com) Dec 6 03:54:25 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Deactivated successfully. Dec 6 03:54:33 localhost sshd[90462]: main: sshd: ssh-rsa algorithm is disabled Dec 6 03:54:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc. 
Dec 6 03:54:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. Dec 6 03:54:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b. Dec 6 03:54:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9. Dec 6 03:54:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. Dec 6 03:54:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6. Dec 6 03:54:46 localhost podman[90466]: 2025-12-06 08:54:46.947804529 +0000 UTC m=+0.087971140 container health_status 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, vcs-type=git, architecture=x86_64, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, vendor=Red Hat, Inc.) Dec 6 03:54:46 localhost systemd[1]: tmp-crun.O6V8cV.mount: Deactivated successfully. 
Dec 6 03:54:46 localhost podman[90467]: 2025-12-06 08:54:46.965505241 +0000 UTC m=+0.094549222 container health_status a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, build-date=2025-11-19T00:11:48Z, version=17.1.12, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_compute, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 6 03:54:47 localhost podman[90464]: 2025-12-06 08:54:47.010773245 +0000 UTC m=+0.152479244 container health_status 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, maintainer=OpenStack TripleO Team, release=1761123044, com.redhat.component=openstack-cron-container, version=17.1.12, vcs-type=git, managed_by=tripleo_ansible, container_name=logrotate_crond, io.buildah.version=1.41.4, name=rhosp17/openstack-cron, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 6 03:54:47 localhost podman[90467]: 2025-12-06 08:54:47.013954972 +0000 UTC m=+0.142998883 container exec_died a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, container_name=ceilometer_agent_compute, batch=17.1_20251118.1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, url=https://www.redhat.com, vendor=Red Hat, Inc., release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 
17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, version=17.1.12, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:11:48Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676) Dec 6 03:54:47 localhost 
systemd[1]: a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.service: Deactivated successfully. Dec 6 03:54:47 localhost podman[90464]: 2025-12-06 08:54:47.067453998 +0000 UTC m=+0.209159957 container exec_died 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, url=https://www.redhat.com, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, name=rhosp17/openstack-cron, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.12, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-cron-container, tcib_managed=true, vcs-type=git) Dec 6 03:54:47 localhost podman[90465]: 2025-12-06 08:54:47.074441271 +0000 UTC m=+0.214021925 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, name=rhosp17/openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, io.openshift.expose-services=, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, version=17.1.12, vcs-type=git, maintainer=OpenStack TripleO Team, container_name=collectd, build-date=2025-11-18T22:51:28Z, io.buildah.version=1.41.4) Dec 6 03:54:47 localhost systemd[1]: 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.service: Deactivated successfully. 
Dec 6 03:54:47 localhost podman[90483]: 2025-12-06 08:54:47.100633442 +0000 UTC m=+0.224251917 container health_status b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, batch=17.1_20251118.1, version=17.1.12, architecture=x86_64, config_id=tripleo_step4, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, build-date=2025-11-19T00:12:45Z, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp 
osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, managed_by=tripleo_ansible, io.buildah.version=1.41.4, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 6 03:54:47 localhost podman[90465]: 2025-12-06 08:54:47.107043678 +0000 UTC m=+0.246624332 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.buildah.version=1.41.4, url=https://www.redhat.com, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, release=1761123044, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, build-date=2025-11-18T22:51:28Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, vcs-type=git, tcib_managed=true, com.redhat.component=openstack-collectd-container) Dec 6 03:54:47 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully. 
Dec 6 03:54:47 localhost podman[90474]: 2025-12-06 08:54:47.16042654 +0000 UTC m=+0.290607017 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, batch=17.1_20251118.1, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-iscsid, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, config_id=tripleo_step3, url=https://www.redhat.com, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:44:13Z, release=1761123044, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container) Dec 6 03:54:47 localhost podman[90474]: 2025-12-06 08:54:47.1904898 +0000 UTC m=+0.320670307 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, vcs-type=git, build-date=2025-11-18T23:44:13Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', 
'/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, release=1761123044, io.buildah.version=1.41.4, com.redhat.component=openstack-iscsid-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, container_name=iscsid, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-iscsid) Dec 6 03:54:47 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully. 
Dec 6 03:54:47 localhost podman[90483]: 2025-12-06 08:54:47.246230994 +0000 UTC m=+0.369849549 container exec_died b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., tcib_managed=true, config_id=tripleo_step4, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, 
org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, release=1761123044, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, url=https://www.redhat.com, managed_by=tripleo_ansible) Dec 6 03:54:47 localhost systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: Deactivated successfully. Dec 6 03:54:47 localhost podman[90466]: 2025-12-06 08:54:47.329847971 +0000 UTC m=+0.470014652 container exec_died 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-type=git, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, release=1761123044, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': 
{'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Dec 6 03:54:47 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: Deactivated successfully. Dec 6 03:54:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. Dec 6 03:54:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. Dec 6 03:54:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. 
Dec 6 03:54:51 localhost podman[90595]: 2025-12-06 08:54:51.928600272 +0000 UTC m=+0.085967720 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, vendor=Red Hat, Inc., version=17.1.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, managed_by=tripleo_ansible, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, vcs-type=git, io.openshift.expose-services=, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 6 03:54:52 localhost systemd[1]: tmp-crun.hSHBbG.mount: Deactivated successfully. Dec 6 03:54:52 localhost podman[90596]: 2025-12-06 08:54:52.01225621 +0000 UTC m=+0.164714117 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.openshift.expose-services=, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, managed_by=tripleo_ansible, distribution-scope=public, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, container_name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, build-date=2025-11-19T00:14:25Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container) Dec 6 03:54:52 localhost podman[90594]: 2025-12-06 08:54:52.053074238 +0000 UTC m=+0.212134237 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 
(image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, distribution-scope=public, version=17.1.12, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, architecture=x86_64, vendor=Red Hat, Inc., batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, name=rhosp17/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, build-date=2025-11-18T23:34:05Z, io.buildah.version=1.41.4, io.openshift.expose-services=, container_name=ovn_controller, 
managed_by=tripleo_ansible) Dec 6 03:54:52 localhost podman[90596]: 2025-12-06 08:54:52.058744992 +0000 UTC m=+0.211202939 container exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1761123044, 
com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:14:25Z, config_id=tripleo_step4, tcib_managed=true, architecture=x86_64, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.openshift.expose-services=, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 6 03:54:52 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Deactivated successfully. 
Dec 6 03:54:52 localhost podman[90594]: 2025-12-06 08:54:52.081075625 +0000 UTC m=+0.240135604 container exec_died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, architecture=x86_64, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, config_id=tripleo_step4, version=17.1.12, tcib_managed=true, io.openshift.expose-services=, managed_by=tripleo_ansible, io.buildah.version=1.41.4, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, build-date=2025-11-18T23:34:05Z, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 6 03:54:52 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Deactivated successfully. Dec 6 03:54:52 localhost podman[90595]: 2025-12-06 08:54:52.158033247 +0000 UTC m=+0.315400675 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:46Z, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, tcib_managed=true, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd) Dec 6 03:54:52 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 03:54:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007. Dec 6 03:54:55 localhost systemd[1]: tmp-crun.VjVYvR.mount: Deactivated successfully. 
Dec 6 03:54:55 localhost podman[90669]: 2025-12-06 08:54:55.92249661 +0000 UTC m=+0.085410623 container health_status 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, config_id=tripleo_step5, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, name=rhosp17/openstack-nova-compute, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, release=1761123044, distribution-scope=public, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute) Dec 6 03:54:55 localhost podman[90669]: 2025-12-06 08:54:55.949136334 +0000 UTC m=+0.112050347 container exec_died 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, architecture=x86_64, config_id=tripleo_step5, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, 
url=https://www.redhat.com, version=17.1.12, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, tcib_managed=true, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, batch=17.1_20251118.1, io.buildah.version=1.41.4, release=1761123044, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute) Dec 6 03:54:55 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Deactivated successfully. Dec 6 03:55:04 localhost sshd[90693]: main: sshd: ssh-rsa algorithm is disabled Dec 6 03:55:06 localhost sshd[90695]: main: sshd: ssh-rsa algorithm is disabled Dec 6 03:55:08 localhost sshd[90697]: main: sshd: ssh-rsa algorithm is disabled Dec 6 03:55:10 localhost sshd[90729]: main: sshd: ssh-rsa algorithm is disabled Dec 6 03:55:12 localhost sshd[90779]: main: sshd: ssh-rsa algorithm is disabled Dec 6 03:55:16 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 6 03:55:16 localhost recover_tripleo_nova_virtqemud[90782]: 61814 Dec 6 03:55:16 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 6 03:55:16 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 6 03:55:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc. Dec 6 03:55:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. Dec 6 03:55:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b. 
Dec 6 03:55:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9. Dec 6 03:55:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. Dec 6 03:55:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6. Dec 6 03:55:17 localhost systemd[1]: tmp-crun.N10JAa.mount: Deactivated successfully. Dec 6 03:55:17 localhost podman[90784]: 2025-12-06 08:55:17.943031436 +0000 UTC m=+0.096732938 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, batch=17.1_20251118.1, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, url=https://www.redhat.com, managed_by=tripleo_ansible, vcs-type=git, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, container_name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, distribution-scope=public, architecture=x86_64, build-date=2025-11-18T22:51:28Z, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4)
Dec 6 03:55:17 localhost systemd[1]: tmp-crun.G9rcL4.mount: Deactivated successfully.
Dec 6 03:55:17 localhost podman[90786]: 2025-12-06 08:55:17.988828357 +0000 UTC m=+0.136663900 container health_status a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, vcs-type=git, container_name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676)
Dec 6 03:55:17 localhost podman[90783]: 2025-12-06 08:55:17.996168071 +0000 UTC m=+0.149847032 container health_status 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, maintainer=OpenStack TripleO Team, tcib_managed=true, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, container_name=logrotate_crond, release=1761123044, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, com.redhat.component=openstack-cron-container, vcs-type=git, architecture=x86_64, config_data={'environment':
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.openshift.expose-services=)
Dec 6 03:55:18 localhost podman[90784]: 2025-12-06 08:55:18.009104137 +0000 UTC m=+0.162805649 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1',
'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2025-11-18T22:51:28Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, vcs-type=git, com.redhat.component=openstack-collectd-container, 
config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd)
Dec 6 03:55:18 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully.
Dec 6 03:55:18 localhost podman[90783]: 2025-12-06 08:55:18.033037719 +0000 UTC m=+0.186716690 container exec_died 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, name=rhosp17/openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, tcib_managed=true, maintainer=OpenStack TripleO Team, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:32Z, io.openshift.expose-services=, release=1761123044, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.4, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user':
'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public)
Dec 6 03:55:18 localhost systemd[1]: 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.service: Deactivated successfully.
Dec 6 03:55:18 localhost podman[90785]: 2025-12-06 08:55:18.047887683 +0000 UTC m=+0.197245183 container health_status 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, managed_by=tripleo_ansible, version=17.1.12, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro',
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, container_name=nova_migration_target, batch=17.1_20251118.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, tcib_managed=true, config_id=tripleo_step4, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, com.redhat.component=openstack-nova-compute-container)
Dec 6 03:55:18 localhost podman[90786]: 2025-12-06 08:55:18.065965685 +0000 UTC m=+0.213801218 container exec_died a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream,
com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, build-date=2025-11-19T00:11:48Z, distribution-scope=public, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, managed_by=tripleo_ansible, config_id=tripleo_step4, batch=17.1_20251118.1)
Dec 6 03:55:18 localhost systemd[1]: a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.service: Deactivated successfully.
Dec 6 03:55:18 localhost podman[90805]: 2025-12-06 08:55:18.145236599 +0000 UTC m=+0.285572772 container health_status b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, distribution-scope=public, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, release=1761123044, managed_by=tripleo_ansible, url=https://www.redhat.com, io.buildah.version=1.41.4, config_id=tripleo_step4, version=17.1.12, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro',
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, name=rhosp17/openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1)
Dec 6 03:55:18 localhost podman[90798]: 2025-12-06 08:55:18.191000298 +0000 UTC m=+0.332805587 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_id=tripleo_step3, release=1761123044, batch=17.1_20251118.1, container_name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, build-date=2025-11-18T23:44:13Z, tcib_managed=true, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS',
'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., name=rhosp17/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container)
Dec 6 03:55:18 localhost podman[90798]: 2025-12-06 08:55:18.222559913 +0000 UTC m=+0.364365222 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, distribution-scope=public, tcib_managed=true,
konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, name=rhosp17/openstack-iscsid, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, vendor=Red Hat, Inc., build-date=2025-11-18T23:44:13Z, vcs-type=git, io.buildah.version=1.41.4, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid)
Dec 6 03:55:18 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully.
Dec 6 03:55:18 localhost podman[90805]: 2025-12-06 08:55:18.273187641 +0000 UTC m=+0.413523784 container exec_died b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, container_name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro',
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, release=1761123044, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, vcs-type=git, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, build-date=2025-11-19T00:12:45Z)
Dec 6 03:55:18 localhost systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: Deactivated successfully.
Dec 6 03:55:18 localhost podman[90785]: 2025-12-06 08:55:18.41115654 +0000 UTC m=+0.560514130 container exec_died 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, build-date=2025-11-19T00:36:58Z, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, release=1761123044, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, 
konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, container_name=nova_migration_target, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc.)
Dec 6 03:55:18 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: Deactivated successfully.
Dec 6 03:55:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.
Dec 6 03:55:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.
Dec 6 03:55:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.
Dec 6 03:55:22 localhost podman[90911]: 2025-12-06 08:55:22.916113451 +0000 UTC m=+0.078765930 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2025-11-18T23:34:05Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, config_id=tripleo_step4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, version=17.1.12, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, container_name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 
ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, batch=17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team) Dec 6 03:55:22 localhost podman[90911]: 2025-12-06 08:55:22.968053308 +0000 UTC m=+0.130705817 container exec_died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, container_name=ovn_controller, vcs-type=git, batch=17.1_20251118.1, architecture=x86_64, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., config_id=tripleo_step4, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, tcib_managed=true, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, release=1761123044, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 6 03:55:22 localhost podman[90912]: 2025-12-06 08:55:22.979125757 +0000 UTC m=+0.139110475 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, batch=17.1_20251118.1, tcib_managed=true, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-qdrouterd) Dec 6 03:55:22 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Deactivated successfully. 
Dec 6 03:55:23 localhost podman[90913]: 2025-12-06 08:55:23.023786072 +0000 UTC m=+0.180517150 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, batch=17.1_20251118.1, distribution-scope=public, vendor=Red Hat, Inc., vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, release=1761123044, config_id=tripleo_step4, tcib_managed=true, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, url=https://www.redhat.com) Dec 6 03:55:23 localhost podman[90913]: 2025-12-06 08:55:23.068211061 +0000 UTC m=+0.224942149 container exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-type=git, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, version=17.1.12, managed_by=tripleo_ansible, architecture=x86_64, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_metadata_agent, release=1761123044, 
url=https://www.redhat.com, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c) Dec 6 03:55:23 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Deactivated successfully. Dec 6 03:55:23 localhost podman[90912]: 2025-12-06 08:55:23.136321873 +0000 UTC m=+0.296306651 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, vendor=Red Hat, Inc., vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, release=1761123044, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, io.openshift.expose-services=, version=17.1.12, url=https://www.redhat.com, config_id=tripleo_step1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr) Dec 6 03:55:23 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 03:55:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007. Dec 6 03:55:26 localhost systemd[1]: tmp-crun.SfimPh.mount: Deactivated successfully. 
Dec 6 03:55:26 localhost podman[90985]: 2025-12-06 08:55:26.925124499 +0000 UTC m=+0.087481887 container health_status 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_compute, managed_by=tripleo_ansible, architecture=x86_64, build-date=2025-11-19T00:36:58Z, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, batch=17.1_20251118.1) Dec 6 03:55:26 localhost podman[90985]: 2025-12-06 08:55:26.982158222 +0000 UTC m=+0.144515550 container exec_died 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 
'18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, io.buildah.version=1.41.4, container_name=nova_compute, tcib_managed=true, vendor=Red Hat, Inc., distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, batch=17.1_20251118.1, architecture=x86_64, 
build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team) Dec 6 03:55:26 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Deactivated successfully. Dec 6 03:55:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc. Dec 6 03:55:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. Dec 6 03:55:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b. Dec 6 03:55:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9. Dec 6 03:55:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. Dec 6 03:55:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6. 
Dec 6 03:55:48 localhost podman[91041]: 2025-12-06 08:55:48.946029073 +0000 UTC m=+0.094853290 container health_status 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack 
Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, tcib_managed=true, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, architecture=x86_64, managed_by=tripleo_ansible, batch=17.1_20251118.1, container_name=nova_migration_target) Dec 6 03:55:48 localhost systemd[1]: tmp-crun.USg5WS.mount: Deactivated successfully. Dec 6 03:55:48 localhost podman[91035]: 2025-12-06 08:55:48.996587619 +0000 UTC m=+0.147526411 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, container_name=collectd, name=rhosp17/openstack-collectd, com.redhat.component=openstack-collectd-container, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., build-date=2025-11-18T22:51:28Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.4, distribution-scope=public, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, version=17.1.12, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3) Dec 6 03:55:49 localhost podman[91035]: 2025-12-06 08:55:49.004573003 +0000 UTC m=+0.155511796 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
architecture=x86_64, managed_by=tripleo_ansible, release=1761123044, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-collectd, io.buildah.version=1.41.4, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, com.redhat.component=openstack-collectd-container, 
konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, build-date=2025-11-18T22:51:28Z, config_id=tripleo_step3, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 6 03:55:49 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully. Dec 6 03:55:49 localhost podman[91034]: 2025-12-06 08:55:49.055703178 +0000 UTC m=+0.215916914 container health_status 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, distribution-scope=public, config_id=tripleo_step4, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, batch=17.1_20251118.1, build-date=2025-11-18T22:49:32Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, com.redhat.component=openstack-cron-container, url=https://www.redhat.com) Dec 6 03:55:49 localhost podman[91034]: 2025-12-06 08:55:49.062954649 +0000 UTC m=+0.223168435 container exec_died 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, config_id=tripleo_step4, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20251118.1, container_name=logrotate_crond, vcs-type=git, build-date=2025-11-18T22:49:32Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, tcib_managed=true) Dec 6 03:55:49 localhost systemd[1]: 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.service: Deactivated successfully. 
Dec 6 03:55:49 localhost podman[91050]: 2025-12-06 08:55:49.13821688 +0000 UTC m=+0.278643760 container health_status b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, build-date=2025-11-19T00:12:45Z, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-ipmi-container, url=https://www.redhat.com, tcib_managed=true, release=1761123044, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_ipmi, distribution-scope=public, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676) Dec 6 03:55:49 localhost podman[91043]: 2025-12-06 08:55:49.196448711 +0000 UTC m=+0.338835091 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, batch=17.1_20251118.1, url=https://www.redhat.com, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true, io.buildah.version=1.41.4, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, release=1761123044, distribution-scope=public, version=17.1.12, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, com.redhat.component=openstack-iscsid-container, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., name=rhosp17/openstack-iscsid) Dec 6 03:55:49 localhost podman[91043]: 2025-12-06 08:55:49.235160105 +0000 UTC m=+0.377546435 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, version=17.1.12, managed_by=tripleo_ansible, batch=17.1_20251118.1, name=rhosp17/openstack-iscsid, release=1761123044, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, 
konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.openshift.expose-services=, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, tcib_managed=true, vendor=Red Hat, Inc., url=https://www.redhat.com) Dec 6 03:55:49 localhost podman[91042]: 2025-12-06 08:55:49.242191419 +0000 UTC m=+0.387260231 container health_status 
a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:11:48Z, tcib_managed=true, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, container_name=ceilometer_agent_compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.41.4, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, distribution-scope=public, version=17.1.12, architecture=x86_64, url=https://www.redhat.com, vendor=Red Hat, Inc.) Dec 6 03:55:49 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully. Dec 6 03:55:49 localhost podman[91050]: 2025-12-06 08:55:49.265595024 +0000 UTC m=+0.406022004 container exec_died b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, release=1761123044, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:12:45Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, distribution-scope=public, architecture=x86_64, tcib_managed=true, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-ipmi-container) Dec 6 03:55:49 localhost systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: Deactivated successfully. 
Dec 6 03:55:49 localhost podman[91042]: 2025-12-06 08:55:49.320997439 +0000 UTC m=+0.466066291 container exec_died a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, vcs-type=git, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, distribution-scope=public, build-date=2025-11-19T00:11:48Z, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, release=1761123044, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute) Dec 6 03:55:49 localhost systemd[1]: a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.service: Deactivated successfully. Dec 6 03:55:49 localhost podman[91041]: 2025-12-06 08:55:49.35930749 +0000 UTC m=+0.508131647 container exec_died 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, name=rhosp17/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, architecture=x86_64, container_name=nova_migration_target, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, version=17.1.12, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=) Dec 6 03:55:49 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: Deactivated successfully. Dec 6 03:55:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. Dec 6 03:55:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. 
Dec 6 03:55:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. Dec 6 03:55:53 localhost systemd[1]: tmp-crun.WhmE2A.mount: Deactivated successfully. Dec 6 03:55:53 localhost podman[91172]: 2025-12-06 08:55:53.933935195 +0000 UTC m=+0.096646827 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, name=rhosp17/openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, io.openshift.expose-services=, url=https://www.redhat.com, distribution-scope=public, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 
ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., container_name=ovn_controller, version=17.1.12, vcs-type=git, build-date=2025-11-18T23:34:05Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team) Dec 6 03:55:53 localhost podman[91174]: 2025-12-06 08:55:53.979945991 +0000 UTC m=+0.137046671 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, vcs-type=git, io.buildah.version=1.41.4, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, managed_by=tripleo_ansible, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, config_id=tripleo_step4) Dec 6 03:55:54 localhost podman[91174]: 2025-12-06 08:55:54.03125695 +0000 UTC m=+0.188357580 container exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_id=tripleo_step4, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
container_name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, version=17.1.12, build-date=2025-11-19T00:14:25Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, 
io.buildah.version=1.41.4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, maintainer=OpenStack TripleO Team, tcib_managed=true, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn) Dec 6 03:55:54 localhost podman[91173]: 2025-12-06 08:55:54.034283213 +0000 UTC m=+0.193080745 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, vcs-type=git, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, vendor=Red Hat, Inc., managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, container_name=metrics_qdr, config_id=tripleo_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, architecture=x86_64, distribution-scope=public) Dec 6 03:55:54 localhost podman[91172]: 2025-12-06 08:55:54.082839677 +0000 UTC m=+0.245551309 container exec_died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vendor=Red Hat, Inc., version=17.1.12, build-date=2025-11-18T23:34:05Z, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, url=https://www.redhat.com, managed_by=tripleo_ansible, 
architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 6 03:55:54 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Deactivated successfully. Dec 6 03:55:54 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Deactivated successfully. 
Dec 6 03:55:54 localhost podman[91173]: 2025-12-06 08:55:54.219196787 +0000 UTC m=+0.377994369 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., managed_by=tripleo_ansible, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, vcs-type=git, architecture=x86_64, url=https://www.redhat.com, version=17.1.12, io.openshift.expose-services=, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, 
vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, tcib_managed=true) Dec 6 03:55:54 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 03:55:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007. Dec 6 03:55:57 localhost podman[91248]: 2025-12-06 08:55:57.897741721 +0000 UTC m=+0.057812339 container health_status 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.expose-services=, version=17.1.12, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, config_id=tripleo_step5, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, tcib_managed=true) Dec 6 03:55:57 localhost podman[91248]: 2025-12-06 08:55:57.928957436 +0000 UTC m=+0.089028074 container exec_died 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, config_id=tripleo_step5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, maintainer=OpenStack TripleO Team, version=17.1.12, build-date=2025-11-19T00:36:58Z, distribution-scope=public, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 6 03:55:57 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Deactivated successfully. Dec 6 03:56:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc. Dec 6 03:56:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. Dec 6 03:56:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b. 
Dec 6 03:56:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9. Dec 6 03:56:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. Dec 6 03:56:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6. Dec 6 03:56:19 localhost podman[91351]: 2025-12-06 08:56:19.94451985 +0000 UTC m=+0.106893439 container health_status 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., vcs-type=git, build-date=2025-11-18T22:49:32Z, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, batch=17.1_20251118.1, name=rhosp17/openstack-cron, managed_by=tripleo_ansible, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.12, config_id=tripleo_step4, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, tcib_managed=true, architecture=x86_64, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team) Dec 6 03:56:19 localhost podman[91351]: 2025-12-06 08:56:19.980195392 +0000 UTC m=+0.142568981 container exec_died 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, batch=17.1_20251118.1, build-date=2025-11-18T22:49:32Z, release=1761123044, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 
'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.component=openstack-cron-container, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, tcib_managed=true, vcs-type=git, architecture=x86_64, name=rhosp17/openstack-cron, version=17.1.12, url=https://www.redhat.com, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron) Dec 6 03:56:19 localhost systemd[1]: 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.service: Deactivated successfully. 
Dec 6 03:56:20 localhost podman[91352]: 2025-12-06 08:56:19.999821182 +0000 UTC m=+0.153798614 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, vendor=Red Hat, Inc., url=https://www.redhat.com, architecture=x86_64, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, batch=17.1_20251118.1, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1761123044, build-date=2025-11-18T22:51:28Z, com.redhat.component=openstack-collectd-container, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, config_id=tripleo_step3, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible) Dec 6 03:56:20 localhost podman[91352]: 2025-12-06 08:56:20.038126902 +0000 UTC m=+0.192104364 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.expose-services=, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-collectd-container, distribution-scope=public, name=rhosp17/openstack-collectd, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:51:28Z, container_name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, version=17.1.12, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, url=https://www.redhat.com) Dec 6 03:56:20 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully. 
Dec 6 03:56:20 localhost podman[91363]: 2025-12-06 08:56:20.055969978 +0000 UTC m=+0.203672248 container health_status b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, name=rhosp17/openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:12:45Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, tcib_managed=true, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, batch=17.1_20251118.1, io.openshift.expose-services=, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 6 03:56:20 localhost podman[91353]: 2025-12-06 08:56:20.088847403 +0000 UTC m=+0.240271827 container health_status 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_migration_target, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, release=1761123044, io.buildah.version=1.41.4, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, architecture=x86_64, com.redhat.component=openstack-nova-compute-container) Dec 6 03:56:20 localhost podman[91363]: 2025-12-06 08:56:20.109066542 +0000 UTC m=+0.256768782 container exec_died b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, config_id=tripleo_step4, build-date=2025-11-19T00:12:45Z, architecture=x86_64, managed_by=tripleo_ansible, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': 
{'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, url=https://www.redhat.com, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 6 03:56:20 localhost systemd[1]: 
b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: Deactivated successfully. Dec 6 03:56:20 localhost podman[91359]: 2025-12-06 08:56:20.165899609 +0000 UTC m=+0.314439045 container health_status a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:11:48Z, architecture=x86_64, url=https://www.redhat.com, tcib_managed=true, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, batch=17.1_20251118.1, io.buildah.version=1.41.4, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 6 03:56:20 localhost podman[91360]: 2025-12-06 08:56:20.207327626 +0000 UTC m=+0.354149690 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:44:13Z, architecture=x86_64, tcib_managed=true, com.redhat.component=openstack-iscsid-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, release=1761123044, name=rhosp17/openstack-iscsid, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., version=17.1.12, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, io.openshift.expose-services=) Dec 6 03:56:20 localhost podman[91359]: 2025-12-06 08:56:20.231297089 +0000 UTC m=+0.379836535 container exec_died a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:11:48Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_compute, 
release=1761123044, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, version=17.1.12, distribution-scope=public) Dec 6 03:56:20 localhost systemd[1]: a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.service: Deactivated successfully. Dec 6 03:56:20 localhost podman[91360]: 2025-12-06 08:56:20.244458781 +0000 UTC m=+0.391280845 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, io.buildah.version=1.41.4, build-date=2025-11-18T23:44:13Z, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, url=https://www.redhat.com, 
maintainer=OpenStack TripleO Team, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=iscsid, name=rhosp17/openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid) Dec 6 03:56:20 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully. 
Dec 6 03:56:20 localhost podman[91353]: 2025-12-06 08:56:20.461153287 +0000 UTC m=+0.612577751 container exec_died 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, version=17.1.12, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git) Dec 6 03:56:20 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: Deactivated successfully. Dec 6 03:56:20 localhost systemd[1]: tmp-crun.XhvPoy.mount: Deactivated successfully. Dec 6 03:56:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. Dec 6 03:56:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. Dec 6 03:56:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. 
Dec 6 03:56:24 localhost podman[91488]: 2025-12-06 08:56:24.912072017 +0000 UTC m=+0.079615405 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:46Z, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, version=17.1.12, vcs-type=git, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd) Dec 6 03:56:24 localhost systemd[1]: tmp-crun.qik24J.mount: Deactivated successfully. Dec 6 03:56:24 localhost podman[91487]: 2025-12-06 08:56:24.971235366 +0000 UTC m=+0.137139624 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, vendor=Red Hat, Inc., tcib_managed=true, name=rhosp17/openstack-ovn-controller, architecture=x86_64, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, config_id=tripleo_step4, build-date=2025-11-18T23:34:05Z, url=https://www.redhat.com, release=1761123044, io.openshift.expose-services=, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12) Dec 6 03:56:25 localhost podman[91489]: 2025-12-06 08:56:25.021467502 +0000 UTC m=+0.177300522 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, tcib_managed=true, version=17.1.12, vcs-type=git, container_name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, build-date=2025-11-19T00:14:25Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team) Dec 6 03:56:25 localhost podman[91487]: 
2025-12-06 08:56:25.027252479 +0000 UTC m=+0.193156707 container exec_died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20251118.1, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.4, build-date=2025-11-18T23:34:05Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, maintainer=OpenStack TripleO Team, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, config_id=tripleo_step4, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, io.openshift.expose-services=, vcs-type=git, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 6 03:56:25 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Deactivated successfully. Dec 6 03:56:25 localhost podman[91489]: 2025-12-06 08:56:25.071248344 +0000 UTC m=+0.227081354 container exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1761123044, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, url=https://www.redhat.com, vendor=Red Hat, Inc., vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.openshift.expose-services=, container_name=ovn_metadata_agent, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.buildah.version=1.41.4, name=rhosp17/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn) Dec 6 03:56:25 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Deactivated successfully. 
Dec 6 03:56:25 localhost podman[91488]: 2025-12-06 08:56:25.118246682 +0000 UTC m=+0.285790080 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, url=https://www.redhat.com, build-date=2025-11-18T22:49:46Z, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step1, tcib_managed=true, io.openshift.expose-services=, 
com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=metrics_qdr, batch=17.1_20251118.1, distribution-scope=public, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, name=rhosp17/openstack-qdrouterd) Dec 6 03:56:25 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 03:56:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007. Dec 6 03:56:28 localhost systemd[1]: tmp-crun.Z6fk8q.mount: Deactivated successfully. Dec 6 03:56:28 localhost podman[91563]: 2025-12-06 08:56:28.918032583 +0000 UTC m=+0.084414941 container health_status 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, batch=17.1_20251118.1, io.openshift.expose-services=, vendor=Red Hat, Inc., config_id=tripleo_step5, vcs-type=git, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 
'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, 
konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, container_name=nova_compute) Dec 6 03:56:28 localhost podman[91563]: 2025-12-06 08:56:28.970526419 +0000 UTC m=+0.136908777 container exec_died 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, managed_by=tripleo_ansible, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, name=rhosp17/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Dec 6 03:56:28 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Deactivated successfully. Dec 6 03:56:40 localhost sshd[91589]: main: sshd: ssh-rsa algorithm is disabled Dec 6 03:56:40 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... 
Dec 6 03:56:40 localhost recover_tripleo_nova_virtqemud[91592]: 61814
Dec 6 03:56:40 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 6 03:56:40 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 6 03:56:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.
Dec 6 03:56:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.
Dec 6 03:56:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.
Dec 6 03:56:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.
Dec 6 03:56:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.
Dec 6 03:56:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.
Dec 6 03:56:50 localhost podman[91594]: 2025-12-06 08:56:50.961721125 +0000 UTC m=+0.109492199 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, batch=17.1_20251118.1, tcib_managed=true, vendor=Red Hat, Inc., distribution-scope=public, url=https://www.redhat.com, name=rhosp17/openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.openshift.expose-services=, version=17.1.12, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, com.redhat.component=openstack-collectd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vcs-type=git, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}) Dec 6 03:56:51 localhost podman[91594]: 2025-12-06 08:56:51.000413188 +0000 UTC m=+0.148184262 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, tcib_managed=true, io.openshift.expose-services=, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20251118.1, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12) Dec 6 03:56:51 localhost systemd[1]: tmp-crun.teLjuu.mount: Deactivated successfully. 
Dec 6 03:56:51 localhost podman[91596]: 2025-12-06 08:56:51.021149102 +0000 UTC m=+0.160482708 container health_status a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, release=1761123044, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, distribution-scope=public, version=17.1.12, config_id=tripleo_step4, tcib_managed=true, vcs-type=git, batch=17.1_20251118.1, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, vendor=Red Hat, Inc., io.buildah.version=1.41.4, container_name=ceilometer_agent_compute) Dec 6 03:56:51 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully. Dec 6 03:56:51 localhost podman[91596]: 2025-12-06 08:56:51.052236282 +0000 UTC m=+0.191569928 container exec_died a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-type=git, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-compute, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, url=https://www.redhat.com, version=17.1.12, build-date=2025-11-19T00:11:48Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 6 03:56:51 localhost systemd[1]: a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.service: Deactivated successfully. 
Dec 6 03:56:51 localhost podman[91595]: 2025-12-06 08:56:51.071844402 +0000 UTC m=+0.214702535 container health_status 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.buildah.version=1.41.4, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., container_name=nova_migration_target, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}) Dec 6 03:56:51 localhost podman[91593]: 2025-12-06 08:56:51.119840259 +0000 UTC m=+0.269256943 container health_status 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.4, batch=17.1_20251118.1, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': 
True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, architecture=x86_64, container_name=logrotate_crond, io.openshift.expose-services=, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, build-date=2025-11-18T22:49:32Z, release=1761123044) Dec 6 03:56:51 localhost podman[91601]: 2025-12-06 08:56:51.164933938 +0000 UTC m=+0.300866510 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:44:13Z, config_id=tripleo_step3, vcs-type=git, distribution-scope=public, container_name=iscsid, version=17.1.12, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.4, vendor=Red Hat, Inc., architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible) Dec 6 03:56:51 localhost podman[91593]: 2025-12-06 08:56:51.186057594 +0000 UTC m=+0.335474278 container exec_died 
04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, tcib_managed=true, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, url=https://www.redhat.com, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:32Z, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., version=17.1.12, name=rhosp17/openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, maintainer=OpenStack TripleO Team, config_id=tripleo_step4) Dec 6 03:56:51 localhost systemd[1]: 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.service: Deactivated successfully. Dec 6 03:56:51 localhost podman[91601]: 2025-12-06 08:56:51.20326412 +0000 UTC m=+0.339196692 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, architecture=x86_64, vendor=Red Hat, Inc., batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, container_name=iscsid, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, build-date=2025-11-18T23:44:13Z, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, distribution-scope=public, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 iscsid) Dec 6 03:56:51 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully. 
Dec 6 03:56:51 localhost podman[91608]: 2025-12-06 08:56:51.26963765 +0000 UTC m=+0.400502407 container health_status b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, io.openshift.expose-services=, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20251118.1, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, distribution-scope=public, maintainer=OpenStack TripleO Team) Dec 6 03:56:51 localhost podman[91608]: 2025-12-06 08:56:51.325355084 +0000 UTC m=+0.456219881 container exec_died b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, io.openshift.tags=rhosp osp 
openstack osp-17.1 openstack-ceilometer-ipmi, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, tcib_managed=true, io.buildah.version=1.41.4, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, name=rhosp17/openstack-ceilometer-ipmi, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, architecture=x86_64) Dec 6 03:56:51 localhost systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: Deactivated successfully. 
Dec 6 03:56:51 localhost podman[91595]: 2025-12-06 08:56:51.51232928 +0000 UTC m=+0.655187443 container exec_died 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, container_name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, tcib_managed=true, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, vcs-type=git, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute) Dec 6 03:56:51 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: Deactivated successfully. Dec 6 03:56:51 localhost systemd[1]: tmp-crun.aQgUDi.mount: Deactivated successfully. Dec 6 03:56:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. Dec 6 03:56:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. Dec 6 03:56:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. 
Dec 6 03:56:55 localhost podman[91728]: 2025-12-06 08:56:55.940442503 +0000 UTC m=+0.088890909 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, config_id=tripleo_step4, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, maintainer=OpenStack TripleO Team, version=17.1.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c) Dec 6 03:56:56 localhost systemd[1]: tmp-crun.S7j7LO.mount: Deactivated successfully. 
Dec 6 03:56:56 localhost podman[91726]: 2025-12-06 08:56:56.011453164 +0000 UTC m=+0.165369688 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ovn-controller, container_name=ovn_controller, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ovn-controller-container, build-date=2025-11-18T23:34:05Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, maintainer=OpenStack TripleO Team, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, 
tcib_managed=true, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible) Dec 6 03:56:56 localhost podman[91728]: 2025-12-06 08:56:56.019287973 +0000 UTC m=+0.167736339 container exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.buildah.version=1.41.4, tcib_managed=true, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:14:25Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, vcs-type=git, version=17.1.12, config_id=tripleo_step4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, distribution-scope=public, batch=17.1_20251118.1, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team) Dec 6 03:56:56 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Deactivated successfully. 
Dec 6 03:56:56 localhost podman[91726]: 2025-12-06 08:56:56.042332528 +0000 UTC m=+0.196249052 container exec_died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., version=17.1.12, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, io.buildah.version=1.41.4, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ovn-controller, vcs-type=git, build-date=2025-11-18T23:34:05Z, tcib_managed=true, io.openshift.expose-services=, architecture=x86_64) Dec 6 03:56:56 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Deactivated successfully. Dec 6 03:56:56 localhost podman[91727]: 2025-12-06 08:56:56.109082329 +0000 UTC m=+0.258252777 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, architecture=x86_64, batch=17.1_20251118.1, vcs-type=git, container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:46Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, com.redhat.component=openstack-qdrouterd-container) Dec 6 03:56:56 localhost podman[91727]: 2025-12-06 08:56:56.310194688 +0000 UTC m=+0.459365086 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, release=1761123044, container_name=metrics_qdr, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, version=17.1.12, build-date=2025-11-18T22:49:46Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp17/openstack-qdrouterd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 
'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, vcs-type=git, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 6 03:56:56 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 03:56:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007. 
Dec 6 03:56:59 localhost podman[91802]: 2025-12-06 08:56:59.944066306 +0000 UTC m=+0.100090371 container health_status 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_compute, config_id=tripleo_step5, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, architecture=x86_64, version=17.1.12, vcs-type=git, url=https://www.redhat.com, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, distribution-scope=public, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team) Dec 6 03:56:59 localhost podman[91802]: 2025-12-06 08:56:59.996669965 +0000 UTC m=+0.152694020 container exec_died 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, container_name=nova_compute, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack 
Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, config_id=tripleo_step5, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, io.buildah.version=1.41.4, version=17.1.12) Dec 6 03:57:00 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Deactivated successfully. Dec 6 03:57:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc. Dec 6 03:57:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. Dec 6 03:57:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b. Dec 6 03:57:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9. Dec 6 03:57:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. Dec 6 03:57:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6. 
Dec 6 03:57:21 localhost podman[91908]: 2025-12-06 08:57:21.931450202 +0000 UTC m=+0.089796767 container health_status a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, release=1761123044, tcib_managed=true, vcs-type=git, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-compute, 
build-date=2025-11-19T00:11:48Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, container_name=ceilometer_agent_compute, url=https://www.redhat.com, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, com.redhat.component=openstack-ceilometer-compute-container) Dec 6 03:57:21 localhost podman[91907]: 2025-12-06 08:57:21.987680061 +0000 UTC m=+0.147333586 container health_status 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-type=git, release=1761123044, architecture=x86_64, io.openshift.expose-services=, url=https://www.redhat.com, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, io.buildah.version=1.41.4, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_migration_target) Dec 6 03:57:22 localhost podman[91908]: 2025-12-06 08:57:22.015142741 +0000 UTC m=+0.173489276 container exec_died a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.expose-services=, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, config_data={'depends_on': 
['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.buildah.version=1.41.4, version=17.1.12, managed_by=tripleo_ansible, tcib_managed=true, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, 
distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1761123044) Dec 6 03:57:22 localhost systemd[1]: a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.service: Deactivated successfully. Dec 6 03:57:22 localhost podman[91906]: 2025-12-06 08:57:22.030883302 +0000 UTC m=+0.187680510 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, config_id=tripleo_step3, distribution-scope=public, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:51:28Z, name=rhosp17/openstack-collectd, version=17.1.12, io.buildah.version=1.41.4, com.redhat.component=openstack-collectd-container, architecture=x86_64, batch=17.1_20251118.1, container_name=collectd, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 
'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 6 03:57:22 localhost podman[91906]: 2025-12-06 08:57:22.044924841 +0000 UTC m=+0.201722019 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, container_name=collectd, io.buildah.version=1.41.4, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, architecture=x86_64, build-date=2025-11-18T22:51:28Z, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, batch=17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, release=1761123044, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3) Dec 6 03:57:22 localhost systemd[1]: 
2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully. Dec 6 03:57:22 localhost podman[91915]: 2025-12-06 08:57:21.951221856 +0000 UTC m=+0.101381770 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.expose-services=, url=https://www.redhat.com, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=iscsid, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-iscsid-container, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, architecture=x86_64, build-date=2025-11-18T23:44:13Z, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., distribution-scope=public) Dec 6 03:57:22 localhost podman[91905]: 2025-12-06 08:57:22.127201587 +0000 UTC m=+0.290945657 container health_status 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, name=rhosp17/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, container_name=logrotate_crond, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, batch=17.1_20251118.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, version=17.1.12, com.redhat.component=openstack-cron-container, tcib_managed=true, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc.) 
Dec 6 03:57:22 localhost podman[91905]: 2025-12-06 08:57:22.135159041 +0000 UTC m=+0.298903131 container exec_died 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:32Z, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, container_name=logrotate_crond, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.buildah.version=1.41.4, 
vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_id=tripleo_step4, distribution-scope=public, version=17.1.12, release=1761123044, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com) Dec 6 03:57:22 localhost systemd[1]: 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.service: Deactivated successfully. Dec 6 03:57:22 localhost podman[91919]: 2025-12-06 08:57:22.193010349 +0000 UTC m=+0.341567505 container health_status b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, version=17.1.12, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, tcib_managed=true, vendor=Red Hat, 
Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1) Dec 6 03:57:22 localhost podman[91915]: 2025-12-06 08:57:22.236285383 +0000 UTC m=+0.386445327 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-iscsid-container, build-date=2025-11-18T23:44:13Z, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, 
vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, batch=17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, name=rhosp17/openstack-iscsid, tcib_managed=true, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., managed_by=tripleo_ansible, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid) Dec 6 03:57:22 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully. 
Dec 6 03:57:22 localhost podman[91919]: 2025-12-06 08:57:22.250381164 +0000 UTC m=+0.398938320 container exec_died b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, com.redhat.component=openstack-ceilometer-ipmi-container, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, 
tcib_managed=true, io.buildah.version=1.41.4, vendor=Red Hat, Inc., config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 6 03:57:22 localhost systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: Deactivated successfully. Dec 6 03:57:22 localhost podman[91907]: 2025-12-06 08:57:22.366312518 +0000 UTC m=+0.525965973 container exec_died 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_migration_target, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, managed_by=tripleo_ansible, url=https://www.redhat.com, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc.) Dec 6 03:57:22 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: Deactivated successfully. Dec 6 03:57:22 localhost systemd[1]: tmp-crun.hXVwMA.mount: Deactivated successfully. Dec 6 03:57:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. Dec 6 03:57:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. 
Dec 6 03:57:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. Dec 6 03:57:26 localhost systemd[1]: tmp-crun.uJ4Pla.mount: Deactivated successfully. Dec 6 03:57:26 localhost podman[92040]: 2025-12-06 08:57:26.93860221 +0000 UTC m=+0.098577235 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, 
url=https://www.redhat.com, distribution-scope=public, config_id=tripleo_step4, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_controller, build-date=2025-11-18T23:34:05Z, io.buildah.version=1.41.4, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 6 03:57:26 localhost podman[92040]: 2025-12-06 08:57:26.960876242 +0000 UTC m=+0.120851327 container exec_died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, version=17.1.12, build-date=2025-11-18T23:34:05Z, config_id=tripleo_step4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.4, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, 
vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 6 03:57:26 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Deactivated successfully. Dec 6 03:57:27 localhost podman[92041]: 2025-12-06 08:57:27.047404807 +0000 UTC m=+0.200317805 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, version=17.1.12, config_id=tripleo_step1, vcs-type=git, maintainer=OpenStack TripleO Team, release=1761123044, tcib_managed=true, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, architecture=x86_64, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, url=https://www.redhat.com, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 6 03:57:27 localhost podman[92042]: 2025-12-06 08:57:27.096456638 +0000 UTC m=+0.250540122 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, architecture=x86_64, url=https://www.redhat.com, io.buildah.version=1.41.4, 
description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_metadata_agent, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2025-11-19T00:14:25Z, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 6 03:57:27 localhost podman[92042]: 2025-12-06 08:57:27.14528103 +0000 UTC m=+0.299364514 container exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vendor=Red Hat, Inc., config_id=tripleo_step4, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, version=17.1.12, managed_by=tripleo_ansible, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:14:25Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, architecture=x86_64, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, 
config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent) Dec 6 03:57:27 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Deactivated successfully. 
Dec 6 03:57:27 localhost podman[92041]: 2025-12-06 08:57:27.244129892 +0000 UTC m=+0.397042840 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, io.buildah.version=1.41.4, 
build-date=2025-11-18T22:49:46Z, container_name=metrics_qdr, url=https://www.redhat.com, io.openshift.expose-services=, managed_by=tripleo_ansible, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, distribution-scope=public, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container) Dec 6 03:57:27 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 03:57:27 localhost systemd[1]: tmp-crun.MWUd7S.mount: Deactivated successfully. Dec 6 03:57:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007. Dec 6 03:57:30 localhost podman[92116]: 2025-12-06 08:57:30.929816137 +0000 UTC m=+0.085516655 container health_status 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, batch=17.1_20251118.1, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, managed_by=tripleo_ansible, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, vcs-type=git, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, 
architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team) Dec 6 03:57:30 localhost podman[92116]: 2025-12-06 08:57:30.987290235 +0000 UTC m=+0.142990723 container exec_died 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, io.buildah.version=1.41.4, version=17.1.12, managed_by=tripleo_ansible, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., url=https://www.redhat.com, container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 
'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}) Dec 6 03:57:30 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Deactivated successfully. Dec 6 03:57:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc. Dec 6 03:57:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. 
Dec 6 03:57:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b. Dec 6 03:57:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9. Dec 6 03:57:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. Dec 6 03:57:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6. Dec 6 03:57:52 localhost podman[92142]: 2025-12-06 08:57:52.933724019 +0000 UTC m=+0.097437260 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, name=rhosp17/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, build-date=2025-11-18T22:51:28Z, distribution-scope=public, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, batch=17.1_20251118.1, vendor=Red Hat, Inc.) 
Dec 6 03:57:52 localhost podman[92142]: 2025-12-06 08:57:52.942231929 +0000 UTC m=+0.105945250 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc., url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, 
konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, com.redhat.component=openstack-collectd-container, distribution-scope=public, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:51:28Z, release=1761123044, architecture=x86_64, container_name=collectd) Dec 6 03:57:52 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully. Dec 6 03:57:52 localhost systemd[1]: tmp-crun.z0Gbkl.mount: Deactivated successfully. 
Dec 6 03:57:52 localhost podman[92141]: 2025-12-06 08:57:52.984687967 +0000 UTC m=+0.149376868 container health_status 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, name=rhosp17/openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, config_id=tripleo_step4, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:32Z, io.buildah.version=1.41.4, io.openshift.expose-services=, release=1761123044, version=17.1.12, distribution-scope=public) Dec 6 03:57:52 localhost podman[92143]: 2025-12-06 08:57:52.991106293 +0000 UTC m=+0.151078620 container health_status 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, vcs-type=git, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, 
config_id=tripleo_step4, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 6 03:57:53 localhost podman[92147]: 2025-12-06 08:57:53.034774799 +0000 UTC m=+0.189500896 container health_status b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:12:45Z, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container, release=1761123044, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, version=17.1.12, vendor=Red Hat, Inc., managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 6 03:57:53 localhost podman[92144]: 2025-12-06 08:57:53.09174557 +0000 UTC m=+0.249679015 container health_status 
a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-compute, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, managed_by=tripleo_ansible, release=1761123044, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, com.redhat.component=openstack-ceilometer-compute-container, architecture=x86_64, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vendor=Red Hat, Inc.) Dec 6 03:57:53 localhost podman[92147]: 2025-12-06 08:57:53.112057931 +0000 UTC m=+0.266784048 container exec_died b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, build-date=2025-11-19T00:12:45Z, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, batch=17.1_20251118.1, io.buildah.version=1.41.4, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, release=1761123044) Dec 6 03:57:53 localhost podman[92141]: 2025-12-06 08:57:53.116823807 +0000 UTC m=+0.281512698 container exec_died 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.expose-services=, release=1761123044, container_name=logrotate_crond, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 
'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, tcib_managed=true, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, distribution-scope=public, build-date=2025-11-18T22:49:32Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, name=rhosp17/openstack-cron, url=https://www.redhat.com, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron) Dec 6 03:57:53 localhost podman[92144]: 2025-12-06 08:57:53.15421367 +0000 UTC m=+0.312147135 container exec_died a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 
(image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, config_id=tripleo_step4, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.expose-services=, io.buildah.version=1.41.4, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., version=17.1.12, com.redhat.component=openstack-ceilometer-compute-container, url=https://www.redhat.com, distribution-scope=public, build-date=2025-11-19T00:11:48Z, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 6 03:57:53 localhost podman[92145]: 2025-12-06 08:57:53.147295729 +0000 UTC m=+0.302319145 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 
17.1 iscsid, container_name=iscsid, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, summary=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.buildah.version=1.41.4, version=17.1.12, tcib_managed=true, vcs-type=git, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc.) Dec 6 03:57:53 localhost systemd[1]: a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.service: Deactivated successfully. Dec 6 03:57:53 localhost systemd[1]: 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.service: Deactivated successfully. Dec 6 03:57:53 localhost systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: Deactivated successfully. 
Dec 6 03:57:53 localhost podman[92145]: 2025-12-06 08:57:53.237022872 +0000 UTC m=+0.392046258 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, version=17.1.12, config_id=tripleo_step3, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, 
vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=iscsid, tcib_managed=true, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, url=https://www.redhat.com, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-iscsid-container, build-date=2025-11-18T23:44:13Z, vendor=Red Hat, Inc.) Dec 6 03:57:53 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully. Dec 6 03:57:53 localhost podman[92143]: 2025-12-06 08:57:53.363299383 +0000 UTC m=+0.523271710 container exec_died 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, version=17.1.12, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, tcib_managed=true, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, io.openshift.expose-services=, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git) Dec 6 03:57:53 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: Deactivated successfully. Dec 6 03:57:55 localhost sshd[92272]: main: sshd: ssh-rsa algorithm is disabled Dec 6 03:57:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. Dec 6 03:57:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. Dec 6 03:57:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. 
Dec 6 03:57:57 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 6 03:57:57 localhost recover_tripleo_nova_virtqemud[92288]: 61814 Dec 6 03:57:57 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 6 03:57:57 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 6 03:57:57 localhost systemd[1]: tmp-crun.VJAFCE.mount: Deactivated successfully. Dec 6 03:57:57 localhost podman[92275]: 2025-12-06 08:57:57.93214901 +0000 UTC m=+0.083428762 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, version=17.1.12, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, distribution-scope=public, build-date=2025-11-19T00:14:25Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc.) 
Dec 6 03:57:57 localhost podman[92275]: 2025-12-06 08:57:57.969048928 +0000 UTC m=+0.120328670 container exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, vcs-type=git, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, url=https://www.redhat.com, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, build-date=2025-11-19T00:14:25Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 6 03:57:57 localhost systemd[1]: tmp-crun.h7llIe.mount: Deactivated successfully. Dec 6 03:57:57 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Deactivated successfully. 
Dec 6 03:57:57 localhost podman[92274]: 2025-12-06 08:57:57.985098619 +0000 UTC m=+0.139110744 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:46Z, container_name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, 
io.buildah.version=1.41.4, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, distribution-scope=public, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, vcs-type=git, name=rhosp17/openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, managed_by=tripleo_ansible) Dec 6 03:57:58 localhost podman[92273]: 2025-12-06 08:57:58.037966776 +0000 UTC m=+0.194153388 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, version=17.1.12, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, batch=17.1_20251118.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', 
'/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, release=1761123044, architecture=x86_64, build-date=2025-11-18T23:34:05Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, name=rhosp17/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller) Dec 6 03:57:58 localhost podman[92273]: 2025-12-06 08:57:58.087266773 +0000 UTC m=+0.243453365 container exec_died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', 
'/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, tcib_managed=true, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., release=1761123044, vcs-type=git, url=https://www.redhat.com, name=rhosp17/openstack-ovn-controller, version=17.1.12, container_name=ovn_controller, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4) Dec 6 03:57:58 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Deactivated successfully. 
Dec 6 03:57:58 localhost podman[92274]: 2025-12-06 08:57:58.18921473 +0000 UTC m=+0.343226835 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, version=17.1.12, build-date=2025-11-18T22:49:46Z, tcib_managed=true, config_id=tripleo_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, distribution-scope=public, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, release=1761123044, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 6 03:57:58 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 03:58:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007. Dec 6 03:58:01 localhost podman[92354]: 2025-12-06 08:58:01.924592562 +0000 UTC m=+0.084947538 container health_status 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, distribution-scope=public, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, vcs-type=git, vendor=Red Hat, Inc., tcib_managed=true, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, config_id=tripleo_step5, io.buildah.version=1.41.4, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, 
io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', 
'/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64) Dec 6 03:58:01 localhost podman[92354]: 2025-12-06 08:58:01.952460304 +0000 UTC m=+0.112815270 container exec_died 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, config_id=tripleo_step5, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, architecture=x86_64, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_compute, url=https://www.redhat.com, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Dec 6 03:58:01 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Deactivated successfully. Dec 6 03:58:07 localhost sshd[92379]: main: sshd: ssh-rsa algorithm is disabled Dec 6 03:58:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc. Dec 6 03:58:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. 
Dec 6 03:58:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b. Dec 6 03:58:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9. Dec 6 03:58:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. Dec 6 03:58:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6. Dec 6 03:58:23 localhost systemd[1]: tmp-crun.0OhMO0.mount: Deactivated successfully. Dec 6 03:58:24 localhost podman[92459]: 2025-12-06 08:58:23.999624425 +0000 UTC m=+0.148982886 container health_status 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, url=https://www.redhat.com, container_name=logrotate_crond, config_id=tripleo_step4, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, release=1761123044, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team) Dec 6 03:58:24 localhost podman[92462]: 2025-12-06 08:58:23.959001844 +0000 UTC m=+0.107251961 container health_status a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, managed_by=tripleo_ansible, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.buildah.version=1.41.4, container_name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 6 03:58:24 
localhost podman[92460]: 2025-12-06 08:58:24.019211654 +0000 UTC m=+0.168523513 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, url=https://www.redhat.com, tcib_managed=true, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, distribution-scope=public, version=17.1.12, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2025-11-18T22:51:28Z, vcs-type=git, container_name=collectd, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.4, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd) Dec 6 03:58:24 localhost podman[92462]: 2025-12-06 08:58:24.04524097 +0000 UTC m=+0.193491077 container exec_died a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:11:48Z, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute, architecture=x86_64, tcib_managed=true, distribution-scope=public, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., io.buildah.version=1.41.4, managed_by=tripleo_ansible, release=1761123044, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git) Dec 6 03:58:24 localhost podman[92472]: 2025-12-06 08:58:24.0524122 +0000 UTC m=+0.189162006 container health_status b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, architecture=x86_64, build-date=2025-11-19T00:12:45Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, tcib_managed=true, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 6 03:58:24 localhost systemd[1]: a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.service: Deactivated successfully. Dec 6 03:58:24 localhost podman[92472]: 2025-12-06 08:58:24.080058095 +0000 UTC m=+0.216807911 container exec_died b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:12:45Z, distribution-scope=public, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, config_id=tripleo_step4, url=https://www.redhat.com, managed_by=tripleo_ansible, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, architecture=x86_64, io.buildah.version=1.41.4) Dec 6 03:58:24 localhost podman[92463]: 2025-12-06 08:58:24.090493034 +0000 UTC m=+0.233695557 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, io.openshift.expose-services=, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, tcib_managed=true, release=1761123044, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, io.buildah.version=1.41.4, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, url=https://www.redhat.com) Dec 6 03:58:24 localhost systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: Deactivated successfully. 
Dec 6 03:58:24 localhost podman[92463]: 2025-12-06 08:58:24.125097992 +0000 UTC m=+0.268300535 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, managed_by=tripleo_ansible, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, container_name=iscsid, tcib_managed=true, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, url=https://www.redhat.com, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2025-11-18T23:44:13Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4) Dec 6 03:58:24 localhost podman[92459]: 2025-12-06 08:58:24.132573891 +0000 UTC m=+0.281932352 container exec_died 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, version=17.1.12, build-date=2025-11-18T22:49:32Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, name=rhosp17/openstack-cron, url=https://www.redhat.com, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, tcib_managed=true, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, vendor=Red Hat, Inc.) Dec 6 03:58:24 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully. Dec 6 03:58:24 localhost systemd[1]: 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.service: Deactivated successfully. 
Dec 6 03:58:24 localhost podman[92460]: 2025-12-06 08:58:24.185729186 +0000 UTC m=+0.335041005 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, architecture=x86_64, distribution-scope=public, config_id=tripleo_step3, container_name=collectd, name=rhosp17/openstack-collectd, io.buildah.version=1.41.4, vcs-type=git, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, build-date=2025-11-18T22:51:28Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd) Dec 6 03:58:24 localhost podman[92461]: 2025-12-06 08:58:24.031429458 +0000 UTC m=+0.181651605 container health_status 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, io.buildah.version=1.41.4, tcib_managed=true, build-date=2025-11-19T00:36:58Z, vcs-type=git, container_name=nova_migration_target, distribution-scope=public, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., release=1761123044, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.openshift.expose-services=, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute) Dec 6 03:58:24 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully. 
Dec 6 03:58:24 localhost podman[92461]: 2025-12-06 08:58:24.425782025 +0000 UTC m=+0.576004142 container exec_died 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, vcs-type=git, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, architecture=x86_64, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, 
container_name=nova_migration_target, distribution-scope=public, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 6 03:58:24 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: Deactivated successfully. Dec 6 03:58:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. Dec 6 03:58:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. Dec 6 03:58:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. Dec 6 03:58:28 localhost systemd[1]: tmp-crun.3T3mx0.mount: Deactivated successfully. 
Dec 6 03:58:28 localhost podman[92592]: 2025-12-06 08:58:28.936284829 +0000 UTC m=+0.096054198 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, managed_by=tripleo_ansible, distribution-scope=public, container_name=metrics_qdr, version=17.1.12, com.redhat.component=openstack-qdrouterd-container, build-date=2025-11-18T22:49:46Z, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1) Dec 6 03:58:28 localhost systemd[1]: tmp-crun.IlDEci.mount: Deactivated successfully. Dec 6 03:58:28 localhost podman[92593]: 2025-12-06 08:58:28.991946021 +0000 UTC m=+0.146088638 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, managed_by=tripleo_ansible, tcib_managed=true, url=https://www.redhat.com, vendor=Red Hat, Inc., vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, version=17.1.12, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:14:25Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, config_data={'cgroupns': 'host', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1761123044) Dec 6 03:58:29 localhost podman[92591]: 2025-12-06 08:58:29.035377299 +0000 UTC m=+0.195883020 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, 
name=ovn_controller, health_status=healthy, managed_by=tripleo_ansible, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, url=https://www.redhat.com, vcs-type=git, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, tcib_managed=true, version=17.1.12, release=1761123044, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, build-date=2025-11-18T23:34:05Z, name=rhosp17/openstack-ovn-controller, container_name=ovn_controller, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 6 03:58:29 localhost podman[92593]: 2025-12-06 
08:58:29.072862295 +0000 UTC m=+0.227004922 container exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.12, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, vcs-type=git, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, config_id=tripleo_step4) Dec 6 03:58:29 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Deactivated successfully. 
Dec 6 03:58:29 localhost podman[92591]: 2025-12-06 08:58:29.086661487 +0000 UTC m=+0.247167168 container exec_died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.expose-services=, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, tcib_managed=true, io.buildah.version=1.41.4, name=rhosp17/openstack-ovn-controller, maintainer=OpenStack TripleO Team, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, batch=17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, build-date=2025-11-18T23:34:05Z, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, container_name=ovn_controller, vendor=Red Hat, Inc., 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 6 03:58:29 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Deactivated successfully. Dec 6 03:58:29 localhost podman[92592]: 2025-12-06 08:58:29.139019058 +0000 UTC m=+0.298788477 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.buildah.version=1.41.4, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, batch=17.1_20251118.1, managed_by=tripleo_ansible, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, release=1761123044, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 6 03:58:29 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 03:58:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007. Dec 6 03:58:32 localhost systemd[1]: tmp-crun.LYfBei.mount: Deactivated successfully. 
Dec 6 03:58:32 localhost podman[92665]: 2025-12-06 08:58:32.929903938 +0000 UTC m=+0.091666724 container health_status 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, architecture=x86_64, io.openshift.expose-services=, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, config_id=tripleo_step5, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, name=rhosp17/openstack-nova-compute, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Dec 6 03:58:32 localhost podman[92665]: 2025-12-06 08:58:32.957306866 +0000 UTC m=+0.119069652 container exec_died 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64, managed_by=tripleo_ansible, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, tcib_managed=true, 
maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, vcs-type=git, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, io.buildah.version=1.41.4, container_name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 6 03:58:32 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Deactivated successfully. Dec 6 03:58:43 localhost sshd[92691]: main: sshd: ssh-rsa algorithm is disabled Dec 6 03:58:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc. Dec 6 03:58:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. Dec 6 03:58:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b. Dec 6 03:58:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9. Dec 6 03:58:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. Dec 6 03:58:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6. Dec 6 03:58:54 localhost systemd[1]: tmp-crun.eKJFFB.mount: Deactivated successfully. 
Dec 6 03:58:55 localhost podman[92697]: 2025-12-06 08:58:55.000519076 +0000 UTC m=+0.146969454 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, url=https://www.redhat.com, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, 
version=17.1.12, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, container_name=iscsid, release=1761123044, description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1) Dec 6 03:58:55 localhost podman[92697]: 2025-12-06 08:58:55.007666685 +0000 UTC m=+0.154117033 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, managed_by=tripleo_ansible, io.openshift.expose-services=, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, container_name=iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:44:13Z, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, architecture=x86_64, com.redhat.component=openstack-iscsid-container, release=1761123044, config_id=tripleo_step3) Dec 6 03:58:55 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully. 
Dec 6 03:58:55 localhost podman[92693]: 2025-12-06 08:58:55.052554207 +0000 UTC m=+0.206675930 container health_status 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-type=git, distribution-scope=public, name=rhosp17/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, managed_by=tripleo_ansible, batch=17.1_20251118.1, architecture=x86_64, build-date=2025-11-18T22:49:32Z, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, container_name=logrotate_crond, vendor=Red Hat, Inc., io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron) Dec 6 03:58:55 localhost podman[92694]: 2025-12-06 08:58:55.089084294 +0000 UTC m=+0.241703891 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, url=https://www.redhat.com, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.buildah.version=1.41.4, com.redhat.component=openstack-collectd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., config_id=tripleo_step3, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=collectd, build-date=2025-11-18T22:51:28Z, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, architecture=x86_64) Dec 6 03:58:55 localhost podman[92696]: 2025-12-06 08:58:55.108736145 +0000 UTC m=+0.257457413 container health_status a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, url=https://www.redhat.com, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute) Dec 6 03:58:55 localhost podman[92704]: 2025-12-06 08:58:54.962723511 +0000 UTC m=+0.104640460 container health_status b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vendor=Red Hat, Inc., release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:12:45Z, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack 
Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, distribution-scope=public, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.openshift.expose-services=, version=17.1.12, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 6 03:58:55 localhost podman[92704]: 2025-12-06 08:58:55.143397855 +0000 UTC m=+0.285314774 container exec_died b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:12:45Z, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, io.openshift.expose-services=, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, 
name=rhosp17/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., vcs-type=git, architecture=x86_64, release=1761123044, config_id=tripleo_step4) Dec 6 03:58:55 localhost systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: Deactivated successfully. 
Dec 6 03:58:55 localhost podman[92695]: 2025-12-06 08:58:55.153738431 +0000 UTC m=+0.305483401 container health_status 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, architecture=x86_64, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, vcs-type=git, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, managed_by=tripleo_ansible, config_id=tripleo_step4, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, distribution-scope=public) Dec 6 03:58:55 localhost podman[92696]: 2025-12-06 08:58:55.163016794 +0000 UTC m=+0.311738022 container exec_died a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vendor=Red Hat, Inc., release=1761123044, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, io.buildah.version=1.41.4, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, url=https://www.redhat.com, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 6 03:58:55 localhost podman[92694]: 2025-12-06 08:58:55.174872698 +0000 UTC m=+0.327492265 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, build-date=2025-11-18T22:51:28Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, version=17.1.12, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, container_name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, name=rhosp17/openstack-collectd, com.redhat.component=openstack-collectd-container, release=1761123044, architecture=x86_64, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd) Dec 6 03:58:55 localhost systemd[1]: a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.service: Deactivated successfully. Dec 6 03:58:55 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully. Dec 6 03:58:55 localhost podman[92693]: 2025-12-06 08:58:55.218241973 +0000 UTC m=+0.372363726 container exec_died 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, 
io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., distribution-scope=public, container_name=logrotate_crond, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-cron, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, managed_by=tripleo_ansible, version=17.1.12, build-date=2025-11-18T22:49:32Z) Dec 6 03:58:55 localhost systemd[1]: 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.service: Deactivated successfully. 
Dec 6 03:58:55 localhost podman[92695]: 2025-12-06 08:58:55.536191045 +0000 UTC m=+0.687935975 container exec_died 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, 
name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, tcib_managed=true, batch=17.1_20251118.1, release=1761123044, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, architecture=x86_64) Dec 6 03:58:55 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: Deactivated successfully. Dec 6 03:58:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. Dec 6 03:58:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. Dec 6 03:58:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. 
Dec 6 03:58:59 localhost podman[92825]: 2025-12-06 08:58:59.924326498 +0000 UTC m=+0.081192164 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, distribution-scope=public, name=rhosp17/openstack-ovn-controller, architecture=x86_64, vendor=Red Hat, Inc., url=https://www.redhat.com, tcib_managed=true, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.buildah.version=1.41.4, vcs-type=git, version=17.1.12, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, 
com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 6 03:58:59 localhost podman[92825]: 2025-12-06 08:58:59.974642956 +0000 UTC m=+0.131508662 container exec_died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_id=tripleo_step4, vendor=Red Hat, Inc., org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, version=17.1.12, name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_controller, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, distribution-scope=public, io.buildah.version=1.41.4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}) Dec 6 03:58:59 localhost podman[92826]: 2025-12-06 08:58:59.990356277 +0000 UTC m=+0.145349185 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, container_name=metrics_qdr, distribution-scope=public, build-date=2025-11-18T22:49:46Z, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, vcs-type=git, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, managed_by=tripleo_ansible, release=1761123044, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-qdrouterd, version=17.1.12) Dec 6 03:58:59 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Deactivated successfully. 
Dec 6 03:59:00 localhost podman[92827]: 2025-12-06 08:59:00.044156962 +0000 UTC m=+0.196448567 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.4, release=1761123044, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, container_name=ovn_metadata_agent, distribution-scope=public, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 6 03:59:00 localhost podman[92827]: 2025-12-06 08:59:00.091278663 +0000 UTC m=+0.243570298 container exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 
17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, url=https://www.redhat.com, build-date=2025-11-19T00:14:25Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_metadata_agent, config_id=tripleo_step4, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, maintainer=OpenStack TripleO 
Team, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn) Dec 6 03:59:00 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Deactivated successfully. Dec 6 03:59:00 localhost podman[92826]: 2025-12-06 08:59:00.222746263 +0000 UTC m=+0.377739191 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, architecture=x86_64, version=17.1.12, release=1761123044, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, container_name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team) Dec 6 03:59:00 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 03:59:00 localhost systemd[1]: tmp-crun.ECPrAw.mount: Deactivated successfully. Dec 6 03:59:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007. 
Dec 6 03:59:03 localhost podman[92902]: 2025-12-06 08:59:03.925415035 +0000 UTC m=+0.084134554 container health_status 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, batch=17.1_20251118.1, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, distribution-scope=public, vcs-type=git, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step5, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, container_name=nova_compute, release=1761123044) Dec 6 03:59:03 localhost podman[92902]: 2025-12-06 08:59:03.960434886 +0000 UTC m=+0.119154365 container exec_died 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step5, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, release=1761123044, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute) Dec 6 03:59:03 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Deactivated successfully. Dec 6 03:59:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc. Dec 6 03:59:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. Dec 6 03:59:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b. Dec 6 03:59:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9. Dec 6 03:59:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. Dec 6 03:59:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6. Dec 6 03:59:25 localhost systemd[1]: tmp-crun.MdDEcP.mount: Deactivated successfully. 
Dec 6 03:59:25 localhost podman[93056]: 2025-12-06 08:59:25.959001789 +0000 UTC m=+0.107127597 container health_status a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, release=1761123044, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., url=https://www.redhat.com, tcib_managed=true, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-compute, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:11:48Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64) Dec 6 03:59:26 localhost podman[93062]: 2025-12-06 08:59:26.019617462 +0000 UTC m=+0.162634473 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, version=17.1.12, vcs-type=git, release=1761123044, architecture=x86_64, build-date=2025-11-18T23:44:13Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, container_name=iscsid, io.buildah.version=1.41.4, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, batch=17.1_20251118.1) Dec 6 03:59:26 localhost podman[93056]: 2025-12-06 08:59:26.042254404 +0000 UTC m=+0.190380232 container exec_died a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, url=https://www.redhat.com, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-compute-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 
ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, vcs-type=git, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute) Dec 6 03:59:26 localhost podman[93062]: 2025-12-06 08:59:26.05616843 +0000 UTC m=+0.199185421 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, batch=17.1_20251118.1, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.buildah.version=1.41.4, version=17.1.12, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, vendor=Red Hat, Inc., org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, url=https://www.redhat.com, vcs-type=git) Dec 6 03:59:26 localhost systemd[1]: a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.service: Deactivated successfully. Dec 6 03:59:26 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully. Dec 6 03:59:26 localhost podman[93055]: 2025-12-06 08:59:25.99958537 +0000 UTC m=+0.154131714 container health_status 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, architecture=x86_64, release=1761123044, tcib_managed=true, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, name=rhosp17/openstack-nova-compute, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, url=https://www.redhat.com) Dec 6 03:59:26 localhost podman[93074]: 2025-12-06 08:59:26.106696325 +0000 UTC m=+0.247219800 container health_status b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 
ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, config_id=tripleo_step4, io.buildah.version=1.41.4, managed_by=tripleo_ansible, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, build-date=2025-11-19T00:12:45Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, version=17.1.12, tcib_managed=true, architecture=x86_64, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676) Dec 6 03:59:26 localhost podman[93074]: 2025-12-06 08:59:26.137114445 +0000 UTC m=+0.277637960 container exec_died b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, managed_by=tripleo_ansible, architecture=x86_64, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, batch=17.1_20251118.1, container_name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Dec 6 03:59:26 localhost systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: Deactivated successfully. Dec 6 03:59:26 localhost podman[93053]: 2025-12-06 08:59:26.148333748 +0000 UTC m=+0.308050080 container health_status 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, distribution-scope=public, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:32Z, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.component=openstack-cron-container, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.openshift.expose-services=, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 6 03:59:26 localhost podman[93053]: 2025-12-06 08:59:26.158282153 +0000 UTC m=+0.317998535 container exec_died 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, container_name=logrotate_crond, version=17.1.12, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., architecture=x86_64, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, name=rhosp17/openstack-cron, io.buildah.version=1.41.4, io.openshift.expose-services=, distribution-scope=public, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron) Dec 6 03:59:26 localhost podman[93054]: 2025-12-06 08:59:26.060957817 +0000 UTC m=+0.217452831 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-type=git, io.buildah.version=1.41.4, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, container_name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, vendor=Red Hat, Inc., distribution-scope=public, version=17.1.12, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:51:28Z, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 6 03:59:26 localhost systemd[1]: 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.service: Deactivated successfully. 
Dec 6 03:59:26 localhost podman[93054]: 2025-12-06 08:59:26.201161673 +0000 UTC m=+0.357656707 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:51:28Z, tcib_managed=true, maintainer=OpenStack TripleO Team, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.buildah.version=1.41.4, name=rhosp17/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, architecture=x86_64, vcs-type=git, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd) Dec 6 03:59:26 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully. Dec 6 03:59:26 localhost podman[93055]: 2025-12-06 08:59:26.365297981 +0000 UTC m=+0.519844325 container exec_died 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, build-date=2025-11-19T00:36:58Z, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., io.buildah.version=1.41.4, release=1761123044, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, url=https://www.redhat.com, batch=17.1_20251118.1, distribution-scope=public, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git) Dec 6 03:59:26 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: Deactivated successfully. Dec 6 03:59:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. 
Dec 6 03:59:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. Dec 6 03:59:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. Dec 6 03:59:30 localhost systemd[1]: tmp-crun.IE8xSl.mount: Deactivated successfully. Dec 6 03:59:30 localhost podman[93186]: 2025-12-06 08:59:30.931156557 +0000 UTC m=+0.090050894 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, container_name=metrics_qdr, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, name=rhosp17/openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.buildah.version=1.41.4, vcs-type=git, config_id=tripleo_step1, io.openshift.expose-services=, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, distribution-scope=public, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 
'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible) Dec 6 03:59:30 localhost podman[93187]: 2025-12-06 08:59:30.977314659 +0000 UTC m=+0.131698158 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, vcs-type=git, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, version=17.1.12, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., url=https://www.redhat.com, managed_by=tripleo_ansible, 
container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, batch=17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, distribution-scope=public, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, 
com.redhat.component=openstack-neutron-metadata-agent-ovn-container) Dec 6 03:59:31 localhost podman[93185]: 2025-12-06 08:59:31.033187157 +0000 UTC m=+0.190872127 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, config_id=tripleo_step4, release=1761123044, batch=17.1_20251118.1, architecture=x86_64, name=rhosp17/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, tcib_managed=true, build-date=2025-11-18T23:34:05Z, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, container_name=ovn_controller, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 6 03:59:31 localhost podman[93187]: 2025-12-06 08:59:31.054707505 +0000 UTC m=+0.209090974 container exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64, io.openshift.expose-services=, 
maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, container_name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1761123044) Dec 6 03:59:31 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Deactivated successfully. 
Dec 6 03:59:31 localhost podman[93185]: 2025-12-06 08:59:31.111686268 +0000 UTC m=+0.269371228 container exec_died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, batch=17.1_20251118.1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, url=https://www.redhat.com, architecture=x86_64, vendor=Red Hat, Inc., version=17.1.12, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, vcs-type=git, release=1761123044, 
container_name=ovn_controller, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ovn-controller) Dec 6 03:59:31 localhost podman[93185]: unhealthy Dec 6 03:59:31 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Main process exited, code=exited, status=1/FAILURE Dec 6 03:59:31 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Failed with result 'exit-code'. Dec 6 03:59:31 localhost podman[93186]: 2025-12-06 08:59:31.15268375 +0000 UTC m=+0.311578077 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.4, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, container_name=metrics_qdr, url=https://www.redhat.com, io.openshift.expose-services=, version=17.1.12, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., architecture=x86_64, config_id=tripleo_step1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, distribution-scope=public) Dec 6 03:59:31 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 03:59:31 localhost systemd[1]: tmp-crun.vCPAmn.mount: Deactivated successfully. Dec 6 03:59:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007. Dec 6 03:59:34 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 6 03:59:34 localhost recover_tripleo_nova_virtqemud[93272]: 61814 Dec 6 03:59:34 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 6 03:59:34 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 6 03:59:34 localhost systemd[1]: tmp-crun.yh1WPA.mount: Deactivated successfully. 
Dec 6 03:59:34 localhost podman[93265]: 2025-12-06 08:59:34.918174344 +0000 UTC m=+0.079866243 container health_status 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step5, url=https://www.redhat.com) Dec 6 03:59:34 localhost podman[93265]: 2025-12-06 08:59:34.950935105 +0000 UTC m=+0.112627074 container exec_died 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, batch=17.1_20251118.1, vendor=Red Hat, Inc., distribution-scope=public, config_id=tripleo_step5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, io.k8s.description=Red Hat 
OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, 
konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute) Dec 6 03:59:34 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Deactivated successfully. Dec 6 03:59:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc. Dec 6 03:59:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. Dec 6 03:59:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b. Dec 6 03:59:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9. Dec 6 03:59:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. Dec 6 03:59:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6. 
Dec 6 03:59:56 localhost podman[93306]: 2025-12-06 08:59:56.957958556 +0000 UTC m=+0.095990906 container health_status b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:12:45Z, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, architecture=x86_64, name=rhosp17/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, batch=17.1_20251118.1) Dec 6 03:59:56 localhost podman[93306]: 2025-12-06 08:59:56.980268328 +0000 UTC m=+0.118300638 container exec_died b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, tcib_managed=true, config_id=tripleo_step4, vcs-type=git, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:12:45Z, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, version=17.1.12, managed_by=tripleo_ansible, batch=17.1_20251118.1) Dec 6 03:59:56 localhost systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: Deactivated successfully. 
Dec 6 03:59:57 localhost podman[93291]: 2025-12-06 08:59:57.000251189 +0000 UTC m=+0.156793405 container health_status 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, url=https://www.redhat.com, vcs-type=git, batch=17.1_20251118.1, vendor=Red Hat, Inc., distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, container_name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, config_id=tripleo_step4, io.openshift.expose-services=, com.redhat.component=openstack-cron-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, build-date=2025-11-18T22:49:32Z) Dec 6 03:59:57 localhost podman[93299]: 2025-12-06 08:59:56.93815868 +0000 UTC m=+0.085440134 container health_status a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, container_name=ceilometer_agent_compute, build-date=2025-11-19T00:11:48Z, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-compute-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, release=1761123044, managed_by=tripleo_ansible, batch=17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, name=rhosp17/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 6 03:59:57 localhost podman[93292]: 2025-12-06 08:59:57.049686661 +0000 UTC m=+0.203274977 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, container_name=collectd, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, distribution-scope=public, io.openshift.expose-services=, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, vcs-type=git, version=17.1.12, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, build-date=2025-11-18T22:51:28Z, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, 
name=rhosp17/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com) Dec 6 03:59:57 localhost podman[93291]: 2025-12-06 08:59:57.061513632 +0000 UTC m=+0.218055878 container exec_died 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, managed_by=tripleo_ansible, release=1761123044, batch=17.1_20251118.1, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.openshift.expose-services=, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 cron, 
io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, architecture=x86_64, build-date=2025-11-18T22:49:32Z, com.redhat.component=openstack-cron-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., name=rhosp17/openstack-cron, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron) Dec 6 03:59:57 localhost podman[93299]: 2025-12-06 08:59:57.076192951 +0000 UTC m=+0.223474455 container exec_died a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, build-date=2025-11-19T00:11:48Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1761123044, vendor=Red Hat, Inc., config_id=tripleo_step4, container_name=ceilometer_agent_compute, version=17.1.12, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, distribution-scope=public) Dec 6 03:59:57 localhost systemd[1]: 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.service: Deactivated successfully. Dec 6 03:59:57 localhost systemd[1]: a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.service: Deactivated successfully. 
Dec 6 03:59:57 localhost podman[93293]: 2025-12-06 08:59:57.096317776 +0000 UTC m=+0.244587089 container health_status 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vcs-type=git, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, container_name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12) Dec 6 03:59:57 localhost podman[93292]: 2025-12-06 08:59:57.113894134 +0000 UTC m=+0.267482400 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, version=17.1.12, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., container_name=collectd, distribution-scope=public, build-date=2025-11-18T22:51:28Z, tcib_managed=true, release=1761123044, io.openshift.expose-services=, name=rhosp17/openstack-collectd, managed_by=tripleo_ansible) Dec 6 03:59:57 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully. 
Dec 6 03:59:57 localhost podman[93305]: 2025-12-06 08:59:57.161030084 +0000 UTC m=+0.301174289 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, architecture=x86_64, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, io.openshift.expose-services=, container_name=iscsid, vendor=Red Hat, Inc., tcib_managed=true, vcs-type=git, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, batch=17.1_20251118.1, release=1761123044, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid) Dec 6 03:59:57 localhost podman[93305]: 2025-12-06 08:59:57.196146769 +0000 UTC m=+0.336290974 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, vcs-type=git, architecture=x86_64, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, build-date=2025-11-18T23:44:13Z, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, config_id=tripleo_step3, tcib_managed=true, io.buildah.version=1.41.4, com.redhat.component=openstack-iscsid-container, batch=17.1_20251118.1, container_name=iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044) Dec 6 03:59:57 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully. 
Dec 6 03:59:57 localhost podman[93293]: 2025-12-06 08:59:57.424000765 +0000 UTC m=+0.572270088 container exec_died 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, container_name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, build-date=2025-11-19T00:36:58Z, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, io.buildah.version=1.41.4, url=https://www.redhat.com, vcs-type=git, maintainer=OpenStack TripleO Team, tcib_managed=true, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}) Dec 6 03:59:57 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: Deactivated successfully. Dec 6 04:00:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. Dec 6 04:00:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. Dec 6 04:00:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. 
Dec 6 04:00:01 localhost podman[93427]: 2025-12-06 09:00:01.928799643 +0000 UTC m=+0.084480614 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vendor=Red Hat, Inc., version=17.1.12, container_name=ovn_controller, url=https://www.redhat.com, vcs-type=git, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, release=1761123044, tcib_managed=true, batch=17.1_20251118.1, name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, config_id=tripleo_step4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}) Dec 6 04:00:01 localhost podman[93427]: 2025-12-06 09:00:01.986947811 +0000 UTC m=+0.142628762 container exec_died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, tcib_managed=true, version=17.1.12, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., build-date=2025-11-18T23:34:05Z, io.buildah.version=1.41.4, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, vcs-type=git, managed_by=tripleo_ansible, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_id=tripleo_step4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, architecture=x86_64) Dec 6 04:00:01 localhost systemd[1]: tmp-crun.K0DruU.mount: Deactivated successfully. Dec 6 04:00:01 localhost podman[93427]: unhealthy Dec 6 04:00:02 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Main process exited, code=exited, status=1/FAILURE Dec 6 04:00:02 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Failed with result 'exit-code'. Dec 6 04:00:02 localhost podman[93428]: 2025-12-06 09:00:01.990645634 +0000 UTC m=+0.146121389 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, distribution-scope=public, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., container_name=metrics_qdr, io.openshift.expose-services=, batch=17.1_20251118.1, 
tcib_managed=true, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4) Dec 6 04:00:02 localhost podman[93429]: 2025-12-06 09:00:02.052208656 +0000 UTC m=+0.201016867 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, io.openshift.tags=rhosp 
osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, url=https://www.redhat.com, vendor=Red Hat, Inc., io.openshift.expose-services=, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.12, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, container_name=ovn_metadata_agent, 
managed_by=tripleo_ansible, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:14:25Z, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 6 04:00:02 localhost podman[93429]: 2025-12-06 09:00:02.094278822 +0000 UTC m=+0.243087053 container exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.expose-services=, container_name=ovn_metadata_agent, release=1761123044, version=17.1.12, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, build-date=2025-11-19T00:14:25Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, tcib_managed=true, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, url=https://www.redhat.com) Dec 6 04:00:02 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Deactivated successfully. 
Dec 6 04:00:02 localhost podman[93428]: 2025-12-06 09:00:02.187161123 +0000 UTC m=+0.342636918 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, release=1761123044, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, io.buildah.version=1.41.4, config_id=tripleo_step1, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20251118.1) Dec 6 04:00:02 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 04:00:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007. Dec 6 04:00:05 localhost systemd[1]: tmp-crun.Deqc42.mount: Deactivated successfully. Dec 6 04:00:05 localhost podman[93504]: 2025-12-06 09:00:05.929294552 +0000 UTC m=+0.093413197 container health_status 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, managed_by=tripleo_ansible, io.buildah.version=1.41.4, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, container_name=nova_compute, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z) Dec 6 04:00:05 localhost podman[93504]: 2025-12-06 09:00:05.956809733 +0000 UTC m=+0.120928368 container exec_died 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, url=https://www.redhat.com, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.openshift.expose-services=, distribution-scope=public, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, architecture=x86_64, build-date=2025-11-19T00:36:58Z, tcib_managed=true, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, release=1761123044) Dec 6 04:00:05 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Deactivated successfully. 
Dec 6 04:00:09 localhost ceph-osd[31726]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Dec 6 04:00:09 localhost ceph-osd[31726]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3600.1 total, 600.0 interval#012Cumulative writes: 5761 writes, 25K keys, 5761 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 5761 writes, 760 syncs, 7.58 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Dec 6 04:00:12 localhost ceph-osd[32665]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Dec 6 04:00:12 localhost ceph-osd[32665]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3600.2 total, 600.0 interval#012Cumulative writes: 4879 writes, 21K keys, 4879 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 4879 writes, 669 syncs, 7.29 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Dec 6 04:00:23 localhost sshd[93592]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:00:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc. Dec 6 04:00:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. 
Dec 6 04:00:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b. Dec 6 04:00:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9. Dec 6 04:00:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. Dec 6 04:00:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6. Dec 6 04:00:27 localhost systemd[1]: tmp-crun.N4G8Ks.mount: Deactivated successfully. Dec 6 04:00:27 localhost podman[93608]: 2025-12-06 09:00:27.960014232 +0000 UTC m=+0.107583801 container health_status 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, tcib_managed=true, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, vendor=Red Hat, Inc., name=rhosp17/openstack-cron, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, build-date=2025-11-18T22:49:32Z, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=logrotate_crond, com.redhat.component=openstack-cron-container, managed_by=tripleo_ansible, batch=17.1_20251118.1, distribution-scope=public, 
version=17.1.12, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4) Dec 6 04:00:27 localhost podman[93611]: 2025-12-06 09:00:27.966012325 +0000 UTC m=+0.103488595 container health_status a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, com.redhat.component=openstack-ceilometer-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, container_name=ceilometer_agent_compute, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.expose-services=, 
architecture=x86_64, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, io.buildah.version=1.41.4, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:11:48Z) Dec 6 04:00:28 localhost podman[93622]: 2025-12-06 
09:00:28.00574956 +0000 UTC m=+0.141191918 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-iscsid-container, batch=17.1_20251118.1, release=1761123044, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, vendor=Red Hat, Inc., version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:44:13Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, url=https://www.redhat.com, name=rhosp17/openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, managed_by=tripleo_ansible, io.openshift.expose-services=, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team) Dec 6 04:00:28 localhost podman[93622]: 2025-12-06 09:00:28.013208708 +0000 UTC m=+0.148651056 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, container_name=iscsid, io.buildah.version=1.41.4, version=17.1.12, build-date=2025-11-18T23:44:13Z, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, vcs-type=git, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, distribution-scope=public, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, architecture=x86_64, tcib_managed=true, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 6 04:00:28 localhost podman[93608]: 2025-12-06 09:00:28.018090547 +0000 UTC m=+0.165660076 container exec_died 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, batch=17.1_20251118.1, name=rhosp17/openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, vcs-type=git, config_id=tripleo_step4, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': 
'/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, release=1761123044, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, distribution-scope=public) Dec 6 04:00:28 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully. Dec 6 04:00:28 localhost systemd[1]: 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.service: Deactivated successfully. 
Dec 6 04:00:28 localhost podman[93609]: 2025-12-06 09:00:28.060712471 +0000 UTC m=+0.207713573 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-collectd-container, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, architecture=x86_64, distribution-scope=public, io.buildah.version=1.41.4, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, io.openshift.expose-services=, tcib_managed=true, batch=17.1_20251118.1, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc., container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team) Dec 6 04:00:28 localhost podman[93623]: 2025-12-06 09:00:28.016492529 +0000 UTC m=+0.141201009 container health_status b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-type=git, io.buildah.version=1.41.4, container_name=ceilometer_agent_ipmi, tcib_managed=true, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, release=1761123044, build-date=2025-11-19T00:12:45Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 
17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, io.openshift.expose-services=, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1) Dec 6 04:00:28 localhost podman[93610]: 2025-12-06 09:00:28.113512584 +0000 UTC m=+0.252846331 container health_status 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, 
org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.buildah.version=1.41.4, tcib_managed=true, config_id=tripleo_step4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, release=1761123044, version=17.1.12, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, 
build-date=2025-11-19T00:36:58Z, architecture=x86_64, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute) Dec 6 04:00:28 localhost podman[93611]: 2025-12-06 09:00:28.118796546 +0000 UTC m=+0.256272816 container exec_died a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp17/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, build-date=2025-11-19T00:11:48Z, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, vcs-type=git, release=1761123044, architecture=x86_64) Dec 6 04:00:28 localhost systemd[1]: a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.service: Deactivated successfully. 
Dec 6 04:00:28 localhost podman[93609]: 2025-12-06 09:00:28.143197572 +0000 UTC m=+0.290198654 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20251118.1, io.k8s.description=Red 
Hat OpenStack Platform 17.1 collectd, version=17.1.12, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, managed_by=tripleo_ansible, container_name=collectd, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.buildah.version=1.41.4, name=rhosp17/openstack-collectd, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, architecture=x86_64) Dec 6 04:00:28 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully. Dec 6 04:00:28 localhost podman[93623]: 2025-12-06 09:00:28.195855733 +0000 UTC m=+0.320564273 container exec_died b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., tcib_managed=true, build-date=2025-11-19T00:12:45Z, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, container_name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, url=https://www.redhat.com, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, distribution-scope=public, architecture=x86_64) Dec 6 04:00:28 localhost systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: Deactivated successfully. 
Dec 6 04:00:28 localhost podman[93610]: 2025-12-06 09:00:28.517203578 +0000 UTC m=+0.656537325 container exec_died 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, 
architecture=x86_64, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, container_name=nova_migration_target, vcs-type=git, build-date=2025-11-19T00:36:58Z, distribution-scope=public, config_id=tripleo_step4, url=https://www.redhat.com) Dec 6 04:00:28 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: Deactivated successfully. Dec 6 04:00:28 localhost systemd[1]: tmp-crun.AYP1Ff.mount: Deactivated successfully. Dec 6 04:00:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. Dec 6 04:00:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. Dec 6 04:00:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. 
Dec 6 04:00:32 localhost podman[93737]: 2025-12-06 09:00:32.929673153 +0000 UTC m=+0.092075007 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, batch=17.1_20251118.1, name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, distribution-scope=public, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, container_name=ovn_controller, tcib_managed=true, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:34:05Z, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, architecture=x86_64) Dec 6 04:00:32 localhost podman[93737]: 2025-12-06 09:00:32.981471006 +0000 UTC m=+0.143872830 container exec_died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, version=17.1.12, release=1761123044, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.openshift.expose-services=, build-date=2025-11-18T23:34:05Z, config_id=tripleo_step4, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, url=https://www.redhat.com, container_name=ovn_controller, distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ovn-controller) Dec 6 04:00:32 localhost podman[93737]: unhealthy Dec 6 04:00:32 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Main process exited, code=exited, status=1/FAILURE Dec 6 04:00:32 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Failed with result 'exit-code'. Dec 6 04:00:33 localhost podman[93738]: 2025-12-06 09:00:33.062994858 +0000 UTC m=+0.225060672 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, name=rhosp17/openstack-qdrouterd, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, url=https://www.redhat.com, tcib_managed=true, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 
'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, config_id=tripleo_step1, container_name=metrics_qdr) Dec 6 04:00:33 localhost podman[93739]: 2025-12-06 09:00:32.989725288 +0000 UTC m=+0.151462832 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., 
description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.4, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, 
build-date=2025-11-19T00:14:25Z, release=1761123044, io.openshift.expose-services=, distribution-scope=public, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1) Dec 6 04:00:33 localhost podman[93739]: 2025-12-06 09:00:33.12224758 +0000 UTC m=+0.283985114 container exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vendor=Red Hat, Inc., url=https://www.redhat.com, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, build-date=2025-11-19T00:14:25Z, batch=17.1_20251118.1, io.buildah.version=1.41.4, name=rhosp17/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, release=1761123044, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, config_data={'cgroupns': 'host', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 6 04:00:33 localhost podman[93739]: unhealthy Dec 6 04:00:33 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Main process exited, code=exited, status=1/FAILURE Dec 6 04:00:33 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Failed with result 'exit-code'. 
Dec 6 04:00:33 localhost podman[93738]: 2025-12-06 09:00:33.254073611 +0000 UTC m=+0.416139365 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, architecture=x86_64, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, container_name=metrics_qdr, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.buildah.version=1.41.4, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, release=1761123044) Dec 6 04:00:33 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 04:00:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007. Dec 6 04:00:36 localhost podman[93807]: 2025-12-06 09:00:36.902861106 +0000 UTC m=+0.063186413 container health_status 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 
'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, vcs-type=git, batch=17.1_20251118.1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, config_id=tripleo_step5, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, 
konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible) Dec 6 04:00:36 localhost podman[93807]: 2025-12-06 09:00:36.933114271 +0000 UTC m=+0.093439628 container exec_died 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, distribution-scope=public, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, vcs-type=git, batch=17.1_20251118.1, config_id=tripleo_step5, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 6 04:00:36 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Deactivated successfully. Dec 6 04:00:38 localhost sshd[93834]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:00:40 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 6 04:00:40 localhost recover_tripleo_nova_virtqemud[93837]: 61814 Dec 6 04:00:40 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. 
Dec 6 04:00:40 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 6 04:00:50 localhost sshd[93838]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:00:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc. Dec 6 04:00:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. Dec 6 04:00:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b. Dec 6 04:00:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9. Dec 6 04:00:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. Dec 6 04:00:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6. 
Dec 6 04:00:58 localhost podman[93845]: 2025-12-06 09:00:58.941771726 +0000 UTC m=+0.095664006 container health_status b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., io.openshift.expose-services=, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, tcib_managed=true, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, 
distribution-scope=public, url=https://www.redhat.com, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, architecture=x86_64) Dec 6 04:00:58 localhost podman[93845]: 2025-12-06 09:00:58.973117834 +0000 UTC m=+0.127010054 container exec_died b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, build-date=2025-11-19T00:12:45Z, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, io.openshift.expose-services=, version=17.1.12, batch=17.1_20251118.1, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, maintainer=OpenStack TripleO Team, vcs-type=git, vendor=Red Hat, Inc.) Dec 6 04:00:58 localhost systemd[1]: tmp-crun.pXTOOh.mount: Deactivated successfully. Dec 6 04:00:58 localhost systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: Deactivated successfully. 
Dec 6 04:00:58 localhost podman[93842]: 2025-12-06 09:00:58.99422903 +0000 UTC m=+0.149879454 container health_status 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-type=git, container_name=nova_migration_target, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.openshift.expose-services=) Dec 6 04:00:59 localhost podman[93841]: 2025-12-06 09:00:59.05374834 +0000 UTC m=+0.209718894 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, batch=17.1_20251118.1, version=17.1.12, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, vendor=Red Hat, Inc., container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1761123044, io.openshift.expose-services=, build-date=2025-11-18T22:51:28Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, architecture=x86_64, name=rhosp17/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 6 04:00:59 localhost podman[93841]: 2025-12-06 09:00:59.089153553 +0000 UTC m=+0.245124057 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 
'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, build-date=2025-11-18T22:51:28Z, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, batch=17.1_20251118.1, distribution-scope=public, name=rhosp17/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, version=17.1.12, managed_by=tripleo_ansible, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, architecture=x86_64, config_id=tripleo_step3, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=) Dec 6 04:00:59 localhost podman[93844]: 2025-12-06 09:00:59.089104101 +0000 UTC m=+0.242352782 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, com.redhat.component=openstack-iscsid-container, summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, maintainer=OpenStack TripleO Team, release=1761123044, distribution-scope=public, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, version=17.1.12, tcib_managed=true, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, config_id=tripleo_step3, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid) Dec 6 04:00:59 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully. 
Dec 6 04:00:59 localhost podman[93843]: 2025-12-06 09:00:59.147904469 +0000 UTC m=+0.297681154 container health_status a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, release=1761123044, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, vcs-type=git, name=rhosp17/openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, distribution-scope=public, managed_by=tripleo_ansible, container_name=ceilometer_agent_compute) Dec 6 04:00:59 localhost podman[93840]: 2025-12-06 09:00:59.197716231 +0000 UTC m=+0.358171951 container health_status 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-cron, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, container_name=logrotate_crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., version=17.1.12, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, release=1761123044, managed_by=tripleo_ansible, io.openshift.expose-services=) Dec 6 04:00:59 localhost podman[93843]: 2025-12-06 09:00:59.201835038 +0000 UTC m=+0.351611633 container exec_died a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_id=tripleo_step4, vendor=Red Hat, Inc., io.buildah.version=1.41.4, url=https://www.redhat.com, build-date=2025-11-19T00:11:48Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, release=1761123044, tcib_managed=true, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, distribution-scope=public, managed_by=tripleo_ansible, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, batch=17.1_20251118.1, architecture=x86_64, 
konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 6 04:00:59 localhost systemd[1]: a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.service: Deactivated successfully. Dec 6 04:00:59 localhost podman[93844]: 2025-12-06 09:00:59.220371644 +0000 UTC m=+0.373620315 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-type=git, name=rhosp17/openstack-iscsid, config_id=tripleo_step3, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, version=17.1.12, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, release=1761123044, vendor=Red Hat, Inc., vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, build-date=2025-11-18T23:44:13Z) Dec 6 04:00:59 localhost podman[93840]: 2025-12-06 09:00:59.230462363 +0000 UTC m=+0.390918043 container exec_died 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_id=tripleo_step4, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, distribution-scope=public, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., tcib_managed=true, version=17.1.12, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.component=openstack-cron-container, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, build-date=2025-11-18T22:49:32Z) Dec 6 04:00:59 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully. Dec 6 04:00:59 localhost systemd[1]: 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.service: Deactivated successfully. 
Dec 6 04:00:59 localhost podman[93842]: 2025-12-06 09:00:59.335541966 +0000 UTC m=+0.491192380 container exec_died 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, vendor=Red Hat, Inc., architecture=x86_64, container_name=nova_migration_target, tcib_managed=true, vcs-type=git, io.buildah.version=1.41.4, config_id=tripleo_step4, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 6 04:00:59 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: Deactivated successfully. Dec 6 04:01:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. Dec 6 04:01:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. Dec 6 04:01:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. 
Dec 6 04:01:03 localhost podman[94000]: 2025-12-06 09:01:03.931143899 +0000 UTC m=+0.072122636 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.buildah.version=1.41.4, release=1761123044, architecture=x86_64, name=rhosp17/openstack-ovn-controller, managed_by=tripleo_ansible, version=17.1.12, config_id=tripleo_step4, 
build-date=2025-11-18T23:34:05Z, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com) Dec 6 04:01:03 localhost podman[94001]: 2025-12-06 09:01:03.994037873 +0000 UTC m=+0.129192962 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, 
managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, architecture=x86_64, vendor=Red Hat, Inc., release=1761123044, distribution-scope=public, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, tcib_managed=true, container_name=metrics_qdr, config_id=tripleo_step1) Dec 6 04:01:04 localhost podman[94000]: 2025-12-06 09:01:04.017923683 +0000 UTC m=+0.158902480 container exec_died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, architecture=x86_64, build-date=2025-11-18T23:34:05Z, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, io.buildah.version=1.41.4, batch=17.1_20251118.1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': 
['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_controller, url=https://www.redhat.com, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 6 04:01:04 localhost podman[94000]: unhealthy Dec 6 04:01:04 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Main process exited, code=exited, status=1/FAILURE Dec 6 04:01:04 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Failed with result 'exit-code'. 
Dec 6 04:01:04 localhost podman[94005]: 2025-12-06 09:01:04.115643651 +0000 UTC m=+0.246209290 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, vcs-type=git, io.openshift.expose-services=, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, name=rhosp17/openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4) Dec 6 04:01:04 localhost podman[94005]: 2025-12-06 09:01:04.132789165 +0000 UTC m=+0.263354884 container exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, batch=17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, maintainer=OpenStack TripleO Team, 
io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, config_id=tripleo_step4, architecture=x86_64, container_name=ovn_metadata_agent, version=17.1.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:14:25Z, distribution-scope=public) Dec 6 04:01:04 localhost podman[94005]: unhealthy Dec 6 04:01:04 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Main process exited, code=exited, status=1/FAILURE Dec 6 04:01:04 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Failed with result 'exit-code'. Dec 6 04:01:04 localhost podman[94001]: 2025-12-06 09:01:04.160589725 +0000 UTC m=+0.295744844 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, architecture=x86_64, io.buildah.version=1.41.4, io.openshift.expose-services=, version=17.1.12, distribution-scope=public, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., container_name=metrics_qdr) Dec 6 04:01:04 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 04:01:04 localhost systemd[1]: tmp-crun.KTq1EJ.mount: Deactivated successfully. Dec 6 04:01:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007. Dec 6 04:01:07 localhost systemd[1]: tmp-crun.5EzbAX.mount: Deactivated successfully. 
Dec 6 04:01:07 localhost podman[94069]: 2025-12-06 09:01:07.915844297 +0000 UTC m=+0.078111361 container health_status 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', 
'/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., version=17.1.12, io.buildah.version=1.41.4, release=1761123044, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.openshift.expose-services=, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step5, managed_by=tripleo_ansible, batch=17.1_20251118.1, architecture=x86_64, name=rhosp17/openstack-nova-compute, tcib_managed=true) Dec 6 04:01:07 localhost podman[94069]: 2025-12-06 09:01:07.970186838 +0000 UTC m=+0.132453902 container exec_died 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, maintainer=OpenStack TripleO Team, config_id=tripleo_step5, vcs-type=git, name=rhosp17/openstack-nova-compute, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, 
io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., batch=17.1_20251118.1, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute) Dec 6 04:01:07 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Deactivated successfully. Dec 6 04:01:13 localhost sshd[94097]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:01:23 localhost sshd[94101]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:01:29 localhost podman[94285]: Dec 6 04:01:29 localhost podman[94285]: 2025-12-06 09:01:29.19031188 +0000 UTC m=+0.074429259 container create 7f3dca6ac65f1d31830e218d7dcb68e508e052d0fdc564e048a996a8a3fa03bb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=agitated_bartik, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, RELEASE=main, vcs-type=git, GIT_CLEAN=True, architecture=x86_64, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, version=7, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, GIT_BRANCH=main, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, 
description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 6 04:01:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. Dec 6 04:01:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6. Dec 6 04:01:29 localhost systemd[1]: Started libpod-conmon-7f3dca6ac65f1d31830e218d7dcb68e508e052d0fdc564e048a996a8a3fa03bb.scope. Dec 6 04:01:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9. Dec 6 04:01:29 localhost systemd[1]: Started libcrun container. Dec 6 04:01:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. Dec 6 04:01:29 localhost podman[94285]: 2025-12-06 09:01:29.161431892 +0000 UTC m=+0.045549301 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 6 04:01:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc. 
Dec 6 04:01:29 localhost podman[94285]: 2025-12-06 09:01:29.272191956 +0000 UTC m=+0.156309345 container init 7f3dca6ac65f1d31830e218d7dcb68e508e052d0fdc564e048a996a8a3fa03bb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=agitated_bartik, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, RELEASE=main, io.buildah.version=1.41.4, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, vcs-type=git, architecture=x86_64, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True) Dec 6 04:01:29 localhost podman[94285]: 2025-12-06 09:01:29.286521807 +0000 UTC m=+0.170639186 container start 7f3dca6ac65f1d31830e218d7dcb68e508e052d0fdc564e048a996a8a3fa03bb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=agitated_bartik, architecture=x86_64, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, name=rhceph, description=Red Hat Ceph 
Storage 7, RELEASE=main, GIT_CLEAN=True, io.buildah.version=1.41.4, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 6 04:01:29 localhost podman[94285]: 2025-12-06 09:01:29.286836956 +0000 UTC m=+0.170954405 container attach 7f3dca6ac65f1d31830e218d7dcb68e508e052d0fdc564e048a996a8a3fa03bb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=agitated_bartik, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, name=rhceph, io.openshift.expose-services=, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, ceph=True, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph 
Storage 7 on RHEL 9) Dec 6 04:01:29 localhost agitated_bartik[94308]: 167 167 Dec 6 04:01:29 localhost systemd[1]: libpod-7f3dca6ac65f1d31830e218d7dcb68e508e052d0fdc564e048a996a8a3fa03bb.scope: Deactivated successfully. Dec 6 04:01:29 localhost podman[94285]: 2025-12-06 09:01:29.293644766 +0000 UTC m=+0.177762165 container died 7f3dca6ac65f1d31830e218d7dcb68e508e052d0fdc564e048a996a8a3fa03bb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=agitated_bartik, release=1763362218, architecture=x86_64, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, RELEASE=main, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, maintainer=Guillaume Abrioux , build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, GIT_CLEAN=True, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, name=rhceph, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers) Dec 6 04:01:29 localhost podman[94325]: 2025-12-06 09:01:29.344929152 +0000 UTC m=+0.068364362 container health_status 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, distribution-scope=public, tcib_managed=true, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, 
container_name=logrotate_crond, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, build-date=2025-11-18T22:49:32Z, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-cron-container, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, 
vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, name=rhosp17/openstack-cron) Dec 6 04:01:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b. Dec 6 04:01:29 localhost podman[94324]: 2025-12-06 09:01:29.377869575 +0000 UTC m=+0.112991505 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-iscsid-container, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, build-date=2025-11-18T23:44:13Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-iscsid, url=https://www.redhat.com, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, config_id=tripleo_step3, release=1761123044) Dec 6 04:01:29 localhost podman[94325]: 2025-12-06 09:01:29.383944751 +0000 UTC m=+0.107379951 container exec_died 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, build-date=2025-11-18T22:49:32Z, architecture=x86_64, com.redhat.component=openstack-cron-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=logrotate_crond, version=17.1.12, tcib_managed=true, io.buildah.version=1.41.4, vcs-type=git, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, distribution-scope=public, managed_by=tripleo_ansible) Dec 6 04:01:29 localhost podman[94301]: 2025-12-06 09:01:29.397164018 +0000 UTC m=+0.155950525 container health_status b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-ipmi, release=1761123044, distribution-scope=public, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, url=https://www.redhat.com, config_id=tripleo_step4, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1) Dec 6 
04:01:29 localhost systemd[1]: 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.service: Deactivated successfully. Dec 6 04:01:29 localhost podman[94300]: 2025-12-06 09:01:29.381818996 +0000 UTC m=+0.149542077 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, container_name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', 
'/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, batch=17.1_20251118.1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, distribution-scope=public, url=https://www.redhat.com, build-date=2025-11-18T22:51:28Z, tcib_managed=true) Dec 6 04:01:29 localhost podman[94300]: 2025-12-06 09:01:29.466014983 +0000 UTC m=+0.233738054 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, batch=17.1_20251118.1, architecture=x86_64, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, tcib_managed=true, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, version=17.1.12, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, name=rhosp17/openstack-collectd, release=1761123044, managed_by=tripleo_ansible, container_name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 6 04:01:29 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully. 
Dec 6 04:01:29 localhost podman[94348]: 2025-12-06 09:01:29.500852915 +0000 UTC m=+0.199469763 container remove 7f3dca6ac65f1d31830e218d7dcb68e508e052d0fdc564e048a996a8a3fa03bb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=agitated_bartik, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, GIT_CLEAN=True, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , name=rhceph, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, version=7, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers) Dec 6 04:01:29 localhost systemd[1]: libpod-conmon-7f3dca6ac65f1d31830e218d7dcb68e508e052d0fdc564e048a996a8a3fa03bb.scope: Deactivated successfully. 
Dec 6 04:01:29 localhost podman[94324]: 2025-12-06 09:01:29.514244045 +0000 UTC m=+0.249365935 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.buildah.version=1.41.4, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp17/openstack-iscsid, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, 
vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, build-date=2025-11-18T23:44:13Z, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 6 04:01:29 localhost podman[94307]: 2025-12-06 09:01:29.485169652 +0000 UTC m=+0.237929714 container health_status a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, release=1761123044, name=rhosp17/openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_compute, version=17.1.12, build-date=2025-11-19T00:11:48Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc.) Dec 6 04:01:29 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully. 
Dec 6 04:01:29 localhost podman[94307]: 2025-12-06 09:01:29.568972488 +0000 UTC m=+0.321732490 container exec_died a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_id=tripleo_step4, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-compute, url=https://www.redhat.com, version=17.1.12, architecture=x86_64, tcib_managed=true, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, build-date=2025-11-19T00:11:48Z, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible) Dec 6 04:01:29 localhost podman[94392]: 2025-12-06 09:01:29.584589147 +0000 UTC m=+0.198769009 container health_status 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, tcib_managed=true, version=17.1.12, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, architecture=x86_64, container_name=nova_migration_target, distribution-scope=public) Dec 6 04:01:29 localhost systemd[1]: a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.service: Deactivated successfully. 
Dec 6 04:01:29 localhost podman[94301]: 2025-12-06 09:01:29.60127528 +0000 UTC m=+0.360061767 container exec_died b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, url=https://www.redhat.com, config_id=tripleo_step4, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.4, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, distribution-scope=public, tcib_managed=true, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:12:45Z, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64) Dec 6 04:01:29 localhost systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: Deactivated successfully. Dec 6 04:01:29 localhost podman[94458]: Dec 6 04:01:29 localhost podman[94458]: 2025-12-06 09:01:29.700184171 +0000 UTC m=+0.056225189 container create 7fce6dc53bca70caf5d6f4d45f30d1c9be5d53588a91b413dc475e2a7b2b867e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=adoring_meitner, io.openshift.expose-services=, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, ceph=True, build-date=2025-11-26T19:44:28Z, architecture=x86_64, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base 
image., io.openshift.tags=rhceph ceph, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git) Dec 6 04:01:29 localhost systemd[1]: Started libpod-conmon-7fce6dc53bca70caf5d6f4d45f30d1c9be5d53588a91b413dc475e2a7b2b867e.scope. Dec 6 04:01:29 localhost systemd[1]: Started libcrun container. Dec 6 04:01:29 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1acac956fccca1a85ff31a1bd7a29aed0c43ba5133fb96f91825c244fd91b6a5/merged/rootfs supports timestamps until 2038 (0x7fffffff) Dec 6 04:01:29 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1acac956fccca1a85ff31a1bd7a29aed0c43ba5133fb96f91825c244fd91b6a5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Dec 6 04:01:29 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1acac956fccca1a85ff31a1bd7a29aed0c43ba5133fb96f91825c244fd91b6a5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Dec 6 04:01:29 localhost podman[94458]: 2025-12-06 09:01:29.762526687 +0000 UTC m=+0.118567725 container init 7fce6dc53bca70caf5d6f4d45f30d1c9be5d53588a91b413dc475e2a7b2b867e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=adoring_meitner, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, architecture=x86_64, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, CEPH_POINT_RELEASE=, ceph=True, version=7, GIT_BRANCH=main, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux ) Dec 6 04:01:29 localhost podman[94458]: 2025-12-06 09:01:29.675152162 +0000 UTC m=+0.031193210 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 6 04:01:29 localhost podman[94458]: 2025-12-06 09:01:29.775677791 +0000 UTC m=+0.131718809 container start 7fce6dc53bca70caf5d6f4d45f30d1c9be5d53588a91b413dc475e2a7b2b867e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=adoring_meitner, architecture=x86_64, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , vcs-type=git, release=1763362218, vendor=Red Hat, Inc., ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, GIT_CLEAN=True) Dec 6 04:01:29 localhost podman[94458]: 2025-12-06 09:01:29.775832106 +0000 UTC m=+0.131873124 container attach 
7fce6dc53bca70caf5d6f4d45f30d1c9be5d53588a91b413dc475e2a7b2b867e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=adoring_meitner, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, version=7, io.openshift.expose-services=, release=1763362218, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, ceph=True, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, vcs-type=git, RELEASE=main, GIT_CLEAN=True) Dec 6 04:01:29 localhost podman[94392]: 2025-12-06 09:01:29.944266783 +0000 UTC m=+0.558446615 container exec_died 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, vendor=Red Hat, Inc., version=17.1.12, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, 
org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, batch=17.1_20251118.1, distribution-scope=public, config_id=tripleo_step4, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team) Dec 6 04:01:29 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: Deactivated successfully. 
Dec 6 04:01:30 localhost systemd[1]: var-lib-containers-storage-overlay-aba8d85722a36c2d5d486b4e2905099ea4ad38b03e7cd3b4be6862ca6ac2936f-merged.mount: Deactivated successfully. Dec 6 04:01:30 localhost adoring_meitner[94473]: [ Dec 6 04:01:30 localhost adoring_meitner[94473]: { Dec 6 04:01:30 localhost adoring_meitner[94473]: "available": false, Dec 6 04:01:30 localhost adoring_meitner[94473]: "ceph_device": false, Dec 6 04:01:30 localhost adoring_meitner[94473]: "device_id": "QEMU_DVD-ROM_QM00001", Dec 6 04:01:30 localhost adoring_meitner[94473]: "lsm_data": {}, Dec 6 04:01:30 localhost adoring_meitner[94473]: "lvs": [], Dec 6 04:01:30 localhost adoring_meitner[94473]: "path": "/dev/sr0", Dec 6 04:01:30 localhost adoring_meitner[94473]: "rejected_reasons": [ Dec 6 04:01:30 localhost adoring_meitner[94473]: "Has a FileSystem", Dec 6 04:01:30 localhost adoring_meitner[94473]: "Insufficient space (<5GB)" Dec 6 04:01:30 localhost adoring_meitner[94473]: ], Dec 6 04:01:30 localhost adoring_meitner[94473]: "sys_api": { Dec 6 04:01:30 localhost adoring_meitner[94473]: "actuators": null, Dec 6 04:01:30 localhost adoring_meitner[94473]: "device_nodes": "sr0", Dec 6 04:01:30 localhost adoring_meitner[94473]: "human_readable_size": "482.00 KB", Dec 6 04:01:30 localhost adoring_meitner[94473]: "id_bus": "ata", Dec 6 04:01:30 localhost adoring_meitner[94473]: "model": "QEMU DVD-ROM", Dec 6 04:01:30 localhost adoring_meitner[94473]: "nr_requests": "2", Dec 6 04:01:30 localhost adoring_meitner[94473]: "partitions": {}, Dec 6 04:01:30 localhost adoring_meitner[94473]: "path": "/dev/sr0", Dec 6 04:01:30 localhost adoring_meitner[94473]: "removable": "1", Dec 6 04:01:30 localhost adoring_meitner[94473]: "rev": "2.5+", Dec 6 04:01:30 localhost adoring_meitner[94473]: "ro": "0", Dec 6 04:01:30 localhost adoring_meitner[94473]: "rotational": "1", Dec 6 04:01:30 localhost adoring_meitner[94473]: "sas_address": "", Dec 6 04:01:30 localhost adoring_meitner[94473]: "sas_device_handle": 
"", Dec 6 04:01:30 localhost adoring_meitner[94473]: "scheduler_mode": "mq-deadline", Dec 6 04:01:30 localhost adoring_meitner[94473]: "sectors": 0, Dec 6 04:01:30 localhost adoring_meitner[94473]: "sectorsize": "2048", Dec 6 04:01:30 localhost adoring_meitner[94473]: "size": 493568.0, Dec 6 04:01:30 localhost adoring_meitner[94473]: "support_discard": "0", Dec 6 04:01:30 localhost adoring_meitner[94473]: "type": "disk", Dec 6 04:01:30 localhost adoring_meitner[94473]: "vendor": "QEMU" Dec 6 04:01:30 localhost adoring_meitner[94473]: } Dec 6 04:01:30 localhost adoring_meitner[94473]: } Dec 6 04:01:30 localhost adoring_meitner[94473]: ] Dec 6 04:01:30 localhost systemd[1]: libpod-7fce6dc53bca70caf5d6f4d45f30d1c9be5d53588a91b413dc475e2a7b2b867e.scope: Deactivated successfully. Dec 6 04:01:30 localhost podman[94458]: 2025-12-06 09:01:30.748450059 +0000 UTC m=+1.104491087 container died 7fce6dc53bca70caf5d6f4d45f30d1c9be5d53588a91b413dc475e2a7b2b867e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=adoring_meitner, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, version=7, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, com.redhat.component=rhceph-container, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, ceph=True, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and 
supported base image., vcs-type=git, description=Red Hat Ceph Storage 7, release=1763362218, maintainer=Guillaume Abrioux ) Dec 6 04:01:30 localhost systemd[1]: var-lib-containers-storage-overlay-1acac956fccca1a85ff31a1bd7a29aed0c43ba5133fb96f91825c244fd91b6a5-merged.mount: Deactivated successfully. Dec 6 04:01:30 localhost podman[96588]: 2025-12-06 09:01:30.833181943 +0000 UTC m=+0.076317616 container remove 7fce6dc53bca70caf5d6f4d45f30d1c9be5d53588a91b413dc475e2a7b2b867e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=adoring_meitner, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=, GIT_CLEAN=True, GIT_BRANCH=main, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , ceph=True, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, version=7, io.buildah.version=1.41.4, architecture=x86_64, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container) Dec 6 04:01:30 localhost systemd[1]: libpod-conmon-7fce6dc53bca70caf5d6f4d45f30d1c9be5d53588a91b413dc475e2a7b2b867e.scope: Deactivated successfully. Dec 6 04:01:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. 
Dec 6 04:01:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. Dec 6 04:01:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. Dec 6 04:01:34 localhost podman[96737]: 2025-12-06 09:01:34.913918424 +0000 UTC m=+0.073466328 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, maintainer=OpenStack TripleO Team, release=1761123044, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, container_name=metrics_qdr, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, batch=17.1_20251118.1, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:46Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, version=17.1.12) Dec 6 04:01:34 localhost podman[96736]: 2025-12-06 09:01:34.953285364 +0000 UTC m=+0.113228131 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.buildah.version=1.41.4, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, architecture=x86_64, name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 6 04:01:34 localhost podman[96736]: 2025-12-06 09:01:34.969051899 +0000 UTC m=+0.128994646 container exec_died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, config_id=tripleo_step4, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, 
vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, version=17.1.12, vendor=Red Hat, Inc., managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2025-11-18T23:34:05Z, io.buildah.version=1.41.4, name=rhosp17/openstack-ovn-controller, container_name=ovn_controller) Dec 6 04:01:34 localhost podman[96736]: unhealthy Dec 6 04:01:34 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Main process exited, code=exited, status=1/FAILURE Dec 6 04:01:34 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Failed with result 'exit-code'. Dec 6 04:01:35 localhost systemd[1]: tmp-crun.S4ORjR.mount: Deactivated successfully. 
Dec 6 04:01:35 localhost podman[96738]: 2025-12-06 09:01:35.022521953 +0000 UTC m=+0.179246731 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, version=17.1.12, distribution-scope=public, io.buildah.version=1.41.4, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:14:25Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 6 04:01:35 localhost podman[96738]: 2025-12-06 09:01:35.040126223 +0000 UTC m=+0.196850991 container exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, batch=17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, url=https://www.redhat.com, vcs-type=git, tcib_managed=true, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, vendor=Red Hat, Inc., 
distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, release=1761123044, name=rhosp17/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, 
vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4) Dec 6 04:01:35 localhost podman[96738]: unhealthy Dec 6 04:01:35 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Main process exited, code=exited, status=1/FAILURE Dec 6 04:01:35 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Failed with result 'exit-code'. Dec 6 04:01:35 localhost podman[96737]: 2025-12-06 09:01:35.130380317 +0000 UTC m=+0.289928281 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, distribution-scope=public, architecture=x86_64, version=17.1.12, container_name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 6 04:01:35 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 04:01:36 localhost rhsm-service[6617]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. Dec 6 04:01:37 localhost rhsm-service[6617]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. Dec 6 04:01:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007. 
Dec 6 04:01:38 localhost podman[96862]: 2025-12-06 09:01:38.905058233 +0000 UTC m=+0.071987524 container health_status 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, config_id=tripleo_step5, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.buildah.version=1.41.4, managed_by=tripleo_ansible, version=17.1.12, container_name=nova_compute, io.openshift.expose-services=, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container) Dec 6 04:01:38 localhost podman[96862]: 2025-12-06 09:01:38.932375942 +0000 UTC m=+0.099305263 container exec_died 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 
'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, vcs-type=git, version=17.1.12, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, 
io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_id=tripleo_step5, release=1761123044, container_name=nova_compute, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 6 04:01:38 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Deactivated successfully. Dec 6 04:01:49 localhost sshd[96888]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:01:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc. Dec 6 04:01:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. Dec 6 04:01:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9. Dec 6 04:01:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. Dec 6 04:01:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6. 
Dec 6 04:01:59 localhost podman[96890]: 2025-12-06 09:01:59.914456595 +0000 UTC m=+0.073640084 container health_status 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.component=openstack-cron-container, architecture=x86_64, vcs-type=git, batch=17.1_20251118.1, distribution-scope=public, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:32Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, name=rhosp17/openstack-cron, version=17.1.12, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, container_name=logrotate_crond) Dec 6 04:01:59 localhost podman[96890]: 2025-12-06 09:01:59.92601737 +0000 UTC m=+0.085200829 container exec_died 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, build-date=2025-11-18T22:49:32Z, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, container_name=logrotate_crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, distribution-scope=public, com.redhat.component=openstack-cron-container, managed_by=tripleo_ansible, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vendor=Red Hat, Inc., io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64) Dec 6 04:01:59 localhost systemd[1]: 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.service: Deactivated successfully. Dec 6 04:02:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b. 
Dec 6 04:02:00 localhost podman[96902]: 2025-12-06 09:02:00.007213335 +0000 UTC m=+0.148864476 container health_status b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, tcib_managed=true, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vendor=Red Hat, Inc., io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, version=17.1.12, managed_by=tripleo_ansible, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, build-date=2025-11-19T00:12:45Z) Dec 6 04:02:00 localhost podman[96892]: 2025-12-06 09:02:00.036267749 +0000 UTC m=+0.185214763 container health_status a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-compute-container, managed_by=tripleo_ansible, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, release=1761123044, container_name=ceilometer_agent_compute, build-date=2025-11-19T00:11:48Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-compute, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public) Dec 6 04:02:00 localhost podman[96898]: 2025-12-06 09:01:59.986456518 +0000 UTC m=+0.132896056 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 
'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, container_name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, distribution-scope=public, build-date=2025-11-18T23:44:13Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, config_id=tripleo_step3, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp17/openstack-iscsid, 
vcs-type=git) Dec 6 04:02:00 localhost podman[96964]: 2025-12-06 09:02:00.075978039 +0000 UTC m=+0.066087332 container health_status 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 6 04:02:00 localhost podman[96891]: 2025-12-06 09:02:00.084511292 +0000 UTC m=+0.237003916 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, version=17.1.12, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2025-11-18T22:51:28Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=1761123044, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, tcib_managed=true, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.buildah.version=1.41.4) Dec 6 04:02:00 localhost podman[96891]: 2025-12-06 09:02:00.097016696 +0000 UTC m=+0.249509330 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
container_name=collectd, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:51:28Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', 
'/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, version=17.1.12, name=rhosp17/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-type=git, managed_by=tripleo_ansible, batch=17.1_20251118.1, url=https://www.redhat.com) Dec 6 04:02:00 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully. Dec 6 04:02:00 localhost podman[96898]: 2025-12-06 09:02:00.1202343 +0000 UTC m=+0.266673798 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, release=1761123044, managed_by=tripleo_ansible, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, build-date=2025-11-18T23:44:13Z, architecture=x86_64, tcib_managed=true, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git) Dec 6 04:02:00 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully. 
Dec 6 04:02:00 localhost podman[96902]: 2025-12-06 09:02:00.14922322 +0000 UTC m=+0.290874341 container exec_died b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.expose-services=, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, 
konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, io.buildah.version=1.41.4, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676) Dec 6 04:02:00 localhost systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: Deactivated successfully. Dec 6 04:02:00 localhost podman[96892]: 2025-12-06 09:02:00.169104641 +0000 UTC m=+0.318051645 container exec_died a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, build-date=2025-11-19T00:11:48Z, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, config_id=tripleo_step4) Dec 6 04:02:00 localhost systemd[1]: a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.service: Deactivated successfully. 
Dec 6 04:02:00 localhost podman[96964]: 2025-12-06 09:02:00.427016398 +0000 UTC m=+0.417125671 container exec_died 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, vendor=Red Hat, Inc., tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack 
osp-17.1 openstack-nova-compute, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12) Dec 6 04:02:00 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: Deactivated successfully. Dec 6 04:02:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. Dec 6 04:02:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. Dec 6 04:02:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. Dec 6 04:02:05 localhost systemd[1]: tmp-crun.DcARIZ.mount: Deactivated successfully. 
Dec 6 04:02:05 localhost podman[97021]: 2025-12-06 09:02:05.937144033 +0000 UTC m=+0.096589369 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, release=1761123044, io.buildah.version=1.41.4, io.openshift.expose-services=, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, config_id=tripleo_step4, container_name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, 
tcib_managed=true, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller) Dec 6 04:02:05 localhost podman[97021]: 2025-12-06 09:02:05.956109566 +0000 UTC m=+0.115554882 container exec_died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, version=17.1.12, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, managed_by=tripleo_ansible, tcib_managed=true, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, io.buildah.version=1.41.4, config_id=tripleo_step4, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 ovn-controller, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ovn-controller, distribution-scope=public, io.openshift.expose-services=, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc.) Dec 6 04:02:05 localhost podman[97023]: 2025-12-06 09:02:05.974667756 +0000 UTC m=+0.128117548 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, architecture=x86_64, config_id=tripleo_step4, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, release=1761123044, vendor=Red Hat, Inc., org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, build-date=2025-11-19T00:14:25Z, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 6 04:02:06 localhost podman[97021]: unhealthy Dec 6 04:02:06 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Main process exited, code=exited, status=1/FAILURE Dec 6 04:02:06 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Failed with result 'exit-code'. 
Dec 6 04:02:06 localhost podman[97022]: 2025-12-06 09:02:06.083423189 +0000 UTC m=+0.239810041 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-11-18T22:49:46Z, architecture=x86_64, 
container_name=metrics_qdr, version=17.1.12, batch=17.1_20251118.1, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, io.buildah.version=1.41.4, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 6 04:02:06 localhost podman[97023]: 2025-12-06 09:02:06.116920988 +0000 UTC m=+0.270370750 container exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, vcs-type=git, version=17.1.12, io.openshift.expose-services=, distribution-scope=public, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, release=1761123044, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, url=https://www.redhat.com, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 6 04:02:06 localhost podman[97023]: unhealthy Dec 6 04:02:06 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Main process exited, code=exited, status=1/FAILURE Dec 6 04:02:06 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Failed with result 
'exit-code'. Dec 6 04:02:06 localhost podman[97022]: 2025-12-06 09:02:06.282125936 +0000 UTC m=+0.438512828 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, build-date=2025-11-18T22:49:46Z, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, 
url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, io.buildah.version=1.41.4, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, batch=17.1_20251118.1, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, release=1761123044, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd) Dec 6 04:02:06 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 04:02:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007. Dec 6 04:02:09 localhost systemd[1]: tmp-crun.V6oP0B.mount: Deactivated successfully. Dec 6 04:02:09 localhost podman[97087]: 2025-12-06 09:02:09.928686034 +0000 UTC m=+0.088881193 container health_status 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, 
org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step5, container_name=nova_compute, url=https://www.redhat.com, distribution-scope=public, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', 
'/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, release=1761123044, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z) Dec 6 04:02:09 localhost podman[97087]: 2025-12-06 09:02:09.961326437 +0000 UTC m=+0.121521616 container exec_died 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_compute, url=https://www.redhat.com, io.openshift.expose-services=, distribution-scope=public, release=1761123044, version=17.1.12, build-date=2025-11-19T00:36:58Z, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 6 04:02:09 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Deactivated successfully. Dec 6 04:02:21 localhost sshd[97114]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:02:26 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... 
Dec 6 04:02:26 localhost recover_tripleo_nova_virtqemud[97116]: 61814 Dec 6 04:02:26 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 6 04:02:26 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 6 04:02:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc. Dec 6 04:02:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. Dec 6 04:02:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b. Dec 6 04:02:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9. Dec 6 04:02:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. Dec 6 04:02:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6. Dec 6 04:02:30 localhost systemd[1]: tmp-crun.q4nTia.mount: Deactivated successfully. 
Dec 6 04:02:30 localhost podman[97132]: 2025-12-06 09:02:30.965654785 +0000 UTC m=+0.110520587 container health_status b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:12:45Z, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, container_name=ceilometer_agent_ipmi, version=17.1.12, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., vcs-type=git, architecture=x86_64, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 6 04:02:31 localhost podman[97119]: 2025-12-06 09:02:31.007856062 +0000 UTC m=+0.164136035 container health_status 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, vcs-type=git, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 6 04:02:31 localhost podman[97132]: 2025-12-06 09:02:31.011126103 +0000 UTC m=+0.155991945 container exec_died b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, release=1761123044, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, config_id=tripleo_step4, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, build-date=2025-11-19T00:12:45Z, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., batch=17.1_20251118.1) Dec 6 04:02:31 localhost podman[97117]: 2025-12-06 09:02:31.020043857 +0000 UTC 
m=+0.181539811 container health_status 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, distribution-scope=public, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., version=17.1.12, maintainer=OpenStack TripleO Team, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, managed_by=tripleo_ansible, 
release=1761123044, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-cron-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, name=rhosp17/openstack-cron, tcib_managed=true, url=https://www.redhat.com, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron) Dec 6 04:02:31 localhost systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: Deactivated successfully. Dec 6 04:02:31 localhost podman[97117]: 2025-12-06 09:02:31.028977512 +0000 UTC m=+0.190473466 container exec_died 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, build-date=2025-11-18T22:49:32Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1761123044, io.buildah.version=1.41.4, com.redhat.component=openstack-cron-container, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, vendor=Red Hat, Inc., vcs-type=git, url=https://www.redhat.com, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 6 04:02:31 localhost systemd[1]: 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.service: Deactivated successfully. 
Dec 6 04:02:31 localhost podman[97118]: 2025-12-06 09:02:30.950059576 +0000 UTC m=+0.108879457 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, 
distribution-scope=public, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, container_name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, io.openshift.expose-services=, vcs-type=git, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, io.buildah.version=1.41.4, build-date=2025-11-18T22:51:28Z, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3) Dec 6 04:02:31 localhost podman[97120]: 2025-12-06 09:02:31.074696247 +0000 UTC m=+0.227247036 container health_status a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, url=https://www.redhat.com, release=1761123044, name=rhosp17/openstack-ceilometer-compute, batch=17.1_20251118.1, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:11:48Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, tcib_managed=true, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64) Dec 6 04:02:31 localhost podman[97118]: 2025-12-06 09:02:31.136024022 +0000 UTC m=+0.294843893 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, container_name=collectd, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, build-date=2025-11-18T22:51:28Z, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, name=rhosp17/openstack-collectd, config_id=tripleo_step3, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, managed_by=tripleo_ansible, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', 
'/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-collectd-container, release=1761123044, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true) Dec 6 04:02:31 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully. Dec 6 04:02:31 localhost podman[97126]: 2025-12-06 09:02:31.103626936 +0000 UTC m=+0.255981649 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com, distribution-scope=public, version=17.1.12, io.openshift.expose-services=, build-date=2025-11-18T23:44:13Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, release=1761123044, container_name=iscsid, vendor=Red Hat, Inc., vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true) Dec 6 04:02:31 localhost podman[97120]: 2025-12-06 09:02:31.16298041 +0000 UTC m=+0.315531209 container exec_died a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vendor=Red Hat, Inc., release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, url=https://www.redhat.com, distribution-scope=public, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, build-date=2025-11-19T00:11:48Z, name=rhosp17/openstack-ceilometer-compute, managed_by=tripleo_ansible, container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 6 04:02:31 localhost systemd[1]: a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.service: Deactivated successfully. 
Dec 6 04:02:31 localhost podman[97126]: 2025-12-06 09:02:31.186132852 +0000 UTC m=+0.338487565 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, name=rhosp17/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, version=17.1.12, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, tcib_managed=true, vcs-type=git, build-date=2025-11-18T23:44:13Z, io.openshift.expose-services=, architecture=x86_64, container_name=iscsid, vendor=Red Hat, Inc., io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}) Dec 6 04:02:31 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully. Dec 6 04:02:31 localhost podman[97119]: 2025-12-06 09:02:31.379079612 +0000 UTC m=+0.535359575 container exec_died 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, url=https://www.redhat.com, release=1761123044, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, version=17.1.12, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, distribution-scope=public) Dec 6 04:02:31 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: Deactivated successfully. Dec 6 04:02:35 localhost sshd[97323]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:02:36 localhost sshd[97325]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:02:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. Dec 6 04:02:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. 
Dec 6 04:02:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. Dec 6 04:02:36 localhost podman[97329]: 2025-12-06 09:02:36.839965101 +0000 UTC m=+0.088977825 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, io.openshift.expose-services=, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.buildah.version=1.41.4, url=https://www.redhat.com, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container) Dec 6 04:02:36 localhost podman[97327]: 2025-12-06 09:02:36.879000171 +0000 UTC m=+0.130632645 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 
'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, architecture=x86_64, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, managed_by=tripleo_ansible, build-date=2025-11-18T23:34:05Z, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, batch=17.1_20251118.1, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, tcib_managed=true, release=1761123044) Dec 6 04:02:36 localhost podman[97329]: 2025-12-06 09:02:36.909664084 +0000 UTC m=+0.158676818 container exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, release=1761123044, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 
'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20251118.1, io.buildah.version=1.41.4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, vendor=Red Hat, Inc., 
distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, build-date=2025-11-19T00:14:25Z, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, url=https://www.redhat.com, version=17.1.12) Dec 6 04:02:36 localhost podman[97327]: 2025-12-06 09:02:36.928171863 +0000 UTC m=+0.179804387 container exec_died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, vcs-type=git, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, build-date=2025-11-18T23:34:05Z, com.redhat.component=openstack-ovn-controller-container, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, io.buildah.version=1.41.4, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, architecture=x86_64, batch=17.1_20251118.1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, distribution-scope=public, io.openshift.expose-services=, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com) Dec 6 04:02:36 localhost podman[97327]: unhealthy Dec 6 04:02:36 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Main process exited, code=exited, status=1/FAILURE Dec 6 04:02:36 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Failed with result 'exit-code'. Dec 6 04:02:36 localhost podman[97329]: unhealthy Dec 6 04:02:36 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Main process exited, code=exited, status=1/FAILURE Dec 6 04:02:36 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Failed with result 'exit-code'. 
Dec 6 04:02:36 localhost podman[97328]: 2025-12-06 09:02:36.935524328 +0000 UTC m=+0.187203734 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_id=tripleo_step1, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, distribution-scope=public, name=rhosp17/openstack-qdrouterd, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, tcib_managed=true, container_name=metrics_qdr, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 6 04:02:37 localhost podman[97328]: 2025-12-06 09:02:37.18902188 +0000 UTC m=+0.440701276 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, version=17.1.12, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:46Z, vcs-type=git, managed_by=tripleo_ansible, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=) Dec 6 04:02:37 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 04:02:37 localhost systemd[1]: tmp-crun.ah9q9I.mount: Deactivated successfully. Dec 6 04:02:38 localhost sshd[97398]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:02:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007. 
Dec 6 04:02:40 localhost podman[97400]: 2025-12-06 09:02:40.521157742 +0000 UTC m=+0.086920872 container health_status 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, tcib_managed=true, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, release=1761123044, container_name=nova_compute, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, architecture=x86_64, config_id=tripleo_step5, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 6 04:02:40 localhost podman[97400]: 2025-12-06 09:02:40.548107651 +0000 UTC m=+0.113870781 container exec_died 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, description=Red 
Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, build-date=2025-11-19T00:36:58Z, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', 
'/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step5, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., tcib_managed=true, batch=17.1_20251118.1) Dec 6 04:02:40 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Deactivated successfully. Dec 6 04:02:58 localhost sshd[97426]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:03:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc. Dec 6 04:03:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. Dec 6 04:03:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b. Dec 6 04:03:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9. Dec 6 04:03:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. Dec 6 04:03:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6. 
Dec 6 04:03:01 localhost podman[97440]: 2025-12-06 09:03:01.942709306 +0000 UTC m=+0.091661018 container health_status b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, vendor=Red Hat, Inc., url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, release=1761123044, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, architecture=x86_64, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z) Dec 6 04:03:01 localhost podman[97429]: 2025-12-06 09:03:01.926035774 +0000 UTC m=+0.084531509 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, container_name=collectd, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, url=https://www.redhat.com, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, name=rhosp17/openstack-collectd, tcib_managed=true, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, batch=17.1_20251118.1, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:51:28Z, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd) Dec 6 04:03:01 localhost podman[97440]: 2025-12-06 09:03:01.992032742 +0000 UTC m=+0.140984444 container exec_died b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-type=git, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, vendor=Red Hat, Inc., release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, version=17.1.12) Dec 6 04:03:02 localhost systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: Deactivated successfully. Dec 6 04:03:02 localhost podman[97428]: 2025-12-06 09:03:01.977914969 +0000 UTC m=+0.135478436 container health_status 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, name=rhosp17/openstack-cron, config_id=tripleo_step4, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, distribution-scope=public, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=logrotate_crond, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, vendor=Red Hat, Inc., url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1761123044, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:32Z) Dec 6 04:03:02 localhost podman[97431]: 2025-12-06 09:03:02.037918752 +0000 UTC m=+0.187179694 container health_status a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, version=17.1.12, io.buildah.version=1.41.4, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, vcs-type=git, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:11:48Z, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container) Dec 6 04:03:02 localhost podman[97432]: 2025-12-06 09:03:02.084583107 +0000 UTC m=+0.237035227 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, container_name=iscsid, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, 
batch=17.1_20251118.1, url=https://www.redhat.com, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp17/openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, version=17.1.12, com.redhat.component=openstack-iscsid-container, build-date=2025-11-18T23:44:13Z, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.4, io.openshift.expose-services=, tcib_managed=true, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 6 04:03:02 localhost podman[97431]: 2025-12-06 09:03:02.08828276 +0000 UTC m=+0.237543692 container exec_died a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, version=17.1.12, distribution-scope=public, architecture=x86_64, container_name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, com.redhat.component=openstack-ceilometer-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 6 04:03:02 localhost systemd[1]: a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.service: Deactivated successfully. Dec 6 04:03:02 localhost podman[97428]: 2025-12-06 09:03:02.110475703 +0000 UTC m=+0.268039230 container exec_died 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.expose-services=, config_id=tripleo_step4, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, name=rhosp17/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vendor=Red Hat, Inc., io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, version=17.1.12, managed_by=tripleo_ansible, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, distribution-scope=public, build-date=2025-11-18T22:49:32Z, summary=Red Hat OpenStack Platform 17.1 cron) Dec 6 04:03:02 localhost podman[97432]: 2025-12-06 09:03:02.119118068 +0000 UTC m=+0.271570198 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vcs-type=git, url=https://www.redhat.com, build-date=2025-11-18T23:44:13Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, container_name=iscsid, version=17.1.12, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 
17.1 iscsid, io.buildah.version=1.41.4, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d) Dec 6 04:03:02 localhost systemd[1]: 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.service: Deactivated successfully. Dec 6 04:03:02 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully. Dec 6 04:03:02 localhost podman[97430]: 2025-12-06 09:03:02.189655726 +0000 UTC m=+0.343357484 container health_status 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, name=rhosp17/openstack-nova-compute, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, release=1761123044, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1) Dec 6 04:03:02 localhost podman[97429]: 2025-12-06 09:03:02.213704475 +0000 UTC m=+0.372200250 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-collectd, com.redhat.component=openstack-collectd-container, build-date=2025-11-18T22:51:28Z, io.buildah.version=1.41.4, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, release=1761123044, version=17.1.12, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, config_id=tripleo_step3, distribution-scope=public, maintainer=OpenStack TripleO Team, container_name=collectd, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, architecture=x86_64) Dec 6 04:03:02 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully. 
Dec 6 04:03:02 localhost podman[97430]: 2025-12-06 09:03:02.592318932 +0000 UTC m=+0.746020730 container exec_died 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, 
distribution-scope=public, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container) Dec 6 04:03:02 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: Deactivated successfully. Dec 6 04:03:02 localhost systemd[1]: tmp-crun.iPBtZC.mount: Deactivated successfully. Dec 6 04:03:04 localhost sshd[97561]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:03:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. Dec 6 04:03:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. Dec 6 04:03:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. Dec 6 04:03:07 localhost systemd[1]: tmp-crun.ZpAJTr.mount: Deactivated successfully. 
Dec 6 04:03:07 localhost podman[97564]: 2025-12-06 09:03:07.929908532 +0000 UTC m=+0.094216406 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, vendor=Red Hat, Inc., vcs-type=git, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.4, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 6 04:03:07 localhost systemd[1]: tmp-crun.O52ROa.mount: Deactivated successfully. Dec 6 04:03:07 localhost podman[97565]: 2025-12-06 09:03:07.979027062 +0000 UTC m=+0.141302014 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, batch=17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, version=17.1.12, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1761123044, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4) Dec 6 04:03:08 localhost podman[97565]: 2025-12-06 09:03:08.024459449 +0000 UTC m=+0.186734411 container exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 
(image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, architecture=x86_64, managed_by=tripleo_ansible, version=17.1.12, 
tcib_managed=true, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, batch=17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 6 04:03:08 localhost podman[97563]: 2025-12-06 09:03:08.02451742 +0000 UTC m=+0.188942097 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_controller, batch=17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, 
io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, managed_by=tripleo_ansible, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, release=1761123044, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, url=https://www.redhat.com, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 6 04:03:08 localhost podman[97565]: unhealthy Dec 6 04:03:08 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Main process exited, code=exited, status=1/FAILURE Dec 6 04:03:08 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Failed with result 'exit-code'. 
Dec 6 04:03:08 localhost podman[97563]: 2025-12-06 09:03:08.110040789 +0000 UTC m=+0.274465426 container exec_died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, build-date=2025-11-18T23:34:05Z, distribution-scope=public, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, batch=17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, io.buildah.version=1.41.4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, 
name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=) Dec 6 04:03:08 localhost podman[97563]: unhealthy Dec 6 04:03:08 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Main process exited, code=exited, status=1/FAILURE Dec 6 04:03:08 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Failed with result 'exit-code'. Dec 6 04:03:08 localhost podman[97564]: 2025-12-06 09:03:08.167601998 +0000 UTC m=+0.331909912 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, url=https://www.redhat.com, release=1761123044, vcs-type=git, batch=17.1_20251118.1, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, io.openshift.expose-services=, version=17.1.12, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, distribution-scope=public, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-11-18T22:49:46Z) Dec 6 04:03:08 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 04:03:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007. 
Dec 6 04:03:10 localhost podman[97629]: 2025-12-06 09:03:10.919017793 +0000 UTC m=+0.072764807 container health_status 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, config_id=tripleo_step5, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.openshift.expose-services=, container_name=nova_compute, distribution-scope=public, release=1761123044, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, vcs-type=git, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container) Dec 6 04:03:10 localhost podman[97629]: 2025-12-06 09:03:10.94490543 +0000 UTC m=+0.098652484 container exec_died 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-type=git, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, vendor=Red Hat, Inc., release=1761123044, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step5, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, architecture=x86_64, url=https://www.redhat.com, container_name=nova_compute, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute) Dec 6 04:03:10 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Deactivated successfully. Dec 6 04:03:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc. Dec 6 04:03:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. Dec 6 04:03:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b. Dec 6 04:03:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9. Dec 6 04:03:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. Dec 6 04:03:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6. Dec 6 04:03:32 localhost systemd[1]: tmp-crun.K30iuB.mount: Deactivated successfully. 
Dec 6 04:03:32 localhost podman[97658]: 2025-12-06 09:03:32.947917312 +0000 UTC m=+0.094735193 container health_status 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, managed_by=tripleo_ansible, version=17.1.12, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, container_name=nova_migration_target, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, url=https://www.redhat.com, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 6 04:03:32 localhost podman[97660]: 2025-12-06 09:03:32.987630702 +0000 UTC m=+0.135614719 container health_status a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, container_name=ceilometer_agent_compute, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:11:48Z, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, release=1761123044, konflux.additional-tags=17.1.12 
17.1_20251118.1, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-compute, vcs-type=git, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12, vendor=Red Hat, Inc., vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, io.openshift.expose-services=) Dec 6 04:03:33 localhost podman[97660]: 2025-12-06 09:03:33.009239447 +0000 UTC m=+0.157223464 container exec_died a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:11:48Z, 
config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, release=1761123044, distribution-scope=public, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-type=git, version=17.1.12, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=) Dec 6 04:03:33 localhost systemd[1]: a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.service: Deactivated successfully. Dec 6 04:03:33 localhost podman[97656]: 2025-12-06 09:03:33.056665354 +0000 UTC m=+0.214391521 container health_status 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, name=rhosp17/openstack-cron, vcs-type=git, batch=17.1_20251118.1, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, build-date=2025-11-18T22:49:32Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, container_name=logrotate_crond, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, io.openshift.expose-services=, tcib_managed=true, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron) Dec 6 04:03:33 localhost podman[97656]: 2025-12-06 09:03:33.093200377 +0000 UTC m=+0.250926514 container exec_died 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, url=https://www.redhat.com, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, managed_by=tripleo_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, com.redhat.component=openstack-cron-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, architecture=x86_64, container_name=logrotate_crond, vendor=Red Hat, Inc., release=1761123044, vcs-type=git, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4) Dec 6 04:03:33 localhost podman[97665]: 2025-12-06 09:03:33.106044901 +0000 UTC m=+0.249933932 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 
'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-iscsid, build-date=2025-11-18T23:44:13Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, container_name=iscsid, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vcs-type=git, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, release=1761123044, managed_by=tripleo_ansible, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, maintainer=OpenStack TripleO Team, config_id=tripleo_step3) Dec 6 04:03:33 
localhost systemd[1]: 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.service: Deactivated successfully. Dec 6 04:03:33 localhost podman[97667]: 2025-12-06 09:03:33.143983968 +0000 UTC m=+0.286009982 container health_status b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, container_name=ceilometer_agent_ipmi, release=1761123044, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:12:45Z, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, 
org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, architecture=x86_64, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 6 04:03:33 localhost podman[97665]: 2025-12-06 09:03:33.167217232 +0000 UTC m=+0.311106233 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, io.buildah.version=1.41.4, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, vcs-type=git, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, distribution-scope=public, vendor=Red Hat, Inc., release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 
'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid) Dec 6 04:03:33 localhost podman[97667]: 2025-12-06 09:03:33.172150324 +0000 UTC m=+0.314176368 container exec_died b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, build-date=2025-11-19T00:12:45Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, distribution-scope=public, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, tcib_managed=true) Dec 6 04:03:33 localhost 
systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: Deactivated successfully. Dec 6 04:03:33 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully. Dec 6 04:03:33 localhost podman[97657]: 2025-12-06 09:03:33.240972698 +0000 UTC m=+0.392241766 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vendor=Red Hat, Inc., batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, name=rhosp17/openstack-collectd, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, build-date=2025-11-18T22:51:28Z, maintainer=OpenStack TripleO Team, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.buildah.version=1.41.4, version=17.1.12) Dec 6 04:03:33 localhost podman[97657]: 2025-12-06 09:03:33.25404446 +0000 UTC m=+0.405313588 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_id=tripleo_step3, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, build-date=2025-11-18T22:51:28Z, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, vcs-type=git, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, container_name=collectd, batch=17.1_20251118.1, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp17/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=) Dec 6 04:03:33 localhost 
systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully. Dec 6 04:03:33 localhost podman[97658]: 2025-12-06 09:03:33.303064347 +0000 UTC m=+0.449882198 container exec_died 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, 
url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., config_id=tripleo_step4, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, version=17.1.12) Dec 6 04:03:33 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: Deactivated successfully. Dec 6 04:03:34 localhost systemd[1]: tmp-crun.Ju61ui.mount: Deactivated successfully. Dec 6 04:03:34 localhost podman[97884]: 2025-12-06 09:03:34.421292316 +0000 UTC m=+0.090624707 container exec ecefb99ff092b207f760d60e8fd97152c624df0e1016549024435ae641ee2e06 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548789, GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, 
distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, architecture=x86_64, maintainer=Guillaume Abrioux , name=rhceph, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, release=1763362218) Dec 6 04:03:34 localhost podman[97884]: 2025-12-06 09:03:34.52687144 +0000 UTC m=+0.196203801 container exec_died ecefb99ff092b207f760d60e8fd97152c624df0e1016549024435ae641ee2e06 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548789, ceph=True, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=rhceph-container, RELEASE=main) Dec 6 04:03:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. 
Dec 6 04:03:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. Dec 6 04:03:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. Dec 6 04:03:38 localhost systemd[1]: tmp-crun.RUo0Es.mount: Deactivated successfully. Dec 6 04:03:38 localhost podman[98028]: 2025-12-06 09:03:38.947840888 +0000 UTC m=+0.102997086 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.openshift.expose-services=, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, release=1761123044, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, version=17.1.12, name=rhosp17/openstack-ovn-controller, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible) Dec 6 04:03:38 localhost systemd[1]: tmp-crun.TITrZe.mount: Deactivated successfully. Dec 6 04:03:38 localhost podman[98030]: 2025-12-06 09:03:38.993251834 +0000 UTC m=+0.144503823 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1761123044, version=17.1.12, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, tcib_managed=true, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4) Dec 6 04:03:39 localhost podman[98030]: 2025-12-06 09:03:39.00809894 +0000 UTC m=+0.159350919 container exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, 
name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:14:25Z, vcs-type=git, 
vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, name=rhosp17/openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., release=1761123044, managed_by=tripleo_ansible, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container) Dec 6 04:03:39 localhost podman[98030]: unhealthy Dec 6 04:03:39 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Main process exited, code=exited, status=1/FAILURE Dec 6 04:03:39 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Failed with result 'exit-code'. 
Dec 6 04:03:39 localhost podman[98028]: 2025-12-06 09:03:39.032253603 +0000 UTC m=+0.187409801 container exec_died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, container_name=ovn_controller, batch=17.1_20251118.1, tcib_managed=true, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:34:05Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, version=17.1.12, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, name=rhosp17/openstack-ovn-controller, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, url=https://www.redhat.com, io.buildah.version=1.41.4, managed_by=tripleo_ansible) Dec 6 04:03:39 localhost podman[98028]: unhealthy Dec 6 04:03:39 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Main process exited, code=exited, status=1/FAILURE Dec 6 04:03:39 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Failed with result 'exit-code'. Dec 6 04:03:39 localhost podman[98029]: 2025-12-06 09:03:39.088500391 +0000 UTC m=+0.244223577 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, io.buildah.version=1.41.4, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1761123044, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, vcs-type=git, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:46Z, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 6 04:03:39 localhost podman[98029]: 2025-12-06 09:03:39.286174886 +0000 UTC m=+0.441898102 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, vcs-type=git, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, batch=17.1_20251118.1, io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., architecture=x86_64, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team) Dec 6 04:03:39 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. 
Dec 6 04:03:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007. Dec 6 04:03:41 localhost podman[98099]: 2025-12-06 09:03:41.94027335 +0000 UTC m=+0.096885099 container health_status 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.openshift.expose-services=, version=17.1.12, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, managed_by=tripleo_ansible, container_name=nova_compute, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 
'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container) Dec 6 04:03:41 localhost podman[98099]: 2025-12-06 09:03:41.996648533 +0000 UTC m=+0.153260222 container exec_died 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, architecture=x86_64, config_id=tripleo_step5, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, container_name=nova_compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, release=1761123044, tcib_managed=true) Dec 6 04:03:42 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Deactivated successfully. Dec 6 04:03:45 localhost sshd[98125]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:03:46 localhost sshd[98127]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:04:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc. Dec 6 04:04:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. Dec 6 04:04:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b. Dec 6 04:04:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9. Dec 6 04:04:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. Dec 6 04:04:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6. 
Dec 6 04:04:04 localhost podman[98143]: 2025-12-06 09:04:03.968799596 +0000 UTC m=+0.107496155 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, name=rhosp17/openstack-iscsid, 
vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.expose-services=, build-date=2025-11-18T23:44:13Z, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.4) Dec 6 04:04:04 localhost podman[98131]: 2025-12-06 09:04:04.026956814 +0000 UTC m=+0.166794858 container health_status 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, version=17.1.12, release=1761123044, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, container_name=nova_migration_target, vendor=Red Hat, Inc., config_id=tripleo_step4, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, url=https://www.redhat.com) Dec 6 04:04:04 localhost podman[98130]: 2025-12-06 09:04:03.938996741 +0000 UTC m=+0.085697106 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, container_name=collectd, release=1761123044, maintainer=OpenStack TripleO Team, tcib_managed=true, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:51:28Z, vcs-type=git, batch=17.1_20251118.1, name=rhosp17/openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, io.buildah.version=1.41.4, url=https://www.redhat.com, managed_by=tripleo_ansible, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd) Dec 6 04:04:04 localhost podman[98143]: 2025-12-06 09:04:04.048254509 +0000 UTC m=+0.186951058 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_id=tripleo_step3, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, name=rhosp17/openstack-iscsid, build-date=2025-11-18T23:44:13Z, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', 
'/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-iscsid-container, distribution-scope=public, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044) Dec 6 04:04:04 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully. Dec 6 04:04:04 localhost podman[98130]: 2025-12-06 09:04:04.129092292 +0000 UTC m=+0.275792697 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, build-date=2025-11-18T22:51:28Z, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, distribution-scope=public, io.buildah.version=1.41.4, url=https://www.redhat.com, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, 
release=1761123044, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, container_name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, batch=17.1_20251118.1, com.redhat.component=openstack-collectd-container) Dec 6 04:04:04 localhost podman[98144]: 2025-12-06 09:04:04.137283184 +0000 UTC m=+0.266307486 container health_status b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, 
version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, build-date=2025-11-19T00:12:45Z, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676) Dec 6 04:04:04 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully. Dec 6 04:04:04 localhost podman[98132]: 2025-12-06 09:04:04.002373428 +0000 UTC m=+0.140211160 container health_status a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vendor=Red Hat, Inc., 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_compute, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.expose-services=, config_id=tripleo_step4, version=17.1.12, name=rhosp17/openstack-ceilometer-compute, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 6 04:04:04 localhost podman[98129]: 2025-12-06 09:04:04.103945429 +0000 UTC m=+0.251106698 container health_status 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, release=1761123044, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, name=rhosp17/openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond, config_id=tripleo_step4, io.buildah.version=1.41.4, architecture=x86_64, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:32Z, io.openshift.expose-services=) Dec 6 04:04:04 localhost podman[98144]: 2025-12-06 09:04:04.162009974 +0000 UTC m=+0.291034316 container exec_died b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 
(image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, architecture=x86_64, name=rhosp17/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., distribution-scope=public, batch=17.1_20251118.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, vcs-type=git, container_name=ceilometer_agent_ipmi, url=https://www.redhat.com, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:12:45Z, version=17.1.12, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676) Dec 6 04:04:04 localhost systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: Deactivated successfully. Dec 6 04:04:04 localhost podman[98132]: 2025-12-06 09:04:04.182870806 +0000 UTC m=+0.320708498 container exec_died a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, batch=17.1_20251118.1, io.buildah.version=1.41.4, vendor=Red Hat, Inc., release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible) Dec 6 04:04:04 localhost systemd[1]: a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.service: Deactivated successfully. 
Dec 6 04:04:04 localhost podman[98129]: 2025-12-06 09:04:04.237126663 +0000 UTC m=+0.384287952 container exec_died 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, vcs-type=git, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, url=https://www.redhat.com, io.openshift.expose-services=, name=rhosp17/openstack-cron, version=17.1.12, config_id=tripleo_step4, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, com.redhat.component=openstack-cron-container, batch=17.1_20251118.1, 
konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 6 04:04:04 localhost systemd[1]: 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.service: Deactivated successfully. Dec 6 04:04:04 localhost podman[98131]: 2025-12-06 09:04:04.421598493 +0000 UTC m=+0.561436547 container exec_died 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, release=1761123044, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., tcib_managed=true, container_name=nova_migration_target, io.openshift.expose-services=, maintainer=OpenStack TripleO Team) Dec 6 04:04:04 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: Deactivated successfully. Dec 6 04:04:04 localhost systemd[1]: tmp-crun.seJCPj.mount: Deactivated successfully. Dec 6 04:04:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. Dec 6 04:04:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. Dec 6 04:04:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. 
Dec 6 04:04:09 localhost sshd[98281]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:04:09 localhost systemd[1]: tmp-crun.4qnkEq.mount: Deactivated successfully. Dec 6 04:04:09 localhost podman[98259]: 2025-12-06 09:04:09.915802957 +0000 UTC m=+0.078797312 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.buildah.version=1.41.4, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, distribution-scope=public, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, maintainer=OpenStack TripleO Team, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2025-11-18T23:34:05Z, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ovn-controller, 
version=17.1.12, io.openshift.expose-services=, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, url=https://www.redhat.com) Dec 6 04:04:09 localhost podman[98259]: 2025-12-06 09:04:09.92729452 +0000 UTC m=+0.090288855 container exec_died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, tcib_managed=true, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_controller, vcs-type=git, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, distribution-scope=public, io.buildah.version=1.41.4, architecture=x86_64, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, name=rhosp17/openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, managed_by=tripleo_ansible, version=17.1.12) Dec 6 04:04:09 localhost podman[98259]: unhealthy Dec 6 04:04:09 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Main process exited, code=exited, status=1/FAILURE Dec 6 04:04:09 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Failed with result 'exit-code'. Dec 6 04:04:09 localhost systemd[1]: tmp-crun.pib6Sr.mount: Deactivated successfully. Dec 6 04:04:10 localhost podman[98260]: 2025-12-06 09:04:10.020891228 +0000 UTC m=+0.179286302 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, io.buildah.version=1.41.4, version=17.1.12, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, 
build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, vendor=Red Hat, Inc., architecture=x86_64, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 6 04:04:10 localhost podman[98261]: 2025-12-06 09:04:09.989133931 +0000 UTC m=+0.143813791 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, distribution-scope=public, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., version=17.1.12, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, managed_by=tripleo_ansible, build-date=2025-11-19T00:14:25Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.openshift.expose-services=, config_id=tripleo_step4, maintainer=OpenStack TripleO Team) Dec 6 04:04:10 localhost podman[98261]: 2025-12-06 09:04:10.072087261 +0000 UTC m=+0.226767141 container exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc., version=17.1.12, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20251118.1) Dec 6 04:04:10 localhost podman[98261]: unhealthy Dec 6 04:04:10 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Main process exited, code=exited, status=1/FAILURE Dec 6 04:04:10 localhost systemd[1]: 
87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Failed with result 'exit-code'. Dec 6 04:04:10 localhost podman[98260]: 2025-12-06 09:04:10.232000516 +0000 UTC m=+0.390395590 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, tcib_managed=true, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 6 04:04:10 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 04:04:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007. Dec 6 04:04:12 localhost systemd[1]: tmp-crun.YD4fZM.mount: Deactivated successfully. 
Dec 6 04:04:12 localhost podman[98331]: 2025-12-06 09:04:12.92450612 +0000 UTC m=+0.087527471 container health_status 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', 
'/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., config_id=tripleo_step5, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=) Dec 6 04:04:12 localhost podman[98331]: 2025-12-06 09:04:12.954150971 +0000 UTC m=+0.117172322 container exec_died 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, config_id=tripleo_step5, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vcs-type=git, io.buildah.version=1.41.4, architecture=x86_64, container_name=nova_compute, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, distribution-scope=public, managed_by=tripleo_ansible, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute) Dec 6 04:04:12 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Deactivated successfully. Dec 6 04:04:14 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 6 04:04:14 localhost recover_tripleo_nova_virtqemud[98359]: 61814 Dec 6 04:04:14 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 6 04:04:14 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 6 04:04:24 localhost sshd[98360]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:04:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc. Dec 6 04:04:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. Dec 6 04:04:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b. Dec 6 04:04:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9. Dec 6 04:04:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. Dec 6 04:04:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6. 
Dec 6 04:04:34 localhost systemd[1]: tmp-crun.y8WNVu.mount: Deactivated successfully. Dec 6 04:04:34 localhost podman[98364]: 2025-12-06 09:04:34.942538345 +0000 UTC m=+0.092866775 container health_status 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
com.redhat.component=openstack-nova-compute-container, distribution-scope=public, managed_by=tripleo_ansible, version=17.1.12, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, vcs-type=git, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, container_name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, maintainer=OpenStack TripleO Team) Dec 6 04:04:34 localhost systemd[1]: tmp-crun.TBBO4Q.mount: Deactivated successfully. Dec 6 04:04:35 localhost podman[98363]: 2025-12-06 09:04:35.002731474 +0000 UTC m=+0.157734338 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:51:28Z, managed_by=tripleo_ansible, release=1761123044, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 
'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-type=git, distribution-scope=public, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, vendor=Red Hat, Inc., batch=17.1_20251118.1, architecture=x86_64, version=17.1.12, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 6 04:04:35 localhost podman[98363]: 2025-12-06 09:04:35.012151015 +0000 UTC m=+0.167153849 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, com.redhat.component=openstack-collectd-container, build-date=2025-11-18T22:51:28Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, distribution-scope=public, architecture=x86_64, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, managed_by=tripleo_ansible, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack 
Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 6 04:04:35 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully. Dec 6 04:04:35 localhost podman[98362]: 2025-12-06 09:04:35.090995488 +0000 UTC m=+0.245734034 container health_status 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, url=https://www.redhat.com, build-date=2025-11-18T22:49:32Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp17/openstack-cron, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, tcib_managed=true) Dec 6 04:04:35 localhost podman[98368]: 2025-12-06 09:04:35.15159939 +0000 UTC m=+0.296467582 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, version=17.1.12, name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:44:13Z, distribution-scope=public, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.4, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, release=1761123044, container_name=iscsid, tcib_managed=true, 
vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., architecture=x86_64, managed_by=tripleo_ansible) Dec 6 04:04:35 localhost podman[98368]: 2025-12-06 09:04:35.160087801 +0000 UTC m=+0.304956083 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, vcs-type=git, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, release=1761123044, container_name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, url=https://www.redhat.com, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d) Dec 6 04:04:35 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully. Dec 6 04:04:35 localhost podman[98379]: 2025-12-06 09:04:35.193962853 +0000 UTC m=+0.336656679 container health_status b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, architecture=x86_64, container_name=ceilometer_agent_ipmi, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, io.buildah.version=1.41.4, release=1761123044, vendor=Red Hat, Inc., distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 
'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-ipmi) Dec 6 04:04:35 localhost podman[98365]: 2025-12-06 09:04:35.245566278 +0000 UTC m=+0.393273488 container health_status a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, version=17.1.12, com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, architecture=x86_64, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, build-date=2025-11-19T00:11:48Z, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, config_id=tripleo_step4) Dec 6 04:04:35 localhost podman[98379]: 2025-12-06 09:04:35.254066709 +0000 UTC m=+0.396760556 container exec_died b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 
(image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, build-date=2025-11-19T00:12:45Z, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, vendor=Red Hat, Inc., vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat 
OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, architecture=x86_64, io.openshift.expose-services=, io.buildah.version=1.41.4, config_id=tripleo_step4, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 6 04:04:35 localhost systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: Deactivated successfully. Dec 6 04:04:35 localhost podman[98364]: 2025-12-06 09:04:35.267494682 +0000 UTC m=+0.417823072 container exec_died 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, container_name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, managed_by=tripleo_ansible, 
vendor=Red Hat, Inc., tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4) Dec 6 04:04:35 localhost podman[98362]: 2025-12-06 09:04:35.275168618 +0000 UTC m=+0.429907114 container exec_died 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, name=rhosp17/openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': 
True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, config_id=tripleo_step4, container_name=logrotate_crond, build-date=2025-11-18T22:49:32Z, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, tcib_managed=true, io.openshift.expose-services=, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 6 04:04:35 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: Deactivated successfully. Dec 6 04:04:35 localhost systemd[1]: 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.service: Deactivated successfully. 
Dec 6 04:04:35 localhost podman[98365]: 2025-12-06 09:04:35.294041268 +0000 UTC m=+0.441748468 container exec_died a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.expose-services=, batch=17.1_20251118.1, container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, build-date=2025-11-19T00:11:48Z, config_id=tripleo_step4, version=17.1.12, vendor=Red Hat, Inc., url=https://www.redhat.com, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676) Dec 6 04:04:35 localhost systemd[1]: a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.service: Deactivated successfully. Dec 6 04:04:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. Dec 6 04:04:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. Dec 6 04:04:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. Dec 6 04:04:40 localhost systemd[1]: tmp-crun.bc2sfZ.mount: Deactivated successfully. 
Dec 6 04:04:40 localhost podman[98574]: 2025-12-06 09:04:40.940568894 +0000 UTC m=+0.102124260 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.buildah.version=1.41.4, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, batch=17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, distribution-scope=public, url=https://www.redhat.com, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 6 04:04:40 localhost podman[98574]: 2025-12-06 09:04:40.954045399 +0000 UTC m=+0.115600775 container exec_died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:34:05Z, io.buildah.version=1.41.4, vendor=Red Hat, Inc., managed_by=tripleo_ansible, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 
ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, vcs-type=git, batch=17.1_20251118.1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, container_name=ovn_controller) Dec 6 04:04:40 localhost podman[98574]: unhealthy Dec 6 04:04:40 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Main process exited, code=exited, status=1/FAILURE Dec 6 04:04:40 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Failed with result 'exit-code'. Dec 6 04:04:41 localhost podman[98575]: 2025-12-06 09:04:41.038387071 +0000 UTC m=+0.197653397 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, vendor=Red Hat, Inc., architecture=x86_64, url=https://www.redhat.com, io.buildah.version=1.41.4, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:46Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 6 04:04:41 localhost podman[98576]: 2025-12-06 09:04:41.080900667 +0000 UTC m=+0.236453948 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, 
vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, version=17.1.12, release=1761123044, url=https://www.redhat.com, io.openshift.expose-services=, config_id=tripleo_step4, build-date=2025-11-19T00:14:25Z, container_name=ovn_metadata_agent) Dec 6 04:04:41 localhost podman[98576]: 2025-12-06 09:04:41.096108474 +0000 UTC m=+0.251661775 container exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, container_name=ovn_metadata_agent, vcs-type=git, config_id=tripleo_step4, release=1761123044, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 
'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-neutron-metadata-agent-ovn) Dec 6 04:04:41 localhost podman[98576]: unhealthy Dec 6 04:04:41 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Main process exited, code=exited, status=1/FAILURE Dec 6 04:04:41 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Failed with result 'exit-code'. 
Dec 6 04:04:41 localhost podman[98575]: 2025-12-06 09:04:41.26221741 +0000 UTC m=+0.421483726 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, container_name=metrics_qdr, io.buildah.version=1.41.4, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat 
OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, config_id=tripleo_step1, tcib_managed=true, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, batch=17.1_20251118.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible) Dec 6 04:04:41 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 04:04:41 localhost systemd[1]: tmp-crun.72pldl.mount: Deactivated successfully. Dec 6 04:04:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007. Dec 6 04:04:43 localhost podman[98643]: 2025-12-06 09:04:43.956742797 +0000 UTC m=+0.109814497 container health_status 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step5, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, io.openshift.expose-services=, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, 
vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}) Dec 6 04:04:43 localhost podman[98643]: 2025-12-06 09:04:43.98547899 +0000 UTC m=+0.138550710 container exec_died 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, tcib_managed=true, container_name=nova_compute, io.openshift.expose-services=, architecture=x86_64, config_id=tripleo_step5, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 
'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Dec 6 04:04:44 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Deactivated successfully. Dec 6 04:04:50 localhost sshd[98669]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:04:53 localhost sshd[98671]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:05:03 localhost sshd[98673]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:05:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc. 
Dec 6 04:05:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. Dec 6 04:05:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b. Dec 6 04:05:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9. Dec 6 04:05:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. Dec 6 04:05:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6. Dec 6 04:05:05 localhost systemd[1]: tmp-crun.8ZQNse.mount: Deactivated successfully. Dec 6 04:05:05 localhost podman[98695]: 2025-12-06 09:05:05.962109471 +0000 UTC m=+0.097556800 container health_status b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-type=git, container_name=ceilometer_agent_ipmi, build-date=2025-11-19T00:12:45Z, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, batch=17.1_20251118.1, distribution-scope=public, version=17.1.12, com.redhat.component=openstack-ceilometer-ipmi-container) Dec 6 04:05:05 localhost podman[98678]: 2025-12-06 09:05:05.933181611 +0000 UTC m=+0.081024182 container health_status a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, managed_by=tripleo_ansible, vcs-type=git, 
org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, release=1761123044, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_id=tripleo_step4, vendor=Red Hat, Inc., distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 6 04:05:05 localhost podman[98695]: 2025-12-06 09:05:05.991113372 +0000 UTC m=+0.126560721 container exec_died b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, build-date=2025-11-19T00:12:45Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, managed_by=tripleo_ansible, release=1761123044, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, architecture=x86_64, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676) Dec 6 04:05:06 localhost systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: Deactivated successfully. Dec 6 04:05:06 localhost podman[98689]: 2025-12-06 09:05:06.03367507 +0000 UTC m=+0.172925387 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, build-date=2025-11-18T23:44:13Z, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, tcib_managed=true, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, container_name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 
'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, url=https://www.redhat.com, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., managed_by=tripleo_ansible, version=17.1.12, architecture=x86_64, name=rhosp17/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid) Dec 6 04:05:06 localhost podman[98676]: 2025-12-06 09:05:05.987265154 +0000 UTC m=+0.139711115 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, build-date=2025-11-18T22:51:28Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, name=rhosp17/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, 
config_id=tripleo_step3, version=17.1.12, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.buildah.version=1.41.4, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1) Dec 6 04:05:06 localhost podman[98689]: 2025-12-06 09:05:06.068030185 +0000 UTC m=+0.207280482 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, batch=17.1_20251118.1, container_name=iscsid, name=rhosp17/openstack-iscsid, url=https://www.redhat.com, architecture=x86_64, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, config_id=tripleo_step3, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, release=1761123044, vendor=Red Hat, Inc.) 
Dec 6 04:05:06 localhost podman[98675]: 2025-12-06 09:05:06.087850465 +0000 UTC m=+0.240765371 container health_status 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, release=1761123044, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.12, konflux.additional-tags=17.1.12 
17.1_20251118.1, container_name=logrotate_crond, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, url=https://www.redhat.com, name=rhosp17/openstack-cron, vcs-type=git, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:32Z, io.buildah.version=1.41.4, config_id=tripleo_step4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 6 04:05:06 localhost podman[98676]: 2025-12-06 09:05:06.118942351 +0000 UTC m=+0.271388312 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, vendor=Red Hat, Inc., container_name=collectd, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, build-date=2025-11-18T22:51:28Z, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, name=rhosp17/openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, batch=17.1_20251118.1, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 6 04:05:06 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully. 
Dec 6 04:05:06 localhost podman[98678]: 2025-12-06 09:05:06.170147584 +0000 UTC m=+0.317990225 container exec_died a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., version=17.1.12, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, url=https://www.redhat.com, io.openshift.expose-services=) Dec 6 04:05:06 localhost podman[98675]: 2025-12-06 09:05:06.170578837 +0000 UTC m=+0.323493793 container exec_died 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, distribution-scope=public, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, config_id=tripleo_step4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.12, vendor=Red Hat, Inc., batch=17.1_20251118.1, container_name=logrotate_crond, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, release=1761123044, managed_by=tripleo_ansible, io.openshift.expose-services=, name=rhosp17/openstack-cron, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': 
{'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, vcs-type=git) Dec 6 04:05:06 localhost systemd[1]: 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.service: Deactivated successfully. 
Dec 6 04:05:06 localhost podman[98677]: 2025-12-06 09:05:06.25689152 +0000 UTC m=+0.396578120 container health_status 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, io.openshift.expose-services=, version=17.1.12, tcib_managed=true, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_migration_target, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, distribution-scope=public, architecture=x86_64, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Dec 6 04:05:06 localhost systemd[1]: a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.service: Deactivated successfully. Dec 6 04:05:06 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully. Dec 6 04:05:06 localhost podman[98677]: 2025-12-06 09:05:06.644536484 +0000 UTC m=+0.784223054 container exec_died 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, config_id=tripleo_step4, io.openshift.expose-services=, architecture=x86_64, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, distribution-scope=public, tcib_managed=true, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., vcs-type=git, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Dec 6 04:05:06 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: Deactivated successfully. Dec 6 04:05:06 localhost systemd[1]: tmp-crun.UK9czI.mount: Deactivated successfully. Dec 6 04:05:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. 
Dec 6 04:05:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. Dec 6 04:05:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. Dec 6 04:05:11 localhost systemd[1]: tmp-crun.TwmfMp.mount: Deactivated successfully. Dec 6 04:05:11 localhost podman[98806]: 2025-12-06 09:05:11.954072574 +0000 UTC m=+0.084870380 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, batch=17.1_20251118.1, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, release=1761123044, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd) Dec 6 04:05:11 localhost systemd[1]: tmp-crun.9lUYNJ.mount: Deactivated successfully. 
Dec 6 04:05:12 localhost podman[98807]: 2025-12-06 09:05:11.999724667 +0000 UTC m=+0.127653165 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, managed_by=tripleo_ansible, batch=17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, distribution-scope=public, build-date=2025-11-19T00:14:25Z, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, container_name=ovn_metadata_agent, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, url=https://www.redhat.com, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 6 04:05:12 localhost podman[98805]: 2025-12-06 09:05:11.929060105 +0000 UTC m=+0.065816224 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, io.buildah.version=1.41.4, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, url=https://www.redhat.com, build-date=2025-11-18T23:34:05Z, batch=17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, release=1761123044, container_name=ovn_controller, maintainer=OpenStack TripleO Team, distribution-scope=public, config_id=tripleo_step4) Dec 6 04:05:12 localhost podman[98807]: 2025-12-06 09:05:12.037434075 +0000 UTC m=+0.165362563 container exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, 
io.buildah.version=1.41.4, config_id=tripleo_step4, vcs-type=git, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, io.openshift.expose-services=, 
com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, container_name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64) Dec 6 04:05:12 localhost podman[98807]: unhealthy Dec 6 04:05:12 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Main process exited, code=exited, status=1/FAILURE Dec 6 04:05:12 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Failed with result 'exit-code'. Dec 6 04:05:12 localhost podman[98805]: 2025-12-06 09:05:12.062102374 +0000 UTC m=+0.198858453 container exec_died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, name=rhosp17/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, build-date=2025-11-18T23:34:05Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
batch=17.1_20251118.1, io.openshift.expose-services=, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., distribution-scope=public, container_name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4) Dec 6 04:05:12 localhost podman[98805]: unhealthy Dec 6 04:05:12 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Main process exited, code=exited, status=1/FAILURE Dec 6 04:05:12 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Failed with result 'exit-code'. 
Dec 6 04:05:12 localhost podman[98806]: 2025-12-06 09:05:12.124085538 +0000 UTC m=+0.254883324 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, version=17.1.12, architecture=x86_64, release=1761123044, com.redhat.component=openstack-qdrouterd-container) Dec 6 04:05:12 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 04:05:13 localhost sshd[98871]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:05:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007. Dec 6 04:05:14 localhost podman[98873]: 2025-12-06 09:05:14.914293316 +0000 UTC m=+0.075383928 container health_status 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, io.openshift.expose-services=, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 
'18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, release=1761123044, container_name=nova_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.buildah.version=1.41.4, vendor=Red Hat, Inc., managed_by=tripleo_ansible, architecture=x86_64) Dec 6 04:05:14 localhost podman[98873]: 2025-12-06 09:05:14.946235108 +0000 UTC m=+0.107325720 container exec_died 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, config_id=tripleo_step5, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, distribution-scope=public, name=rhosp17/openstack-nova-compute, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com) Dec 6 04:05:14 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Deactivated successfully. Dec 6 04:05:23 localhost sshd[98901]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:05:27 localhost sshd[98903]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:05:28 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... 
Dec 6 04:05:28 localhost recover_tripleo_nova_virtqemud[98906]: 61814 Dec 6 04:05:28 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 6 04:05:28 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 6 04:05:29 localhost sshd[98907]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:05:31 localhost sshd[98909]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:05:33 localhost sshd[98911]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:05:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc. Dec 6 04:05:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. Dec 6 04:05:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b. Dec 6 04:05:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9. Dec 6 04:05:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. Dec 6 04:05:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6. 
Dec 6 04:05:36 localhost podman[98913]: 2025-12-06 09:05:36.952955625 +0000 UTC m=+0.108543766 container health_status 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, config_id=tripleo_step4, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, com.redhat.component=openstack-cron-container, tcib_managed=true, build-date=2025-11-18T22:49:32Z, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, name=rhosp17/openstack-cron, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.12, container_name=logrotate_crond, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1) Dec 6 04:05:36 localhost systemd[1]: tmp-crun.ookG0d.mount: Deactivated successfully. Dec 6 04:05:37 localhost podman[98915]: 2025-12-06 09:05:37.001667373 +0000 UTC m=+0.148365281 container health_status 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, vendor=Red Hat, Inc., tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, architecture=x86_64, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=) Dec 6 04:05:37 localhost podman[98914]: 2025-12-06 09:05:37.073739418 +0000 UTC m=+0.226865514 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, io.openshift.expose-services=, vendor=Red Hat, Inc., managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:51:28Z, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp17/openstack-collectd, com.redhat.component=openstack-collectd-container, distribution-scope=public, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 collectd, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4) Dec 6 04:05:37 localhost podman[98914]: 2025-12-06 09:05:37.084101526 +0000 UTC m=+0.237227622 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, build-date=2025-11-18T22:51:28Z, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, version=17.1.12, tcib_managed=true, container_name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', 
'/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-type=git, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 6 04:05:37 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully. 
Dec 6 04:05:37 localhost podman[98928]: 2025-12-06 09:05:37.131289947 +0000 UTC m=+0.269325559 container health_status b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, container_name=ceilometer_agent_ipmi, build-date=2025-11-19T00:12:45Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, managed_by=tripleo_ansible, io.buildah.version=1.41.4, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-ipmi, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12, vcs-type=git, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=) Dec 6 04:05:37 localhost podman[98913]: 2025-12-06 09:05:37.201923768 +0000 UTC m=+0.357511939 container exec_died 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, container_name=logrotate_crond, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:32Z, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp17/openstack-cron, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, version=17.1.12, managed_by=tripleo_ansible, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron) Dec 6 04:05:37 localhost podman[98928]: 2025-12-06 09:05:37.207140788 +0000 UTC m=+0.345176390 container exec_died b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, name=rhosp17/openstack-ceilometer-ipmi, vcs-type=git, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, io.buildah.version=1.41.4, container_name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 6 04:05:37 localhost systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: Deactivated successfully. 
Dec 6 04:05:37 localhost podman[98917]: 2025-12-06 09:05:37.284802395 +0000 UTC m=+0.429238013 container health_status a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, release=1761123044, vcs-type=git, version=17.1.12, batch=17.1_20251118.1, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:11:48Z, name=rhosp17/openstack-ceilometer-compute, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 6 04:05:37 localhost podman[98927]: 2025-12-06 09:05:37.343943283 +0000 UTC m=+0.484414690 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, tcib_managed=true, vendor=Red Hat, Inc., url=https://www.redhat.com, release=1761123044, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, build-date=2025-11-18T23:44:13Z, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.4, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-iscsid) Dec 6 04:05:37 localhost podman[98927]: 2025-12-06 09:05:37.351287108 +0000 UTC m=+0.491758505 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 
'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, build-date=2025-11-18T23:44:13Z, com.redhat.component=openstack-iscsid-container, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, architecture=x86_64, batch=17.1_20251118.1, io.openshift.expose-services=, managed_by=tripleo_ansible, vendor=Red Hat, Inc., container_name=iscsid, url=https://www.redhat.com) Dec 6 04:05:37 localhost podman[98917]: 2025-12-06 
09:05:37.364633798 +0000 UTC m=+0.509069426 container exec_died a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, architecture=x86_64, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-compute-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, distribution-scope=public, tcib_managed=true, vendor=Red Hat, Inc., release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, build-date=2025-11-19T00:11:48Z, name=rhosp17/openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_compute, vcs-type=git) Dec 6 04:05:37 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully. Dec 6 04:05:37 localhost systemd[1]: a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.service: Deactivated successfully. Dec 6 04:05:37 localhost systemd[1]: 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.service: Deactivated successfully. 
Dec 6 04:05:37 localhost podman[98915]: 2025-12-06 09:05:37.433205236 +0000 UTC m=+0.579902964 container exec_died 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, tcib_managed=true, release=1761123044, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, build-date=2025-11-19T00:36:58Z) Dec 6 04:05:37 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: Deactivated successfully. Dec 6 04:05:39 localhost sshd[99110]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:05:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. Dec 6 04:05:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. Dec 6 04:05:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. Dec 6 04:05:42 localhost systemd[1]: tmp-crun.NWhy5H.mount: Deactivated successfully. 
Dec 6 04:05:42 localhost podman[99129]: 2025-12-06 09:05:42.94133991 +0000 UTC m=+0.094128935 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, version=17.1.12, tcib_managed=true, config_id=tripleo_step4, io.buildah.version=1.41.4, url=https://www.redhat.com, vcs-type=git, managed_by=tripleo_ansible, architecture=x86_64, build-date=2025-11-19T00:14:25Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, release=1761123044, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, io.openshift.expose-services=) Dec 6 04:05:43 localhost podman[99129]: 2025-12-06 09:05:43.031854531 +0000 UTC m=+0.184643546 container exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc., version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, batch=17.1_20251118.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, io.buildah.version=1.41.4, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, tcib_managed=true, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=) Dec 6 04:05:43 localhost podman[99129]: unhealthy Dec 6 04:05:43 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Main process exited, code=exited, status=1/FAILURE Dec 6 04:05:43 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Failed with result 'exit-code'. Dec 6 04:05:43 localhost podman[99127]: 2025-12-06 09:05:43.1475968 +0000 UTC m=+0.303842960 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.buildah.version=1.41.4, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, name=rhosp17/openstack-ovn-controller, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, build-date=2025-11-18T23:34:05Z, managed_by=tripleo_ansible, release=1761123044, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, vendor=Red Hat, Inc., org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64) Dec 6 04:05:43 localhost podman[99127]: 2025-12-06 09:05:43.191171298 +0000 UTC m=+0.347417448 container exec_died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, 
name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, config_id=tripleo_step4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, version=17.1.12, url=https://www.redhat.com, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, build-date=2025-11-18T23:34:05Z, tcib_managed=true, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1) Dec 6 04:05:43 localhost podman[99127]: unhealthy Dec 6 04:05:43 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Main process exited, code=exited, status=1/FAILURE Dec 6 04:05:43 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Failed with result 'exit-code'. 
Dec 6 04:05:43 localhost podman[99128]: 2025-12-06 09:05:43.194218142 +0000 UTC m=+0.347316476 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, url=https://www.redhat.com, architecture=x86_64, release=1761123044, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:46Z, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, managed_by=tripleo_ansible) Dec 6 04:05:43 localhost podman[99128]: 2025-12-06 09:05:43.403181825 +0000 UTC m=+0.556280149 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., tcib_managed=true, vcs-type=git, distribution-scope=public, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, version=17.1.12, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, build-date=2025-11-18T22:49:46Z, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 6 04:05:43 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 04:05:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007. 
Dec 6 04:05:45 localhost podman[99197]: 2025-12-06 09:05:45.924194898 +0000 UTC m=+0.078302588 container health_status 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, distribution-scope=public, vendor=Red Hat, Inc., batch=17.1_20251118.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, tcib_managed=true, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, release=1761123044, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container) Dec 6 04:05:45 localhost podman[99197]: 2025-12-06 09:05:45.949203657 +0000 UTC m=+0.103311377 container exec_died 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, container_name=nova_compute, tcib_managed=true, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': 
['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat 
OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, release=1761123044, url=https://www.redhat.com, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public) Dec 6 04:05:45 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Deactivated successfully. Dec 6 04:05:58 localhost sshd[99221]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:06:02 localhost sshd[99223]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:06:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc. Dec 6 04:06:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. Dec 6 04:06:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b. Dec 6 04:06:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9. Dec 6 04:06:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. Dec 6 04:06:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6. Dec 6 04:06:07 localhost systemd[1]: tmp-crun.dinvyc.mount: Deactivated successfully. 
Dec 6 04:06:07 localhost podman[99242]: 2025-12-06 09:06:07.969967464 +0000 UTC m=+0.105731380 container health_status b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, io.openshift.expose-services=, vendor=Red Hat, Inc., 
distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, vcs-type=git, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, url=https://www.redhat.com, build-date=2025-11-19T00:12:45Z) Dec 6 04:06:07 localhost podman[99224]: 2025-12-06 09:06:07.925828318 +0000 UTC m=+0.084344484 container health_status 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, container_name=logrotate_crond, maintainer=OpenStack TripleO Team, release=1761123044, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.component=openstack-cron-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, version=17.1.12, tcib_managed=true, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, name=rhosp17/openstack-cron) Dec 6 04:06:08 localhost podman[99224]: 2025-12-06 09:06:08.010414498 +0000 UTC m=+0.168930624 container exec_died 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, version=17.1.12, batch=17.1_20251118.1, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, container_name=logrotate_crond, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-cron, release=1761123044, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, build-date=2025-11-18T22:49:32Z) Dec 6 04:06:08 localhost systemd[1]: 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.service: Deactivated successfully. 
Dec 6 04:06:08 localhost podman[99242]: 2025-12-06 09:06:08.020444896 +0000 UTC m=+0.156208822 container exec_died b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, config_id=tripleo_step4, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, url=https://www.redhat.com, build-date=2025-11-19T00:12:45Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team) Dec 6 04:06:08 localhost systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: Deactivated successfully. Dec 6 04:06:08 localhost podman[99233]: 2025-12-06 09:06:07.956058637 +0000 UTC m=+0.100610354 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, release=1761123044, version=17.1.12, config_id=tripleo_step3, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, name=rhosp17/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.4, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, container_name=iscsid, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, distribution-scope=public, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., url=https://www.redhat.com, architecture=x86_64, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 6 04:06:08 localhost podman[99225]: 2025-12-06 09:06:08.082318947 +0000 UTC m=+0.237074026 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, release=1761123044, 
version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, batch=17.1_20251118.1, name=rhosp17/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2025-11-18T22:51:28Z, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, url=https://www.redhat.com, 
vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, vcs-type=git, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 6 04:06:08 localhost podman[99225]: 2025-12-06 09:06:08.092969255 +0000 UTC m=+0.247724344 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., tcib_managed=true, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, build-date=2025-11-18T22:51:28Z, managed_by=tripleo_ansible, version=17.1.12, vcs-type=git, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1761123044, architecture=x86_64, name=rhosp17/openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team) Dec 6 04:06:08 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully. 
Dec 6 04:06:08 localhost podman[99226]: 2025-12-06 09:06:08.135304576 +0000 UTC m=+0.287319181 container health_status 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, vcs-type=git, tcib_managed=true, release=1761123044, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., config_id=tripleo_step4, container_name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}) Dec 6 04:06:08 localhost podman[99233]: 2025-12-06 09:06:08.142155687 +0000 UTC m=+0.286707374 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, managed_by=tripleo_ansible, config_id=tripleo_step3, distribution-scope=public, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, tcib_managed=true, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, io.openshift.expose-services=, vendor=Red Hat, Inc., batch=17.1_20251118.1, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, build-date=2025-11-18T23:44:13Z, url=https://www.redhat.com, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid) Dec 6 04:06:08 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully. 
Dec 6 04:06:08 localhost podman[99231]: 2025-12-06 09:06:08.189370868 +0000 UTC m=+0.338466414 container health_status a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, url=https://www.redhat.com, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:11:48Z, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, vcs-type=git, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, distribution-scope=public, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc.) Dec 6 04:06:08 localhost podman[99231]: 2025-12-06 09:06:08.212247961 +0000 UTC m=+0.361343527 container exec_died a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, config_id=tripleo_step4, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_compute, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:11:48Z) Dec 6 04:06:08 localhost systemd[1]: a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.service: Deactivated successfully. 
Dec 6 04:06:08 localhost podman[99226]: 2025-12-06 09:06:08.515425939 +0000 UTC m=+0.667440574 container exec_died 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, release=1761123044, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, container_name=nova_migration_target, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, 
architecture=x86_64, config_id=tripleo_step4, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 6 04:06:08 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: Deactivated successfully. Dec 6 04:06:12 localhost sshd[99353]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:06:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. Dec 6 04:06:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. Dec 6 04:06:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. Dec 6 04:06:13 localhost systemd[1]: tmp-crun.pYxPR9.mount: Deactivated successfully. 
Dec 6 04:06:13 localhost podman[99356]: 2025-12-06 09:06:13.935411843 +0000 UTC m=+0.095489906 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 6 04:06:13 localhost systemd[1]: tmp-crun.lE5FC5.mount: Deactivated successfully. Dec 6 04:06:13 localhost podman[99355]: 2025-12-06 09:06:13.990165405 +0000 UTC m=+0.150057672 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., url=https://www.redhat.com, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': 
{'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, batch=17.1_20251118.1, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:34:05Z) Dec 6 04:06:14 localhost podman[99357]: 2025-12-06 09:06:14.050133619 +0000 UTC m=+0.202612739 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.4, batch=17.1_20251118.1, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_metadata_agent, tcib_managed=true, url=https://www.redhat.com, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, 
release=1761123044, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, build-date=2025-11-19T00:14:25Z, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 6 04:06:14 localhost podman[99355]: 
2025-12-06 09:06:14.059486976 +0000 UTC m=+0.219379283 container exec_died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, url=https://www.redhat.com, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:34:05Z, config_id=tripleo_step4, tcib_managed=true, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.41.4, version=17.1.12, container_name=ovn_controller, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272) Dec 6 04:06:14 localhost podman[99355]: unhealthy Dec 6 04:06:14 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Main process exited, code=exited, status=1/FAILURE Dec 6 04:06:14 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Failed with result 'exit-code'. Dec 6 04:06:14 localhost podman[99357]: 2025-12-06 09:06:14.073349022 +0000 UTC m=+0.225828172 container exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vendor=Red Hat, Inc., release=1761123044, vcs-type=git, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, config_id=tripleo_step4, distribution-scope=public, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:14:25Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn) Dec 6 04:06:14 localhost podman[99357]: unhealthy Dec 6 04:06:14 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Main process exited, code=exited, status=1/FAILURE Dec 6 04:06:14 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Failed with result 'exit-code'. 
Dec 6 04:06:14 localhost podman[99356]: 2025-12-06 09:06:14.131149019 +0000 UTC m=+0.291227122 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, release=1761123044, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.buildah.version=1.41.4, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, architecture=x86_64) Dec 6 04:06:14 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 04:06:14 localhost systemd[1]: tmp-crun.4V3ikd.mount: Deactivated successfully. Dec 6 04:06:16 localhost sshd[99424]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:06:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007. 
Dec 6 04:06:16 localhost podman[99426]: 2025-12-06 09:06:16.92503008 +0000 UTC m=+0.086389836 container health_status 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, tcib_managed=true, architecture=x86_64, container_name=nova_compute, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step5, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, io.buildah.version=1.41.4, release=1761123044) Dec 6 04:06:16 localhost podman[99426]: 2025-12-06 09:06:16.974930134 +0000 UTC m=+0.136289870 container exec_died 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, 
container_name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, name=rhosp17/openstack-nova-compute, architecture=x86_64, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com) Dec 6 04:06:16 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Deactivated successfully. Dec 6 04:06:37 localhost sshd[99453]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:06:38 localhost sshd[99455]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:06:38 localhost sshd[99457]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:06:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc. Dec 6 04:06:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. Dec 6 04:06:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b. Dec 6 04:06:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9. Dec 6 04:06:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. Dec 6 04:06:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6. 
Dec 6 04:06:38 localhost podman[99468]: 2025-12-06 09:06:38.888161916 +0000 UTC m=+0.088697948 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, build-date=2025-11-18T23:44:13Z, container_name=iscsid, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, com.redhat.component=openstack-iscsid-container, release=1761123044, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, name=rhosp17/openstack-iscsid, vcs-type=git, io.openshift.expose-services=, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, config_id=tripleo_step3) Dec 6 04:06:38 localhost podman[99468]: 2025-12-06 09:06:38.904680624 +0000 UTC m=+0.105216676 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, build-date=2025-11-18T23:44:13Z, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20251118.1, managed_by=tripleo_ansible, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, container_name=iscsid, distribution-scope=public, url=https://www.redhat.com, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044) Dec 6 04:06:38 localhost systemd[1]: tmp-crun.Uoykr4.mount: Deactivated successfully. 
Dec 6 04:06:38 localhost podman[99475]: 2025-12-06 09:06:38.914279678 +0000 UTC m=+0.106106182 container health_status b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, io.buildah.version=1.41.4, version=17.1.12, 
build-date=2025-11-19T00:12:45Z, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., tcib_managed=true, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, batch=17.1_20251118.1, architecture=x86_64) Dec 6 04:06:38 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully. Dec 6 04:06:38 localhost podman[99460]: 2025-12-06 09:06:38.989598943 +0000 UTC m=+0.197284954 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, version=17.1.12, tcib_managed=true, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.buildah.version=1.41.4, name=rhosp17/openstack-collectd, distribution-scope=public, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., url=https://www.redhat.com, batch=17.1_20251118.1, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git) Dec 6 04:06:39 localhost podman[99460]: 2025-12-06 09:06:39.011106545 +0000 UTC m=+0.218792556 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, config_data={'cap_add': 
['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, com.redhat.component=openstack-collectd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, version=17.1.12, config_id=tripleo_step3, io.openshift.expose-services=, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, vendor=Red Hat, Inc., managed_by=tripleo_ansible) Dec 6 04:06:39 localhost podman[99475]: 2025-12-06 09:06:39.018213392 +0000 UTC m=+0.210039886 container exec_died b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, config_id=tripleo_step4, url=https://www.redhat.com, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, build-date=2025-11-19T00:12:45Z, vcs-type=git, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.openshift.expose-services=) Dec 6 04:06:39 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully. Dec 6 04:06:39 localhost systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: Deactivated successfully. 
Dec 6 04:06:39 localhost podman[99459]: 2025-12-06 09:06:39.087403829 +0000 UTC m=+0.299462975 container health_status 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., distribution-scope=public, name=rhosp17/openstack-cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, build-date=2025-11-18T22:49:32Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, version=17.1.12, 
url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, release=1761123044, io.buildah.version=1.41.4, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container) Dec 6 04:06:39 localhost podman[99459]: 2025-12-06 09:06:39.0913181 +0000 UTC m=+0.303377256 container exec_died 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, tcib_managed=true, config_id=tripleo_step4, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, managed_by=tripleo_ansible, io.openshift.expose-services=, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, architecture=x86_64, com.redhat.component=openstack-cron-container, version=17.1.12, build-date=2025-11-18T22:49:32Z, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, description=Red Hat OpenStack Platform 17.1 cron) Dec 6 04:06:39 localhost systemd[1]: 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.service: Deactivated successfully. 
Dec 6 04:06:39 localhost podman[99467]: 2025-12-06 09:06:39.136689085 +0000 UTC m=+0.335573796 container health_status a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, version=17.1.12, vcs-type=git, url=https://www.redhat.com, build-date=2025-11-19T00:11:48Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., config_id=tripleo_step4, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.41.4) Dec 6 04:06:39 localhost podman[99467]: 2025-12-06 09:06:39.159051182 +0000 UTC m=+0.357935923 container exec_died a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., managed_by=tripleo_ansible, release=1761123044, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, batch=17.1_20251118.1, build-date=2025-11-19T00:11:48Z, version=17.1.12, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Dec 6 04:06:39 localhost podman[99461]: 2025-12-06 09:06:38.959820679 +0000 UTC m=+0.164139427 container health_status 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, maintainer=OpenStack TripleO Team, 
konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, architecture=x86_64, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, vcs-type=git, config_id=tripleo_step4, release=1761123044, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, container_name=nova_migration_target) Dec 6 04:06:39 localhost systemd[1]: a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.service: Deactivated successfully. Dec 6 04:06:39 localhost podman[99461]: 2025-12-06 09:06:39.365174856 +0000 UTC m=+0.569493604 container exec_died 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_migration_target, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, tcib_managed=true, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 
'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, io.buildah.version=1.41.4, vcs-type=git) Dec 6 04:06:39 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: Deactivated successfully. Dec 6 04:06:39 localhost systemd[1]: tmp-crun.IGpqc3.mount: Deactivated successfully. Dec 6 04:06:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. Dec 6 04:06:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. Dec 6 04:06:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. 
Dec 6 04:06:44 localhost podman[99671]: 2025-12-06 09:06:44.941633639 +0000 UTC m=+0.089779390 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, version=17.1.12, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2025-11-19T00:14:25Z, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_metadata_agent, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 6 04:06:45 localhost systemd[1]: tmp-crun.fVUDJm.mount: Deactivated successfully. 
Dec 6 04:06:45 localhost podman[99671]: 2025-12-06 09:06:45.01224781 +0000 UTC m=+0.160393591 container exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, release=1761123044, url=https://www.redhat.com, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, version=17.1.12, io.buildah.version=1.41.4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, build-date=2025-11-19T00:14:25Z) Dec 6 04:06:45 localhost podman[99670]: 2025-12-06 09:06:45.055005183 +0000 UTC m=+0.203553316 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, version=17.1.12, distribution-scope=public, container_name=metrics_qdr, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, batch=17.1_20251118.1, config_id=tripleo_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git) Dec 6 04:06:45 localhost podman[99669]: 2025-12-06 09:06:45.013416376 +0000 UTC m=+0.162002490 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, container_name=ovn_controller, 
io.openshift.expose-services=, version=17.1.12, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, config_id=tripleo_step4, tcib_managed=true, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1) Dec 6 04:06:45 localhost podman[99671]: unhealthy Dec 6 04:06:45 localhost systemd[1]: 
87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Main process exited, code=exited, status=1/FAILURE Dec 6 04:06:45 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Failed with result 'exit-code'. Dec 6 04:06:45 localhost podman[99669]: 2025-12-06 09:06:45.148268671 +0000 UTC m=+0.296854775 container exec_died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1761123044, container_name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.openshift.expose-services=, config_id=tripleo_step4, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ovn-controller, build-date=2025-11-18T23:34:05Z) Dec 6 04:06:45 localhost podman[99669]: unhealthy Dec 6 04:06:45 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Main process exited, code=exited, status=1/FAILURE Dec 6 04:06:45 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Failed with result 'exit-code'. Dec 6 04:06:45 localhost podman[99670]: 2025-12-06 09:06:45.244501078 +0000 UTC m=+0.393049241 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, distribution-scope=public, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, architecture=x86_64, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1761123044, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:46Z, container_name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 6 04:06:45 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 04:06:45 localhost sshd[99739]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:06:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007. Dec 6 04:06:47 localhost systemd[1]: tmp-crun.DxoMrc.mount: Deactivated successfully. 
Dec 6 04:06:47 localhost podman[99741]: 2025-12-06 09:06:47.922055172 +0000 UTC m=+0.083859458 container health_status 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', 
'/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, architecture=x86_64, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, vcs-type=git, batch=17.1_20251118.1, config_id=tripleo_step5, container_name=nova_compute, maintainer=OpenStack TripleO Team, version=17.1.12, release=1761123044, url=https://www.redhat.com, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 6 04:06:47 localhost podman[99741]: 2025-12-06 09:06:47.957309395 +0000 UTC m=+0.119113671 container exec_died 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., version=17.1.12, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, container_name=nova_compute, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 
17.1 nova-compute, vcs-type=git, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.buildah.version=1.41.4, architecture=x86_64, name=rhosp17/openstack-nova-compute, config_id=tripleo_step5, release=1761123044, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}) Dec 6 04:06:47 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Deactivated successfully. Dec 6 04:06:59 localhost sshd[99768]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:07:03 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 6 04:07:03 localhost recover_tripleo_nova_virtqemud[99771]: 61814 Dec 6 04:07:03 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 6 04:07:03 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 6 04:07:05 localhost sshd[99772]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:07:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc. Dec 6 04:07:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. Dec 6 04:07:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b. Dec 6 04:07:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9. Dec 6 04:07:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. 
Dec 6 04:07:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6. Dec 6 04:07:09 localhost systemd[1]: tmp-crun.RO2BP8.mount: Deactivated successfully. Dec 6 04:07:09 localhost systemd[1]: tmp-crun.mJglJB.mount: Deactivated successfully. Dec 6 04:07:09 localhost podman[99776]: 2025-12-06 09:07:09.965241376 +0000 UTC m=+0.116775779 container health_status 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.expose-services=, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, maintainer=OpenStack TripleO Team, tcib_managed=true, container_name=nova_migration_target, 
io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, config_id=tripleo_step4, url=https://www.redhat.com, vendor=Red Hat, Inc., managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1) Dec 6 04:07:09 localhost podman[99782]: 2025-12-06 09:07:09.999378556 +0000 UTC m=+0.144287066 container health_status a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, url=https://www.redhat.com, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_id=tripleo_step4, architecture=x86_64, io.buildah.version=1.41.4, io.openshift.expose-services=, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute) Dec 6 04:07:10 localhost podman[99788]: 2025-12-06 09:07:10.00634861 +0000 UTC m=+0.149675431 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, 
health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, url=https://www.redhat.com, io.buildah.version=1.41.4, batch=17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1761123044, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, 
org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, build-date=2025-11-18T23:44:13Z, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid) Dec 6 04:07:10 localhost podman[99782]: 2025-12-06 09:07:10.025637543 +0000 UTC m=+0.170546063 container exec_died a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, managed_by=tripleo_ansible, url=https://www.redhat.com, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, config_id=tripleo_step4, architecture=x86_64, build-date=2025-11-19T00:11:48Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 6 04:07:10 localhost podman[99774]: 2025-12-06 09:07:09.980404853 +0000 UTC m=+0.141741778 container health_status 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-type=git, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.12, com.redhat.component=openstack-cron-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, container_name=logrotate_crond, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, url=https://www.redhat.com, io.buildah.version=1.41.4, distribution-scope=public, batch=17.1_20251118.1, build-date=2025-11-18T22:49:32Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, io.openshift.expose-services=) Dec 6 04:07:10 localhost systemd[1]: a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.service: Deactivated successfully. 
Dec 6 04:07:10 localhost podman[99788]: 2025-12-06 09:07:10.045266286 +0000 UTC m=+0.188593117 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.expose-services=, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, build-date=2025-11-18T23:44:13Z, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, url=https://www.redhat.com, tcib_managed=true, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', 
'/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp17/openstack-iscsid, release=1761123044, vcs-type=git, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team) Dec 6 04:07:10 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully. Dec 6 04:07:10 localhost podman[99774]: 2025-12-06 09:07:10.059444352 +0000 UTC m=+0.220781307 container exec_died 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, distribution-scope=public, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:32Z, io.buildah.version=1.41.4, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=logrotate_crond, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, architecture=x86_64, managed_by=tripleo_ansible, vcs-type=git, maintainer=OpenStack TripleO Team) Dec 6 04:07:10 localhost systemd[1]: 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.service: Deactivated successfully. 
Dec 6 04:07:10 localhost podman[99775]: 2025-12-06 09:07:09.929010033 +0000 UTC m=+0.088861522 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, container_name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2025-11-18T22:51:28Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, 
io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, url=https://www.redhat.com, batch=17.1_20251118.1, vendor=Red Hat, Inc., version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, name=rhosp17/openstack-collectd, io.buildah.version=1.41.4, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, release=1761123044, vcs-type=git, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd) Dec 6 04:07:10 localhost podman[99793]: 2025-12-06 09:07:10.111988357 +0000 UTC m=+0.249612193 container health_status b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, release=1761123044, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 6 04:07:10 localhost podman[99793]: 2025-12-06 09:07:10.162359705 +0000 UTC m=+0.299983531 container exec_died b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, 
io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, version=17.1.12, name=rhosp17/openstack-ceilometer-ipmi, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, config_id=tripleo_step4, konflux.additional-tags=17.1.12 
17.1_20251118.1, architecture=x86_64) Dec 6 04:07:10 localhost systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: Deactivated successfully. Dec 6 04:07:10 localhost podman[99775]: 2025-12-06 09:07:10.213813026 +0000 UTC m=+0.373664545 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, build-date=2025-11-18T22:51:28Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp17/openstack-collectd, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_id=tripleo_step3, tcib_managed=true, version=17.1.12, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, release=1761123044, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, architecture=x86_64, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd) Dec 6 04:07:10 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully. 
Dec 6 04:07:10 localhost podman[99776]: 2025-12-06 09:07:10.334354301 +0000 UTC m=+0.485888724 container exec_died 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-type=git, distribution-scope=public, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, tcib_managed=true) Dec 6 04:07:10 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: Deactivated successfully. Dec 6 04:07:10 localhost sshd[99906]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:07:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. Dec 6 04:07:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. Dec 6 04:07:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. Dec 6 04:07:15 localhost systemd[1]: tmp-crun.1ZjeTy.mount: Deactivated successfully. 
Dec 6 04:07:15 localhost podman[99909]: 2025-12-06 09:07:15.935700769 +0000 UTC m=+0.096428286 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, url=https://www.redhat.com, distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, config_id=tripleo_step1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., container_name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true) Dec 6 04:07:16 localhost podman[99910]: 2025-12-06 09:07:16.024471296 +0000 UTC m=+0.182074487 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, io.buildah.version=1.41.4, vendor=Red Hat, Inc., distribution-scope=public, architecture=x86_64, maintainer=OpenStack TripleO Team, tcib_managed=true, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, release=1761123044, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, url=https://www.redhat.com, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c) Dec 6 04:07:16 localhost podman[99910]: 2025-12-06 09:07:16.040266732 +0000 UTC m=+0.197869943 container exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, tcib_managed=true, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, 
architecture=x86_64, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.4) Dec 6 04:07:16 localhost podman[99910]: unhealthy Dec 6 04:07:16 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Main process exited, code=exited, status=1/FAILURE Dec 6 04:07:16 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Failed with result 'exit-code'. 
Dec 6 04:07:16 localhost podman[99909]: 2025-12-06 09:07:16.129083982 +0000 UTC m=+0.289811499 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, vendor=Red Hat, Inc., version=17.1.12, release=1761123044, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, managed_by=tripleo_ansible, tcib_managed=true, 
com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, build-date=2025-11-18T22:49:46Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 6 04:07:16 localhost podman[99908]: 2025-12-06 09:07:16.137828941 +0000 UTC m=+0.300693574 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, release=1761123044, version=17.1.12, tcib_managed=true, batch=17.1_20251118.1, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, architecture=x86_64, build-date=2025-11-18T23:34:05Z, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 6 04:07:16 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. 
Dec 6 04:07:16 localhost podman[99908]: 2025-12-06 09:07:16.17295839 +0000 UTC m=+0.335823043 container exec_died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, architecture=x86_64, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, build-date=2025-11-18T23:34:05Z, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.openshift.expose-services=, container_name=ovn_controller, release=1761123044) Dec 6 04:07:16 localhost podman[99908]: unhealthy Dec 6 04:07:16 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Main process exited, code=exited, status=1/FAILURE Dec 6 04:07:16 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Failed with result 'exit-code'. Dec 6 04:07:16 localhost systemd[1]: tmp-crun.15w74S.mount: Deactivated successfully. Dec 6 04:07:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007. Dec 6 04:07:18 localhost systemd[1]: tmp-crun.MsJQtC.mount: Deactivated successfully. Dec 6 04:07:18 localhost podman[99978]: 2025-12-06 09:07:18.920981611 +0000 UTC m=+0.083909700 container health_status 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, container_name=nova_compute, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., batch=17.1_20251118.1, tcib_managed=true, distribution-scope=public, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, 
vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.buildah.version=1.41.4) Dec 6 04:07:18 localhost podman[99978]: 2025-12-06 09:07:18.952070486 +0000 UTC m=+0.114998616 container exec_died 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, distribution-scope=public, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, release=1761123044, container_name=nova_compute) Dec 6 04:07:18 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Deactivated successfully. Dec 6 04:07:23 localhost sshd[100005]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:07:24 localhost sshd[100007]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:07:26 localhost sshd[100009]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:07:37 localhost systemd[1]: session-28.scope: Deactivated successfully. 
Dec 6 04:07:37 localhost systemd[1]: session-28.scope: Consumed 7min 1.517s CPU time. Dec 6 04:07:37 localhost systemd-logind[766]: Session 28 logged out. Waiting for processes to exit. Dec 6 04:07:37 localhost systemd-logind[766]: Removed session 28. Dec 6 04:07:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc. Dec 6 04:07:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. Dec 6 04:07:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b. Dec 6 04:07:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9. Dec 6 04:07:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. Dec 6 04:07:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6. Dec 6 04:07:40 localhost systemd[1]: tmp-crun.mLHnxh.mount: Deactivated successfully. 
Dec 6 04:07:41 localhost podman[100011]: 2025-12-06 09:07:40.942316387 +0000 UTC m=+0.096657712 container health_status 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, io.openshift.expose-services=, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, version=17.1.12, name=rhosp17/openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, architecture=x86_64) Dec 6 04:07:41 localhost podman[100013]: 2025-12-06 09:07:41.052304837 +0000 UTC m=+0.199806332 container health_status 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, architecture=x86_64, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., vcs-type=git, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public) Dec 6 04:07:41 localhost podman[100012]: 2025-12-06 09:07:40.995739398 +0000 UTC m=+0.147592506 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, managed_by=tripleo_ansible, release=1761123044, url=https://www.redhat.com, tcib_managed=true, container_name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., config_id=tripleo_step3, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
name=rhosp17/openstack-collectd, batch=17.1_20251118.1, build-date=2025-11-18T22:51:28Z) Dec 6 04:07:41 localhost podman[100035]: 2025-12-06 09:07:41.107313478 +0000 UTC m=+0.232818747 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, tcib_managed=true, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, name=rhosp17/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2025-11-18T23:44:13Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, vcs-type=git, release=1761123044, architecture=x86_64, 
com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com) Dec 6 04:07:41 localhost podman[100035]: 2025-12-06 09:07:41.114881161 +0000 UTC m=+0.240386350 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.4, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, name=rhosp17/openstack-iscsid, batch=17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, architecture=x86_64, vendor=Red Hat, Inc., vcs-type=git, tcib_managed=true, 
io.openshift.expose-services=, container_name=iscsid, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid) Dec 6 04:07:41 localhost podman[100036]: 2025-12-06 09:07:40.971016259 +0000 UTC m=+0.092519325 container health_status b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:12:45Z, io.k8s.description=Red 
Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, config_id=tripleo_step4, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, name=rhosp17/openstack-ceilometer-ipmi, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, url=https://www.redhat.com, vendor=Red Hat, Inc., io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, tcib_managed=true, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12) Dec 6 04:07:41 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully. 
Dec 6 04:07:41 localhost podman[100011]: 2025-12-06 09:07:41.129231952 +0000 UTC m=+0.283573267 container exec_died 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, build-date=2025-11-18T22:49:32Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp17/openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., distribution-scope=public, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, vcs-type=git, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, batch=17.1_20251118.1, container_name=logrotate_crond, io.openshift.expose-services=, url=https://www.redhat.com, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, release=1761123044, 
konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.4) Dec 6 04:07:41 localhost systemd[1]: 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.service: Deactivated successfully. Dec 6 04:07:41 localhost podman[100036]: 2025-12-06 09:07:41.157076188 +0000 UTC m=+0.278579224 container exec_died b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, architecture=x86_64, name=rhosp17/openstack-ceilometer-ipmi, config_id=tripleo_step4, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, tcib_managed=true, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:12:45Z, container_name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=) Dec 6 04:07:41 localhost systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: Deactivated successfully. 
Dec 6 04:07:41 localhost podman[100029]: 2025-12-06 09:07:41.024811893 +0000 UTC m=+0.155379707 container health_status a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_compute, architecture=x86_64, vendor=Red Hat, Inc., managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, vcs-type=git, release=1761123044, build-date=2025-11-19T00:11:48Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, distribution-scope=public, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-compute) Dec 6 04:07:41 localhost podman[100012]: 2025-12-06 09:07:41.181455297 +0000 UTC m=+0.333308445 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2025-11-18T22:51:28Z, architecture=x86_64, vcs-type=git, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., batch=17.1_20251118.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, url=https://www.redhat.com, io.buildah.version=1.41.4, io.openshift.expose-services=, container_name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team) Dec 6 04:07:41 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully. 
Dec 6 04:07:41 localhost podman[100029]: 2025-12-06 09:07:41.209195479 +0000 UTC m=+0.339763283 container exec_died a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, distribution-scope=public, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-compute, release=1761123044, io.openshift.expose-services=, url=https://www.redhat.com, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 6 04:07:41 localhost systemd[1]: a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.service: Deactivated successfully. Dec 6 04:07:41 localhost podman[100013]: 2025-12-06 09:07:41.433058809 +0000 UTC m=+0.580560254 container exec_died 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, release=1761123044, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12) Dec 6 04:07:41 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: Deactivated successfully. Dec 6 04:07:45 localhost sshd[100224]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:07:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. 
Dec 6 04:07:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. Dec 6 04:07:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. Dec 6 04:07:46 localhost systemd[1]: tmp-crun.G8FLAy.mount: Deactivated successfully. Dec 6 04:07:46 localhost podman[100227]: 2025-12-06 09:07:46.784415774 +0000 UTC m=+0.094299509 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, name=rhosp17/openstack-qdrouterd, batch=17.1_20251118.1, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, config_id=tripleo_step1, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.openshift.expose-services=, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.buildah.version=1.41.4, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 6 04:07:46 localhost systemd[1]: tmp-crun.MwSbCq.mount: Deactivated successfully. 
Dec 6 04:07:46 localhost podman[100226]: 2025-12-06 09:07:46.889969858 +0000 UTC m=+0.201042189 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, release=1761123044, vendor=Red Hat, Inc., config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:34:05Z, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, batch=17.1_20251118.1, io.buildah.version=1.41.4, 
version=17.1.12, name=rhosp17/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, architecture=x86_64) Dec 6 04:07:46 localhost podman[100228]: 2025-12-06 09:07:46.922046484 +0000 UTC m=+0.228909857 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vendor=Red Hat, Inc., vcs-type=git, version=17.1.12, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, batch=17.1_20251118.1, url=https://www.redhat.com, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, build-date=2025-11-19T00:14:25Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.4, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn) Dec 6 04:07:46 localhost podman[100226]: 2025-12-06 09:07:46.932909108 +0000 UTC m=+0.243981469 container exec_died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://www.redhat.com, version=17.1.12, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., io.openshift.expose-services=, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, tcib_managed=true, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vcs-type=git, build-date=2025-11-18T23:34:05Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 6 04:07:46 localhost podman[100226]: unhealthy Dec 6 04:07:46 localhost podman[100228]: 2025-12-06 09:07:46.941240504 +0000 UTC m=+0.248103847 container exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1761123044, build-date=2025-11-19T00:14:25Z, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, batch=17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container) Dec 6 04:07:46 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Main process exited, code=exited, status=1/FAILURE Dec 6 04:07:46 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Failed with result 'exit-code'. Dec 6 04:07:46 localhost podman[100228]: unhealthy Dec 6 04:07:46 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Main process exited, code=exited, status=1/FAILURE Dec 6 04:07:46 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Failed with result 'exit-code'. 
Dec 6 04:07:46 localhost podman[100227]: 2025-12-06 09:07:46.969040938 +0000 UTC m=+0.278924583 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, container_name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, release=1761123044, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, version=17.1.12, name=rhosp17/openstack-qdrouterd, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4) Dec 6 04:07:46 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 04:07:47 localhost systemd[1]: Stopping User Manager for UID 1003... Dec 6 04:07:47 localhost systemd[35904]: Activating special unit Exit the Session... Dec 6 04:07:47 localhost systemd[35904]: Removed slice User Background Tasks Slice. Dec 6 04:07:47 localhost systemd[35904]: Stopped target Main User Target. Dec 6 04:07:47 localhost systemd[35904]: Stopped target Basic System. Dec 6 04:07:47 localhost systemd[35904]: Stopped target Paths. Dec 6 04:07:47 localhost systemd[35904]: Stopped target Sockets. Dec 6 04:07:47 localhost systemd[35904]: Stopped target Timers. Dec 6 04:07:47 localhost systemd[35904]: Stopped Mark boot as successful after the user session has run 2 minutes. Dec 6 04:07:47 localhost systemd[35904]: Stopped Daily Cleanup of User's Temporary Directories. Dec 6 04:07:47 localhost systemd[35904]: Closed D-Bus User Message Bus Socket. Dec 6 04:07:47 localhost systemd[35904]: Stopped Create User's Volatile Files and Directories. Dec 6 04:07:47 localhost systemd[35904]: Removed slice User Application Slice. Dec 6 04:07:47 localhost systemd[35904]: Reached target Shutdown. Dec 6 04:07:47 localhost systemd[35904]: Finished Exit the Session. 
Dec 6 04:07:47 localhost systemd[35904]: Reached target Exit the Session. Dec 6 04:07:47 localhost systemd[1]: user@1003.service: Deactivated successfully. Dec 6 04:07:47 localhost systemd[1]: Stopped User Manager for UID 1003. Dec 6 04:07:47 localhost systemd[1]: user@1003.service: Consumed 4.248s CPU time, read 0B from disk, written 7.0K to disk. Dec 6 04:07:47 localhost systemd[1]: Stopping User Runtime Directory /run/user/1003... Dec 6 04:07:47 localhost systemd[1]: user-runtime-dir@1003.service: Deactivated successfully. Dec 6 04:07:47 localhost systemd[1]: Stopped User Runtime Directory /run/user/1003. Dec 6 04:07:47 localhost systemd[1]: Removed slice User Slice of UID 1003. Dec 6 04:07:47 localhost systemd[1]: user-1003.slice: Consumed 7min 5.795s CPU time. Dec 6 04:07:47 localhost systemd[1]: run-user-1003.mount: Deactivated successfully. Dec 6 04:07:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007. 
Dec 6 04:07:49 localhost podman[100295]: 2025-12-06 09:07:49.929319872 +0000 UTC m=+0.081868007 container health_status 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, batch=17.1_20251118.1, container_name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, url=https://www.redhat.com, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', 
'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, release=1761123044) Dec 6 04:07:49 localhost podman[100295]: 2025-12-06 09:07:49.981785015 +0000 UTC m=+0.134333100 container exec_died 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, vcs-type=git, config_id=tripleo_step5, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, distribution-scope=public, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, 
version=17.1.12, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, container_name=nova_compute, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, 
vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Dec 6 04:07:49 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Deactivated successfully. Dec 6 04:07:57 localhost sshd[100322]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:07:59 localhost sshd[100324]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:08:11 localhost sshd[100325]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:08:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc. Dec 6 04:08:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. Dec 6 04:08:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b. Dec 6 04:08:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9. Dec 6 04:08:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. Dec 6 04:08:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6. 
Dec 6 04:08:11 localhost podman[100328]: 2025-12-06 09:08:11.957596493 +0000 UTC m=+0.107031971 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, managed_by=tripleo_ansible, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, com.redhat.component=openstack-collectd-container, name=rhosp17/openstack-collectd, url=https://www.redhat.com, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, batch=17.1_20251118.1, build-date=2025-11-18T22:51:28Z, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, container_name=collectd) Dec 6 04:08:11 localhost podman[100328]: 2025-12-06 09:08:11.967072474 +0000 UTC m=+0.116508002 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, tcib_managed=true, io.openshift.expose-services=, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:51:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, version=17.1.12, managed_by=tripleo_ansible, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=collectd, 
config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=1761123044) Dec 6 04:08:11 localhost systemd[1]: tmp-crun.bgvr0g.mount: Deactivated successfully. 
Dec 6 04:08:12 localhost podman[100335]: 2025-12-06 09:08:12.011930973 +0000 UTC m=+0.150711993 container health_status a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., distribution-scope=public, release=1761123044, managed_by=tripleo_ansible, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, tcib_managed=true, build-date=2025-11-19T00:11:48Z, architecture=x86_64, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, maintainer=OpenStack TripleO Team) Dec 6 04:08:12 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully. Dec 6 04:08:12 localhost podman[100327]: 2025-12-06 09:08:12.067889603 +0000 UTC m=+0.223645455 container health_status 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, batch=17.1_20251118.1, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.12, distribution-scope=public, build-date=2025-11-18T22:49:32Z, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, url=https://www.redhat.com, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': 
{'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git) Dec 6 04:08:12 localhost podman[100335]: 2025-12-06 09:08:12.072510925 +0000 UTC m=+0.211291975 container exec_died a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, build-date=2025-11-19T00:11:48Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_compute, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4) Dec 6 04:08:12 localhost systemd[1]: a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.service: Deactivated successfully. Dec 6 04:08:12 localhost podman[100347]: 2025-12-06 09:08:11.984308754 +0000 UTC m=+0.115629485 container health_status b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, name=rhosp17/openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:12:45Z, url=https://www.redhat.com, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, batch=17.1_20251118.1, io.openshift.expose-services=, distribution-scope=public, version=17.1.12, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-type=git, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 6 04:08:12 localhost podman[100327]: 2025-12-06 09:08:12.10422112 +0000 UTC m=+0.259976912 container exec_died 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.buildah.version=1.41.4, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, build-date=2025-11-18T22:49:32Z, container_name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., name=rhosp17/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': 
'/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, distribution-scope=public, tcib_managed=true, batch=17.1_20251118.1) Dec 6 04:08:12 localhost systemd[1]: 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.service: Deactivated successfully. 
Dec 6 04:08:12 localhost podman[100347]: 2025-12-06 09:08:12.118116007 +0000 UTC m=+0.249436748 container exec_died b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, batch=17.1_20251118.1, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, architecture=x86_64, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.openshift.expose-services=, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:12:45Z, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4) Dec 6 04:08:12 localhost podman[100341]: 2025-12-06 09:08:12.118730835 +0000 UTC m=+0.247603190 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, build-date=2025-11-18T23:44:13Z, vcs-type=git, io.buildah.version=1.41.4, architecture=x86_64, name=rhosp17/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, com.redhat.component=openstack-iscsid-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3) Dec 6 04:08:12 localhost podman[100341]: 2025-12-06 09:08:12.127469414 +0000 UTC m=+0.256341779 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-iscsid, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., container_name=iscsid, version=17.1.12, build-date=2025-11-18T23:44:13Z, io.buildah.version=1.41.4, tcib_managed=true, architecture=x86_64, com.redhat.component=openstack-iscsid-container) Dec 6 04:08:12 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully. 
Dec 6 04:08:12 localhost systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: Deactivated successfully. Dec 6 04:08:12 localhost podman[100333]: 2025-12-06 09:08:12.271042586 +0000 UTC m=+0.415002556 container health_status 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, architecture=x86_64, batch=17.1_20251118.1, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, version=17.1.12, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 
'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}) Dec 6 04:08:12 localhost podman[100333]: 2025-12-06 09:08:12.636371025 +0000 UTC m=+0.780331035 container exec_died 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, url=https://www.redhat.com, tcib_managed=true, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, io.buildah.version=1.41.4, container_name=nova_migration_target, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, version=17.1.12, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git) Dec 6 04:08:12 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: Deactivated successfully. Dec 6 04:08:13 localhost sshd[100456]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:08:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. Dec 6 04:08:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. 
Dec 6 04:08:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. Dec 6 04:08:17 localhost podman[100458]: 2025-12-06 09:08:17.923024612 +0000 UTC m=+0.079003049 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., io.buildah.version=1.41.4, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, architecture=x86_64, name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, io.openshift.expose-services=, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:34:05Z, container_name=ovn_controller, tcib_managed=true) Dec 6 04:08:17 localhost podman[100458]: 2025-12-06 09:08:17.940106267 +0000 UTC m=+0.096084744 container exec_died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, distribution-scope=public, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, config_id=tripleo_step4, version=17.1.12, vendor=Red Hat, Inc., build-date=2025-11-18T23:34:05Z, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 6 04:08:17 localhost podman[100458]: unhealthy Dec 6 04:08:17 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Main process exited, code=exited, status=1/FAILURE Dec 6 04:08:17 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Failed with result 'exit-code'. Dec 6 04:08:17 localhost podman[100460]: 2025-12-06 09:08:17.988108122 +0000 UTC m=+0.137679173 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., version=17.1.12, release=1761123044, io.buildah.version=1.41.4, distribution-scope=public, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, tcib_managed=true, managed_by=tripleo_ansible, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, 
config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, build-date=2025-11-19T00:14:25Z, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, io.openshift.expose-services=, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container) Dec 6 04:08:18 localhost podman[100460]: 
2025-12-06 09:08:18.031279369 +0000 UTC m=+0.180850430 container exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, distribution-scope=public, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://www.redhat.com, architecture=x86_64, io.buildah.version=1.41.4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat 
OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, maintainer=OpenStack TripleO Team, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, config_id=tripleo_step4, build-date=2025-11-19T00:14:25Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 6 04:08:18 localhost podman[100460]: unhealthy Dec 6 04:08:18 localhost systemd[1]: tmp-crun.jzlcgT.mount: Deactivated successfully. Dec 6 04:08:18 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Main process exited, code=exited, status=1/FAILURE Dec 6 04:08:18 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Failed with result 'exit-code'. 
Dec 6 04:08:18 localhost podman[100459]: 2025-12-06 09:08:18.052614864 +0000 UTC m=+0.205519147 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, batch=17.1_20251118.1, io.buildah.version=1.41.4, url=https://www.redhat.com, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, config_id=tripleo_step1, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 6 04:08:18 localhost podman[100459]: 2025-12-06 09:08:18.255348586 +0000 UTC m=+0.408252849 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, build-date=2025-11-18T22:49:46Z, container_name=metrics_qdr, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, release=1761123044, name=rhosp17/openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vendor=Red Hat, Inc., vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': 
False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.12, batch=17.1_20251118.1) Dec 6 04:08:18 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 04:08:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007. 
Dec 6 04:08:20 localhost podman[100526]: 2025-12-06 09:08:20.916610441 +0000 UTC m=+0.076438801 container health_status 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_compute, vendor=Red Hat, Inc., release=1761123044, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 
'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}) Dec 6 04:08:20 localhost podman[100526]: 2025-12-06 09:08:20.948138969 +0000 UTC m=+0.107967339 container exec_died 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, 
description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, tcib_managed=true, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team) Dec 6 04:08:20 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Deactivated successfully. Dec 6 04:08:36 localhost sshd[100550]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:08:36 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 6 04:08:36 localhost recover_tripleo_nova_virtqemud[100553]: 61814 Dec 6 04:08:36 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 6 04:08:36 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 6 04:08:41 localhost sshd[100554]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:08:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc. Dec 6 04:08:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. Dec 6 04:08:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b. Dec 6 04:08:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9. Dec 6 04:08:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. 
Dec 6 04:08:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6. Dec 6 04:08:42 localhost podman[100570]: 2025-12-06 09:08:42.882202873 +0000 UTC m=+0.099488949 container health_status 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, tcib_managed=true, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, release=1761123044, distribution-scope=public, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_id=tripleo_step4, vcs-type=git, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, build-date=2025-11-18T22:49:32Z, description=Red Hat OpenStack Platform 17.1 cron) Dec 6 04:08:42 localhost podman[100572]: 2025-12-06 09:08:42.938130601 +0000 UTC m=+0.147926827 container health_status 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, tcib_managed=true, distribution-scope=public, managed_by=tripleo_ansible, url=https://www.redhat.com, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., vcs-type=git, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 6 04:08:42 localhost systemd[1]: tmp-crun.3wfapI.mount: Deactivated successfully. 
Dec 6 04:08:42 localhost podman[100584]: 2025-12-06 09:08:42.987983393 +0000 UTC m=+0.182065366 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.component=openstack-iscsid-container, container_name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, tcib_managed=true, name=rhosp17/openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, vendor=Red Hat, Inc., 
managed_by=tripleo_ansible, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:44:13Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, version=17.1.12) Dec 6 04:08:43 localhost podman[100571]: 2025-12-06 09:08:43.045618245 +0000 UTC m=+0.260399874 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-type=git, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, managed_by=tripleo_ansible, batch=17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 
'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1761123044, version=17.1.12, config_id=tripleo_step3, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, build-date=2025-11-18T22:51:28Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, tcib_managed=true) Dec 6 04:08:43 localhost podman[100571]: 2025-12-06 09:08:43.079581039 +0000 UTC m=+0.294362668 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, build-date=2025-11-18T22:51:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, tcib_managed=true, io.openshift.expose-services=, batch=17.1_20251118.1, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=1761123044, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, architecture=x86_64, version=17.1.12, config_id=tripleo_step3, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd) Dec 6 04:08:43 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully. Dec 6 04:08:43 localhost podman[100578]: 2025-12-06 09:08:43.101895845 +0000 UTC m=+0.307537334 container health_status a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, release=1761123044, tcib_managed=true, name=rhosp17/openstack-ceilometer-compute, vcs-type=git, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.41.4, distribution-scope=public, maintainer=OpenStack TripleO Team) Dec 6 04:08:43 localhost podman[100578]: 2025-12-06 09:08:43.155977057 +0000 UTC m=+0.361618516 container exec_died a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, batch=17.1_20251118.1, build-date=2025-11-19T00:11:48Z, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., 
managed_by=tripleo_ansible, container_name=ceilometer_agent_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, vcs-type=git, version=17.1.12, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, io.buildah.version=1.41.4, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 6 04:08:43 localhost podman[100570]: 2025-12-06 09:08:43.169163662 +0000 UTC m=+0.386449808 container exec_died 
04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, build-date=2025-11-18T22:49:32Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-cron-container, release=1761123044, description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 cron, tcib_managed=true, container_name=logrotate_crond, managed_by=tripleo_ansible, version=17.1.12, config_id=tripleo_step4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, vcs-type=git, io.buildah.version=1.41.4, vendor=Red Hat, Inc., name=rhosp17/openstack-cron) Dec 6 04:08:43 localhost systemd[1]: a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.service: Deactivated successfully. Dec 6 04:08:43 localhost systemd[1]: 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.service: Deactivated successfully. Dec 6 04:08:43 localhost podman[100594]: 2025-12-06 09:08:43.157881846 +0000 UTC m=+0.347056068 container health_status b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, url=https://www.redhat.com, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, vcs-type=git, managed_by=tripleo_ansible, release=1761123044, build-date=2025-11-19T00:12:45Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi) Dec 6 04:08:43 localhost podman[100584]: 2025-12-06 09:08:43.221574393 +0000 UTC m=+0.415656376 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, architecture=x86_64, version=17.1.12, com.redhat.component=openstack-iscsid-container, vcs-type=git, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:44:13Z, summary=Red Hat OpenStack Platform 
17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.openshift.expose-services=, io.buildah.version=1.41.4, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., container_name=iscsid, name=rhosp17/openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20251118.1, distribution-scope=public) Dec 6 04:08:43 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully. 
Dec 6 04:08:43 localhost podman[100594]: 2025-12-06 09:08:43.238309268 +0000 UTC m=+0.427483460 container exec_died b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, 
distribution-scope=public, url=https://www.redhat.com, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, release=1761123044, build-date=2025-11-19T00:12:45Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676) Dec 6 04:08:43 localhost systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: Deactivated successfully. Dec 6 04:08:43 localhost podman[100572]: 2025-12-06 09:08:43.274905243 +0000 UTC m=+0.484701419 container exec_died 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step4, tcib_managed=true, managed_by=tripleo_ansible, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, io.buildah.version=1.41.4, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1) Dec 6 04:08:43 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: Deactivated successfully. Dec 6 04:08:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. Dec 6 04:08:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. Dec 6 04:08:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. 
Dec 6 04:08:48 localhost systemd[1]: tmp-crun.IVyE09.mount: Deactivated successfully. Dec 6 04:08:48 localhost podman[100763]: 2025-12-06 09:08:48.922432219 +0000 UTC m=+0.088642135 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, name=rhosp17/openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, release=1761123044, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, tcib_managed=true, version=17.1.12, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, distribution-scope=public, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, maintainer=OpenStack TripleO Team, architecture=x86_64, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 6 04:08:48 localhost podman[100763]: 2025-12-06 09:08:48.961749568 +0000 UTC m=+0.127959454 container exec_died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, name=rhosp17/openstack-ovn-controller, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, container_name=ovn_controller, url=https://www.redhat.com, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, vcs-type=git, build-date=2025-11-18T23:34:05Z, io.buildah.version=1.41.4, batch=17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., io.openshift.expose-services=, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, distribution-scope=public) Dec 6 04:08:48 localhost podman[100763]: unhealthy Dec 6 04:08:48 localhost podman[100764]: 2025-12-06 09:08:48.972488488 +0000 UTC m=+0.132139523 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, url=https://www.redhat.com, vendor=Red Hat, Inc., io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, io.buildah.version=1.41.4, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, vcs-type=git, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, 
architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd) Dec 6 04:08:48 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Main process exited, code=exited, status=1/FAILURE Dec 6 04:08:48 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Failed with result 'exit-code'. 
Dec 6 04:08:49 localhost podman[100765]: 2025-12-06 09:08:48.966936457 +0000 UTC m=+0.123734524 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, release=1761123044, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://www.redhat.com, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, build-date=2025-11-19T00:14:25Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, tcib_managed=true, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20251118.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c) Dec 6 04:08:49 localhost podman[100765]: 2025-12-06 09:08:49.046566404 +0000 UTC m=+0.203364521 container exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-type=git, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., version=17.1.12, managed_by=tripleo_ansible, url=https://www.redhat.com, container_name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, 
com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, build-date=2025-11-19T00:14:25Z, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, release=1761123044, maintainer=OpenStack TripleO Team) Dec 6 04:08:49 localhost podman[100765]: unhealthy Dec 6 04:08:49 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Main process exited, code=exited, status=1/FAILURE Dec 6 04:08:49 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Failed with result 'exit-code'. Dec 6 04:08:49 localhost podman[100764]: 2025-12-06 09:08:49.150018164 +0000 UTC m=+0.309669179 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, url=https://www.redhat.com, container_name=metrics_qdr, managed_by=tripleo_ansible, io.buildah.version=1.41.4, architecture=x86_64, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd) Dec 6 04:08:49 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 04:08:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007. 
Dec 6 04:08:51 localhost podman[100830]: 2025-12-06 09:08:51.916179582 +0000 UTC m=+0.080280718 container health_status 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, config_id=tripleo_step5, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vendor=Red Hat, Inc., version=17.1.12, vcs-type=git, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, url=https://www.redhat.com, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute) Dec 6 04:08:51 localhost podman[100830]: 2025-12-06 09:08:51.969217792 +0000 UTC m=+0.133318888 container exec_died 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, io.openshift.expose-services=, url=https://www.redhat.com, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', 
'/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_compute, version=17.1.12, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 6 04:08:51 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Deactivated successfully. Dec 6 04:08:58 localhost sshd[100856]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:09:11 localhost sshd[100858]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:09:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc. Dec 6 04:09:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. Dec 6 04:09:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b. Dec 6 04:09:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9. Dec 6 04:09:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. Dec 6 04:09:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6. Dec 6 04:09:13 localhost systemd[1]: tmp-crun.mkkIxW.mount: Deactivated successfully. 
Dec 6 04:09:13 localhost podman[100860]: 2025-12-06 09:09:13.936032102 +0000 UTC m=+0.092284307 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, url=https://www.redhat.com, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, com.redhat.component=openstack-collectd-container, version=17.1.12, container_name=collectd, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, vendor=Red Hat, Inc., name=rhosp17/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 6 04:09:13 localhost podman[100859]: 2025-12-06 09:09:13.978401774 +0000 UTC m=+0.136390643 container health_status 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, batch=17.1_20251118.1, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:32Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, name=rhosp17/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., release=1761123044, container_name=logrotate_crond, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, io.openshift.expose-services=, maintainer=OpenStack TripleO Team) Dec 6 04:09:13 localhost podman[100874]: 2025-12-06 09:09:13.998486231 +0000 UTC m=+0.140496089 container health_status b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, batch=17.1_20251118.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, distribution-scope=public, vendor=Red Hat, Inc., version=17.1.12, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-type=git, container_name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container) Dec 6 
04:09:14 localhost podman[100860]: 2025-12-06 09:09:14.026886474 +0000 UTC m=+0.183138699 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, architecture=x86_64, managed_by=tripleo_ansible, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2025-11-18T22:51:28Z, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=1761123044, container_name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-collectd, distribution-scope=public, io.buildah.version=1.41.4) Dec 6 04:09:14 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully. Dec 6 04:09:14 localhost podman[100861]: 2025-12-06 09:09:14.045090394 +0000 UTC m=+0.197764749 container health_status 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.buildah.version=1.41.4, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, managed_by=tripleo_ansible, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, distribution-scope=public, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, tcib_managed=true) Dec 6 04:09:14 localhost podman[100862]: 2025-12-06 09:09:13.950003442 +0000 UTC m=+0.098441797 container health_status a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, 
vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-compute, batch=17.1_20251118.1, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, url=https://www.redhat.com, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, version=17.1.12, io.buildah.version=1.41.4, architecture=x86_64, release=1761123044) Dec 6 04:09:14 localhost podman[100874]: 2025-12-06 09:09:14.056218906 +0000 UTC m=+0.198228794 container exec_died b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, architecture=x86_64, name=rhosp17/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, release=1761123044, container_name=ceilometer_agent_ipmi, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible, build-date=2025-11-19T00:12:45Z, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git) Dec 6 04:09:14 localhost systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: Deactivated successfully. 
Dec 6 04:09:14 localhost podman[100862]: 2025-12-06 09:09:14.083108092 +0000 UTC m=+0.231546507 container exec_died a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.buildah.version=1.41.4, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, managed_by=tripleo_ansible, version=17.1.12, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, batch=17.1_20251118.1, release=1761123044, 
distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676) Dec 6 04:09:14 localhost systemd[1]: a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.service: Deactivated successfully. Dec 6 04:09:14 localhost podman[100859]: 2025-12-06 09:09:14.116385695 +0000 UTC m=+0.274374624 container exec_died 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, architecture=x86_64, batch=17.1_20251118.1, version=17.1.12, config_id=tripleo_step4, vcs-type=git, 
container_name=logrotate_crond, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true) Dec 6 04:09:14 localhost podman[100869]: 2025-12-06 09:09:14.126384233 +0000 UTC m=+0.273637352 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-type=git, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, build-date=2025-11-18T23:44:13Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., architecture=x86_64, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, config_id=tripleo_step3, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp17/openstack-iscsid, release=1761123044, batch=17.1_20251118.1) Dec 6 04:09:14 
localhost systemd[1]: 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.service: Deactivated successfully. Dec 6 04:09:14 localhost podman[100869]: 2025-12-06 09:09:14.139045262 +0000 UTC m=+0.286298381 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, managed_by=tripleo_ansible, config_id=tripleo_step3, url=https://www.redhat.com, batch=17.1_20251118.1, com.redhat.component=openstack-iscsid-container, container_name=iscsid, distribution-scope=public, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., release=1761123044, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, maintainer=OpenStack TripleO Team, version=17.1.12, name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 6 04:09:14 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully. Dec 6 04:09:14 localhost podman[100861]: 2025-12-06 09:09:14.445084777 +0000 UTC m=+0.597759142 container exec_died 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, tcib_managed=true, maintainer=OpenStack TripleO Team, distribution-scope=public, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, batch=17.1_20251118.1, release=1761123044, vcs-type=git, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, container_name=nova_migration_target, io.buildah.version=1.41.4, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Dec 6 04:09:14 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: Deactivated successfully. 
Dec 6 04:09:14 localhost sshd[100993]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:09:17 localhost sshd[100996]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:09:18 localhost sshd[100998]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:09:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. Dec 6 04:09:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. Dec 6 04:09:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. Dec 6 04:09:19 localhost podman[101001]: 2025-12-06 09:09:19.923736474 +0000 UTC m=+0.084667233 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, name=rhosp17/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:46Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, managed_by=tripleo_ansible, container_name=metrics_qdr, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., url=https://www.redhat.com, io.openshift.expose-services=) Dec 6 04:09:19 localhost podman[101002]: 2025-12-06 09:09:19.975203816 +0000 UTC m=+0.133346169 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, url=https://www.redhat.com, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, description=Red Hat 
OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, architecture=x86_64) Dec 6 04:09:20 localhost podman[101002]: 2025-12-06 09:09:20.027139872 +0000 UTC m=+0.185282265 container exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., architecture=x86_64, managed_by=tripleo_ansible, container_name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1761123044, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, version=17.1.12, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z) Dec 6 04:09:20 localhost podman[101002]: unhealthy Dec 6 04:09:20 localhost podman[101000]: 2025-12-06 09:09:20.035689425 +0000 UTC m=+0.199137341 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, config_id=tripleo_step4, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp17/openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, architecture=x86_64, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, release=1761123044, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc.) 
Dec 6 04:09:20 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Main process exited, code=exited, status=1/FAILURE Dec 6 04:09:20 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Failed with result 'exit-code'. Dec 6 04:09:20 localhost podman[101000]: 2025-12-06 09:09:20.054273947 +0000 UTC m=+0.217721833 container exec_died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, build-date=2025-11-18T23:34:05Z, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_id=tripleo_step4, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 
ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.openshift.expose-services=, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, container_name=ovn_controller, batch=17.1_20251118.1, distribution-scope=public) Dec 6 04:09:20 localhost podman[101000]: unhealthy Dec 6 04:09:20 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Main process exited, code=exited, status=1/FAILURE Dec 6 04:09:20 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Failed with result 'exit-code'. Dec 6 04:09:20 localhost podman[101001]: 2025-12-06 09:09:20.118386867 +0000 UTC m=+0.279317636 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, version=17.1.12, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, distribution-scope=public, name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container) Dec 6 04:09:20 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 04:09:20 localhost systemd[1]: tmp-crun.KGT7Mg.mount: Deactivated successfully. Dec 6 04:09:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007. 
Dec 6 04:09:22 localhost podman[101069]: 2025-12-06 09:09:22.917933261 +0000 UTC m=+0.078748531 container health_status 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', 
'/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, architecture=x86_64, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step5, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, tcib_managed=true) Dec 6 04:09:22 localhost podman[101069]: 2025-12-06 09:09:22.975231112 +0000 UTC m=+0.136046382 container exec_died 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, release=1761123044, managed_by=tripleo_ansible, architecture=x86_64, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, 
com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, vcs-type=git, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, container_name=nova_compute, io.openshift.expose-services=, config_id=tripleo_step5) Dec 6 04:09:22 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Deactivated successfully. Dec 6 04:09:38 localhost sshd[101095]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:09:38 localhost sshd[101097]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:09:43 localhost sshd[101099]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:09:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc. Dec 6 04:09:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. Dec 6 04:09:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9. Dec 6 04:09:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. Dec 6 04:09:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6. Dec 6 04:09:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b. 
Dec 6 04:09:44 localhost podman[101116]: 2025-12-06 09:09:44.514824738 +0000 UTC m=+0.095414906 container health_status 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, name=rhosp17/openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, vendor=Red Hat, Inc., architecture=x86_64, maintainer=OpenStack TripleO Team, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z) Dec 6 04:09:44 localhost systemd[1]: tmp-crun.jbKgNZ.mount: Deactivated successfully. Dec 6 04:09:44 localhost podman[101180]: 2025-12-06 09:09:44.578158909 +0000 UTC m=+0.070410969 container health_status 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, io.buildah.version=1.41.4, config_id=tripleo_step4, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, vcs-type=git, version=17.1.12, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, tcib_managed=true) Dec 6 04:09:44 localhost podman[101131]: 2025-12-06 09:09:44.554897367 +0000 UTC m=+0.111044836 container health_status b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20251118.1, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:12:45Z, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, 
architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container) Dec 6 04:09:44 localhost podman[101117]: 2025-12-06 
09:09:44.62450528 +0000 UTC m=+0.200946831 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, distribution-scope=public, version=17.1.12, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-collectd, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2025-11-18T22:51:28Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, 
vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, release=1761123044, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, io.buildah.version=1.41.4, vendor=Red Hat, Inc., container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, managed_by=tripleo_ansible, vcs-type=git, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 6 04:09:44 localhost podman[101117]: 2025-12-06 09:09:44.634004772 +0000 UTC m=+0.210446343 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, batch=17.1_20251118.1, name=rhosp17/openstack-collectd, release=1761123044, io.openshift.expose-services=, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, architecture=x86_64, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:51:28Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4) Dec 6 04:09:44 localhost podman[101131]: 2025-12-06 09:09:44.640027516 +0000 UTC m=+0.196174985 container exec_died b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.buildah.version=1.41.4, vcs-type=git, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, version=17.1.12, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, batch=17.1_20251118.1, container_name=ceilometer_agent_ipmi, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible) Dec 6 04:09:44 localhost systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: Deactivated successfully. Dec 6 04:09:44 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully. Dec 6 04:09:44 localhost podman[101122]: 2025-12-06 09:09:44.680896639 +0000 UTC m=+0.250003586 container health_status a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.openshift.expose-services=, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, build-date=2025-11-19T00:11:48Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 
'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, config_id=tripleo_step4, architecture=x86_64, tcib_managed=true, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public) Dec 6 04:09:44 localhost podman[101116]: 2025-12-06 09:09:44.699500189 +0000 UTC m=+0.280090367 container exec_died 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, build-date=2025-11-18T22:49:32Z, container_name=logrotate_crond, architecture=x86_64, description=Red Hat 
OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, batch=17.1_20251118.1, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 6 04:09:44 localhost systemd[1]: 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.service: Deactivated successfully. 
Dec 6 04:09:44 localhost podman[101129]: 2025-12-06 09:09:44.711141966 +0000 UTC m=+0.276276510 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., version=17.1.12, url=https://www.redhat.com, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, managed_by=tripleo_ansible, architecture=x86_64, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, name=rhosp17/openstack-iscsid) Dec 6 04:09:44 localhost podman[101129]: 2025-12-06 09:09:44.718915674 +0000 UTC m=+0.284050238 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, managed_by=tripleo_ansible, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, distribution-scope=public, version=17.1.12, io.openshift.expose-services=, build-date=2025-11-18T23:44:13Z, url=https://www.redhat.com, vendor=Red Hat, Inc., name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, config_id=tripleo_step3, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 
'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-iscsid-container, vcs-type=git, release=1761123044) Dec 6 04:09:44 localhost podman[101122]: 2025-12-06 09:09:44.726047593 +0000 UTC m=+0.295154520 container exec_died a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1761123044, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.openshift.expose-services=, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.41.4, build-date=2025-11-19T00:11:48Z, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container) Dec 6 04:09:44 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: 
Deactivated successfully. Dec 6 04:09:44 localhost systemd[1]: a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.service: Deactivated successfully. Dec 6 04:09:44 localhost podman[101180]: 2025-12-06 09:09:44.925341163 +0000 UTC m=+0.417593283 container exec_died 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.buildah.version=1.41.4, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, description=Red 
Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, container_name=nova_migration_target) Dec 6 04:09:44 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: Deactivated successfully. Dec 6 04:09:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. Dec 6 04:09:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. Dec 6 04:09:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. Dec 6 04:09:50 localhost systemd[1]: tmp-crun.YokzOp.mount: Deactivated successfully. 
Dec 6 04:09:50 localhost podman[101310]: 2025-12-06 09:09:50.920542784 +0000 UTC m=+0.076124835 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_id=tripleo_step1, url=https://www.redhat.com, io.openshift.expose-services=, build-date=2025-11-18T22:49:46Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack 
osp-17.1 openstack-qdrouterd, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, container_name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, vendor=Red Hat, Inc., version=17.1.12, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4) Dec 6 04:09:50 localhost podman[101309]: 2025-12-06 09:09:50.978391637 +0000 UTC m=+0.133545215 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, build-date=2025-11-18T23:34:05Z, vendor=Red Hat, Inc., org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, 
maintainer=OpenStack TripleO Team, distribution-scope=public, container_name=ovn_controller, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, release=1761123044, version=17.1.12, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, name=rhosp17/openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, vcs-type=git) Dec 6 04:09:50 localhost podman[101311]: 2025-12-06 09:09:50.947146169 +0000 UTC m=+0.094995573 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, config_id=tripleo_step4, managed_by=tripleo_ansible, release=1761123044, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, config_data={'cgroupns': 'host', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 6 04:09:51 localhost podman[101309]: 2025-12-06 09:09:51.012936096 +0000 UTC m=+0.168089674 container exec_died 
1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-type=git, name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, io.openshift.expose-services=, build-date=2025-11-18T23:34:05Z, batch=17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, vendor=Red Hat, Inc., url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, 
maintainer=OpenStack TripleO Team, container_name=ovn_controller) Dec 6 04:09:51 localhost podman[101309]: unhealthy Dec 6 04:09:51 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Main process exited, code=exited, status=1/FAILURE Dec 6 04:09:51 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Failed with result 'exit-code'. Dec 6 04:09:51 localhost podman[101311]: 2025-12-06 09:09:51.033128425 +0000 UTC m=+0.180977829 container exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, tcib_managed=true, container_name=ovn_metadata_agent, url=https://www.redhat.com, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.buildah.version=1.41.4, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container) Dec 6 04:09:51 localhost podman[101311]: unhealthy Dec 6 04:09:51 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Main process exited, code=exited, status=1/FAILURE Dec 6 04:09:51 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Failed with result 'exit-code'. 
Dec 6 04:09:51 localhost podman[101310]: 2025-12-06 09:09:51.134222714 +0000 UTC m=+0.289804775 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:46Z, batch=17.1_20251118.1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, release=1761123044, 
vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, maintainer=OpenStack TripleO Team, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, vcs-type=git) Dec 6 04:09:51 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 04:09:52 localhost sshd[101374]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:09:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007. Dec 6 04:09:53 localhost systemd[1]: tmp-crun.uppUns.mount: Deactivated successfully. 
Dec 6 04:09:53 localhost podman[101376]: 2025-12-06 09:09:53.923263176 +0000 UTC m=+0.084334246 container health_status 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., vcs-type=git, container_name=nova_compute, config_id=tripleo_step5, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, version=17.1.12, url=https://www.redhat.com, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4) Dec 6 04:09:53 localhost podman[101376]: 2025-12-06 09:09:53.952600585 +0000 UTC m=+0.113671675 container exec_died 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_compute, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, version=17.1.12, name=rhosp17/openstack-nova-compute, vcs-type=git, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', 
'/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true) Dec 6 04:09:53 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Deactivated successfully. Dec 6 04:10:07 localhost sshd[101401]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:10:09 localhost ceph-osd[31726]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Dec 6 04:10:09 localhost ceph-osd[31726]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.1 total, 600.0 interval#012Cumulative writes: 5761 writes, 25K keys, 5761 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5761 writes, 760 syncs, 7.58 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Dec 6 04:10:12 localhost ceph-osd[32665]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Dec 6 04:10:12 localhost ceph-osd[32665]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.2 total, 600.0 interval#012Cumulative writes: 4879 writes, 21K keys, 4879 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 4879 writes, 669 syncs, 7.29 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 
H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Dec 6 04:10:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc. Dec 6 04:10:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. Dec 6 04:10:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9. Dec 6 04:10:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. Dec 6 04:10:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6. Dec 6 04:10:14 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 6 04:10:14 localhost recover_tripleo_nova_virtqemud[101431]: 61814 Dec 6 04:10:14 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 6 04:10:14 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. 
Dec 6 04:10:14 localhost podman[101410]: 2025-12-06 09:10:14.935367927 +0000 UTC m=+0.082190790 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, name=rhosp17/openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:44:13Z, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.4, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1761123044, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, architecture=x86_64) Dec 6 04:10:14 localhost podman[101410]: 2025-12-06 09:10:14.94395289 +0000 UTC m=+0.090775763 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, build-date=2025-11-18T23:44:13Z, name=rhosp17/openstack-iscsid, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, tcib_managed=true, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, version=17.1.12, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vendor=Red Hat, Inc., config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, container_name=iscsid, summary=Red Hat 
OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible) Dec 6 04:10:14 localhost systemd[1]: tmp-crun.H2WkKn.mount: Deactivated successfully. Dec 6 04:10:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b. Dec 6 04:10:14 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully. 
Dec 6 04:10:14 localhost podman[101403]: 2025-12-06 09:10:14.98310515 +0000 UTC m=+0.140728625 container health_status 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, name=rhosp17/openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond, vcs-type=git, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.openshift.expose-services=, architecture=x86_64, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1761123044, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., version=17.1.12, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z) Dec 6 04:10:14 localhost podman[101404]: 2025-12-06 09:10:14.992934692 +0000 UTC m=+0.145053768 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1761123044, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, io.openshift.expose-services=, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.12, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, distribution-scope=public, name=rhosp17/openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, batch=17.1_20251118.1) Dec 6 04:10:15 localhost podman[101404]: 2025-12-06 09:10:15.02811478 +0000 UTC m=+0.180233856 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, release=1761123044, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
collectd, tcib_managed=true, io.buildah.version=1.41.4, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, version=17.1.12, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., container_name=collectd, build-date=2025-11-18T22:51:28Z, architecture=x86_64, batch=17.1_20251118.1, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', 
'/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 6 04:10:15 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully. Dec 6 04:10:15 localhost podman[101412]: 2025-12-06 09:10:14.956542217 +0000 UTC m=+0.099241744 container health_status b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, vcs-type=git, io.buildah.version=1.41.4, distribution-scope=public, batch=17.1_20251118.1, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, build-date=2025-11-19T00:12:45Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, architecture=x86_64, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4) Dec 6 04:10:15 localhost podman[101477]: 2025-12-06 09:10:15.031211806 +0000 UTC m=+0.058269177 container health_status 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_migration_target, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, 
io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, release=1761123044, version=17.1.12, vcs-type=git, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}) Dec 6 04:10:15 localhost podman[101405]: 2025-12-06 09:10:15.085422928 +0000 UTC m=+0.234280424 container health_status a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, distribution-scope=public, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, config_id=tripleo_step4, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.buildah.version=1.41.4, build-date=2025-11-19T00:11:48Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676) Dec 6 04:10:15 localhost podman[101412]: 2025-12-06 09:10:15.088609395 +0000 UTC m=+0.231308862 container exec_died b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 
ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, container_name=ceilometer_agent_ipmi, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, build-date=2025-11-19T00:12:45Z, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 6 04:10:15 localhost systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: Deactivated successfully. 
Dec 6 04:10:15 localhost podman[101405]: 2025-12-06 09:10:15.101930274 +0000 UTC m=+0.250787780 container exec_died a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-compute-container, architecture=x86_64, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-compute, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, vendor=Red Hat, Inc., tcib_managed=true, url=https://www.redhat.com, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, container_name=ceilometer_agent_compute, 
org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 6 04:10:15 localhost systemd[1]: a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.service: Deactivated successfully. Dec 6 04:10:15 localhost podman[101403]: 2025-12-06 09:10:15.118312806 +0000 UTC m=+0.275936261 container exec_died 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, distribution-scope=public, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=logrotate_crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-type=git, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, io.openshift.expose-services=) Dec 6 04:10:15 localhost systemd[1]: 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.service: Deactivated successfully. 
Dec 6 04:10:15 localhost podman[101477]: 2025-12-06 09:10:15.371109536 +0000 UTC m=+0.398166967 container exec_died 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.expose-services=, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, version=17.1.12, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, vendor=Red Hat, Inc., release=1761123044, architecture=x86_64, vcs-type=git, build-date=2025-11-19T00:36:58Z, distribution-scope=public, container_name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, 
vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Dec 6 04:10:15 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: Deactivated successfully. Dec 6 04:10:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. Dec 6 04:10:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. Dec 6 04:10:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. 
Dec 6 04:10:21 localhost podman[101536]: 2025-12-06 09:10:21.93310214 +0000 UTC m=+0.086114441 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, container_name=ovn_metadata_agent, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, version=17.1.12, managed_by=tripleo_ansible, batch=17.1_20251118.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true) Dec 6 04:10:21 localhost systemd[1]: tmp-crun.P0TMfw.mount: Deactivated successfully. 
Dec 6 04:10:21 localhost podman[101535]: 2025-12-06 09:10:21.99603967 +0000 UTC m=+0.152574269 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, release=1761123044, io.buildah.version=1.41.4, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, architecture=x86_64, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, vcs-type=git, version=17.1.12, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible) Dec 6 04:10:22 localhost podman[101536]: 2025-12-06 09:10:22.001143566 +0000 UTC m=+0.154155847 container exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, release=1761123044, io.buildah.version=1.41.4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, container_name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, architecture=x86_64, 
konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, version=17.1.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4) Dec 6 04:10:22 localhost podman[101536]: unhealthy Dec 6 04:10:22 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Main process exited, code=exited, status=1/FAILURE Dec 6 04:10:22 localhost systemd[1]: 
87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Failed with result 'exit-code'. Dec 6 04:10:22 localhost podman[101534]: 2025-12-06 09:10:22.085339198 +0000 UTC m=+0.245221100 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, vendor=Red Hat, Inc., build-date=2025-11-18T23:34:05Z, io.openshift.expose-services=, vcs-type=git, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, 
version=17.1.12, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, release=1761123044) Dec 6 04:10:22 localhost podman[101534]: 2025-12-06 09:10:22.107113054 +0000 UTC m=+0.266994946 container exec_died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, vendor=Red Hat, Inc., config_id=tripleo_step4, tcib_managed=true, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, io.buildah.version=1.41.4, build-date=2025-11-18T23:34:05Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, batch=17.1_20251118.1, architecture=x86_64, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 6 04:10:22 localhost podman[101534]: unhealthy Dec 6 04:10:22 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Main process exited, code=exited, status=1/FAILURE Dec 6 04:10:22 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Failed with result 'exit-code'. Dec 6 04:10:22 localhost podman[101535]: 2025-12-06 09:10:22.216975333 +0000 UTC m=+0.373509882 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 
'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, build-date=2025-11-18T22:49:46Z, tcib_managed=true, vcs-type=git, version=17.1.12, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 6 04:10:22 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 04:10:23 localhost sshd[101604]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:10:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007. Dec 6 04:10:24 localhost systemd[1]: tmp-crun.CE9OrP.mount: Deactivated successfully. 
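The entries above interleave podman container events (`health_status`, `exec_died`) with the systemd unit results that follow them. A minimal sketch of pulling the container name and health status out of one of these journal lines — the regexes assume exactly the `container <event> <64-hex-id> (image=..., name=..., health_status=...)` shape seen here, and `parse_event` is our own helper name:

```python
import re

# Matches podman container-event lines like the ones above, e.g.
# "... container health_status 87083982c8... (image=..., name=ovn_metadata_agent, health_status=unhealthy, ...)"
EVENT_RE = re.compile(
    r"container (?P<event>health_status|exec_died) (?P<cid>[0-9a-f]{64}) "
    r"\(image=(?P<image>[^,]+), name=(?P<name>[^,]+)"
)
HEALTH_RE = re.compile(r"health_status=(?P<status>\w+)")

def parse_event(line: str):
    """Return (event, container name, health status or None), or None if not an event line."""
    m = EVENT_RE.search(line)
    if not m:
        return None
    h = HEALTH_RE.search(line)
    return (m.group("event"), m.group("name"), h.group("status") if h else None)

sample = ("Dec 6 04:10:21 localhost podman[101536]: 2025-12-06 09:10:21.93310214 +0000 UTC "
          "m=+0.086114441 container health_status "
          "87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 "
          "(image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, "
          "name=ovn_metadata_agent, health_status=unhealthy)")
print(parse_event(sample))  # ('health_status', 'ovn_metadata_agent', 'unhealthy')
```

Filtering a stream of such lines for `health_status == 'unhealthy'` is enough to spot the `ovn_metadata_agent` and `ovn_controller` failures before the corresponding `Main process exited, code=exited, status=1/FAILURE` unit messages appear.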
Dec 6 04:10:24 localhost podman[101606]: 2025-12-06 09:10:24.143943535 +0000 UTC m=+0.092086233 container health_status 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, maintainer=OpenStack TripleO Team, distribution-scope=public, version=17.1.12, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, tcib_managed=true, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, release=1761123044, name=rhosp17/openstack-nova-compute, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, config_id=tripleo_step5, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Dec 6 04:10:24 localhost podman[101606]: 2025-12-06 09:10:24.197170967 +0000 UTC m=+0.145313625 container exec_died 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 
'18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, version=17.1.12, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 
nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, container_name=nova_compute, io.buildah.version=1.41.4, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, batch=17.1_20251118.1) Dec 6 04:10:24 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Deactivated successfully. Dec 6 04:10:32 localhost sshd[101631]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:10:38 localhost sshd[101633]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:10:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc. Dec 6 04:10:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. Dec 6 04:10:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b. Dec 6 04:10:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9. Dec 6 04:10:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. Dec 6 04:10:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6. Dec 6 04:10:45 localhost systemd[1]: tmp-crun.WNAUQQ.mount: Deactivated successfully. Dec 6 04:10:45 localhost systemd[1]: tmp-crun.lunY6X.mount: Deactivated successfully. 
Dec 6 04:10:45 localhost podman[101636]: 2025-12-06 09:10:45.954169442 +0000 UTC m=+0.115600284 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git, release=1761123044, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, name=rhosp17/openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, architecture=x86_64, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, vendor=Red Hat, Inc., container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, build-date=2025-11-18T22:51:28Z, batch=17.1_20251118.1, com.redhat.component=openstack-collectd-container) Dec 6 04:10:45 localhost podman[101638]: 2025-12-06 09:10:45.998921334 +0000 UTC m=+0.155975863 container health_status a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_id=tripleo_step4, distribution-scope=public, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-compute-container, 
build-date=2025-11-19T00:11:48Z, url=https://www.redhat.com, version=17.1.12, name=rhosp17/openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, managed_by=tripleo_ansible, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 6 04:10:46 localhost podman[101636]: 2025-12-06 09:10:46.042071356 +0000 UTC m=+0.203502208 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, managed_by=tripleo_ansible, build-date=2025-11-18T22:51:28Z, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, container_name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, io.openshift.expose-services=, 
distribution-scope=public, io.buildah.version=1.41.4, vendor=Red Hat, Inc., name=rhosp17/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64) Dec 6 04:10:46 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully. Dec 6 04:10:46 localhost podman[101638]: 2025-12-06 09:10:46.052981291 +0000 UTC m=+0.210035810 container exec_died a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, build-date=2025-11-19T00:11:48Z, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, managed_by=tripleo_ansible, version=17.1.12, config_id=tripleo_step4, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, 
container_name=ceilometer_agent_compute, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, distribution-scope=public, architecture=x86_64) Dec 6 04:10:46 localhost podman[101637]: 2025-12-06 09:10:46.060022927 +0000 UTC m=+0.217282182 container health_status 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, 
vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.expose-services=, managed_by=tripleo_ansible, url=https://www.redhat.com, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
build-date=2025-11-19T00:36:58Z) Dec 6 04:10:46 localhost systemd[1]: a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.service: Deactivated successfully. Dec 6 04:10:46 localhost podman[101639]: 2025-12-06 09:10:45.922061487 +0000 UTC m=+0.080468537 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, container_name=iscsid, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, vcs-type=git, maintainer=OpenStack TripleO Team, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, managed_by=tripleo_ansible, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public) Dec 6 04:10:46 localhost podman[101639]: 2025-12-06 09:10:46.110071862 +0000 UTC m=+0.268478922 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', 
'/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-iscsid, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., build-date=2025-11-18T23:44:13Z, vcs-type=git, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, release=1761123044, config_id=tripleo_step3, distribution-scope=public, tcib_managed=true, container_name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid) Dec 6 04:10:46 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully. 
Dec 6 04:10:46 localhost podman[101635]: 2025-12-06 09:10:45.976829177 +0000 UTC m=+0.140282912 container health_status 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:32Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, distribution-scope=public, vendor=Red Hat, Inc., container_name=logrotate_crond, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, io.buildah.version=1.41.4, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 6 04:10:46 localhost podman[101635]: 2025-12-06 09:10:46.161063285 +0000 UTC m=+0.324517030 container exec_died 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers:/var/log/containers:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, batch=17.1_20251118.1, com.redhat.component=openstack-cron-container, managed_by=tripleo_ansible, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:32Z, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1761123044, config_id=tripleo_step4, container_name=logrotate_crond, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team) Dec 6 04:10:46 localhost podman[101643]: 2025-12-06 09:10:46.11556926 +0000 UTC m=+0.264769658 container health_status b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_ipmi, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, vcs-type=git, io.buildah.version=1.41.4, url=https://www.redhat.com, version=17.1.12, managed_by=tripleo_ansible, build-date=2025-11-19T00:12:45Z, io.openshift.expose-services=, 
org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, release=1761123044) Dec 6 04:10:46 localhost systemd[1]: 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.service: Deactivated successfully. 
Dec 6 04:10:46 localhost podman[101643]: 2025-12-06 09:10:46.200251826 +0000 UTC m=+0.349452284 container exec_died b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.openshift.expose-services=, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_ipmi, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 
17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, release=1761123044, build-date=2025-11-19T00:12:45Z, tcib_managed=true, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64) Dec 6 04:10:46 localhost systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: Deactivated successfully. Dec 6 04:10:46 localhost podman[101637]: 2025-12-06 09:10:46.452273492 +0000 UTC m=+0.609532787 container exec_died 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.expose-services=, url=https://www.redhat.com, batch=17.1_20251118.1, version=17.1.12, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, io.buildah.version=1.41.4, vcs-type=git, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, architecture=x86_64, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute) Dec 6 04:10:46 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: Deactivated successfully. Dec 6 04:10:46 localhost sshd[101827]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:10:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. Dec 6 04:10:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. 
Dec 6 04:10:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. Dec 6 04:10:52 localhost podman[101844]: 2025-12-06 09:10:52.92504181 +0000 UTC m=+0.087428901 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.buildah.version=1.41.4, release=1761123044, build-date=2025-11-18T23:34:05Z, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp17/openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, config_id=tripleo_step4, io.openshift.expose-services=, version=17.1.12, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc.) Dec 6 04:10:52 localhost podman[101846]: 2025-12-06 09:10:52.9709949 +0000 UTC m=+0.128794930 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, vendor=Red Hat, Inc., org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, release=1761123044, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, vcs-type=git, tcib_managed=true, config_id=tripleo_step4, container_name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container) Dec 6 04:10:53 localhost podman[101844]: 2025-12-06 09:10:53.022324693 +0000 UTC m=+0.184711784 container exec_died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_controller, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, io.buildah.version=1.41.4, vcs-type=git, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20251118.1, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, architecture=x86_64, io.openshift.expose-services=, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, managed_by=tripleo_ansible) Dec 6 04:10:53 localhost podman[101844]: unhealthy Dec 6 04:10:53 localhost podman[101845]: 2025-12-06 09:10:53.032254948 +0000 UTC m=+0.192442191 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, 
health_status=healthy, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, batch=17.1_20251118.1, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-11-18T22:49:46Z, distribution-scope=public, container_name=metrics_qdr, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 6 04:10:53 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Main process exited, code=exited, status=1/FAILURE Dec 6 04:10:53 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Failed with result 'exit-code'. Dec 6 04:10:53 localhost podman[101846]: 2025-12-06 09:10:53.060551255 +0000 UTC m=+0.218351255 container exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, distribution-scope=public, url=https://www.redhat.com, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, architecture=x86_64, release=1761123044, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, vcs-type=git, container_name=ovn_metadata_agent, io.openshift.expose-services=, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}) Dec 6 04:10:53 localhost podman[101846]: unhealthy Dec 6 04:10:53 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Main process exited, code=exited, status=1/FAILURE Dec 6 04:10:53 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: 
Failed with result 'exit-code'. Dec 6 04:10:53 localhost podman[101845]: 2025-12-06 09:10:53.239199531 +0000 UTC m=+0.399386794 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, name=rhosp17/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, container_name=metrics_qdr, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, vendor=Red Hat, Inc., io.buildah.version=1.41.4, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, distribution-scope=public, config_id=tripleo_step1, url=https://www.redhat.com, vcs-type=git, managed_by=tripleo_ansible) Dec 6 04:10:53 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 04:10:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007. Dec 6 04:10:54 localhost podman[101910]: 2025-12-06 09:10:54.932857243 +0000 UTC m=+0.084469851 container health_status 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, name=rhosp17/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., distribution-scope=public, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, 
io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vcs-type=git, architecture=x86_64, tcib_managed=true, io.buildah.version=1.41.4, managed_by=tripleo_ansible) Dec 6 04:10:54 localhost podman[101910]: 2025-12-06 09:10:54.966785483 +0000 UTC m=+0.118398091 container exec_died 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, maintainer=OpenStack TripleO Team, release=1761123044, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step5, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.buildah.version=1.41.4) Dec 6 04:10:54 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Deactivated successfully. Dec 6 04:10:58 localhost sshd[101935]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:11:02 localhost sshd[101937]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:11:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc. 
Dec 6 04:11:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. Dec 6 04:11:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b. Dec 6 04:11:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9. Dec 6 04:11:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. Dec 6 04:11:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6. Dec 6 04:11:16 localhost systemd[1]: tmp-crun.OuG76Y.mount: Deactivated successfully. Dec 6 04:11:16 localhost podman[101941]: 2025-12-06 09:11:16.929097014 +0000 UTC m=+0.081426897 container health_status 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, managed_by=tripleo_ansible, url=https://www.redhat.com, io.buildah.version=1.41.4, config_id=tripleo_step4, release=1761123044, batch=17.1_20251118.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 
nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute) Dec 6 04:11:17 localhost podman[101942]: 2025-12-06 09:11:17.002192365 +0000 UTC m=+0.150440983 container health_status a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, 
org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, vendor=Red Hat, Inc., url=https://www.redhat.com, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, architecture=x86_64, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, 
container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:11:48Z, tcib_managed=true, version=17.1.12, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 6 04:11:17 localhost podman[101939]: 2025-12-06 09:11:17.049121844 +0000 UTC m=+0.205992766 container health_status 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, url=https://www.redhat.com, vcs-type=git, tcib_managed=true, com.redhat.component=openstack-cron-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, release=1761123044, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4) Dec 6 04:11:17 localhost podman[101942]: 2025-12-06 09:11:17.050072723 +0000 UTC m=+0.198321271 container exec_died a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, tcib_managed=true, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:11:48Z, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, io.openshift.expose-services=, config_id=tripleo_step4, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, vcs-type=git, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, container_name=ceilometer_agent_compute, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676) Dec 6 04:11:17 localhost podman[101940]: 2025-12-06 09:11:16.955996758 +0000 UTC m=+0.109028292 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, release=1761123044, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, config_data={'cap_add': 
['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, url=https://www.redhat.com, config_id=tripleo_step3, name=rhosp17/openstack-collectd, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, io.openshift.expose-services=, vcs-type=git, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:51:28Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 6 04:11:17 localhost podman[101948]: 2025-12-06 09:11:17.06468533 +0000 UTC m=+0.206935674 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, release=1761123044, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, managed_by=tripleo_ansible, container_name=iscsid, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, io.buildah.version=1.41.4, config_id=tripleo_step3, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, tcib_managed=true, build-date=2025-11-18T23:44:13Z) Dec 6 04:11:17 localhost podman[101960]: 2025-12-06 09:11:17.018070032 +0000 UTC m=+0.156255211 container health_status b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, release=1761123044, distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.expose-services=, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, container_name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, version=17.1.12, url=https://www.redhat.com, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676) Dec 6 04:11:17 localhost podman[101940]: 2025-12-06 09:11:17.09205895 +0000 UTC m=+0.245090484 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, 
description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20251118.1, architecture=x86_64, build-date=2025-11-18T22:51:28Z, url=https://www.redhat.com, vcs-type=git, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, managed_by=tripleo_ansible, container_name=collectd, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3) Dec 6 04:11:17 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully. Dec 6 04:11:17 localhost podman[101939]: 2025-12-06 09:11:17.109585937 +0000 UTC m=+0.266456909 container exec_died 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vendor=Red Hat, Inc., io.buildah.version=1.41.4, name=rhosp17/openstack-cron, com.redhat.component=openstack-cron-container, distribution-scope=public, managed_by=tripleo_ansible, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, architecture=x86_64, io.openshift.expose-services=, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=logrotate_crond, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:32Z) Dec 6 04:11:17 localhost systemd[1]: a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.service: Deactivated successfully. Dec 6 04:11:17 localhost systemd[1]: 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.service: Deactivated successfully. 
Dec 6 04:11:17 localhost podman[101948]: 2025-12-06 09:11:17.148545601 +0000 UTC m=+0.290796005 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1761123044, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, architecture=x86_64, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, 
org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, build-date=2025-11-18T23:44:13Z, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible) Dec 6 04:11:17 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully. Dec 6 04:11:17 localhost podman[101960]: 2025-12-06 09:11:17.204521117 +0000 UTC m=+0.342706266 container exec_died b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, version=17.1.12, url=https://www.redhat.com, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.4, io.openshift.expose-services=, build-date=2025-11-19T00:12:45Z, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
ceilometer-ipmi, vcs-type=git, distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 6 04:11:17 localhost systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: Deactivated successfully. 
Dec 6 04:11:17 localhost podman[101941]: 2025-12-06 09:11:17.261176334 +0000 UTC m=+0.413506307 container exec_died 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, container_name=nova_migration_target, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, release=1761123044, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, maintainer=OpenStack TripleO Team) Dec 6 04:11:17 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: Deactivated successfully. Dec 6 04:11:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. Dec 6 04:11:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. Dec 6 04:11:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. Dec 6 04:11:23 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 6 04:11:23 localhost recover_tripleo_nova_virtqemud[102086]: 61814 Dec 6 04:11:23 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 6 04:11:23 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 6 04:11:23 localhost systemd[1]: tmp-crun.O0Tccd.mount: Deactivated successfully. 
Dec 6 04:11:23 localhost podman[102068]: 2025-12-06 09:11:23.943789128 +0000 UTC m=+0.098829911 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vcs-type=git, io.buildah.version=1.41.4, io.openshift.expose-services=, container_name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, batch=17.1_20251118.1, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 6 04:11:23 localhost systemd[1]: tmp-crun.wb1wxm.mount: Deactivated successfully. Dec 6 04:11:23 localhost podman[102069]: 2025-12-06 09:11:23.986068614 +0000 UTC m=+0.138593231 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, vcs-type=git, build-date=2025-11-19T00:14:25Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, 
architecture=x86_64, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, version=17.1.12, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4) Dec 6 04:11:24 localhost podman[102067]: 2025-12-06 09:11:24.023194441 +0000 UTC m=+0.182591378 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 
(image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, release=1761123044, tcib_managed=true, batch=17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, distribution-scope=public, build-date=2025-11-18T23:34:05Z, managed_by=tripleo_ansible, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, 
version=17.1.12) Dec 6 04:11:24 localhost podman[102069]: 2025-12-06 09:11:24.027154193 +0000 UTC m=+0.179678760 container exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, release=1761123044, vcs-type=git, io.buildah.version=1.41.4, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, tcib_managed=true, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z, version=17.1.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, distribution-scope=public, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 6 04:11:24 localhost podman[102069]: unhealthy Dec 6 04:11:24 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Main process exited, code=exited, status=1/FAILURE Dec 6 04:11:24 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Failed with result 'exit-code'. 
Dec 6 04:11:24 localhost podman[102067]: 2025-12-06 09:11:24.089426862 +0000 UTC m=+0.248823809 container exec_died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, distribution-scope=public, container_name=ovn_controller, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:34:05Z, url=https://www.redhat.com, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, tcib_managed=true) Dec 6 04:11:24 localhost podman[102067]: unhealthy Dec 6 04:11:24 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Main process exited, code=exited, status=1/FAILURE Dec 6 04:11:24 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Failed with result 'exit-code'. Dec 6 04:11:24 localhost podman[102068]: 2025-12-06 09:11:24.165163154 +0000 UTC m=+0.320203867 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, release=1761123044, io.openshift.expose-services=, build-date=2025-11-18T22:49:46Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, version=17.1.12, io.buildah.version=1.41.4, architecture=x86_64, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-qdrouterd, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}) Dec 6 04:11:24 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 04:11:25 localhost sshd[102137]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:11:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007. 
Dec 6 04:11:25 localhost podman[102139]: 2025-12-06 09:11:25.902648439 +0000 UTC m=+0.069086959 container health_status 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_compute, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, url=https://www.redhat.com, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, version=17.1.12, config_id=tripleo_step5, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 6 04:11:25 localhost podman[102139]: 2025-12-06 09:11:25.963200705 +0000 UTC m=+0.129639225 container exec_died 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, architecture=x86_64, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 
'18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, 
io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, config_id=tripleo_step5, container_name=nova_compute, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, build-date=2025-11-19T00:36:58Z) Dec 6 04:11:25 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Deactivated successfully. Dec 6 04:11:29 localhost sshd[102165]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:11:35 localhost sshd[102167]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:11:47 localhost sshd[102169]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:11:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc. Dec 6 04:11:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. Dec 6 04:11:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b. Dec 6 04:11:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9. Dec 6 04:11:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. Dec 6 04:11:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6. 
Dec 6 04:11:47 localhost podman[102173]: 2025-12-06 09:11:47.929489751 +0000 UTC m=+0.073014590 container health_status 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, batch=17.1_20251118.1, managed_by=tripleo_ansible, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, config_id=tripleo_step4, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=) Dec 6 04:11:47 localhost podman[102172]: 2025-12-06 09:11:47.989127079 +0000 UTC m=+0.134732552 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, container_name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, batch=17.1_20251118.1, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2025-11-18T22:51:28Z, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.buildah.version=1.41.4, config_data={'cap_add': ['IPC_LOCK'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd) Dec 6 04:11:48 localhost podman[102172]: 2025-12-06 09:11:48.001974053 +0000 UTC m=+0.147579556 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, batch=17.1_20251118.1, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, vendor=Red Hat, Inc., distribution-scope=public, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-type=git, io.openshift.tags=rhosp osp 
openstack osp-17.1 openstack-collectd, build-date=2025-11-18T22:51:28Z, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, url=https://www.redhat.com, container_name=collectd, name=rhosp17/openstack-collectd, version=17.1.12, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd) Dec 6 04:11:48 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully. Dec 6 04:11:48 localhost podman[102175]: 2025-12-06 09:11:48.046735475 +0000 UTC m=+0.187586022 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, architecture=x86_64, release=1761123044, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-iscsid-container, build-date=2025-11-18T23:44:13Z, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, distribution-scope=public, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.openshift.expose-services=, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}) Dec 6 04:11:48 localhost podman[102171]: 2025-12-06 09:11:48.105261559 +0000 UTC m=+0.254088380 container health_status 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, container_name=logrotate_crond, build-date=2025-11-18T22:49:32Z, vcs-type=git, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.4, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.openshift.expose-services=) Dec 6 04:11:48 localhost podman[102171]: 2025-12-06 09:11:48.136560929 +0000 UTC m=+0.285387750 container exec_died 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-cron, distribution-scope=public, com.redhat.component=openstack-cron-container, architecture=x86_64, build-date=2025-11-18T22:49:32Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, vcs-type=git, tcib_managed=true, container_name=logrotate_crond, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron) Dec 6 04:11:48 localhost systemd[1]: 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.service: Deactivated successfully. Dec 6 04:11:48 localhost podman[102174]: 2025-12-06 09:11:48.159071719 +0000 UTC m=+0.300743440 container health_status a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, build-date=2025-11-19T00:11:48Z, url=https://www.redhat.com, batch=17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, release=1761123044, io.openshift.expose-services=, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-type=git) Dec 6 04:11:48 localhost podman[102175]: 2025-12-06 09:11:48.183073534 +0000 UTC m=+0.323924071 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, batch=17.1_20251118.1, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=openstack-iscsid-container, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-type=git, 
name=rhosp17/openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, build-date=2025-11-18T23:44:13Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com) Dec 6 04:11:48 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully. 
Dec 6 04:11:48 localhost podman[102174]: 2025-12-06 09:11:48.215046555 +0000 UTC m=+0.356718266 container exec_died a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, tcib_managed=true, io.buildah.version=1.41.4, url=https://www.redhat.com, io.openshift.expose-services=, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, architecture=x86_64, version=17.1.12, release=1761123044, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 6 04:11:48 localhost systemd[1]: a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.service: Deactivated successfully. Dec 6 04:11:48 localhost podman[102181]: 2025-12-06 09:11:48.253120861 +0000 UTC m=+0.390561963 container health_status b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_id=tripleo_step4, build-date=2025-11-19T00:12:45Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, url=https://www.redhat.com, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, 
release=1761123044, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 6 04:11:48 localhost podman[102181]: 2025-12-06 09:11:48.280891863 +0000 UTC m=+0.418332955 container exec_died b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_id=tripleo_step4, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, container_name=ceilometer_agent_ipmi, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, build-date=2025-11-19T00:12:45Z, io.openshift.expose-services=, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20251118.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, 
distribution-scope=public, architecture=x86_64, vcs-type=git) Dec 6 04:11:48 localhost sshd[102300]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:11:48 localhost systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: Deactivated successfully. Dec 6 04:11:48 localhost podman[102173]: 2025-12-06 09:11:48.300813233 +0000 UTC m=+0.444338152 container exec_died 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, 
vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, batch=17.1_20251118.1, container_name=nova_migration_target, distribution-scope=public, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, architecture=x86_64, io.buildah.version=1.41.4, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Dec 6 04:11:48 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: Deactivated successfully. Dec 6 04:11:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. Dec 6 04:11:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. Dec 6 04:11:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. Dec 6 04:11:54 localhost systemd[1]: tmp-crun.kK5qeh.mount: Deactivated successfully. 
Dec 6 04:11:55 localhost podman[102432]: 2025-12-06 09:11:55.013301634 +0000 UTC m=+0.173026285 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, vcs-type=git, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, architecture=x86_64, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, version=17.1.12, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller) Dec 6 04:11:55 localhost podman[102433]: 2025-12-06 09:11:54.99980353 +0000 UTC m=+0.159063477 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:46Z, url=https://www.redhat.com, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.openshift.expose-services=, config_id=tripleo_step1, release=1761123044, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 6 04:11:55 localhost podman[102432]: 2025-12-06 09:11:55.055064094 +0000 UTC m=+0.214788655 container exec_died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, release=1761123044, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, distribution-scope=public, version=17.1.12, tcib_managed=true, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:34:05Z, container_name=ovn_controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, com.redhat.component=openstack-ovn-controller-container, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 6 04:11:55 localhost podman[102432]: unhealthy Dec 6 04:11:55 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Main process exited, code=exited, status=1/FAILURE Dec 6 04:11:55 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Failed with result 'exit-code'. 
Dec 6 04:11:55 localhost podman[102434]: 2025-12-06 09:11:55.017846653 +0000 UTC m=+0.175822052 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, architecture=x86_64, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, container_name=ovn_metadata_agent, vcs-type=git, release=1761123044, tcib_managed=true, build-date=2025-11-19T00:14:25Z, url=https://www.redhat.com, vendor=Red Hat, Inc., io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 6 04:11:55 localhost podman[102434]: 2025-12-06 09:11:55.100219289 +0000 UTC m=+0.258194698 container exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, release=1761123044, io.openshift.expose-services=, build-date=2025-11-19T00:14:25Z, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, url=https://www.redhat.com, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, maintainer=OpenStack TripleO Team, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 
17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 6 04:11:55 localhost podman[102434]: unhealthy Dec 6 04:11:55 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Main process exited, code=exited, status=1/FAILURE Dec 6 04:11:55 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Failed with result 'exit-code'. Dec 6 04:11:55 localhost podman[102433]: 2025-12-06 09:11:55.188453393 +0000 UTC m=+0.347713340 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 
17.1 qdrouterd, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.4, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, tcib_managed=true, url=https://www.redhat.com, name=rhosp17/openstack-qdrouterd, distribution-scope=public, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, build-date=2025-11-18T22:49:46Z) Dec 6 04:11:55 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 04:11:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007. 
Dec 6 04:11:56 localhost podman[102502]: 2025-12-06 09:11:56.916412306 +0000 UTC m=+0.079166828 container health_status 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, release=1761123044, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, tcib_managed=true, url=https://www.redhat.com, vcs-type=git) Dec 6 04:11:56 localhost podman[102502]: 2025-12-06 09:11:56.945059714 +0000 UTC m=+0.107814176 container exec_died 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 
'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, config_id=tripleo_step5, batch=17.1_20251118.1, release=1761123044, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., distribution-scope=public, version=17.1.12, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team) Dec 6 04:11:56 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Deactivated successfully. Dec 6 04:11:59 localhost sshd[102529]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:12:05 localhost sshd[102531]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:12:11 localhost sshd[102533]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:12:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc. Dec 6 04:12:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. Dec 6 04:12:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b. Dec 6 04:12:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9. Dec 6 04:12:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. Dec 6 04:12:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6. 
Dec 6 04:12:18 localhost podman[102538]: 2025-12-06 09:12:18.964298879 +0000 UTC m=+0.112407687 container health_status 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, distribution-scope=public, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, container_name=nova_migration_target, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, release=1761123044, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, config_id=tripleo_step4, io.buildah.version=1.41.4) Dec 6 04:12:19 localhost podman[102537]: 2025-12-06 09:12:19.002390347 +0000 UTC m=+0.157130068 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, managed_by=tripleo_ansible, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, url=https://www.redhat.com, io.buildah.version=1.41.4, vcs-type=git, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 
'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, release=1761123044, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, com.redhat.component=openstack-collectd-container, batch=17.1_20251118.1, config_id=tripleo_step3, build-date=2025-11-18T22:51:28Z) Dec 6 04:12:19 localhost podman[102537]: 2025-12-06 09:12:19.012970852 +0000 UTC m=+0.167710573 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, managed_by=tripleo_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, description=Red Hat 
OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, name=rhosp17/openstack-collectd, release=1761123044, version=17.1.12, io.openshift.expose-services=, distribution-scope=public, batch=17.1_20251118.1, container_name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', 
'/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 6 04:12:19 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully. Dec 6 04:12:19 localhost podman[102536]: 2025-12-06 09:12:19.054539195 +0000 UTC m=+0.210739681 container health_status 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.component=openstack-cron-container, version=17.1.12, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
container_name=logrotate_crond, batch=17.1_20251118.1, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, architecture=x86_64, distribution-scope=public, name=rhosp17/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_id=tripleo_step4, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, url=https://www.redhat.com, tcib_managed=true, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 6 04:12:19 localhost podman[102561]: 2025-12-06 09:12:19.115207536 +0000 UTC m=+0.245366623 container health_status b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, release=1761123044, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, vcs-type=git, build-date=2025-11-19T00:12:45Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.buildah.version=1.41.4, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
managed_by=tripleo_ansible, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container) Dec 6 04:12:19 localhost podman[102536]: 2025-12-06 09:12:19.138287123 +0000 UTC m=+0.294487659 container exec_died 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, architecture=x86_64, managed_by=tripleo_ansible, release=1761123044, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.12, build-date=2025-11-18T22:49:32Z, vendor=Red Hat, Inc., config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=logrotate_crond, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, distribution-scope=public, 
name=rhosp17/openstack-cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.component=openstack-cron-container) Dec 6 04:12:19 localhost podman[102561]: 2025-12-06 09:12:19.150128797 +0000 UTC m=+0.280287934 container exec_died b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, version=17.1.12, vcs-type=git, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:12:45Z, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, config_id=tripleo_step4, tcib_managed=true, batch=17.1_20251118.1, vendor=Red Hat, Inc., vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com) Dec 6 04:12:19 localhost systemd[1]: 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.service: Deactivated successfully. Dec 6 04:12:19 localhost podman[102544]: 2025-12-06 09:12:19.163396653 +0000 UTC m=+0.308505429 container health_status a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, release=1761123044, container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, batch=17.1_20251118.1, 
vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-compute-container, build-date=2025-11-19T00:11:48Z, maintainer=OpenStack TripleO Team, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4) Dec 6 04:12:19 localhost systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: Deactivated successfully. 
Dec 6 04:12:19 localhost podman[102544]: 2025-12-06 09:12:19.200118038 +0000 UTC m=+0.345226844 container exec_died a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, managed_by=tripleo_ansible, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, 
description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-compute, url=https://www.redhat.com, distribution-scope=public, config_id=tripleo_step4, tcib_managed=true, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, io.openshift.expose-services=) Dec 6 04:12:19 localhost systemd[1]: a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.service: Deactivated successfully. Dec 6 04:12:19 localhost podman[102550]: 2025-12-06 09:12:19.217537753 +0000 UTC m=+0.355325864 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, managed_by=tripleo_ansible, vcs-type=git, batch=17.1_20251118.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, distribution-scope=public, tcib_managed=true, io.openshift.expose-services=, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-iscsid-container, build-date=2025-11-18T23:44:13Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 6 04:12:19 localhost podman[102550]: 2025-12-06 09:12:19.232141141 +0000 UTC m=+0.369929262 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, architecture=x86_64, build-date=2025-11-18T23:44:13Z, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, 
batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.buildah.version=1.41.4, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, vcs-type=git, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=iscsid, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
vendor=Red Hat, Inc.) Dec 6 04:12:19 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully. Dec 6 04:12:19 localhost podman[102538]: 2025-12-06 09:12:19.334044604 +0000 UTC m=+0.482153382 container exec_died 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, name=rhosp17/openstack-nova-compute, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.buildah.version=1.41.4, container_name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, 
vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, config_id=tripleo_step4, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, vcs-type=git, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 6 04:12:19 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: Deactivated successfully. Dec 6 04:12:25 localhost sshd[102673]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:12:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. Dec 6 04:12:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. Dec 6 04:12:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. 
Dec 6 04:12:25 localhost podman[102676]: 2025-12-06 09:12:25.92279456 +0000 UTC m=+0.076227757 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=metrics_qdr, build-date=2025-11-18T22:49:46Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, config_id=tripleo_step1, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, vendor=Red Hat, Inc., tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.4, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 6 04:12:25 localhost systemd[1]: tmp-crun.HFJrYT.mount: Deactivated successfully. Dec 6 04:12:25 localhost podman[102675]: 2025-12-06 09:12:25.992995052 +0000 UTC m=+0.145911474 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, name=rhosp17/openstack-ovn-controller, architecture=x86_64, vendor=Red Hat, Inc., tcib_managed=true, 
io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, release=1761123044, io.openshift.expose-services=, version=17.1.12, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, build-date=2025-11-18T23:34:05Z, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, container_name=ovn_controller, managed_by=tripleo_ansible) Dec 6 04:12:26 localhost podman[102677]: 2025-12-06 09:12:26.032790582 +0000 UTC m=+0.179361050 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, container_name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_id=tripleo_step4, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, build-date=2025-11-19T00:14:25Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, vcs-type=git, distribution-scope=public, maintainer=OpenStack TripleO Team, release=1761123044, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 6 04:12:26 
localhost podman[102675]: 2025-12-06 09:12:26.044279095 +0000 UTC m=+0.197195507 container exec_died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, url=https://www.redhat.com, release=1761123044, version=17.1.12, io.openshift.expose-services=, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, distribution-scope=public, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, konflux.additional-tags=17.1.12 
17.1_20251118.1, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., config_id=tripleo_step4, vcs-type=git) Dec 6 04:12:26 localhost podman[102675]: unhealthy Dec 6 04:12:26 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Main process exited, code=exited, status=1/FAILURE Dec 6 04:12:26 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Failed with result 'exit-code'. Dec 6 04:12:26 localhost podman[102677]: 2025-12-06 09:12:26.099536148 +0000 UTC m=+0.246106606 container exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, managed_by=tripleo_ansible, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, release=1761123044, batch=17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, version=17.1.12, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z) Dec 6 04:12:26 localhost podman[102677]: unhealthy Dec 6 04:12:26 localhost podman[102676]: 2025-12-06 09:12:26.111187445 +0000 UTC m=+0.264620672 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, managed_by=tripleo_ansible, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.41.4, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, version=17.1.12, architecture=x86_64, tcib_managed=true, batch=17.1_20251118.1, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git) Dec 6 04:12:26 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Main process exited, code=exited, status=1/FAILURE Dec 6 04:12:26 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Failed with result 'exit-code'. Dec 6 04:12:26 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 04:12:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007. Dec 6 04:12:27 localhost podman[102744]: 2025-12-06 09:12:27.075056915 +0000 UTC m=+0.067672996 container health_status 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.openshift.expose-services=, distribution-scope=public, container_name=nova_compute, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, version=17.1.12, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, 
name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4) Dec 6 04:12:27 localhost podman[102744]: 2025-12-06 09:12:27.096953875 +0000 UTC m=+0.089569926 container exec_died 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, version=17.1.12, architecture=x86_64, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step5, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., vcs-type=git, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 6 04:12:27 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Deactivated successfully. Dec 6 04:12:29 localhost sshd[102768]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:12:30 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 6 04:12:30 localhost recover_tripleo_nova_virtqemud[102771]: 61814 Dec 6 04:12:30 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 6 04:12:30 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. 
Dec 6 04:12:44 localhost sshd[102772]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:12:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc. Dec 6 04:12:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. Dec 6 04:12:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b. Dec 6 04:12:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9. Dec 6 04:12:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. Dec 6 04:12:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6. Dec 6 04:12:49 localhost systemd[1]: tmp-crun.QU8GNj.mount: Deactivated successfully. 
Dec 6 04:12:49 localhost podman[102776]: 2025-12-06 09:12:49.948443517 +0000 UTC m=+0.101421091 container health_status 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, name=rhosp17/openstack-nova-compute, vcs-type=git, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, managed_by=tripleo_ansible, version=17.1.12, release=1761123044, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-nova-compute, config_id=tripleo_step4, container_name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 6 04:12:49 localhost systemd[1]: tmp-crun.3mQvPz.mount: Deactivated successfully. Dec 6 04:12:49 localhost podman[102775]: 2025-12-06 09:12:49.940974588 +0000 UTC m=+0.098790580 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, batch=17.1_20251118.1, version=17.1.12, io.buildah.version=1.41.4, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=1761123044, managed_by=tripleo_ansible, build-date=2025-11-18T22:51:28Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, container_name=collectd, com.redhat.component=openstack-collectd-container, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git, config_id=tripleo_step3) Dec 6 04:12:49 localhost podman[102790]: 2025-12-06 09:12:49.996194291 +0000 UTC m=+0.139853679 container health_status b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vendor=Red Hat, Inc., version=17.1.12, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO 
Team, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, batch=17.1_20251118.1, release=1761123044, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, build-date=2025-11-19T00:12:45Z, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4) Dec 6 04:12:50 localhost podman[102790]: 2025-12-06 09:12:50.019037511 +0000 UTC m=+0.162696909 container exec_died b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, version=17.1.12, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:12:45Z, summary=Red Hat OpenStack Platform 17.1 
ceilometer-ipmi, container_name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676) Dec 6 04:12:50 localhost systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: Deactivated successfully. Dec 6 04:12:50 localhost podman[102777]: 2025-12-06 09:12:50.042261883 +0000 UTC m=+0.191445200 container health_status a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, architecture=x86_64, build-date=2025-11-19T00:11:48Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, vcs-type=git, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, io.buildah.version=1.41.4, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 6 04:12:50 localhost podman[102777]: 2025-12-06 09:12:50.062001578 +0000 UTC m=+0.211184875 container exec_died a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp 
osp openstack osp-17.1 openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, build-date=2025-11-19T00:11:48Z, io.buildah.version=1.41.4, config_id=tripleo_step4, version=17.1.12, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, io.openshift.expose-services=, architecture=x86_64, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp17/openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, batch=17.1_20251118.1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 6 04:12:50 localhost podman[102774]: 2025-12-06 09:12:49.968855632 +0000 UTC m=+0.126578201 container health_status 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.12, 
batch=17.1_20251118.1, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, io.buildah.version=1.41.4, architecture=x86_64, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., name=rhosp17/openstack-cron, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, build-date=2025-11-18T22:49:32Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, maintainer=OpenStack TripleO Team) Dec 6 04:12:50 localhost systemd[1]: a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.service: Deactivated successfully. 
Dec 6 04:12:50 localhost podman[102774]: 2025-12-06 09:12:50.103032936 +0000 UTC m=+0.260755515 container exec_died 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.buildah.version=1.41.4, managed_by=tripleo_ansible, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, build-date=2025-11-18T22:49:32Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, name=rhosp17/openstack-cron) Dec 6 04:12:50 localhost systemd[1]: 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.service: Deactivated successfully. Dec 6 04:12:50 localhost podman[102775]: 2025-12-06 09:12:50.123908715 +0000 UTC m=+0.281724727 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, container_name=collectd, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, distribution-scope=public, batch=17.1_20251118.1, config_id=tripleo_step3, io.openshift.expose-services=, architecture=x86_64, release=1761123044, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2025-11-18T22:51:28Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 6 04:12:50 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully. 
Dec 6 04:12:50 localhost podman[102778]: 2025-12-06 09:12:50.107951136 +0000 UTC m=+0.250307274 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., name=rhosp17/openstack-iscsid, version=17.1.12, config_id=tripleo_step3, managed_by=tripleo_ansible, io.openshift.expose-services=, url=https://www.redhat.com, architecture=x86_64, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-type=git, release=1761123044, batch=17.1_20251118.1, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, build-date=2025-11-18T23:44:13Z) Dec 6 04:12:50 localhost podman[102778]: 2025-12-06 09:12:50.190020753 +0000 UTC m=+0.332376841 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, managed_by=tripleo_ansible, io.openshift.expose-services=, vcs-type=git, name=rhosp17/openstack-iscsid, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., build-date=2025-11-18T23:44:13Z, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, batch=17.1_20251118.1, io.buildah.version=1.41.4, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 6 04:12:50 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully. 
Dec 6 04:12:50 localhost podman[102776]: 2025-12-06 09:12:50.297633761 +0000 UTC m=+0.450611305 container exec_died 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, vcs-type=git, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_migration_target, io.buildah.version=1.41.4, batch=17.1_20251118.1, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, 
io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step4) Dec 6 04:12:50 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: Deactivated successfully. Dec 6 04:12:52 localhost sshd[102900]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:12:52 localhost sshd[102902]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:12:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. Dec 6 04:12:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. Dec 6 04:12:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. 
Dec 6 04:12:56 localhost podman[102980]: 2025-12-06 09:12:56.922561455 +0000 UTC m=+0.081084597 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, architecture=x86_64, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, container_name=ovn_controller, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, name=rhosp17/openstack-ovn-controller, vcs-type=git, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, distribution-scope=public, release=1761123044, version=17.1.12, 
maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4) Dec 6 04:12:56 localhost podman[102980]: 2025-12-06 09:12:56.935045878 +0000 UTC m=+0.093569020 container exec_died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, batch=17.1_20251118.1, distribution-scope=public, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, version=17.1.12, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, architecture=x86_64, container_name=ovn_controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, vendor=Red Hat, Inc., io.buildah.version=1.41.4, managed_by=tripleo_ansible) Dec 6 04:12:56 localhost podman[102980]: unhealthy Dec 6 04:12:56 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Main process exited, code=exited, status=1/FAILURE Dec 6 04:12:56 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Failed with result 'exit-code'. Dec 6 04:12:56 localhost podman[102986]: 2025-12-06 09:12:56.992112327 +0000 UTC m=+0.138380984 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, release=1761123044, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., vcs-type=git, url=https://www.redhat.com, distribution-scope=public, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, 
com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2025-11-19T00:14:25Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, tcib_managed=true, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, managed_by=tripleo_ansible) Dec 6 04:12:57 localhost podman[102981]: 2025-12-06 09:12:57.029633827 +0000 UTC m=+0.181510106 container health_status 
203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.openshift.expose-services=, io.buildah.version=1.41.4, version=17.1.12, architecture=x86_64, config_id=tripleo_step1, release=1761123044, com.redhat.component=openstack-qdrouterd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, distribution-scope=public, build-date=2025-11-18T22:49:46Z, name=rhosp17/openstack-qdrouterd) Dec 6 04:12:57 localhost podman[102986]: 2025-12-06 09:12:57.035162316 +0000 UTC m=+0.181430923 container exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, tcib_managed=true, distribution-scope=public, release=1761123044, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, architecture=x86_64, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, version=17.1.12, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, container_name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, 
vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}) Dec 6 04:12:57 localhost podman[102986]: unhealthy Dec 6 04:12:57 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Main process exited, code=exited, status=1/FAILURE Dec 6 04:12:57 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Failed with result 'exit-code'. 
Dec 6 04:12:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007. Dec 6 04:12:57 localhost podman[102981]: 2025-12-06 09:12:57.250135077 +0000 UTC m=+0.402011336 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, url=https://www.redhat.com, io.openshift.expose-services=, io.buildah.version=1.41.4, version=17.1.12, tcib_managed=true, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=metrics_qdr, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., release=1761123044, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z) Dec 6 04:12:57 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 04:12:57 localhost podman[103049]: 2025-12-06 09:12:57.313960364 +0000 UTC m=+0.070347528 container health_status 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, tcib_managed=true, url=https://www.redhat.com, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_compute, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, architecture=x86_64, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, batch=17.1_20251118.1, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute) Dec 6 04:12:57 localhost podman[103049]: 2025-12-06 09:12:57.344367966 +0000 UTC m=+0.100755150 container exec_died 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, release=1761123044, io.buildah.version=1.41.4, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, version=17.1.12) Dec 6 04:12:57 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Deactivated successfully. Dec 6 04:13:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc. Dec 6 04:13:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. 
Dec 6 04:13:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b. Dec 6 04:13:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9. Dec 6 04:13:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. Dec 6 04:13:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6. Dec 6 04:13:20 localhost podman[103085]: 2025-12-06 09:13:20.945161478 +0000 UTC m=+0.087942817 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, version=17.1.12, release=1761123044, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, container_name=iscsid, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.buildah.version=1.41.4, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, distribution-scope=public, com.redhat.component=openstack-iscsid-container, vcs-type=git) Dec 6 04:13:20 localhost podman[103085]: 2025-12-06 09:13:20.956106904 +0000 UTC m=+0.098888193 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, build-date=2025-11-18T23:44:13Z, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, name=rhosp17/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, maintainer=OpenStack TripleO Team, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, tcib_managed=true, batch=17.1_20251118.1, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, io.buildah.version=1.41.4, vendor=Red Hat, Inc., config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com) Dec 6 04:13:20 localhost podman[103075]: 2025-12-06 09:13:20.980728619 +0000 UTC m=+0.136672681 container health_status 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc 
(image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=logrotate_crond, build-date=2025-11-18T22:49:32Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, architecture=x86_64, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1761123044, vcs-type=git, batch=17.1_20251118.1, url=https://www.redhat.com, version=17.1.12, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team) Dec 6 04:13:20 localhost podman[103075]: 2025-12-06 09:13:20.989934241 +0000 UTC m=+0.145878313 container exec_died 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, tcib_managed=true, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:32Z, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, version=17.1.12, name=rhosp17/openstack-cron) Dec 6 04:13:21 localhost systemd[1]: 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.service: Deactivated successfully. Dec 6 04:13:21 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully. Dec 6 04:13:21 localhost systemd[1]: tmp-crun.zyYXNg.mount: Deactivated successfully. 
Dec 6 04:13:21 localhost podman[103076]: 2025-12-06 09:13:21.050921401 +0000 UTC m=+0.201759507 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, architecture=x86_64, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:51:28Z, vcs-type=git, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, name=rhosp17/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1) Dec 6 04:13:21 localhost podman[103077]: 2025-12-06 09:13:21.092733562 +0000 UTC m=+0.238671087 container health_status 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, com.redhat.component=openstack-nova-compute-container, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, url=https://www.redhat.com, container_name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1) Dec 6 04:13:21 localhost podman[103091]: 2025-12-06 09:13:21.146074148 +0000 UTC m=+0.286152164 container health_status b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, url=https://www.redhat.com, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., build-date=2025-11-19T00:12:45Z, architecture=x86_64, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 6 04:13:21 localhost podman[103076]: 2025-12-06 09:13:21.166642438 +0000 UTC m=+0.317480604 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, managed_by=tripleo_ansible, release=1761123044, config_id=tripleo_step3, batch=17.1_20251118.1, build-date=2025-11-18T22:51:28Z, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.buildah.version=1.41.4, vcs-type=git, url=https://www.redhat.com, architecture=x86_64, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp17/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd) Dec 6 04:13:21 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully. 
Dec 6 04:13:21 localhost podman[103091]: 2025-12-06 09:13:21.198063552 +0000 UTC m=+0.338141538 container exec_died b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-ipmi, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:12:45Z, managed_by=tripleo_ansible, tcib_managed=true) Dec 6 04:13:21 localhost systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: Deactivated successfully. Dec 6 04:13:21 localhost podman[103078]: 2025-12-06 09:13:21.257599397 +0000 UTC m=+0.404001746 container health_status a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, batch=17.1_20251118.1, managed_by=tripleo_ansible, tcib_managed=true, build-date=2025-11-19T00:11:48Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 
'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, version=17.1.12, distribution-scope=public, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.openshift.expose-services=, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute) Dec 6 04:13:21 localhost podman[103078]: 2025-12-06 09:13:21.307370903 +0000 UTC m=+0.453773252 container exec_died a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, version=17.1.12, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.41.4, distribution-scope=public, tcib_managed=true, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, release=1761123044, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, 
io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, vcs-type=git) Dec 6 04:13:21 localhost systemd[1]: a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.service: Deactivated successfully. Dec 6 04:13:21 localhost podman[103077]: 2025-12-06 09:13:21.440306328 +0000 UTC m=+0.586243853 container exec_died 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.expose-services=, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, build-date=2025-11-19T00:36:58Z, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, config_id=tripleo_step4, com.redhat.component=openstack-nova-compute-container, vcs-type=git) Dec 6 04:13:21 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: Deactivated successfully. Dec 6 04:13:21 localhost sshd[103210]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:13:24 localhost sshd[103212]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:13:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. Dec 6 04:13:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. Dec 6 04:13:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007. Dec 6 04:13:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. 
Dec 6 04:13:27 localhost podman[103214]: 2025-12-06 09:13:27.982816526 +0000 UTC m=+0.142909192 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, architecture=x86_64, managed_by=tripleo_ansible, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, config_id=tripleo_step4, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, container_name=ovn_controller, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, vendor=Red Hat, Inc., url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, batch=17.1_20251118.1) Dec 6 04:13:27 localhost podman[103215]: 2025-12-06 09:13:27.948874956 +0000 UTC m=+0.108442245 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-type=git, build-date=2025-11-18T22:49:46Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, container_name=metrics_qdr, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, distribution-scope=public, managed_by=tripleo_ansible) Dec 6 04:13:28 localhost podman[103214]: 2025-12-06 09:13:28.026212317 +0000 UTC m=+0.186304983 container exec_died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, tcib_managed=true, vendor=Red Hat, Inc., managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:34:05Z, 
container_name=ovn_controller, architecture=x86_64, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, maintainer=OpenStack TripleO Team, version=17.1.12, url=https://www.redhat.com) Dec 6 04:13:28 localhost podman[103214]: unhealthy Dec 6 04:13:28 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Main process exited, code=exited, status=1/FAILURE Dec 6 04:13:28 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Failed with result 'exit-code'. 
Dec 6 04:13:28 localhost podman[103217]: 2025-12-06 09:13:28.041166935 +0000 UTC m=+0.193552065 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, 
org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, container_name=ovn_metadata_agent, distribution-scope=public, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, maintainer=OpenStack TripleO Team, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., version=17.1.12, batch=17.1_20251118.1) Dec 6 04:13:28 localhost podman[103217]: 2025-12-06 09:13:28.081256915 +0000 UTC m=+0.233642005 container exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, vendor=Red Hat, Inc., vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1761123044, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, url=https://www.redhat.com, version=17.1.12, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack 
osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4) Dec 6 04:13:28 localhost podman[103217]: unhealthy Dec 6 04:13:28 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Main process exited, code=exited, status=1/FAILURE Dec 6 04:13:28 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Failed with result 'exit-code'. Dec 6 04:13:28 localhost podman[103216]: 2025-12-06 09:13:28.094886351 +0000 UTC m=+0.250714426 container health_status 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, distribution-scope=public, release=1761123044, vcs-type=git, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, tcib_managed=true, io.buildah.version=1.41.4, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, config_id=tripleo_step5, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=) Dec 6 04:13:28 localhost podman[103215]: 2025-12-06 09:13:28.142558753 +0000 UTC m=+0.302126022 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c 
(image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, config_id=tripleo_step1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, tcib_managed=true, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.41.4, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, vendor=Red Hat, Inc., io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, build-date=2025-11-18T22:49:46Z, maintainer=OpenStack TripleO Team, architecture=x86_64) Dec 6 04:13:28 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 04:13:28 localhost podman[103216]: 2025-12-06 09:13:28.15910449 +0000 UTC m=+0.314932555 container exec_died 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, container_name=nova_compute, release=1761123044, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step5, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, version=17.1.12, config_data={'depends_on': 
['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, name=rhosp17/openstack-nova-compute) Dec 6 04:13:28 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: 
Deactivated successfully. Dec 6 04:13:35 localhost sshd[103308]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:13:41 localhost sshd[103310]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:13:48 localhost sshd[103312]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:13:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc. Dec 6 04:13:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. Dec 6 04:13:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b. Dec 6 04:13:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9. Dec 6 04:13:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. Dec 6 04:13:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6. 
Dec 6 04:13:51 localhost podman[103314]: 2025-12-06 09:13:51.93819714 +0000 UTC m=+0.094956122 container health_status 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, build-date=2025-11-18T22:49:32Z, 
container_name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, batch=17.1_20251118.1, architecture=x86_64, io.buildah.version=1.41.4, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, config_id=tripleo_step4) Dec 6 04:13:51 localhost podman[103314]: 2025-12-06 09:13:51.946486524 +0000 UTC m=+0.103245486 container exec_died 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, architecture=x86_64, version=17.1.12, container_name=logrotate_crond, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, build-date=2025-11-18T22:49:32Z, description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, tcib_managed=true, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, io.openshift.expose-services=) Dec 6 04:13:51 localhost systemd[1]: 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.service: Deactivated successfully. 
Dec 6 04:13:51 localhost podman[103321]: 2025-12-06 09:13:51.991528034 +0000 UTC m=+0.136090533 container health_status 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, version=17.1.12, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vcs-type=git, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.expose-services=, url=https://www.redhat.com, vendor=Red Hat, Inc., container_name=nova_migration_target) Dec 6 04:13:52 localhost podman[103322]: 2025-12-06 09:13:52.047098408 +0000 UTC m=+0.190388157 container health_status a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, release=1761123044, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-compute, io.openshift.expose-services=, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, container_name=ceilometer_agent_compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 6 04:13:52 localhost podman[103323]: 2025-12-06 09:13:52.097818463 +0000 UTC m=+0.236872163 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, distribution-scope=public, build-date=2025-11-18T23:44:13Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
container_name=iscsid, config_id=tripleo_step3, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, io.openshift.expose-services=, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, 
release=1761123044) Dec 6 04:13:52 localhost podman[103323]: 2025-12-06 09:13:52.108627134 +0000 UTC m=+0.247680874 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, build-date=2025-11-18T23:44:13Z, architecture=x86_64, batch=17.1_20251118.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, tcib_managed=true, name=rhosp17/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-iscsid-container, vcs-type=git, io.buildah.version=1.41.4, vendor=Red Hat, Inc., config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, managed_by=tripleo_ansible, container_name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid) Dec 6 04:13:52 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully. Dec 6 04:13:52 localhost podman[103322]: 2025-12-06 09:13:52.159154103 +0000 UTC m=+0.302443882 container exec_died a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, url=https://www.redhat.com, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.openshift.expose-services=, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, release=1761123044, config_id=tripleo_step4, build-date=2025-11-19T00:11:48Z, name=rhosp17/openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible) Dec 6 04:13:52 localhost systemd[1]: a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.service: Deactivated successfully. 
Dec 6 04:13:52 localhost podman[103315]: 2025-12-06 09:13:52.204328418 +0000 UTC m=+0.348620419 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, managed_by=tripleo_ansible, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.buildah.version=1.41.4, version=17.1.12, release=1761123044, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., batch=17.1_20251118.1, architecture=x86_64, vcs-type=git, name=rhosp17/openstack-collectd, build-date=2025-11-18T22:51:28Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd) Dec 6 04:13:52 localhost podman[103337]: 2025-12-06 09:13:52.162095714 +0000 UTC m=+0.298539624 container health_status b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:12:45Z, architecture=x86_64, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, config_id=tripleo_step4, io.openshift.expose-services=, release=1761123044, com.redhat.component=openstack-ceilometer-ipmi-container, name=rhosp17/openstack-ceilometer-ipmi, tcib_managed=true, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc.) 
Dec 6 04:13:52 localhost podman[103315]: 2025-12-06 09:13:52.219168953 +0000 UTC m=+0.363460984 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.component=openstack-collectd-container, vcs-type=git, config_id=tripleo_step3, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, name=rhosp17/openstack-collectd, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, version=17.1.12, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd) Dec 6 04:13:52 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully. Dec 6 04:13:52 localhost podman[103337]: 2025-12-06 09:13:52.245342985 +0000 UTC m=+0.381786915 container exec_died b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-type=git, io.buildah.version=1.41.4, vendor=Red Hat, Inc., tcib_managed=true, managed_by=tripleo_ansible, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:12:45Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container, architecture=x86_64, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 6 04:13:52 localhost systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: Deactivated successfully. 
Dec 6 04:13:52 localhost podman[103321]: 2025-12-06 09:13:52.402029928 +0000 UTC m=+0.546592387 container exec_died 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, build-date=2025-11-19T00:36:58Z, container_name=nova_migration_target, managed_by=tripleo_ansible, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, 
maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4) Dec 6 04:13:52 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: Deactivated successfully. Dec 6 04:13:54 localhost podman[103547]: 2025-12-06 09:13:54.896407646 +0000 UTC m=+0.094102286 container exec ecefb99ff092b207f760d60e8fd97152c624df0e1016549024435ae641ee2e06 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548789, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, ceph=True, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, maintainer=Guillaume Abrioux , build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., 
architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=) Dec 6 04:13:54 localhost podman[103547]: 2025-12-06 09:13:54.999415703 +0000 UTC m=+0.197110313 container exec_died ecefb99ff092b207f760d60e8fd97152c624df0e1016549024435ae641ee2e06 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548789, name=rhceph, release=1763362218, version=7, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, io.buildah.version=1.41.4, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Dec 6 04:13:57 localhost sshd[103690]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:13:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. Dec 6 04:13:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. 
Dec 6 04:13:58 localhost sshd[103710]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:13:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. Dec 6 04:13:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007. Dec 6 04:13:58 localhost systemd[1]: tmp-crun.1lmW42.mount: Deactivated successfully. Dec 6 04:13:58 localhost podman[103693]: 2025-12-06 09:13:58.269217563 +0000 UTC m=+0.103081081 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z, url=https://www.redhat.com, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, tcib_managed=true, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team) Dec 6 04:13:58 localhost podman[103719]: 2025-12-06 09:13:58.347022768 +0000 UTC m=+0.073875436 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, architecture=x86_64, build-date=2025-11-18T22:49:46Z, url=https://www.redhat.com, batch=17.1_20251118.1, vcs-type=git, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, 
summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, io.openshift.expose-services=, version=17.1.12, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, managed_by=tripleo_ansible, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}) Dec 6 04:13:58 localhost podman[103693]: 2025-12-06 09:13:58.363536244 +0000 UTC m=+0.197399822 container exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, version=17.1.12, container_name=ovn_metadata_agent, distribution-scope=public, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, managed_by=tripleo_ansible, architecture=x86_64) Dec 6 04:13:58 localhost podman[103693]: unhealthy Dec 6 04:13:58 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Main process exited, code=exited, status=1/FAILURE Dec 6 04:13:58 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Failed with result 'exit-code'. 
Dec 6 04:13:58 localhost podman[103692]: 2025-12-06 09:13:58.413587749 +0000 UTC m=+0.247520339 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ovn-controller, version=17.1.12, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, release=1761123044, url=https://www.redhat.com, distribution-scope=public, maintainer=OpenStack TripleO Team, tcib_managed=true, config_id=tripleo_step4, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 6 04:13:58 localhost podman[103692]: 2025-12-06 09:13:58.464314754 +0000 UTC m=+0.298247364 container exec_died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, build-date=2025-11-18T23:34:05Z, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vendor=Red Hat, Inc., 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, io.buildah.version=1.41.4, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, vcs-type=git)
Dec 6 04:13:58 localhost podman[103692]: unhealthy
Dec 6 04:13:58 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Main process exited, code=exited, status=1/FAILURE
Dec 6 04:13:58 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Failed with result 'exit-code'.
Dec 6 04:13:58 localhost podman[103720]: 2025-12-06 09:13:58.50691583 +0000 UTC m=+0.233640844 container health_status 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-type=git, config_id=tripleo_step5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, architecture=x86_64, container_name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, tcib_managed=true, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, distribution-scope=public, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Dec 6 04:13:58 localhost podman[103720]: 2025-12-06 09:13:58.531676009 +0000 UTC m=+0.258401013 container exec_died 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, vcs-type=git, config_id=tripleo_step5, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, distribution-scope=public, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 6 04:13:58 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Deactivated successfully. 
Dec 6 04:13:58 localhost podman[103719]: 2025-12-06 09:13:58.553194189 +0000 UTC m=+0.280046867 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=metrics_qdr, batch=17.1_20251118.1, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']})
Dec 6 04:13:58 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully.
Dec 6 04:13:59 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 6 04:13:59 localhost recover_tripleo_nova_virtqemud[103788]: 61814
Dec 6 04:13:59 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 6 04:13:59 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 6 04:13:59 localhost sshd[103789]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:14:16 localhost sshd[103791]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:14:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.
Dec 6 04:14:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.
Dec 6 04:14:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.
Dec 6 04:14:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.
Dec 6 04:14:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. 
Dec 6 04:14:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6. Dec 6 04:14:22 localhost podman[103794]: 2025-12-06 09:14:22.952777819 +0000 UTC m=+0.104950679 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, distribution-scope=public, io.openshift.expose-services=, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, managed_by=tripleo_ansible, batch=17.1_20251118.1, com.redhat.component=openstack-collectd-container, vcs-type=git, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:51:28Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 6 04:14:22 localhost podman[103795]: 2025-12-06 09:14:22.993241379 +0000 UTC m=+0.142522850 container health_status 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, architecture=x86_64, tcib_managed=true, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, 
konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, io.buildah.version=1.41.4, batch=17.1_20251118.1) Dec 6 04:14:23 localhost podman[103794]: 2025-12-06 09:14:23.01220755 +0000 UTC m=+0.164380380 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, vcs-type=git, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 
collectd, config_id=tripleo_step3, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, architecture=x86_64, release=1761123044, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container) Dec 6 04:14:23 localhost podman[103796]: 2025-12-06 09:14:23.05787025 +0000 UTC m=+0.200319001 container health_status a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, io.buildah.version=1.41.4, container_name=ceilometer_agent_compute, url=https://www.redhat.com, release=1761123044, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 
'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, tcib_managed=true, architecture=x86_64) Dec 6 04:14:23 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully. 
Dec 6 04:14:23 localhost podman[103796]: 2025-12-06 09:14:23.10972852 +0000 UTC m=+0.252177291 container exec_died a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, tcib_managed=true, io.openshift.expose-services=, config_id=tripleo_step4, build-date=2025-11-19T00:11:48Z, release=1761123044, url=https://www.redhat.com, container_name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1)
Dec 6 04:14:23 localhost systemd[1]: a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.service: Deactivated successfully.
Dec 6 04:14:23 localhost podman[103793]: 2025-12-06 09:14:23.15572575 +0000 UTC m=+0.309007584 container health_status 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, build-date=2025-11-18T22:49:32Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, url=https://www.redhat.com, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.openshift.expose-services=) Dec 6 04:14:23 localhost podman[103793]: 2025-12-06 09:14:23.192189798 +0000 UTC m=+0.345471592 container exec_died 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, 
container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, build-date=2025-11-18T22:49:32Z, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.openshift.expose-services=, name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, vendor=Red Hat, Inc., io.buildah.version=1.41.4, vcs-type=git, version=17.1.12, com.redhat.component=openstack-cron-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 6 04:14:23 localhost podman[103813]: 2025-12-06 09:14:23.201029869 +0000 UTC m=+0.340505319 container health_status b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, managed_by=tripleo_ansible, build-date=2025-11-19T00:12:45Z, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, config_id=tripleo_step4, distribution-scope=public, architecture=x86_64, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, release=1761123044, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-ipmi-container, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 6 04:14:23 localhost systemd[1]: 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.service: Deactivated successfully. Dec 6 04:14:23 localhost podman[103813]: 2025-12-06 09:14:23.227555872 +0000 UTC m=+0.367031352 container exec_died b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:12:45Z, name=rhosp17/openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, version=17.1.12, 
architecture=x86_64, container_name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 6 04:14:23 localhost systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: Deactivated successfully. 
Dec 6 04:14:23 localhost podman[103801]: 2025-12-06 09:14:23.304056347 +0000 UTC m=+0.445985363 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, name=rhosp17/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., container_name=iscsid, architecture=x86_64, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.4, build-date=2025-11-18T23:44:13Z, url=https://www.redhat.com, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, release=1761123044, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d) Dec 6 04:14:23 localhost podman[103801]: 2025-12-06 09:14:23.315153558 +0000 UTC m=+0.457082574 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=iscsid, name=rhosp17/openstack-iscsid, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.12, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, release=1761123044, url=https://www.redhat.com, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, build-date=2025-11-18T23:44:13Z, vendor=Red Hat, Inc., io.openshift.expose-services=, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid) Dec 6 04:14:23 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully. 
Dec 6 04:14:23 localhost podman[103795]: 2025-12-06 09:14:23.38111854 +0000 UTC m=+0.530400011 container exec_died 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, vendor=Red Hat, Inc., version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, container_name=nova_migration_target, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, maintainer=OpenStack TripleO Team, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, distribution-scope=public, tcib_managed=true, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, vcs-type=git) Dec 6 04:14:23 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: Deactivated successfully. Dec 6 04:14:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. Dec 6 04:14:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. Dec 6 04:14:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007. Dec 6 04:14:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. 
Dec 6 04:14:28 localhost podman[103928]: 2025-12-06 09:14:28.921945601 +0000 UTC m=+0.080033065 container health_status 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, name=rhosp17/openstack-nova-compute, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', 
'/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vcs-type=git, release=1761123044, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, vendor=Red Hat, Inc.) 
Dec 6 04:14:28 localhost podman[103927]: 2025-12-06 09:14:28.982539558 +0000 UTC m=+0.140623182 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, vendor=Red Hat, Inc., container_name=metrics_qdr, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, io.openshift.expose-services=, io.buildah.version=1.41.4, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:46Z, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 6 04:14:28 localhost podman[103928]: 2025-12-06 09:14:28.987187441 +0000 UTC m=+0.145274905 container exec_died 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, architecture=x86_64, container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', 
'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, release=1761123044, batch=17.1_20251118.1, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, vcs-type=git, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Dec 6 04:14:28 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: 
Deactivated successfully. Dec 6 04:14:29 localhost podman[103932]: 2025-12-06 09:14:29.080701517 +0000 UTC m=+0.231840267 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, build-date=2025-11-19T00:14:25Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64, tcib_managed=true, 
managed_by=tripleo_ansible, config_id=tripleo_step4, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, batch=17.1_20251118.1, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, version=17.1.12) Dec 6 04:14:29 localhost podman[103926]: 2025-12-06 09:14:29.051815512 +0000 UTC m=+0.213102044 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, container_name=ovn_controller, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:34:05Z, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, version=17.1.12, release=1761123044, name=rhosp17/openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, vendor=Red Hat, Inc.) 
Dec 6 04:14:29 localhost podman[103932]: 2025-12-06 09:14:29.121160638 +0000 UTC m=+0.272299418 container exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.openshift.expose-services=, vcs-type=git, managed_by=tripleo_ansible, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, batch=17.1_20251118.1, release=1761123044, version=17.1.12, tcib_managed=true) Dec 6 04:14:29 localhost podman[103932]: unhealthy Dec 6 04:14:29 localhost podman[103926]: 2025-12-06 09:14:29.135175668 +0000 UTC m=+0.296462230 container exec_died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.buildah.version=1.41.4, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', 
'/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, config_id=tripleo_step4, version=17.1.12, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, architecture=x86_64, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, build-date=2025-11-18T23:34:05Z) Dec 6 04:14:29 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Main process exited, code=exited, status=1/FAILURE Dec 6 04:14:29 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Failed with result 'exit-code'. Dec 6 04:14:29 localhost podman[103926]: unhealthy Dec 6 04:14:29 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Main process exited, code=exited, status=1/FAILURE Dec 6 04:14:29 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Failed with result 'exit-code'. 
Dec 6 04:14:29 localhost podman[103927]: 2025-12-06 09:14:29.183415727 +0000 UTC m=+0.341499361 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, architecture=x86_64, url=https://www.redhat.com, managed_by=tripleo_ansible, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red 
Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, build-date=2025-11-18T22:49:46Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, batch=17.1_20251118.1, distribution-scope=public) Dec 6 04:14:29 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 04:14:29 localhost systemd[1]: tmp-crun.vrLii0.mount: Deactivated successfully. Dec 6 04:14:39 localhost sshd[104022]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:14:44 localhost sshd[104024]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:14:51 localhost sshd[104026]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:14:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc. Dec 6 04:14:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. Dec 6 04:14:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b. Dec 6 04:14:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9. Dec 6 04:14:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. Dec 6 04:14:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6. Dec 6 04:14:53 localhost systemd[1]: tmp-crun.QmywUE.mount: Deactivated successfully. 
Dec 6 04:14:53 localhost podman[104030]: 2025-12-06 09:14:53.948887161 +0000 UTC m=+0.102155972 container health_status 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, vendor=Red Hat, Inc., release=1761123044, url=https://www.redhat.com, architecture=x86_64, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, version=17.1.12, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, vcs-type=git, build-date=2025-11-19T00:36:58Z) Dec 6 04:14:53 localhost podman[104042]: 2025-12-06 09:14:53.958530477 +0000 UTC m=+0.094620801 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://www.redhat.com, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, architecture=x86_64, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-iscsid-container, container_name=iscsid, name=rhosp17/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, build-date=2025-11-18T23:44:13Z, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=) Dec 6 04:14:53 localhost podman[104042]: 2025-12-06 09:14:53.994272793 +0000 UTC m=+0.130363077 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, name=rhosp17/openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 iscsid, managed_by=tripleo_ansible, build-date=2025-11-18T23:44:13Z, io.buildah.version=1.41.4, version=17.1.12, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, release=1761123044, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., distribution-scope=public, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-iscsid-container) Dec 6 04:14:54 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: 
Deactivated successfully. Dec 6 04:14:54 localhost podman[104029]: 2025-12-06 09:14:54.013034798 +0000 UTC m=+0.169099015 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-collectd, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, version=17.1.12, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, build-date=2025-11-18T22:51:28Z, vcs-type=git, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc.) Dec 6 04:14:54 localhost podman[104029]: 2025-12-06 09:14:54.023241171 +0000 UTC m=+0.179305398 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_id=tripleo_step3, container_name=collectd, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:51:28Z, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, distribution-scope=public, config_data={'cap_add': 
['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-collectd, architecture=x86_64, tcib_managed=true, managed_by=tripleo_ansible, url=https://www.redhat.com) Dec 6 04:14:54 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully. 
Dec 6 04:14:54 localhost podman[104028]: 2025-12-06 09:14:54.094417423 +0000 UTC m=+0.252401709 container health_status 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, name=rhosp17/openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, vcs-type=git, com.redhat.component=openstack-cron-container, build-date=2025-11-18T22:49:32Z, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, distribution-scope=public, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron) Dec 6 04:14:54 localhost podman[104048]: 2025-12-06 09:14:54.072227882 +0000 UTC m=+0.210011739 container health_status b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, build-date=2025-11-19T00:12:45Z, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, url=https://www.redhat.com, tcib_managed=true, name=rhosp17/openstack-ceilometer-ipmi, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, maintainer=OpenStack TripleO Team, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi) Dec 6 04:14:54 localhost podman[104028]: 2025-12-06 09:14:54.129094736 +0000 UTC m=+0.287078952 container exec_died 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, container_name=logrotate_crond, build-date=2025-11-18T22:49:32Z, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, release=1761123044, io.openshift.expose-services=, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vcs-type=git, vendor=Red Hat, Inc., tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron) Dec 6 04:14:54 localhost systemd[1]: 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.service: Deactivated successfully. 
Dec 6 04:14:54 localhost podman[104048]: 2025-12-06 09:14:54.153079341 +0000 UTC m=+0.290863198 container exec_died b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, version=17.1.12, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, distribution-scope=public, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, io.openshift.expose-services=, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com) Dec 6 04:14:54 localhost systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: Deactivated successfully. Dec 6 04:14:54 localhost podman[104031]: 2025-12-06 09:14:54.171664271 +0000 UTC m=+0.305694502 container health_status a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, build-date=2025-11-19T00:11:48Z, vcs-type=git, name=rhosp17/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.buildah.version=1.41.4, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, config_id=tripleo_step4, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true) Dec 6 04:14:54 localhost podman[104031]: 2025-12-06 09:14:54.194109179 +0000 UTC m=+0.328139410 container exec_died a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-ceilometer-compute, config_id=tripleo_step4, build-date=2025-11-19T00:11:48Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12, managed_by=tripleo_ansible, vcs-type=git, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
name=rhosp17/openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 6 04:14:54 localhost systemd[1]: a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.service: Deactivated successfully. Dec 6 04:14:54 localhost podman[104030]: 2025-12-06 09:14:54.288161842 +0000 UTC m=+0.441430713 container exec_died 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, version=17.1.12, tcib_managed=true, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, 
org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, container_name=nova_migration_target, io.openshift.expose-services=, io.buildah.version=1.41.4, distribution-scope=public, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 6 04:14:54 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: Deactivated successfully. Dec 6 04:14:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. Dec 6 04:14:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. Dec 6 04:14:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007. Dec 6 04:14:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. Dec 6 04:14:59 localhost systemd[1]: tmp-crun.Uyyz8A.mount: Deactivated successfully. 
Dec 6 04:14:59 localhost podman[104238]: 2025-12-06 09:14:59.939263294 +0000 UTC m=+0.097900032 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_id=tripleo_step1, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://www.redhat.com, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, architecture=x86_64) Dec 6 04:14:59 localhost podman[104239]: 2025-12-06 09:14:59.982044005 +0000 UTC m=+0.138362472 container health_status 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., url=https://www.redhat.com, vcs-type=git, container_name=nova_compute, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, 
config_id=tripleo_step5) Dec 6 04:15:00 localhost podman[104239]: 2025-12-06 09:15:00.014686236 +0000 UTC m=+0.171004703 container exec_died 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', 
'/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, url=https://www.redhat.com, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, build-date=2025-11-19T00:36:58Z, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step5, container_name=nova_compute, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 6 04:15:00 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Deactivated successfully. 
Dec 6 04:15:00 localhost podman[104237]: 2025-12-06 09:15:00.021154264 +0000 UTC m=+0.182705882 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, batch=17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
architecture=x86_64, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:34:05Z, version=17.1.12, url=https://www.redhat.com, release=1761123044, io.buildah.version=1.41.4) Dec 6 04:15:00 localhost podman[104240]: 2025-12-06 09:15:00.087566671 +0000 UTC m=+0.240225026 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, io.openshift.expose-services=, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, version=17.1.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64, tcib_managed=true, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, distribution-scope=public, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent) Dec 6 04:15:00 localhost podman[104240]: 2025-12-06 09:15:00.104263762 +0000 UTC m=+0.256922137 container exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, version=17.1.12, config_id=tripleo_step4, 
io.openshift.expose-services=, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
architecture=x86_64, build-date=2025-11-19T00:14:25Z, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 6 04:15:00 localhost podman[104240]: unhealthy Dec 6 04:15:00 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Main process exited, code=exited, status=1/FAILURE Dec 6 04:15:00 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Failed with result 'exit-code'. Dec 6 04:15:00 localhost podman[104238]: 2025-12-06 09:15:00.135194131 +0000 UTC m=+0.293830909 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.buildah.version=1.41.4, tcib_managed=true, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, url=https://www.redhat.com, build-date=2025-11-18T22:49:46Z, architecture=x86_64, distribution-scope=public, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, release=1761123044, container_name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 6 04:15:00 localhost podman[104237]: 2025-12-06 09:15:00.15440999 +0000 UTC m=+0.315961598 container exec_died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, version=17.1.12, managed_by=tripleo_ansible, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, config_id=tripleo_step4, release=1761123044, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, batch=17.1_20251118.1, build-date=2025-11-18T23:34:05Z) Dec 6 04:15:00 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 04:15:00 localhost podman[104237]: unhealthy Dec 6 04:15:00 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Main process exited, code=exited, status=1/FAILURE Dec 6 04:15:00 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Failed with result 'exit-code'. 
Dec 6 04:15:03 localhost sshd[104327]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:15:11 localhost sshd[104329]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:15:11 localhost sshd[104331]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:15:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc. Dec 6 04:15:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. Dec 6 04:15:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b. Dec 6 04:15:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9. Dec 6 04:15:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. Dec 6 04:15:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6. 
Dec 6 04:15:24 localhost podman[104333]: 2025-12-06 09:15:24.941533759 +0000 UTC m=+0.086438371 container health_status 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, io.openshift.expose-services=, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, architecture=x86_64, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, release=1761123044, vendor=Red Hat, Inc., vcs-type=git, managed_by=tripleo_ansible, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, com.redhat.component=openstack-cron-container, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:32Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron) Dec 6 04:15:24 localhost podman[104353]: 2025-12-06 09:15:24.952412542 +0000 UTC m=+0.077121745 container health_status b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, distribution-scope=public, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, release=1761123044, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, build-date=2025-11-19T00:12:45Z, io.buildah.version=1.41.4, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible) Dec 6 04:15:24 localhost systemd[1]: tmp-crun.uY4Wo6.mount: Deactivated successfully. 
Dec 6 04:15:25 localhost podman[104335]: 2025-12-06 09:15:25.003045604 +0000 UTC m=+0.142174629 container health_status 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, vcs-type=git, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, 
managed_by=tripleo_ansible, url=https://www.redhat.com, architecture=x86_64, container_name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, release=1761123044, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, vendor=Red Hat, Inc., version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute) Dec 6 04:15:25 localhost podman[104353]: 2025-12-06 09:15:25.007771009 +0000 UTC m=+0.132480232 container exec_died b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, build-date=2025-11-19T00:12:45Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, version=17.1.12, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, io.openshift.expose-services=, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 6 04:15:25 localhost systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: Deactivated successfully. 
Dec 6 04:15:25 localhost podman[104333]: 2025-12-06 09:15:25.026928756 +0000 UTC m=+0.171833388 container exec_died 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, architecture=x86_64, container_name=logrotate_crond, io.buildah.version=1.41.4, com.redhat.component=openstack-cron-container, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.expose-services=, build-date=2025-11-18T22:49:32Z, url=https://www.redhat.com, vcs-type=git, tcib_managed=true, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20251118.1, 
release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc.) Dec 6 04:15:25 localhost systemd[1]: 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.service: Deactivated successfully. Dec 6 04:15:25 localhost podman[104336]: 2025-12-06 09:15:25.098716387 +0000 UTC m=+0.234251332 container health_status a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, tcib_managed=true, release=1761123044, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, config_id=tripleo_step4, build-date=2025-11-19T00:11:48Z, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, container_name=ceilometer_agent_compute, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12) Dec 6 04:15:25 localhost podman[104347]: 2025-12-06 09:15:25.02477821 +0000 UTC m=+0.155093075 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, release=1761123044, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.4, 
build-date=2025-11-18T23:44:13Z, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, version=17.1.12, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, name=rhosp17/openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 6 04:15:25 localhost podman[104334]: 2025-12-06 09:15:25.148636938 +0000 UTC m=+0.291019553 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, release=1761123044, config_id=tripleo_step3, name=rhosp17/openstack-collectd, version=17.1.12, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=collectd, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', 
'/sys/fs/cgroup:/sys/fs/cgroup:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, build-date=2025-11-18T22:51:28Z, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, vcs-type=git, url=https://www.redhat.com, architecture=x86_64, maintainer=OpenStack TripleO Team) Dec 6 04:15:25 localhost podman[104347]: 2025-12-06 09:15:25.158159969 +0000 UTC m=+0.288474824 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, config_id=tripleo_step3, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, version=17.1.12, batch=17.1_20251118.1, io.buildah.version=1.41.4, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2025-11-18T23:44:13Z, io.openshift.expose-services=, container_name=iscsid, tcib_managed=true, architecture=x86_64, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, distribution-scope=public, maintainer=OpenStack TripleO Team) Dec 6 04:15:25 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully. 
Dec 6 04:15:25 localhost podman[104334]: 2025-12-06 09:15:25.181317749 +0000 UTC m=+0.323700374 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc., version=17.1.12, com.redhat.component=openstack-collectd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, managed_by=tripleo_ansible, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:51:28Z, io.buildah.version=1.41.4, distribution-scope=public) Dec 6 04:15:25 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully. Dec 6 04:15:25 localhost podman[104336]: 2025-12-06 09:15:25.201061844 +0000 UTC m=+0.336596779 container exec_died a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, container_name=ceilometer_agent_compute, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, managed_by=tripleo_ansible, build-date=2025-11-19T00:11:48Z, 
name=rhosp17/openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc.) Dec 6 04:15:25 localhost systemd[1]: a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.service: Deactivated successfully. 
Dec 6 04:15:25 localhost podman[104335]: 2025-12-06 09:15:25.435221713 +0000 UTC m=+0.574350718 container exec_died 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, config_id=tripleo_step4, architecture=x86_64, container_name=nova_migration_target, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute) Dec 6 04:15:25 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: Deactivated successfully. Dec 6 04:15:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. Dec 6 04:15:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. Dec 6 04:15:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007. Dec 6 04:15:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. 
Dec 6 04:15:30 localhost podman[104468]: 2025-12-06 09:15:30.925536835 +0000 UTC m=+0.080461648 container health_status 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', 
'/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vcs-type=git, container_name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, release=1761123044, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, batch=17.1_20251118.1, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, maintainer=OpenStack TripleO Team, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step5) Dec 6 04:15:30 localhost podman[104468]: 2025-12-06 09:15:30.956105302 +0000 UTC m=+0.111030105 container exec_died 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, tcib_managed=true, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': 
True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vcs-type=git, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, 
architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1) Dec 6 04:15:30 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Deactivated successfully. Dec 6 04:15:30 localhost podman[104467]: 2025-12-06 09:15:30.977172287 +0000 UTC m=+0.133221584 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, url=https://www.redhat.com, batch=17.1_20251118.1, container_name=metrics_qdr, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, build-date=2025-11-18T22:49:46Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.4, distribution-scope=public, name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 6 04:15:31 localhost podman[104469]: 2025-12-06 09:15:31.033626898 +0000 UTC m=+0.183420654 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, build-date=2025-11-19T00:14:25Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, container_name=ovn_metadata_agent, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, batch=17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, version=17.1.12, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, distribution-scope=public, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c) Dec 6 04:15:31 localhost podman[104469]: 2025-12-06 09:15:31.051155926 +0000 UTC m=+0.200949682 container exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, release=1761123044, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, build-date=2025-11-19T00:14:25Z, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, url=https://www.redhat.com, vendor=Red Hat, Inc., io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, io.buildah.version=1.41.4) Dec 6 04:15:31 localhost podman[104469]: unhealthy Dec 6 04:15:31 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Main process exited, code=exited, status=1/FAILURE Dec 6 04:15:31 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Failed with result 'exit-code'. 
Dec 6 04:15:31 localhost podman[104466]: 2025-12-06 09:15:31.130720685 +0000 UTC m=+0.288060092 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, container_name=ovn_controller, io.openshift.expose-services=, io.buildah.version=1.41.4, vcs-type=git, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp17/openstack-ovn-controller, version=17.1.12, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, tcib_managed=true, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:34:05Z) Dec 6 04:15:31 localhost podman[104467]: 2025-12-06 09:15:31.156607668 +0000 UTC m=+0.312656985 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:46Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, io.buildah.version=1.41.4, version=17.1.12, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., architecture=x86_64, config_id=tripleo_step1) Dec 6 04:15:31 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 04:15:31 localhost podman[104466]: 2025-12-06 09:15:31.173423944 +0000 UTC m=+0.330763391 container exec_died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, container_name=ovn_controller, io.openshift.expose-services=, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, architecture=x86_64, batch=17.1_20251118.1, version=17.1.12, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, build-date=2025-11-18T23:34:05Z, release=1761123044, vcs-type=git) Dec 6 04:15:31 localhost podman[104466]: unhealthy Dec 6 04:15:31 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Main process exited, code=exited, status=1/FAILURE Dec 6 04:15:31 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Failed with result 'exit-code'. Dec 6 04:15:40 localhost sshd[104556]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:15:42 localhost sshd[104558]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:15:43 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 6 04:15:44 localhost recover_tripleo_nova_virtqemud[104561]: 61814 Dec 6 04:15:44 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. 
Dec 6 04:15:44 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 6 04:15:52 localhost sshd[104562]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:15:53 localhost sshd[104564]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:15:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc. Dec 6 04:15:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. Dec 6 04:15:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9. Dec 6 04:15:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. Dec 6 04:15:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6. Dec 6 04:15:55 localhost systemd[1]: tmp-crun.to3dTV.mount: Deactivated successfully. 
Dec 6 04:15:55 localhost podman[104567]: 2025-12-06 09:15:55.549621979 +0000 UTC m=+0.116692940 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, version=17.1.12, io.buildah.version=1.41.4, managed_by=tripleo_ansible, batch=17.1_20251118.1, container_name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, build-date=2025-11-18T22:51:28Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3) Dec 6 04:15:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b. Dec 6 04:15:55 localhost podman[104575]: 2025-12-06 09:15:55.58883586 +0000 UTC m=+0.148296928 container health_status b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, name=rhosp17/openstack-ceilometer-ipmi, tcib_managed=true, version=17.1.12, container_name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, io.buildah.version=1.41.4, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:12:45Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 6 04:15:55 localhost podman[104567]: 2025-12-06 09:15:55.59603591 +0000 UTC m=+0.163106841 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, container_name=collectd, version=17.1.12, name=rhosp17/openstack-collectd, com.redhat.component=openstack-collectd-container, architecture=x86_64, 
config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, managed_by=tripleo_ansible, build-date=2025-11-18T22:51:28Z, url=https://www.redhat.com, config_id=tripleo_step3, vcs-type=git, 
vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.buildah.version=1.41.4, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Dec 6 04:15:55 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully. Dec 6 04:15:55 localhost podman[104575]: 2025-12-06 09:15:55.634040075 +0000 UTC m=+0.193501113 container exec_died b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., url=https://www.redhat.com, build-date=2025-11-19T00:12:45Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, io.buildah.version=1.41.4, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, architecture=x86_64, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, version=17.1.12, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 6 04:15:55 localhost systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: Deactivated successfully. 
Dec 6 04:15:55 localhost podman[104569]: 2025-12-06 09:15:55.646836717 +0000 UTC m=+0.206945995 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, distribution-scope=public, io.openshift.expose-services=, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:44:13Z, architecture=x86_64, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, io.buildah.version=1.41.4, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, konflux.additional-tags=17.1.12 
17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp17/openstack-iscsid, version=17.1.12, com.redhat.component=openstack-iscsid-container, summary=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1) Dec 6 04:15:55 localhost podman[104568]: 2025-12-06 09:15:55.698187562 +0000 UTC m=+0.262396555 container health_status a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, distribution-scope=public, managed_by=tripleo_ansible, tcib_managed=true, build-date=2025-11-19T00:11:48Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.buildah.version=1.41.4, architecture=x86_64, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, url=https://www.redhat.com, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12) Dec 6 04:15:55 localhost podman[104568]: 2025-12-06 09:15:55.741096777 +0000 UTC m=+0.305305750 container exec_died a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, 
name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, vcs-type=git, config_id=tripleo_step4, io.buildah.version=1.41.4, managed_by=tripleo_ansible, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, version=17.1.12, build-date=2025-11-19T00:11:48Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 6 04:15:55 localhost systemd[1]: a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.service: Deactivated successfully. Dec 6 04:15:55 localhost podman[104569]: 2025-12-06 09:15:55.77641358 +0000 UTC m=+0.336522868 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, config_id=tripleo_step3, url=https://www.redhat.com, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-11-18T23:44:13Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, io.openshift.expose-services=, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.component=openstack-iscsid-container, distribution-scope=public, container_name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid) Dec 6 04:15:55 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully. Dec 6 04:15:55 localhost podman[104566]: 2025-12-06 09:15:55.745865844 +0000 UTC m=+0.315560556 container health_status 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.component=openstack-cron-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, name=rhosp17/openstack-cron, distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, version=17.1.12, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:32Z, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, release=1761123044, vendor=Red Hat, Inc.) 
Dec 6 04:15:55 localhost podman[104566]: 2025-12-06 09:15:55.828150066 +0000 UTC m=+0.397844778 container exec_died 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:32Z, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, url=https://www.redhat.com, name=rhosp17/openstack-cron, release=1761123044, io.openshift.expose-services=, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, architecture=x86_64, config_id=tripleo_step4, container_name=logrotate_crond) Dec 6 04:15:55 localhost systemd[1]: 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.service: Deactivated successfully. Dec 6 04:15:55 localhost podman[104622]: 2025-12-06 09:15:55.679313413 +0000 UTC m=+0.106267408 container health_status 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, vcs-type=git, architecture=x86_64, config_id=tripleo_step4, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, io.buildah.version=1.41.4, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Dec 6 04:15:55 localhost podman[104622]: 2025-12-06 09:15:55.995146106 +0000 UTC m=+0.422100131 container exec_died 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, config_id=tripleo_step4, distribution-scope=public, url=https://www.redhat.com, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 
17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-type=git, architecture=x86_64, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute) 
Dec 6 04:15:56 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: Deactivated successfully. Dec 6 04:16:00 localhost sshd[104777]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:16:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. Dec 6 04:16:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. Dec 6 04:16:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007. Dec 6 04:16:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. Dec 6 04:16:01 localhost podman[104781]: 2025-12-06 09:16:01.930745627 +0000 UTC m=+0.084709278 container health_status 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-type=git, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.buildah.version=1.41.4, managed_by=tripleo_ansible, batch=17.1_20251118.1, 
url=https://www.redhat.com, tcib_managed=true, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 6 04:16:01 localhost podman[104782]: 2025-12-06 09:16:01.942811148 +0000 UTC m=+0.090549958 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, release=1761123044, vcs-type=git, build-date=2025-11-19T00:14:25Z, distribution-scope=public, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn) Dec 6 04:16:01 localhost podman[104781]: 2025-12-06 09:16:01.951965598 +0000 UTC m=+0.105929269 container exec_died 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, distribution-scope=public, version=17.1.12, architecture=x86_64, release=1761123044, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, url=https://www.redhat.com, konflux.additional-tags=17.1.12 
17.1_20251118.1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', 
'/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute) Dec 6 04:16:01 localhost podman[104781]: unhealthy Dec 6 04:16:01 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Main process exited, code=exited, status=1/FAILURE Dec 6 04:16:01 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Failed with result 'exit-code'. Dec 6 04:16:02 localhost systemd[1]: tmp-crun.A7FGfZ.mount: Deactivated successfully. Dec 6 04:16:02 localhost podman[104780]: 2025-12-06 09:16:02.107634201 +0000 UTC m=+0.263844821 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, architecture=x86_64, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.4, release=1761123044, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public) Dec 6 04:16:02 localhost podman[104782]: 2025-12-06 09:16:02.123810996 +0000 UTC m=+0.271549836 container exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, config_id=tripleo_step4, io.buildah.version=1.41.4, tcib_managed=true, 
org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, container_name=ovn_metadata_agent, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, io.openshift.expose-services=, release=1761123044, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat 
OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 6 04:16:02 localhost podman[104782]: unhealthy Dec 6 04:16:02 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Main process exited, code=exited, status=1/FAILURE Dec 6 04:16:02 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Failed with result 'exit-code'. Dec 6 04:16:02 localhost podman[104779]: 2025-12-06 09:16:02.078830167 +0000 UTC m=+0.235695856 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, url=https://www.redhat.com, build-date=2025-11-18T23:34:05Z, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, name=rhosp17/openstack-ovn-controller, architecture=x86_64, managed_by=tripleo_ansible, tcib_managed=true, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc.) 
Dec 6 04:16:02 localhost podman[104779]: 2025-12-06 09:16:02.207803451 +0000 UTC m=+0.364669090 container exec_died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, container_name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, build-date=2025-11-18T23:34:05Z, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, 
tcib_managed=true, release=1761123044, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, vcs-type=git) Dec 6 04:16:02 localhost podman[104779]: unhealthy Dec 6 04:16:02 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Main process exited, code=exited, status=1/FAILURE Dec 6 04:16:02 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Failed with result 'exit-code'. Dec 6 04:16:02 localhost podman[104780]: 2025-12-06 09:16:02.331311707 +0000 UTC m=+0.487522297 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-type=git, config_id=tripleo_step1, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, url=https://www.redhat.com, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, container_name=metrics_qdr, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd) Dec 6 04:16:02 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. 
Dec 6 04:16:03 localhost sshd[104868]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:16:05 localhost sshd[104870]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:16:07 localhost sshd[104872]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:16:08 localhost sshd[104874]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:16:08 localhost sshd[104876]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:16:09 localhost sshd[104878]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:16:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49370 DF PROTO=TCP SPT=32840 DPT=9105 SEQ=376045614 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52BF13330000000001030307) Dec 6 04:16:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19399 DF PROTO=TCP SPT=36498 DPT=9100 SEQ=3394050689 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52BF17080000000001030307) Dec 6 04:16:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49371 DF PROTO=TCP SPT=32840 DPT=9105 SEQ=376045614 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52BF172F0000000001030307) Dec 6 04:16:18 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19400 DF PROTO=TCP SPT=36498 DPT=9100 SEQ=3394050689 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52BF1B300000000001030307) Dec 6 04:16:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 
PREC=0x00 TTL=62 ID=49372 DF PROTO=TCP SPT=32840 DPT=9105 SEQ=376045614 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52BF1F2F0000000001030307) Dec 6 04:16:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19401 DF PROTO=TCP SPT=36498 DPT=9100 SEQ=3394050689 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52BF232F0000000001030307) Dec 6 04:16:23 localhost sshd[104880]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:16:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49373 DF PROTO=TCP SPT=32840 DPT=9105 SEQ=376045614 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52BF2EF00000000001030307) Dec 6 04:16:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51159 DF PROTO=TCP SPT=44704 DPT=9882 SEQ=50667696 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52BF2FA80000000001030307) Dec 6 04:16:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19402 DF PROTO=TCP SPT=36498 DPT=9100 SEQ=3394050689 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52BF32EF0000000001030307) Dec 6 04:16:24 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51160 DF PROTO=TCP SPT=44704 DPT=9882 SEQ=50667696 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52BF33B00000000001030307) Dec 6 04:16:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. Dec 6 04:16:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9. Dec 6 04:16:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. Dec 6 04:16:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6. Dec 6 04:16:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc. Dec 6 04:16:25 localhost systemd[1]: tmp-crun.MGd74U.mount: Deactivated successfully. Dec 6 04:16:25 localhost podman[104885]: 2025-12-06 09:16:25.956316351 +0000 UTC m=+0.106594768 container health_status b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, vcs-type=git, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp 
openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, tcib_managed=true, build-date=2025-11-19T00:12:45Z, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 6 04:16:25 localhost podman[104884]: 2025-12-06 09:16:25.913879761 +0000 UTC m=+0.073197755 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, name=rhosp17/openstack-iscsid, 
vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, container_name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:44:13Z, distribution-scope=public, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, version=17.1.12, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid) Dec 6 04:16:26 localhost podman[104883]: 2025-12-06 
09:16:25.967412312 +0000 UTC m=+0.127979965 container health_status a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, url=https://www.redhat.com, io.openshift.expose-services=, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, build-date=2025-11-19T00:11:48Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-compute, batch=17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, distribution-scope=public, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true) Dec 6 04:16:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b. Dec 6 04:16:26 localhost podman[104884]: 2025-12-06 09:16:26.044124063 +0000 UTC m=+0.203442087 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, distribution-scope=public, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., url=https://www.redhat.com, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, architecture=x86_64, 
io.openshift.expose-services=, name=rhosp17/openstack-iscsid, batch=17.1_20251118.1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, tcib_managed=true) Dec 6 04:16:26 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully. 
Dec 6 04:16:26 localhost podman[104882]: 2025-12-06 09:16:26.023677186 +0000 UTC m=+0.184156316 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, build-date=2025-11-18T22:51:28Z, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, release=1761123044, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, name=rhosp17/openstack-collectd, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, managed_by=tripleo_ansible, container_name=collectd, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd) Dec 6 04:16:26 localhost podman[104900]: 2025-12-06 09:16:26.049837938 +0000 UTC m=+0.194896465 container health_status 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, name=rhosp17/openstack-cron, release=1761123044, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:32Z, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, batch=17.1_20251118.1, vendor=Red Hat, Inc., config_id=tripleo_step4, maintainer=OpenStack TripleO Team) Dec 6 04:16:26 localhost podman[104882]: 2025-12-06 09:16:26.108052763 +0000 UTC m=+0.268531833 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, version=17.1.12, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, name=rhosp17/openstack-collectd, 
container_name=collectd, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:51:28Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, 
konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, tcib_managed=true) Dec 6 04:16:26 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully. Dec 6 04:16:26 localhost podman[104900]: 2025-12-06 09:16:26.133198354 +0000 UTC m=+0.278256831 container exec_died 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp17/openstack-cron, build-date=2025-11-18T22:49:32Z, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., url=https://www.redhat.com) Dec 6 04:16:26 localhost podman[104885]: 2025-12-06 09:16:26.145630005 +0000 UTC m=+0.295908452 container exec_died b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., url=https://www.redhat.com, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, 
konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, io.buildah.version=1.41.4, release=1761123044, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, vcs-type=git, io.openshift.expose-services=, build-date=2025-11-19T00:12:45Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi) Dec 6 04:16:26 localhost systemd[1]: 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.service: Deactivated successfully. Dec 6 04:16:26 localhost systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: Deactivated successfully. 
Dec 6 04:16:26 localhost podman[104986]: 2025-12-06 09:16:26.18462432 +0000 UTC m=+0.132013078 container health_status 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_migration_target, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, release=1761123044, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team) Dec 6 04:16:26 localhost podman[104883]: 2025-12-06 09:16:26.198009841 +0000 UTC m=+0.358577484 container exec_died a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, vcs-type=git, io.buildah.version=1.41.4, architecture=x86_64, container_name=ceilometer_agent_compute, distribution-scope=public, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, url=https://www.redhat.com, managed_by=tripleo_ansible, build-date=2025-11-19T00:11:48Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 
'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-compute, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 6 04:16:26 localhost systemd[1]: a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.service: Deactivated successfully. 
Dec 6 04:16:26 localhost sshd[105010]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:16:26 localhost podman[104986]: 2025-12-06 09:16:26.56068887 +0000 UTC m=+0.508077588 container exec_died 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, container_name=nova_migration_target, version=17.1.12, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, architecture=x86_64, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container) Dec 6 04:16:26 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: Deactivated successfully. Dec 6 04:16:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51161 DF PROTO=TCP SPT=44704 DPT=9882 SEQ=50667696 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52BF3BB00000000001030307) Dec 6 04:16:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51162 DF PROTO=TCP SPT=44704 DPT=9882 SEQ=50667696 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52BF4B700000000001030307) Dec 6 04:16:32 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49374 DF PROTO=TCP SPT=32840 DPT=9105 SEQ=376045614 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52BF4FEF0000000001030307) Dec 6 04:16:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. Dec 6 04:16:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. Dec 6 04:16:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007. Dec 6 04:16:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. Dec 6 04:16:32 localhost systemd[1]: tmp-crun.NcldAR.mount: Deactivated successfully. Dec 6 04:16:32 localhost podman[105014]: 2025-12-06 09:16:32.948174527 +0000 UTC m=+0.105152965 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, url=https://www.redhat.com, name=rhosp17/openstack-qdrouterd, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.12, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:46Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, config_id=tripleo_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container) Dec 6 04:16:33 localhost podman[105013]: 2025-12-06 09:16:33.001481392 +0000 UTC m=+0.159440729 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, name=rhosp17/openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.4, 
url=https://www.redhat.com, distribution-scope=public, release=1761123044, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, container_name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, tcib_managed=true) Dec 6 04:16:33 localhost podman[105015]: 2025-12-06 09:16:33.041024264 +0000 UTC m=+0.194654049 container health_status 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, build-date=2025-11-19T00:36:58Z, 
batch=17.1_20251118.1, tcib_managed=true, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step5, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', 
'/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., distribution-scope=public, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 6 04:16:33 localhost podman[105013]: 2025-12-06 09:16:33.047650326 +0000 UTC m=+0.205609653 container exec_died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vendor=Red Hat, Inc., architecture=x86_64, name=rhosp17/openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, io.buildah.version=1.41.4, distribution-scope=public, build-date=2025-11-18T23:34:05Z, container_name=ovn_controller, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-type=git, batch=17.1_20251118.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller) Dec 6 04:16:33 localhost podman[105013]: unhealthy Dec 6 04:16:33 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Main process exited, code=exited, status=1/FAILURE Dec 6 04:16:33 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Failed with result 'exit-code'. 
Dec 6 04:16:33 localhost podman[105016]: 2025-12-06 09:16:33.088701755 +0000 UTC m=+0.238983928 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, build-date=2025-11-19T00:14:25Z, vcs-type=git, vendor=Red Hat, Inc., config_id=tripleo_step4, url=https://www.redhat.com, release=1761123044, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible) Dec 6 04:16:33 localhost podman[105016]: 2025-12-06 09:16:33.111069401 +0000 UTC m=+0.261351574 container exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, distribution-scope=public, container_name=ovn_metadata_agent, url=https://www.redhat.com, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, batch=17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, maintainer=OpenStack TripleO Team, tcib_managed=true, build-date=2025-11-19T00:14:25Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, version=17.1.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}) Dec 6 04:16:33 localhost podman[105016]: unhealthy Dec 6 04:16:33 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Main process exited, code=exited, status=1/FAILURE Dec 6 04:16:33 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Failed with result 'exit-code'. Dec 6 04:16:33 localhost podman[105015]: 2025-12-06 09:16:33.163410225 +0000 UTC m=+0.317040060 container exec_died 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, version=17.1.12, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, config_id=tripleo_step5, architecture=x86_64, io.buildah.version=1.41.4, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, batch=17.1_20251118.1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Dec 6 04:16:33 localhost podman[105015]: unhealthy Dec 6 04:16:33 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Main process exited, code=exited, status=1/FAILURE Dec 6 04:16:33 localhost systemd[1]: 
41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Failed with result 'exit-code'. Dec 6 04:16:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19403 DF PROTO=TCP SPT=36498 DPT=9100 SEQ=3394050689 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52BF53EF0000000001030307) Dec 6 04:16:33 localhost podman[105014]: 2025-12-06 09:16:33.214733429 +0000 UTC m=+0.371711867 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, vcs-type=git, config_id=tripleo_step1, distribution-scope=public, vendor=Red Hat, Inc., batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, tcib_managed=true, url=https://www.redhat.com, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, io.openshift.expose-services=) Dec 6 04:16:33 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 04:16:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16951 DF PROTO=TCP SPT=47574 DPT=9102 SEQ=3862954808 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52BF55D60000000001030307) Dec 6 04:16:33 localhost systemd[1]: tmp-crun.WwzWXr.mount: Deactivated successfully. 
Dec 6 04:16:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16952 DF PROTO=TCP SPT=47574 DPT=9102 SEQ=3862954808 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52BF59EF0000000001030307) Dec 6 04:16:35 localhost sshd[105102]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:16:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16953 DF PROTO=TCP SPT=47574 DPT=9102 SEQ=3862954808 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52BF61EF0000000001030307) Dec 6 04:16:37 localhost sshd[105104]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:16:37 localhost systemd-logind[766]: New session 37 of user zuul. Dec 6 04:16:37 localhost systemd[1]: Started Session 37 of User zuul. Dec 6 04:16:38 localhost python3.9[105199]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 6 04:16:38 localhost sshd[105201]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:16:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51163 DF PROTO=TCP SPT=44704 DPT=9882 SEQ=50667696 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52BF6BEF0000000001030307) Dec 6 04:16:39 localhost python3.9[105295]: ansible-ansible.legacy.command Invoked with cmd=python3 -c "import configparser as c; p = c.ConfigParser(strict=False); p.read('/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf'); print(p['DEFAULT']['host'])"#012 _uses_shell=False expand_argument_vars=True 
stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 04:16:40 localhost python3.9[105388]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/neutron/etc/neutron/neutron.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 6 04:16:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16954 DF PROTO=TCP SPT=47574 DPT=9102 SEQ=3862954808 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52BF71AF0000000001030307) Dec 6 04:16:40 localhost python3.9[105482]: ansible-ansible.legacy.command Invoked with cmd=python3 -c "import configparser as c; p = c.ConfigParser(strict=False); p.read('/var/lib/config-data/puppet-generated/neutron/etc/neutron/neutron.conf'); print(p['DEFAULT']['host'])"#012 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 04:16:41 localhost python3.9[105575]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 04:16:42 localhost python3.9[105666]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline Dec 6 04:16:44 localhost python3.9[105756]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 6 04:16:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 
LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3987 DF PROTO=TCP SPT=48846 DPT=9101 SEQ=3545670120 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52BF7F3F0000000001030307) Dec 6 04:16:44 localhost python3.9[105848]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile Dec 6 04:16:45 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3988 DF PROTO=TCP SPT=48846 DPT=9101 SEQ=3545670120 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52BF83300000000001030307) Dec 6 04:16:46 localhost python3.9[105938]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Dec 6 04:16:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39158 DF PROTO=TCP SPT=54560 DPT=9105 SEQ=2042264912 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52BF88640000000001030307) Dec 6 04:16:46 localhost python3.9[105986]: ansible-ansible.legacy.dnf Invoked with name=['systemd-container'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Dec 6 04:16:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3989 DF PROTO=TCP SPT=48846 DPT=9101 
SEQ=3545670120 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52BF8B2F0000000001030307) Dec 6 04:16:47 localhost systemd[1]: session-37.scope: Deactivated successfully. Dec 6 04:16:47 localhost systemd[1]: session-37.scope: Consumed 4.724s CPU time. Dec 6 04:16:47 localhost systemd-logind[766]: Session 37 logged out. Waiting for processes to exit. Dec 6 04:16:47 localhost systemd-logind[766]: Removed session 37. Dec 6 04:16:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9154 DF PROTO=TCP SPT=56244 DPT=9100 SEQ=1924823422 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52BF8C380000000001030307) Dec 6 04:16:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39159 DF PROTO=TCP SPT=54560 DPT=9105 SEQ=2042264912 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52BF8C6F0000000001030307) Dec 6 04:16:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39160 DF PROTO=TCP SPT=54560 DPT=9105 SEQ=2042264912 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52BF94700000000001030307) Dec 6 04:16:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39161 DF PROTO=TCP SPT=54560 DPT=9105 SEQ=2042264912 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52BFA42F0000000001030307) Dec 6 04:16:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51164 DF PROTO=TCP SPT=44704 
DPT=9882 SEQ=50667696 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52BFABF00000000001030307) Dec 6 04:16:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc. Dec 6 04:16:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. Dec 6 04:16:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b. Dec 6 04:16:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9. Dec 6 04:16:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. Dec 6 04:16:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6. Dec 6 04:16:56 localhost podman[106011]: 2025-12-06 09:16:56.953515277 +0000 UTC m=+0.097709596 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, build-date=2025-11-18T23:44:13Z, release=1761123044, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, version=17.1.12, com.redhat.component=openstack-iscsid-container, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-iscsid, distribution-scope=public, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 6 04:16:56 localhost podman[106011]: 2025-12-06 09:16:56.988094877 +0000 UTC m=+0.132289206 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, batch=17.1_20251118.1, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat 
OpenStack Platform 17.1 iscsid, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-iscsid-container, build-date=2025-11-18T23:44:13Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, distribution-scope=public, version=17.1.12, tcib_managed=true, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, vendor=Red Hat, Inc., managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 iscsid, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, config_id=tripleo_step3) Dec 6 04:16:57 localhost systemd[1]: tmp-crun.euSGza.mount: Deactivated successfully. Dec 6 04:16:57 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully. Dec 6 04:16:57 localhost podman[106017]: 2025-12-06 09:16:57.055140873 +0000 UTC m=+0.195439933 container health_status b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, container_name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, vcs-type=git, build-date=2025-11-19T00:12:45Z, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, release=1761123044, architecture=x86_64, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1) Dec 6 04:16:57 localhost podman[106004]: 2025-12-06 09:16:57.102807744 +0000 UTC m=+0.247116257 container health_status 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.expose-services=, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true) Dec 6 04:16:57 localhost podman[106017]: 2025-12-06 09:16:57.111882692 +0000 UTC m=+0.252181772 container exec_died b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 
(image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_id=tripleo_step4, batch=17.1_20251118.1, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, version=17.1.12, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, name=rhosp17/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, 
managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, build-date=2025-11-19T00:12:45Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, url=https://www.redhat.com, architecture=x86_64) Dec 6 04:16:57 localhost systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: Deactivated successfully. Dec 6 04:16:57 localhost podman[106002]: 2025-12-06 09:16:57.164881647 +0000 UTC m=+0.320272120 container health_status 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.4, name=rhosp17/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=logrotate_crond, vcs-type=git, version=17.1.12, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:32Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, config_id=tripleo_step4, distribution-scope=public, tcib_managed=true, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 6 04:16:57 localhost podman[106002]: 2025-12-06 09:16:57.172959754 +0000 UTC m=+0.328350257 container exec_died 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, release=1761123044, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:32Z, url=https://www.redhat.com, name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, batch=17.1_20251118.1, io.buildah.version=1.41.4, container_name=logrotate_crond, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}) Dec 6 04:16:57 localhost podman[106003]: 2025-12-06 09:16:57.026791584 +0000 UTC m=+0.179689820 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.buildah.version=1.41.4, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, batch=17.1_20251118.1, name=rhosp17/openstack-collectd, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.12, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.openshift.expose-services=, container_name=collectd, architecture=x86_64) Dec 6 04:16:57 localhost systemd[1]: 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.service: Deactivated successfully. Dec 6 04:16:57 localhost podman[106003]: 2025-12-06 09:16:57.209520105 +0000 UTC m=+0.362418441 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, distribution-scope=public, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, release=1761123044, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2025-11-18T22:51:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd) Dec 6 04:16:57 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully. 
Dec 6 04:16:57 localhost podman[106005]: 2025-12-06 09:16:57.266515922 +0000 UTC m=+0.410649039 container health_status a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, vendor=Red Hat, Inc., build-date=2025-11-19T00:11:48Z, tcib_managed=true, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, container_name=ceilometer_agent_compute, batch=17.1_20251118.1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-compute-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 6 04:16:57 localhost podman[106005]: 2025-12-06 09:16:57.316496044 +0000 UTC m=+0.460629181 container exec_died a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, container_name=ceilometer_agent_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': 
{'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, build-date=2025-11-19T00:11:48Z, io.openshift.expose-services=, release=1761123044, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container) Dec 6 04:16:57 localhost systemd[1]: a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.service: Deactivated successfully. 
Dec 6 04:16:57 localhost podman[106004]: 2025-12-06 09:16:57.439802704 +0000 UTC m=+0.584111237 container exec_died 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., version=17.1.12, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, tcib_managed=true, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, 
url=https://www.redhat.com, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_migration_target, managed_by=tripleo_ansible, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 6 04:16:57 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: Deactivated successfully. Dec 6 04:16:57 localhost sshd[106134]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:16:57 localhost systemd-logind[766]: New session 38 of user zuul. Dec 6 04:16:57 localhost systemd[1]: Started Session 38 of User zuul. Dec 6 04:16:58 localhost python3.9[106229]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Dec 6 04:16:58 localhost systemd[1]: Reloading. Dec 6 04:16:58 localhost systemd-rc-local-generator[106253]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 6 04:16:58 localhost systemd-sysv-generator[106256]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 6 04:16:58 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 6 04:16:59 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... 
Dec 6 04:16:59 localhost systemd[1]: Starting dnf makecache... Dec 6 04:16:59 localhost recover_tripleo_nova_virtqemud[106267]: 61814 Dec 6 04:16:59 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 6 04:16:59 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 6 04:16:59 localhost dnf[106266]: Updating Subscription Management repositories. Dec 6 04:16:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3991 DF PROTO=TCP SPT=48846 DPT=9101 SEQ=3545670120 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52BFBBEF0000000001030307) Dec 6 04:17:00 localhost python3.9[106357]: ansible-ansible.builtin.service_facts Invoked Dec 6 04:17:00 localhost network[106374]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Dec 6 04:17:00 localhost network[106375]: 'network-scripts' will be removed from distribution in near future. Dec 6 04:17:00 localhost network[106376]: It is advised to switch to 'NetworkManager' instead for network management. Dec 6 04:17:01 localhost sshd[106430]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:17:01 localhost dnf[106266]: Metadata cache refreshed recently. Dec 6 04:17:01 localhost systemd[1]: dnf-makecache.service: Deactivated successfully. Dec 6 04:17:01 localhost systemd[1]: Finished dnf makecache. Dec 6 04:17:01 localhost systemd[1]: dnf-makecache.service: Consumed 2.332s CPU time. 
Dec 6 04:17:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39162 DF PROTO=TCP SPT=54560 DPT=9105 SEQ=2042264912 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52BFC3EF0000000001030307) Dec 6 04:17:02 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 6 04:17:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. Dec 6 04:17:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. Dec 6 04:17:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007. Dec 6 04:17:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. 
Dec 6 04:17:03 localhost podman[106563]: 2025-12-06 09:17:03.284826872 +0000 UTC m=+0.079133887 container health_status 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, io.openshift.expose-services=, tcib_managed=true) Dec 6 04:17:03 localhost podman[106532]: 2025-12-06 09:17:03.202914691 +0000 UTC m=+0.102549245 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, build-date=2025-11-18T23:34:05Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, container_name=ovn_controller, url=https://www.redhat.com, com.redhat.component=openstack-ovn-controller-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, 
managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, tcib_managed=true, io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, version=17.1.12, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, vcs-type=git) Dec 6 04:17:03 localhost podman[106545]: 2025-12-06 09:17:03.254361598 +0000 UTC m=+0.089190346 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, distribution-scope=public, batch=17.1_20251118.1, vcs-type=git, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, tcib_managed=true, architecture=x86_64, url=https://www.redhat.com, build-date=2025-11-19T00:14:25Z, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, io.openshift.expose-services=, version=17.1.12) Dec 6 04:17:03 localhost podman[106563]: 2025-12-06 09:17:03.336446404 +0000 UTC m=+0.130753429 container exec_died 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, release=1761123044, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, tcib_managed=true, vendor=Red Hat, Inc., config_id=tripleo_step5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Dec 6 04:17:03 localhost podman[106563]: unhealthy Dec 6 04:17:03 localhost podman[106583]: 2025-12-06 09:17:03.344087268 +0000 UTC m=+0.081514130 
container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, version=17.1.12, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, build-date=2025-11-18T22:49:46Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, architecture=x86_64, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.4, container_name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, 
vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, vendor=Red Hat, Inc., vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 6 04:17:03 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Main process exited, code=exited, status=1/FAILURE Dec 6 04:17:03 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Failed with result 'exit-code'. Dec 6 04:17:03 localhost podman[106545]: 2025-12-06 09:17:03.384653561 +0000 UTC m=+0.219482319 container exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, release=1761123044, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:14:25Z, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, 
version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team) Dec 6 04:17:03 localhost podman[106532]: 2025-12-06 09:17:03.38753385 +0000 UTC m=+0.287168434 container exec_died 
1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, container_name=ovn_controller, io.openshift.expose-services=, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, build-date=2025-11-18T23:34:05Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, name=rhosp17/openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044) Dec 6 04:17:03 localhost podman[106532]: unhealthy Dec 6 04:17:03 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Main process exited, code=exited, status=1/FAILURE Dec 6 04:17:03 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Failed with result 'exit-code'. Dec 6 04:17:03 localhost podman[106545]: unhealthy Dec 6 04:17:03 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Main process exited, code=exited, status=1/FAILURE Dec 6 04:17:03 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Failed with result 'exit-code'. Dec 6 04:17:03 localhost podman[106583]: 2025-12-06 09:17:03.53104358 +0000 UTC m=+0.268470502 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, distribution-scope=public, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, vendor=Red Hat, Inc., url=https://www.redhat.com, io.buildah.version=1.41.4, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, build-date=2025-11-18T22:49:46Z, maintainer=OpenStack TripleO Team) Dec 6 04:17:03 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. 
Dec 6 04:17:03 localhost sshd[106623]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:17:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7214 DF PROTO=TCP SPT=60606 DPT=9102 SEQ=595273658 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52BFCEEF0000000001030307) Dec 6 04:17:04 localhost sshd[106664]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:17:06 localhost python3.9[106741]: ansible-ansible.builtin.service_facts Invoked Dec 6 04:17:06 localhost network[106758]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Dec 6 04:17:06 localhost network[106759]: 'network-scripts' will be removed from distribution in near future. Dec 6 04:17:06 localhost network[106760]: It is advised to switch to 'NetworkManager' instead for network management. Dec 6 04:17:08 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Dec 6 04:17:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11973 DF PROTO=TCP SPT=40214 DPT=9882 SEQ=1514282561 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52BFE1EF0000000001030307) Dec 6 04:17:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7216 DF PROTO=TCP SPT=60606 DPT=9102 SEQ=595273658 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52BFE6AF0000000001030307) Dec 6 04:17:11 localhost sshd[106960]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:17:11 localhost python3.9[106959]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 6 04:17:11 localhost systemd[1]: Reloading. Dec 6 04:17:11 localhost systemd-rc-local-generator[106986]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 6 04:17:11 localhost systemd-sysv-generator[106993]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 6 04:17:11 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 6 04:17:11 localhost systemd[1]: Stopping ceilometer_agent_compute container... 
Dec 6 04:17:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58176 DF PROTO=TCP SPT=50230 DPT=9101 SEQ=213042245 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52BFF4700000000001030307) Dec 6 04:17:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58178 DF PROTO=TCP SPT=50230 DPT=9101 SEQ=213042245 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C0006F0000000001030307) Dec 6 04:17:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34747 DF PROTO=TCP SPT=35542 DPT=9105 SEQ=1265973637 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C009B00000000001030307) Dec 6 04:17:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34748 DF PROTO=TCP SPT=35542 DPT=9105 SEQ=1265973637 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C0196F0000000001030307) Dec 6 04:17:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11974 DF PROTO=TCP SPT=40214 DPT=9882 SEQ=1514282561 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C021EF0000000001030307) Dec 6 04:17:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. 
Dec 6 04:17:27 localhost podman[107015]: 2025-12-06 09:17:27.17526018 +0000 UTC m=+0.084571503 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, version=17.1.12, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, container_name=iscsid, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, io.buildah.version=1.41.4, architecture=x86_64, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, io.openshift.expose-services=, config_id=tripleo_step3, release=1761123044, distribution-scope=public, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 6 04:17:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6. Dec 6 04:17:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc. Dec 6 04:17:27 localhost podman[107015]: 2025-12-06 09:17:27.224581322 +0000 UTC m=+0.133892625 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.12, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': 
{'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, managed_by=tripleo_ansible, build-date=2025-11-18T23:44:13Z, architecture=x86_64, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, config_id=tripleo_step3, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 6 04:17:27 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully. Dec 6 04:17:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. 
Dec 6 04:17:27 localhost podman[107035]: 2025-12-06 09:17:27.271441448 +0000 UTC m=+0.065476858 container health_status 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, managed_by=tripleo_ansible, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:32Z, config_id=tripleo_step4, name=rhosp17/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.4, com.redhat.component=openstack-cron-container, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, 
vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 cron, release=1761123044) Dec 6 04:17:27 localhost podman[107057]: 2025-12-06 09:17:27.35564977 +0000 UTC m=+0.088547595 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, managed_by=tripleo_ansible, build-date=2025-11-18T22:51:28Z, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, version=17.1.12, io.buildah.version=1.41.4, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, name=rhosp17/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd) Dec 6 04:17:27 localhost podman[107035]: 2025-12-06 09:17:27.361418277 +0000 UTC m=+0.155453707 container exec_died 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, container_name=logrotate_crond, version=17.1.12, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.k8s.description=Red Hat 
OpenStack Platform 17.1 cron, tcib_managed=true, vcs-type=git, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, vendor=Red Hat, Inc., batch=17.1_20251118.1, release=1761123044, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, build-date=2025-11-18T22:49:32Z, summary=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 6 04:17:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9. Dec 6 04:17:27 localhost systemd[1]: 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.service: Deactivated successfully. Dec 6 04:17:27 localhost podman[107034]: 2025-12-06 09:17:27.331974534 +0000 UTC m=+0.133278547 container health_status b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, io.buildah.version=1.41.4, build-date=2025-11-19T00:12:45Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-ipmi, release=1761123044, vendor=Red Hat, Inc., tcib_managed=true, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 6 04:17:27 localhost podman[107057]: 2025-12-06 09:17:27.392809879 +0000 UTC m=+0.125707724 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., container_name=collectd, url=https://www.redhat.com, distribution-scope=public, io.openshift.expose-services=, version=17.1.12, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, build-date=2025-11-18T22:51:28Z, summary=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, name=rhosp17/openstack-collectd) Dec 6 04:17:27 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully. 
Dec 6 04:17:27 localhost podman[107034]: 2025-12-06 09:17:27.420193858 +0000 UTC m=+0.221497881 container exec_died b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, version=17.1.12, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, build-date=2025-11-19T00:12:45Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044) Dec 6 04:17:27 localhost systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: Deactivated successfully. Dec 6 04:17:27 localhost podman[107097]: Error: container a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 is not running Dec 6 04:17:27 localhost systemd[1]: a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.service: Main process exited, code=exited, status=125/n/a Dec 6 04:17:27 localhost systemd[1]: a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.service: Failed with result 'exit-code'. Dec 6 04:17:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b. 
Dec 6 04:17:27 localhost podman[107114]: 2025-12-06 09:17:27.60127485 +0000 UTC m=+0.076865868 container health_status 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, vendor=Red Hat, Inc., vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, version=17.1.12, release=1761123044, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 6 04:17:27 localhost podman[107114]: 2025-12-06 09:17:27.97640352 +0000 UTC m=+0.451994488 container exec_died 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, distribution-scope=public, container_name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, vcs-type=git, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, release=1761123044, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true) Dec 6 04:17:27 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: Deactivated successfully. 
Dec 6 04:17:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58180 DF PROTO=TCP SPT=50230 DPT=9101 SEQ=213042245 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C02FEF0000000001030307) Dec 6 04:17:32 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34749 DF PROTO=TCP SPT=35542 DPT=9105 SEQ=1265973637 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C039EF0000000001030307) Dec 6 04:17:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. Dec 6 04:17:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. Dec 6 04:17:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007. Dec 6 04:17:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. 
Dec 6 04:17:33 localhost podman[107137]: 2025-12-06 09:17:33.68424033 +0000 UTC m=+0.092161067 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, release=1761123044, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-11-18T23:34:05Z, container_name=ovn_controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, version=17.1.12, config_id=tripleo_step4, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, 
tcib_managed=true, batch=17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4) Dec 6 04:17:33 localhost podman[107137]: 2025-12-06 09:17:33.72926033 +0000 UTC m=+0.137181047 container exec_died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, container_name=ovn_controller, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:34:05Z, io.openshift.expose-services=, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, version=17.1.12, batch=17.1_20251118.1, config_id=tripleo_step4, distribution-scope=public, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1761123044, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 6 04:17:33 localhost podman[107137]: unhealthy Dec 6 04:17:33 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Main process exited, code=exited, status=1/FAILURE Dec 6 04:17:33 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Failed with result 'exit-code'. Dec 6 04:17:33 localhost podman[107138]: 2025-12-06 09:17:33.745749145 +0000 UTC m=+0.150412351 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, name=rhosp17/openstack-qdrouterd, architecture=x86_64, managed_by=tripleo_ansible, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.4, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr) Dec 6 04:17:33 localhost podman[107143]: 2025-12-06 09:17:33.793941822 +0000 UTC m=+0.192585294 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.12, container_name=ovn_metadata_agent, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, url=https://www.redhat.com, architecture=x86_64, tcib_managed=true, build-date=2025-11-19T00:14:25Z, 
com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 6 04:17:33 localhost podman[107139]: 2025-12-06 09:17:33.849694192 +0000 UTC m=+0.251733929 container health_status 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, vendor=Red Hat, Inc., tcib_managed=true, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, config_id=tripleo_step5, container_name=nova_compute, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, build-date=2025-11-19T00:36:58Z, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute) Dec 6 04:17:33 localhost podman[107143]: 2025-12-06 09:17:33.86200821 +0000 UTC m=+0.260651682 container exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 
(image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, batch=17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, distribution-scope=public, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red 
Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, io.buildah.version=1.41.4, name=rhosp17/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1761123044, build-date=2025-11-19T00:14:25Z, io.openshift.expose-services=, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 6 04:17:33 localhost podman[107143]: unhealthy Dec 6 04:17:33 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Main process exited, code=exited, status=1/FAILURE Dec 6 04:17:33 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Failed with result 'exit-code'. 
Dec 6 04:17:33 localhost podman[107139]: 2025-12-06 09:17:33.895367133 +0000 UTC m=+0.297406870 container exec_died 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, distribution-scope=public, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., tcib_managed=true, container_name=nova_compute, com.redhat.component=openstack-nova-compute-container, version=17.1.12, config_id=tripleo_step5) Dec 6 04:17:33 localhost podman[107139]: unhealthy Dec 6 04:17:33 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Main process exited, code=exited, status=1/FAILURE Dec 6 04:17:33 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Failed with result 'exit-code'. 
Dec 6 04:17:33 localhost podman[107138]: 2025-12-06 09:17:33.955227127 +0000 UTC m=+0.359890303 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, release=1761123044, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, build-date=2025-11-18T22:49:46Z, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, managed_by=tripleo_ansible) Dec 6 04:17:33 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 04:17:34 localhost systemd[1]: tmp-crun.8Fg5GR.mount: Deactivated successfully. Dec 6 04:17:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54799 DF PROTO=TCP SPT=39736 DPT=9102 SEQ=3229812551 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C0442F0000000001030307) Dec 6 04:17:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16957 DF PROTO=TCP SPT=47574 DPT=9102 SEQ=3862954808 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C04FEF0000000001030307) Dec 6 04:17:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54801 DF PROTO=TCP SPT=39736 DPT=9102 SEQ=3229812551 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C05BEF0000000001030307) Dec 6 04:17:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 
PREC=0x00 TTL=62 ID=10083 DF PROTO=TCP SPT=51286 DPT=9101 SEQ=2421426605 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C069A00000000001030307) Dec 6 04:17:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10085 DF PROTO=TCP SPT=51286 DPT=9101 SEQ=2421426605 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C075AF0000000001030307) Dec 6 04:17:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1823 DF PROTO=TCP SPT=33042 DPT=9105 SEQ=3654695548 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C07EAF0000000001030307) Dec 6 04:17:51 localhost sshd[107226]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:17:53 localhost podman[107002]: time="2025-12-06T09:17:53Z" level=warning msg="StopSignal SIGTERM failed to stop container ceilometer_agent_compute in 42 seconds, resorting to SIGKILL" Dec 6 04:17:53 localhost systemd[1]: libpod-a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.scope: Deactivated successfully. Dec 6 04:17:53 localhost systemd[1]: libpod-a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.scope: Consumed 5.215s CPU time. 
Dec 6 04:17:53 localhost podman[107002]: 2025-12-06 09:17:53.626787085 +0000 UTC m=+42.104093093 container died a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, name=rhosp17/openstack-ceilometer-compute, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack 
osp-17.1 openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, container_name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 6 04:17:53 localhost systemd[1]: a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.timer: Deactivated successfully. Dec 6 04:17:53 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9. Dec 6 04:17:53 localhost systemd[1]: a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.service: Failed to open /run/systemd/transient/a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.service: No such file or directory Dec 6 04:17:53 localhost systemd[1]: tmp-crun.3KDGoB.mount: Deactivated successfully. Dec 6 04:17:53 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9-userdata-shm.mount: Deactivated successfully. 
Dec 6 04:17:53 localhost podman[107002]: 2025-12-06 09:17:53.681868094 +0000 UTC m=+42.159174072 container cleanup a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, container_name=ceilometer_agent_compute, version=17.1.12, url=https://www.redhat.com, distribution-scope=public, batch=17.1_20251118.1, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, 
org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, name=rhosp17/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z) Dec 6 04:17:53 localhost podman[107002]: ceilometer_agent_compute Dec 6 04:17:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1824 DF PROTO=TCP SPT=33042 DPT=9105 SEQ=3654695548 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C08E6F0000000001030307) Dec 6 04:17:53 localhost systemd[1]: a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.timer: Failed to open /run/systemd/transient/a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.timer: No such file or directory Dec 6 04:17:53 localhost systemd[1]: a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.service: Failed to open /run/systemd/transient/a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.service: No such file or directory Dec 6 04:17:53 localhost podman[107229]: 2025-12-06 09:17:53.717810766 +0000 UTC m=+0.077625541 container cleanup a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 
17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, distribution-scope=public, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, tcib_managed=true, architecture=x86_64, name=rhosp17/openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:11:48Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute) Dec 6 04:17:53 localhost systemd[1]: libpod-conmon-a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.scope: Deactivated successfully. Dec 6 04:17:53 localhost systemd[1]: a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.timer: Failed to open /run/systemd/transient/a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.timer: No such file or directory Dec 6 04:17:53 localhost systemd[1]: a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.service: Failed to open /run/systemd/transient/a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9.service: No such file or directory Dec 6 04:17:53 localhost podman[107246]: 2025-12-06 09:17:53.819803472 +0000 UTC m=+0.066966454 container cleanup a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, distribution-scope=public, config_id=tripleo_step4, container_name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1761123044, io.openshift.expose-services=, build-date=2025-11-19T00:11:48Z, io.buildah.version=1.41.4, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute) Dec 6 04:17:53 localhost podman[107246]: ceilometer_agent_compute Dec 6 04:17:53 localhost systemd[1]: tripleo_ceilometer_agent_compute.service: Deactivated successfully. Dec 6 04:17:53 localhost systemd[1]: Stopped ceilometer_agent_compute container. 
Dec 6 04:17:53 localhost systemd[1]: tripleo_ceilometer_agent_compute.service: Consumed 1.064s CPU time, no IO. Dec 6 04:17:54 localhost systemd[1]: var-lib-containers-storage-overlay-15de5573c617e73fedd1daaecfac821d4b4021582e250a3cae6d24e4b8e4cd51-merged.mount: Deactivated successfully. Dec 6 04:17:54 localhost python3.9[107348]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_ipmi.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 6 04:17:54 localhost systemd[1]: Reloading. Dec 6 04:17:54 localhost systemd-rc-local-generator[107372]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 6 04:17:54 localhost systemd-sysv-generator[107378]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 6 04:17:54 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 6 04:17:55 localhost systemd[1]: Stopping ceilometer_agent_ipmi container... Dec 6 04:17:55 localhost systemd[1]: tmp-crun.dwxIby.mount: Deactivated successfully. Dec 6 04:17:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64633 DF PROTO=TCP SPT=38780 DPT=9882 SEQ=1054856800 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C095F00000000001030307) Dec 6 04:17:55 localhost sshd[107403]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:17:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc. 
Dec 6 04:17:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. Dec 6 04:17:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. Dec 6 04:17:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6. Dec 6 04:17:57 localhost systemd[1]: tmp-crun.SidGPq.mount: Deactivated successfully. Dec 6 04:17:57 localhost podman[107407]: 2025-12-06 09:17:57.683808137 +0000 UTC m=+0.085377747 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, name=rhosp17/openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., tcib_managed=true, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, architecture=x86_64, batch=17.1_20251118.1, release=1761123044, io.buildah.version=1.41.4, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=iscsid, build-date=2025-11-18T23:44:13Z, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 
'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.12, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 6 04:17:57 localhost podman[107407]: 2025-12-06 09:17:57.728559859 +0000 UTC m=+0.130129439 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, architecture=x86_64, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, 
build-date=2025-11-18T23:44:13Z, release=1761123044, version=17.1.12, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, name=rhosp17/openstack-iscsid, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible) Dec 6 04:17:57 localhost systemd[1]: tmp-crun.doAp3b.mount: Deactivated successfully. 
Dec 6 04:17:57 localhost podman[107408]: Error: container b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 is not running Dec 6 04:17:57 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully. Dec 6 04:17:57 localhost systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: Main process exited, code=exited, status=125/n/a Dec 6 04:17:57 localhost systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: Failed with result 'exit-code'. Dec 6 04:17:57 localhost podman[107405]: 2025-12-06 09:17:57.778289354 +0000 UTC m=+0.182978130 container health_status 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_id=tripleo_step4, name=rhosp17/openstack-cron, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.openshift.expose-services=, build-date=2025-11-18T22:49:32Z, distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, batch=17.1_20251118.1, io.buildah.version=1.41.4, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 6 04:17:57 localhost podman[107405]: 2025-12-06 09:17:57.786441684 +0000 UTC m=+0.191130520 container exec_died 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, release=1761123044, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-cron, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, 
managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, distribution-scope=public, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, build-date=2025-11-18T22:49:32Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true) Dec 6 04:17:57 localhost systemd[1]: 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.service: Deactivated successfully. 
Dec 6 04:17:57 localhost podman[107406]: 2025-12-06 09:17:57.741928769 +0000 UTC m=+0.143463288 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, vcs-type=git, batch=17.1_20251118.1, managed_by=tripleo_ansible, version=17.1.12, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, name=rhosp17/openstack-collectd, build-date=2025-11-18T22:51:28Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, maintainer=OpenStack TripleO Team) Dec 6 04:17:57 localhost podman[107406]: 2025-12-06 09:17:57.872593815 +0000 UTC m=+0.274128364 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, container_name=collectd, tcib_managed=true, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, version=17.1.12, vendor=Red Hat, Inc., distribution-scope=public, name=rhosp17/openstack-collectd, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.component=openstack-collectd-container, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, build-date=2025-11-18T22:51:28Z, description=Red Hat OpenStack Platform 17.1 collectd) Dec 6 04:17:57 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully. Dec 6 04:17:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b. 
Dec 6 04:17:58 localhost podman[107474]: 2025-12-06 09:17:58.665427411 +0000 UTC m=+0.076437715 container health_status 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, architecture=x86_64, container_name=nova_migration_target, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, 
vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step4, version=17.1.12, build-date=2025-11-19T00:36:58Z, distribution-scope=public, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute) Dec 6 04:17:59 localhost podman[107474]: 2025-12-06 09:17:59.047130152 +0000 UTC m=+0.458140386 container exec_died 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, architecture=x86_64, name=rhosp17/openstack-nova-compute, container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, distribution-scope=public, managed_by=tripleo_ansible, release=1761123044, vcs-type=git) Dec 6 04:17:59 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: Deactivated successfully. 
Dec 6 04:17:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10087 DF PROTO=TCP SPT=51286 DPT=9101 SEQ=2421426605 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C0A5EF0000000001030307) Dec 6 04:18:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1825 DF PROTO=TCP SPT=33042 DPT=9105 SEQ=3654695548 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C0ADF00000000001030307) Dec 6 04:18:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. Dec 6 04:18:03 localhost podman[107575]: 2025-12-06 09:18:03.894091143 +0000 UTC m=+0.056007168 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, release=1761123044, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, vcs-type=git, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, version=17.1.12, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64) Dec 6 04:18:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. 
Dec 6 04:18:03 localhost podman[107575]: 2025-12-06 09:18:03.911589919 +0000 UTC m=+0.073506004 container exec_died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, build-date=2025-11-18T23:34:05Z, url=https://www.redhat.com, config_id=tripleo_step4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, name=rhosp17/openstack-ovn-controller, release=1761123044, io.openshift.expose-services=, container_name=ovn_controller, distribution-scope=public, architecture=x86_64, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, 
vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 6 04:18:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007. Dec 6 04:18:03 localhost podman[107575]: unhealthy Dec 6 04:18:03 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Main process exited, code=exited, status=1/FAILURE Dec 6 04:18:03 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Failed with result 'exit-code'. Dec 6 04:18:03 localhost podman[107593]: 2025-12-06 09:18:03.965023507 +0000 UTC m=+0.057358439 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., distribution-scope=public, managed_by=tripleo_ansible, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, architecture=x86_64, build-date=2025-11-19T00:14:25Z, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, io.openshift.expose-services=, vcs-type=git, config_id=tripleo_step4) Dec 6 04:18:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. 
Dec 6 04:18:04 localhost podman[107603]: 2025-12-06 09:18:04.013274477 +0000 UTC m=+0.086434741 container health_status 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, container_name=nova_compute, io.buildah.version=1.41.4, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, config_id=tripleo_step5, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, release=1761123044, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc.) 
Dec 6 04:18:04 localhost podman[107593]: 2025-12-06 09:18:04.033477746 +0000 UTC m=+0.125812678 container exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, vcs-type=git, tcib_managed=true) Dec 6 04:18:04 localhost podman[107593]: unhealthy Dec 6 04:18:04 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Main process exited, code=exited, status=1/FAILURE Dec 6 04:18:04 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Failed with result 'exit-code'. 
Dec 6 04:18:04 localhost podman[107603]: 2025-12-06 09:18:04.059178754 +0000 UTC m=+0.132339028 container exec_died 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, url=https://www.redhat.com, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, release=1761123044, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_compute, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 6 04:18:04 localhost podman[107603]: unhealthy Dec 6 04:18:04 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Main process exited, code=exited, status=1/FAILURE Dec 6 04:18:04 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Failed with result 'exit-code'. 
Dec 6 04:18:04 localhost podman[107626]: 2025-12-06 09:18:04.12362901 +0000 UTC m=+0.123422085 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vendor=Red Hat, Inc., managed_by=tripleo_ansible, config_id=tripleo_step1, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, tcib_managed=true, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, build-date=2025-11-18T22:49:46Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-qdrouterd-container) Dec 6 04:18:04 localhost podman[107626]: 2025-12-06 09:18:04.343153119 +0000 UTC m=+0.342946204 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, architecture=x86_64, managed_by=tripleo_ansible, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.openshift.expose-services=, container_name=metrics_qdr, config_id=tripleo_step1) Dec 6 04:18:04 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 04:18:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42592 DF PROTO=TCP SPT=58746 DPT=9102 SEQ=2208224687 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C0B96F0000000001030307) Dec 6 04:18:05 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 6 04:18:05 localhost recover_tripleo_nova_virtqemud[107665]: 61814 Dec 6 04:18:05 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. 
Dec 6 04:18:05 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 6 04:18:05 localhost sshd[107666]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:18:06 localhost sshd[107668]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:18:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7219 DF PROTO=TCP SPT=60606 DPT=9102 SEQ=595273658 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C0C5EF0000000001030307) Dec 6 04:18:10 localhost sshd[107670]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:18:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42594 DF PROTO=TCP SPT=58746 DPT=9102 SEQ=2208224687 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C0D12F0000000001030307) Dec 6 04:18:12 localhost sshd[107672]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:18:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5262 DF PROTO=TCP SPT=59642 DPT=9101 SEQ=1211313988 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C0DED00000000001030307) Dec 6 04:18:16 localhost sshd[107674]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:18:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5264 DF PROTO=TCP SPT=59642 DPT=9101 SEQ=1211313988 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C0EAEF0000000001030307) Dec 6 04:18:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 
TOS=0x00 PREC=0x00 TTL=62 ID=21345 DF PROTO=TCP SPT=43012 DPT=9105 SEQ=1515895537 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C0F3EF0000000001030307) Dec 6 04:18:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21346 DF PROTO=TCP SPT=43012 DPT=9105 SEQ=1515895537 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C103AF0000000001030307) Dec 6 04:18:25 localhost sshd[107676]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:18:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12084 DF PROTO=TCP SPT=57334 DPT=9882 SEQ=2216587260 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C10BEF0000000001030307) Dec 6 04:18:27 localhost sshd[107678]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:18:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc. Dec 6 04:18:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. Dec 6 04:18:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6. Dec 6 04:18:27 localhost systemd[1]: tmp-crun.PdrVzd.mount: Deactivated successfully. 
Dec 6 04:18:27 localhost podman[107679]: 2025-12-06 09:18:27.940363151 +0000 UTC m=+0.092072263 container health_status 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, vendor=Red Hat, Inc., config_id=tripleo_step4, io.openshift.expose-services=, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, 
build-date=2025-11-18T22:49:32Z, distribution-scope=public, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 6 04:18:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. Dec 6 04:18:27 localhost podman[107681]: Error: container b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 is not running Dec 6 04:18:27 localhost systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: Main process exited, code=exited, status=125/n/a Dec 6 04:18:27 localhost systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: Failed with result 'exit-code'. 
Dec 6 04:18:27 localhost podman[107679]: 2025-12-06 09:18:27.981179422 +0000 UTC m=+0.132888534 container exec_died 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, build-date=2025-11-18T22:49:32Z, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, version=17.1.12, com.redhat.component=openstack-cron-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, release=1761123044, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.41.4, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron) Dec 6 04:18:27 localhost systemd[1]: 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.service: Deactivated successfully. Dec 6 04:18:28 localhost systemd[1]: tmp-crun.LzHasY.mount: Deactivated successfully. Dec 6 04:18:28 localhost podman[107680]: 2025-12-06 09:18:28.04958655 +0000 UTC m=+0.199752255 container health_status b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, build-date=2025-11-18T23:44:13Z, maintainer=OpenStack TripleO Team, vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.expose-services=, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, batch=17.1_20251118.1, tcib_managed=true, url=https://www.redhat.com, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.12, description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 6 04:18:28 localhost podman[107680]: 2025-12-06 09:18:28.087232074 +0000 UTC m=+0.237397719 container exec_died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-iscsid, 
container_name=iscsid, managed_by=tripleo_ansible, architecture=x86_64, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., vcs-type=git, build-date=2025-11-18T23:44:13Z, io.buildah.version=1.41.4, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, release=1761123044, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid) Dec 6 04:18:28 localhost 
podman[107720]: 2025-12-06 09:18:28.09688938 +0000 UTC m=+0.134677189 container health_status 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_id=tripleo_step3, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, build-date=2025-11-18T22:51:28Z, vcs-type=git, version=17.1.12, container_name=collectd) Dec 6 04:18:28 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Deactivated successfully. Dec 6 04:18:28 localhost podman[107720]: 2025-12-06 09:18:28.110364312 +0000 UTC m=+0.148152122 container exec_died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, url=https://www.redhat.com, name=rhosp17/openstack-collectd, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, io.openshift.expose-services=, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, version=17.1.12, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, batch=17.1_20251118.1, release=1761123044, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:51:28Z) Dec 6 04:18:28 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Deactivated successfully. 
Dec 6 04:18:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b. Dec 6 04:18:29 localhost systemd[1]: tmp-crun.ooehbT.mount: Deactivated successfully. Dec 6 04:18:29 localhost podman[107749]: 2025-12-06 09:18:29.581862753 +0000 UTC m=+0.093220979 container health_status 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, architecture=x86_64, version=17.1.12, release=1761123044, vcs-type=git, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4) Dec 6 04:18:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5266 DF PROTO=TCP SPT=59642 DPT=9101 SEQ=1211313988 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C11BEF0000000001030307) Dec 6 04:18:29 localhost podman[107749]: 2025-12-06 09:18:29.960206832 +0000 UTC m=+0.471565048 container exec_died 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, version=17.1.12, name=rhosp17/openstack-nova-compute, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 
17.1_20251118.1, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_migration_target, io.openshift.expose-services=, url=https://www.redhat.com, architecture=x86_64, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git) Dec 6 04:18:29 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: 
Deactivated successfully. Dec 6 04:18:30 localhost sshd[107773]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:18:30 localhost sshd[107775]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:18:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21347 DF PROTO=TCP SPT=43012 DPT=9105 SEQ=1515895537 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C123F00000000001030307) Dec 6 04:18:32 localhost sshd[107777]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:18:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. Dec 6 04:18:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. Dec 6 04:18:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007. 
Dec 6 04:18:34 localhost podman[107779]: 2025-12-06 09:18:34.16713884 +0000 UTC m=+0.078430525 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, version=17.1.12, tcib_managed=true, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:34:05Z, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, managed_by=tripleo_ansible, config_id=tripleo_step4, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, release=1761123044, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ovn-controller-container, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ovn-controller, 
org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 6 04:18:34 localhost podman[107779]: 2025-12-06 09:18:34.210255032 +0000 UTC m=+0.121546747 container exec_died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, version=17.1.12, com.redhat.component=openstack-ovn-controller-container, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, architecture=x86_64, config_id=tripleo_step4, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., managed_by=tripleo_ansible, description=Red Hat OpenStack 
Platform 17.1 ovn-controller, tcib_managed=true, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, distribution-scope=public, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team) Dec 6 04:18:34 localhost podman[107779]: unhealthy Dec 6 04:18:34 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Main process exited, code=exited, status=1/FAILURE Dec 6 04:18:34 localhost podman[107780]: 2025-12-06 09:18:34.226810129 +0000 UTC m=+0.136182966 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1761123044, url=https://www.redhat.com, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, version=17.1.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, vcs-type=git, build-date=2025-11-19T00:14:25Z, architecture=x86_64, tcib_managed=true, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent) Dec 6 04:18:34 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Failed with result 'exit-code'. 
Dec 6 04:18:34 localhost podman[107780]: 2025-12-06 09:18:34.271390076 +0000 UTC m=+0.180762953 container exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, architecture=x86_64, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.buildah.version=1.41.4, version=17.1.12, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1761123044, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 
'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}) Dec 6 04:18:34 localhost podman[107780]: unhealthy Dec 6 04:18:34 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Main process exited, code=exited, status=1/FAILURE Dec 6 04:18:34 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Failed with result 'exit-code'. 
Dec 6 04:18:34 localhost podman[107781]: 2025-12-06 09:18:34.289659876 +0000 UTC m=+0.192882635 container health_status 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, architecture=x86_64, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', 
'/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, version=17.1.12, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, managed_by=tripleo_ansible, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, vcs-type=git, io.buildah.version=1.41.4) Dec 6 04:18:34 localhost podman[107781]: 2025-12-06 09:18:34.30546021 +0000 UTC m=+0.208682949 container exec_died 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_id=tripleo_step5, architecture=x86_64, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, vcs-type=git, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, tcib_managed=true, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-nova-compute, maintainer=OpenStack TripleO Team, container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 6 04:18:34 localhost podman[107781]: unhealthy Dec 6 04:18:34 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Main process exited, code=exited, status=1/FAILURE Dec 6 04:18:34 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Failed with result 'exit-code'. Dec 6 04:18:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50264 DF PROTO=TCP SPT=54836 DPT=9102 SEQ=3526283303 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C12EAF0000000001030307) Dec 6 04:18:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. 
Dec 6 04:18:34 localhost podman[107839]: 2025-12-06 09:18:34.92001644 +0000 UTC m=+0.082611343 container health_status 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=metrics_qdr, build-date=2025-11-18T22:49:46Z, url=https://www.redhat.com, name=rhosp17/openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, managed_by=tripleo_ansible, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, tcib_managed=true, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 6 04:18:35 localhost podman[107839]: 2025-12-06 09:18:35.12029045 +0000 UTC m=+0.282885323 container exec_died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, managed_by=tripleo_ansible, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:46Z, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, container_name=metrics_qdr, version=17.1.12, name=rhosp17/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step1, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Dec 6 04:18:35 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Deactivated successfully. Dec 6 04:18:37 localhost podman[107389]: time="2025-12-06T09:18:37Z" level=warning msg="StopSignal SIGTERM failed to stop container ceilometer_agent_ipmi in 42 seconds, resorting to SIGKILL" Dec 6 04:18:37 localhost systemd[1]: tmp-crun.6bjeXq.mount: Deactivated successfully. Dec 6 04:18:37 localhost systemd[1]: libpod-b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.scope: Deactivated successfully. Dec 6 04:18:37 localhost systemd[1]: libpod-b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.scope: Consumed 5.022s CPU time. 
Dec 6 04:18:37 localhost podman[107389]: 2025-12-06 09:18:37.266291729 +0000 UTC m=+42.106054036 container died b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, build-date=2025-11-19T00:12:45Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Dec 6 04:18:37 localhost systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.timer: Deactivated successfully. Dec 6 04:18:37 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6. Dec 6 04:18:37 localhost systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: Failed to open /run/systemd/transient/b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: No such file or directory Dec 6 04:18:37 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6-userdata-shm.mount: Deactivated successfully. 
Dec 6 04:18:37 localhost podman[107389]: 2025-12-06 09:18:37.327494895 +0000 UTC m=+42.167257122 container cleanup b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, architecture=x86_64, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, distribution-scope=public, managed_by=tripleo_ansible, 
io.buildah.version=1.41.4, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, vcs-type=git, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., config_id=tripleo_step4, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:12:45Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container) Dec 6 04:18:37 localhost podman[107389]: ceilometer_agent_ipmi Dec 6 04:18:37 localhost systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.timer: Failed to open /run/systemd/transient/b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.timer: No such file or directory Dec 6 04:18:37 localhost systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: Failed to open /run/systemd/transient/b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: No such file or directory Dec 6 04:18:37 localhost podman[107870]: 2025-12-06 09:18:37.405648811 +0000 UTC m=+0.126994934 container cleanup b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., build-date=2025-11-19T00:12:45Z, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, url=https://www.redhat.com, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 6 04:18:37 localhost systemd[1]: libpod-conmon-b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.scope: Deactivated successfully. 
Dec 6 04:18:37 localhost systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.timer: Failed to open /run/systemd/transient/b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.timer: No such file or directory Dec 6 04:18:37 localhost systemd[1]: b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: Failed to open /run/systemd/transient/b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6.service: No such file or directory Dec 6 04:18:37 localhost podman[107883]: 2025-12-06 09:18:37.507493073 +0000 UTC m=+0.069059748 container cleanup b63991e076084f33fdbae970a3ed856f42cb83fed28fd3dc61558f92f8ff65a6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, tcib_managed=true, 
com.redhat.component=openstack-ceilometer-ipmi-container, url=https://www.redhat.com, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, build-date=2025-11-19T00:12:45Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, container_name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi) Dec 6 04:18:37 localhost podman[107883]: ceilometer_agent_ipmi Dec 6 04:18:37 localhost systemd[1]: tripleo_ceilometer_agent_ipmi.service: Deactivated successfully. Dec 6 04:18:37 localhost systemd[1]: Stopped ceilometer_agent_ipmi container. Dec 6 04:18:38 localhost systemd[1]: var-lib-containers-storage-overlay-c7edd91eaf927f0cc6c745dda6c529d67c09cf793c73be3335bece938eeb713d-merged.mount: Deactivated successfully. Dec 6 04:18:38 localhost python3.9[107988]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_collectd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 6 04:18:38 localhost systemd[1]: Reloading. Dec 6 04:18:38 localhost systemd-sysv-generator[108016]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. 
Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 6 04:18:38 localhost systemd-rc-local-generator[108012]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 6 04:18:38 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 6 04:18:38 localhost systemd[1]: Stopping collectd container... Dec 6 04:18:38 localhost sshd[108040]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:18:38 localhost systemd[1]: tmp-crun.fkRsYw.mount: Deactivated successfully. Dec 6 04:18:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27001 DF PROTO=TCP SPT=51632 DPT=9882 SEQ=2718330891 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C13FF00000000001030307) Dec 6 04:18:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50266 DF PROTO=TCP SPT=54836 DPT=9102 SEQ=3526283303 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C1466F0000000001030307) Dec 6 04:18:40 localhost sshd[108043]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:18:42 localhost systemd[1]: libpod-2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.scope: Deactivated successfully. Dec 6 04:18:42 localhost systemd[1]: libpod-2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.scope: Consumed 2.024s CPU time. 
Dec 6 04:18:42 localhost podman[108029]: 2025-12-06 09:18:42.036474655 +0000 UTC m=+3.289200135 container stop 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_id=tripleo_step3, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, build-date=2025-11-18T22:51:28Z, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., version=17.1.12, container_name=collectd, name=rhosp17/openstack-collectd, release=1761123044, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public) Dec 6 04:18:42 localhost podman[108029]: 2025-12-06 09:18:42.067177656 +0000 UTC m=+3.319903126 container died 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.buildah.version=1.41.4, container_name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, architecture=x86_64, release=1761123044, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:51:28Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': 
{'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, config_id=tripleo_step3, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com) Dec 6 04:18:42 localhost systemd[1]: tmp-crun.31wJvF.mount: Deactivated successfully. Dec 6 04:18:42 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.timer: Deactivated successfully. Dec 6 04:18:42 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185. 
Dec 6 04:18:42 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Failed to open /run/systemd/transient/2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: No such file or directory Dec 6 04:18:42 localhost systemd[1]: tmp-crun.bZ5Qzu.mount: Deactivated successfully. Dec 6 04:18:42 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185-userdata-shm.mount: Deactivated successfully. Dec 6 04:18:42 localhost podman[108029]: 2025-12-06 09:18:42.134436238 +0000 UTC m=+3.387161718 container cleanup 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vendor=Red Hat, Inc., container_name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, release=1761123044, vcs-type=git, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, managed_by=tripleo_ansible, version=17.1.12, io.buildah.version=1.41.4, name=rhosp17/openstack-collectd, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:51:28Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true) Dec 6 04:18:42 localhost podman[108029]: collectd Dec 6 04:18:42 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.timer: Failed to open /run/systemd/transient/2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.timer: No such file or directory Dec 6 04:18:42 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Failed to open /run/systemd/transient/2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: No such file or directory Dec 6 04:18:42 localhost podman[108045]: 2025-12-06 09:18:42.157418263 +0000 UTC m=+0.104781964 container cleanup 
2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, com.redhat.component=openstack-collectd-container, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
tcib_managed=true, config_id=tripleo_step3, build-date=2025-11-18T22:51:28Z, version=17.1.12, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-collectd, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 6 04:18:42 localhost systemd[1]: tripleo_collectd.service: Main process exited, code=exited, status=1/FAILURE Dec 6 04:18:42 localhost systemd[1]: libpod-conmon-2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.scope: Deactivated successfully. Dec 6 04:18:42 localhost podman[108076]: error opening file `/run/crun/2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185/status`: No such file or directory Dec 6 04:18:42 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.timer: Failed to open /run/systemd/transient/2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.timer: No such file or directory Dec 6 04:18:42 localhost systemd[1]: 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: Failed to open /run/systemd/transient/2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185.service: No such file or directory Dec 6 04:18:42 localhost podman[108065]: 2025-12-06 09:18:42.276128721 +0000 UTC m=+0.081334854 container cleanup 2057e5294552232f9f4337cf76c4aeca0fbda37fb4011fc7748fc47e299ca185 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, com.redhat.component=openstack-collectd-container, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, io.buildah.version=1.41.4, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
build-date=2025-11-18T22:51:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, name=rhosp17/openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd) Dec 6 04:18:42 localhost podman[108065]: collectd Dec 6 04:18:42 localhost systemd[1]: tripleo_collectd.service: Failed with result 'exit-code'. Dec 6 04:18:42 localhost systemd[1]: Stopped collectd container. Dec 6 04:18:43 localhost python3.9[108169]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_iscsid.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 6 04:18:43 localhost systemd[1]: var-lib-containers-storage-overlay-d980d54738e5f040d62ff40bb9abb4b1931da0f4c80c1ba3031e7feabd416146-merged.mount: Deactivated successfully. Dec 6 04:18:44 localhost systemd[1]: Reloading. Dec 6 04:18:44 localhost systemd-rc-local-generator[108192]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 6 04:18:44 localhost systemd-sysv-generator[108197]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 6 04:18:44 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Dec 6 04:18:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12871 DF PROTO=TCP SPT=37808 DPT=9101 SEQ=2036028519 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C154000000000001030307) Dec 6 04:18:44 localhost systemd[1]: Stopping iscsid container... Dec 6 04:18:44 localhost systemd[1]: libpod-b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.scope: Deactivated successfully. Dec 6 04:18:44 localhost systemd[1]: libpod-b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.scope: Consumed 1.024s CPU time. Dec 6 04:18:44 localhost podman[108210]: 2025-12-06 09:18:44.516363749 +0000 UTC m=+0.064636163 container died b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, tcib_managed=true, com.redhat.component=openstack-iscsid-container, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, managed_by=tripleo_ansible, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:44:13Z, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp17/openstack-iscsid, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, vcs-type=git) Dec 6 04:18:44 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.timer: Deactivated successfully. Dec 6 04:18:44 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4. Dec 6 04:18:44 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Failed to open /run/systemd/transient/b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: No such file or directory Dec 6 04:18:44 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4-userdata-shm.mount: Deactivated successfully. 
Dec 6 04:18:44 localhost systemd[1]: var-lib-containers-storage-overlay-ef5a2dfca972201470637fe24151a27f03799cbb5a942d988431f305c9ea334c-merged.mount: Deactivated successfully. Dec 6 04:18:44 localhost podman[108210]: 2025-12-06 09:18:44.570250301 +0000 UTC m=+0.118522685 container cleanup b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.component=openstack-iscsid-container, vcs-type=git, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, distribution-scope=public, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.buildah.version=1.41.4, tcib_managed=true, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:44:13Z, version=17.1.12, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid) Dec 6 04:18:44 localhost podman[108210]: iscsid Dec 6 04:18:44 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.timer: Failed to open /run/systemd/transient/b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.timer: No such file or directory Dec 6 04:18:44 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Failed to open /run/systemd/transient/b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: No such file or directory Dec 6 04:18:44 localhost podman[108224]: 2025-12-06 09:18:44.596247667 +0000 UTC m=+0.071648787 container cleanup b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.component=openstack-iscsid-container, batch=17.1_20251118.1, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, vcs-type=git, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, distribution-scope=public, build-date=2025-11-18T23:44:13Z, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., tcib_managed=true, architecture=x86_64, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, name=rhosp17/openstack-iscsid, release=1761123044) Dec 6 04:18:44 localhost systemd[1]: 
libpod-conmon-b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.scope: Deactivated successfully. Dec 6 04:18:44 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.timer: Failed to open /run/systemd/transient/b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.timer: No such file or directory Dec 6 04:18:44 localhost systemd[1]: b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: Failed to open /run/systemd/transient/b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4.service: No such file or directory Dec 6 04:18:44 localhost podman[108237]: 2025-12-06 09:18:44.697018456 +0000 UTC m=+0.070947815 container cleanup b42304674c227698d01aa8793300474cc86d1ff521bfa05eefa217b5f5fc74a4 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, container_name=iscsid, version=17.1.12, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-iscsid, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, build-date=2025-11-18T23:44:13Z, vendor=Red Hat, Inc., 
vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}) Dec 6 04:18:44 localhost podman[108237]: iscsid Dec 6 04:18:44 localhost systemd[1]: tripleo_iscsid.service: Deactivated successfully. Dec 6 04:18:44 localhost systemd[1]: Stopped iscsid container. Dec 6 04:18:45 localhost python3.9[108340]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_logrotate_crond.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 6 04:18:45 localhost systemd[1]: Reloading. Dec 6 04:18:45 localhost systemd-rc-local-generator[108363]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 6 04:18:45 localhost systemd-sysv-generator[108368]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. 
Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 6 04:18:45 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 6 04:18:45 localhost systemd[1]: Stopping logrotate_crond container... Dec 6 04:18:45 localhost systemd[1]: libpod-04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.scope: Deactivated successfully. Dec 6 04:18:45 localhost podman[108381]: 2025-12-06 09:18:45.867149478 +0000 UTC m=+0.073315188 container died 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-type=git, io.buildah.version=1.41.4, architecture=x86_64, container_name=logrotate_crond, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, name=rhosp17/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, distribution-scope=public, build-date=2025-11-18T22:49:32Z) Dec 6 04:18:45 localhost systemd[1]: 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.timer: Deactivated successfully. Dec 6 04:18:45 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc. 
Dec 6 04:18:45 localhost systemd[1]: 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.service: Failed to open /run/systemd/transient/04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.service: No such file or directory Dec 6 04:18:45 localhost podman[108381]: 2025-12-06 09:18:45.995819893 +0000 UTC m=+0.201985543 container cleanup 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.expose-services=, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, distribution-scope=public, name=rhosp17/openstack-cron, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, container_name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, version=17.1.12, description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:32Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, vcs-type=git) Dec 6 04:18:45 localhost podman[108381]: logrotate_crond Dec 6 04:18:46 localhost systemd[1]: 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.timer: Failed to open /run/systemd/transient/04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.timer: No such file or directory Dec 6 04:18:46 localhost systemd[1]: 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.service: Failed to open /run/systemd/transient/04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.service: No such file or directory Dec 6 04:18:46 localhost podman[108394]: 2025-12-06 09:18:46.021864921 +0000 UTC m=+0.154797466 container cleanup 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, batch=17.1_20251118.1, version=17.1.12, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, maintainer=OpenStack 
TripleO Team, build-date=2025-11-18T22:49:32Z, url=https://www.redhat.com, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, architecture=x86_64, container_name=logrotate_crond, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-cron-container, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron) Dec 6 04:18:46 localhost systemd[1]: libpod-conmon-04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.scope: Deactivated successfully. 
Dec 6 04:18:46 localhost podman[108426]: error opening file `/run/crun/04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc/status`: No such file or directory Dec 6 04:18:46 localhost systemd[1]: 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.timer: Failed to open /run/systemd/transient/04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.timer: No such file or directory Dec 6 04:18:46 localhost systemd[1]: 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.service: Failed to open /run/systemd/transient/04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc.service: No such file or directory Dec 6 04:18:46 localhost podman[108414]: 2025-12-06 09:18:46.135213297 +0000 UTC m=+0.075290140 container cleanup 04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-type=git, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:32Z, com.redhat.component=openstack-cron-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, vendor=Red Hat, Inc., version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.buildah.version=1.41.4, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 6 04:18:46 localhost podman[108414]: logrotate_crond Dec 6 04:18:46 localhost systemd[1]: tripleo_logrotate_crond.service: Deactivated successfully. Dec 6 04:18:46 localhost systemd[1]: Stopped logrotate_crond container. Dec 6 04:18:46 localhost python3.9[108519]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_metrics_qdr.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 6 04:18:46 localhost systemd[1]: Reloading. Dec 6 04:18:46 localhost systemd-rc-local-generator[108542]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 6 04:18:46 localhost systemd-sysv-generator[108546]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. 
Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 6 04:18:47 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 6 04:18:47 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-04f6b29afd29691f16a28d2049f2bd2128d6ac4e86c833f5e34438bbec3e10bc-userdata-shm.mount: Deactivated successfully. Dec 6 04:18:47 localhost systemd[1]: var-lib-containers-storage-overlay-cd6425452938e99a947d98ed440c416f97c1a47fc1a973380479b4612f15ab3d-merged.mount: Deactivated successfully. Dec 6 04:18:47 localhost systemd[1]: Stopping metrics_qdr container... Dec 6 04:18:47 localhost systemd[1]: tmp-crun.VN62HF.mount: Deactivated successfully. Dec 6 04:18:47 localhost kernel: qdrouterd[54519]: segfault at 0 ip 00007fb4f238b7cb sp 00007ffe18e60290 error 4 in libc.so.6[7fb4f2328000+175000] Dec 6 04:18:47 localhost kernel: Code: 0b 00 64 44 89 23 85 c0 75 d4 e9 2b ff ff ff e8 db a5 00 00 e9 fd fe ff ff e8 41 1d 0d 00 90 f3 0f 1e fa 41 54 55 48 89 fd 53 <8b> 07 f6 c4 20 0f 85 aa 00 00 00 89 c2 81 e2 00 80 00 00 0f 84 a9 Dec 6 04:18:47 localhost systemd[1]: Created slice Slice /system/systemd-coredump. Dec 6 04:18:47 localhost systemd[1]: Started Process Core Dump (PID 108573/UID 0). Dec 6 04:18:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12873 DF PROTO=TCP SPT=37808 DPT=9101 SEQ=2036028519 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C15FF00000000001030307) Dec 6 04:18:47 localhost systemd-coredump[108574]: Resource limits disable core dumping for process 54519 (qdrouterd). Dec 6 04:18:47 localhost systemd-coredump[108574]: Process 54519 (qdrouterd) of user 42465 dumped core. 
Dec 6 04:18:47 localhost systemd[1]: systemd-coredump@0-108573-0.service: Deactivated successfully. Dec 6 04:18:47 localhost podman[108560]: 2025-12-06 09:18:47.451626293 +0000 UTC m=+0.238700660 container died 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, version=17.1.12, tcib_managed=true, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, build-date=2025-11-18T22:49:46Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, distribution-scope=public) Dec 6 04:18:47 localhost systemd[1]: libpod-203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.scope: Deactivated successfully. Dec 6 04:18:47 localhost systemd[1]: libpod-203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.scope: Consumed 27.794s CPU time. Dec 6 04:18:47 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.timer: Deactivated successfully. Dec 6 04:18:47 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c. 
Dec 6 04:18:47 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Failed to open /run/systemd/transient/203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: No such file or directory Dec 6 04:18:47 localhost podman[108560]: 2025-12-06 09:18:47.512706245 +0000 UTC m=+0.299780622 container cleanup 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, architecture=x86_64, vcs-type=git, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp17/openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 
'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, build-date=2025-11-18T22:49:46Z) Dec 6 04:18:47 localhost podman[108560]: metrics_qdr Dec 6 04:18:47 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.timer: Failed to open /run/systemd/transient/203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.timer: No such file or directory Dec 6 04:18:47 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Failed to open /run/systemd/transient/203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: No such file or directory Dec 6 04:18:47 localhost podman[108578]: 2025-12-06 09:18:47.575883592 +0000 UTC m=+0.111450288 container cleanup 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, name=rhosp17/openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': 
{'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, container_name=metrics_qdr, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, release=1761123044, build-date=2025-11-18T22:49:46Z, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1) Dec 6 04:18:47 localhost systemd[1]: 
tripleo_metrics_qdr.service: Main process exited, code=exited, status=139/n/a Dec 6 04:18:47 localhost systemd[1]: libpod-conmon-203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.scope: Deactivated successfully. Dec 6 04:18:47 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.timer: Failed to open /run/systemd/transient/203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.timer: No such file or directory Dec 6 04:18:47 localhost systemd[1]: 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: Failed to open /run/systemd/transient/203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c.service: No such file or directory Dec 6 04:18:47 localhost podman[108592]: 2025-12-06 09:18:47.668518842 +0000 UTC m=+0.066571713 container cleanup 203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vendor=Red Hat, Inc., io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, managed_by=tripleo_ansible, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'e8f60832f8f2382eeceefcaaff307d45'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, version=17.1.12, batch=17.1_20251118.1, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 6 04:18:47 localhost podman[108592]: metrics_qdr Dec 6 04:18:47 localhost systemd[1]: tripleo_metrics_qdr.service: Failed with result 'exit-code'. Dec 6 04:18:47 localhost systemd[1]: Stopped metrics_qdr container. Dec 6 04:18:48 localhost systemd[1]: var-lib-containers-storage-overlay-beaf327340ccd7215a759765519263eed11e8999b460cf785f7dbab3207ce8ee-merged.mount: Deactivated successfully. Dec 6 04:18:48 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-203277eef6de030daaca698f56ad8c5cc24d6633f7bc188a739eccc264c31a0c-userdata-shm.mount: Deactivated successfully. 
Dec 6 04:18:48 localhost python3.9[108695]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_neutron_dhcp.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 6 04:18:49 localhost python3.9[108788]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_neutron_l3_agent.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 6 04:18:49 localhost sshd[108849]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:18:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25244 DF PROTO=TCP SPT=57414 DPT=9105 SEQ=1289446335 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C1692F0000000001030307) Dec 6 04:18:49 localhost python3.9[108883]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_neutron_ovs_agent.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 6 04:18:51 localhost python3.9[108976]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 6 04:18:51 localhost systemd[1]: Reloading. Dec 6 04:18:51 localhost systemd-rc-local-generator[109003]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 6 04:18:51 localhost systemd-sysv-generator[109008]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Dec 6 04:18:51 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 6 04:18:52 localhost systemd[1]: Stopping nova_compute container... Dec 6 04:18:52 localhost systemd[1]: tmp-crun.9XsxYW.mount: Deactivated successfully. Dec 6 04:18:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25245 DF PROTO=TCP SPT=57414 DPT=9105 SEQ=1289446335 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C178EF0000000001030307) Dec 6 04:18:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35040 DF PROTO=TCP SPT=33504 DPT=9882 SEQ=1702904302 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C185AF0000000001030307) Dec 6 04:18:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12875 DF PROTO=TCP SPT=37808 DPT=9101 SEQ=2036028519 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C18FF00000000001030307) Dec 6 04:19:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b. Dec 6 04:19:00 localhost systemd[1]: tmp-crun.z7cxYs.mount: Deactivated successfully. 
Dec 6 04:19:00 localhost podman[109029]: 2025-12-06 09:19:00.396714151 +0000 UTC m=+0.298264755 container health_status 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, container_name=nova_migration_target, release=1761123044, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., vcs-type=git, config_id=tripleo_step4, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 
nova-compute, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 6 04:19:00 localhost podman[109029]: 2025-12-06 09:19:00.774168971 +0000 UTC m=+0.675719535 container exec_died 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, vcs-type=git, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, version=17.1.12, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, batch=17.1_20251118.1, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, architecture=x86_64, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4) Dec 6 04:19:00 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: Deactivated successfully. Dec 6 04:19:02 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25246 DF PROTO=TCP SPT=57414 DPT=9105 SEQ=1289446335 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C199EF0000000001030307) Dec 6 04:19:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. 
Dec 6 04:19:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007. Dec 6 04:19:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. Dec 6 04:19:04 localhost podman[109116]: Error: container 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 is not running Dec 6 04:19:04 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Main process exited, code=exited, status=125/n/a Dec 6 04:19:04 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Failed with result 'exit-code'. Dec 6 04:19:04 localhost podman[109117]: 2025-12-06 09:19:04.682505776 +0000 UTC m=+0.085679427 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, build-date=2025-11-19T00:14:25Z, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 6 04:19:04 localhost podman[109117]: 2025-12-06 09:19:04.701181379 +0000 UTC m=+0.104355100 container exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 
(image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, tcib_managed=true, url=https://www.redhat.com, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, release=1761123044, maintainer=OpenStack TripleO Team, version=17.1.12, build-date=2025-11-19T00:14:25Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=) Dec 6 04:19:04 localhost podman[109117]: unhealthy Dec 6 04:19:04 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Main process exited, code=exited, status=1/FAILURE Dec 6 04:19:04 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Failed with result 'exit-code'. 
Dec 6 04:19:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41233 DF PROTO=TCP SPT=37402 DPT=9102 SEQ=1076870088 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C1A3EF0000000001030307) Dec 6 04:19:04 localhost podman[109115]: 2025-12-06 09:19:04.786953969 +0000 UTC m=+0.194690520 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, architecture=x86_64, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, tcib_managed=true, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:34:05Z, name=rhosp17/openstack-ovn-controller, container_name=ovn_controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 
'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, vcs-type=git) Dec 6 04:19:04 localhost podman[109115]: 2025-12-06 09:19:04.800927577 +0000 UTC m=+0.208664128 container exec_died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, release=1761123044, vendor=Red Hat, Inc., managed_by=tripleo_ansible, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:34:05Z, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, version=17.1.12, io.openshift.expose-services=, architecture=x86_64, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', 
'/var/log/containers/openvswitch:/var/log/ovn:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, name=rhosp17/openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git) Dec 6 04:19:04 localhost podman[109115]: unhealthy Dec 6 04:19:04 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Main process exited, code=exited, status=1/FAILURE Dec 6 04:19:04 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Failed with result 'exit-code'. 
Dec 6 04:19:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42597 DF PROTO=TCP SPT=58746 DPT=9102 SEQ=2208224687 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C1AFEF0000000001030307) Dec 6 04:19:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41235 DF PROTO=TCP SPT=37402 DPT=9102 SEQ=1076870088 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C1BBAF0000000001030307) Dec 6 04:19:14 localhost sshd[109179]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:19:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3911 DF PROTO=TCP SPT=59048 DPT=9101 SEQ=357860565 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C1C9300000000001030307) Dec 6 04:19:16 localhost sshd[109181]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:19:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3913 DF PROTO=TCP SPT=59048 DPT=9101 SEQ=357860565 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C1D5300000000001030307) Dec 6 04:19:18 localhost sshd[109183]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:19:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55718 DF PROTO=TCP SPT=40516 DPT=9105 SEQ=4098139791 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C1DE6F0000000001030307) Dec 6 04:19:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 
MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55719 DF PROTO=TCP SPT=40516 DPT=9105 SEQ=4098139791 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C1EE2F0000000001030307) Dec 6 04:19:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35043 DF PROTO=TCP SPT=33504 DPT=9882 SEQ=1702904302 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C1F5EF0000000001030307) Dec 6 04:19:29 localhost sshd[109185]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:19:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3915 DF PROTO=TCP SPT=59048 DPT=9101 SEQ=357860565 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C205EF0000000001030307) Dec 6 04:19:29 localhost sshd[109187]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:19:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b. 
Dec 6 04:19:30 localhost podman[109189]: 2025-12-06 09:19:30.912949779 +0000 UTC m=+0.076986901 container health_status 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., container_name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, config_id=tripleo_step4, batch=17.1_20251118.1, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, version=17.1.12) Dec 6 04:19:31 localhost podman[109189]: 2025-12-06 09:19:31.294618399 +0000 UTC m=+0.458655511 container exec_died 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, container_name=nova_migration_target, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., vcs-type=git, io.openshift.expose-services=, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, tcib_managed=true, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, name=rhosp17/openstack-nova-compute) Dec 6 04:19:31 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: Deactivated successfully. 
Dec 6 04:19:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55720 DF PROTO=TCP SPT=40516 DPT=9105 SEQ=4098139791 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C20DF00000000001030307) Dec 6 04:19:34 localhost podman[109016]: time="2025-12-06T09:19:34Z" level=warning msg="StopSignal SIGTERM failed to stop container nova_compute in 42 seconds, resorting to SIGKILL" Dec 6 04:19:34 localhost systemd[1]: tmp-crun.IStCZ5.mount: Deactivated successfully. Dec 6 04:19:34 localhost systemd[1]: session-c11.scope: Deactivated successfully. Dec 6 04:19:34 localhost systemd[1]: libpod-41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.scope: Deactivated successfully. Dec 6 04:19:34 localhost systemd[1]: libpod-41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.scope: Consumed 36.098s CPU time. Dec 6 04:19:34 localhost podman[109016]: 2025-12-06 09:19:34.185366949 +0000 UTC m=+42.111065654 container died 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, 
release=1761123044, url=https://www.redhat.com, container_name=nova_compute, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, vcs-type=git, version=17.1.12, 
name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 6 04:19:34 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.timer: Deactivated successfully. Dec 6 04:19:34 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007. Dec 6 04:19:34 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Failed to open /run/systemd/transient/41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: No such file or directory Dec 6 04:19:34 localhost systemd[1]: var-lib-containers-storage-overlay-4fa26257b93de11d5a1a515bc1294a83a2d0558581107701a6e94f33d44fcb5e-merged.mount: Deactivated successfully. Dec 6 04:19:34 localhost podman[109016]: 2025-12-06 09:19:34.246278727 +0000 UTC m=+42.171977412 container cleanup 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, release=1761123044, url=https://www.redhat.com, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, 
summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team) Dec 6 04:19:34 localhost podman[109016]: nova_compute Dec 6 04:19:34 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.timer: Failed to open /run/systemd/transient/41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.timer: No such file or directory Dec 6 04:19:34 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Failed to open /run/systemd/transient/41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: No such file or directory Dec 6 04:19:34 localhost podman[109213]: 2025-12-06 09:19:34.264586357 +0000 UTC m=+0.065228050 container cleanup 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 
'18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, version=17.1.12, managed_by=tripleo_ansible, batch=17.1_20251118.1, container_name=nova_compute, release=1761123044, io.openshift.expose-services=, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
nova-compute) Dec 6 04:19:34 localhost systemd[1]: libpod-conmon-41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.scope: Deactivated successfully. Dec 6 04:19:34 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.timer: Failed to open /run/systemd/transient/41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.timer: No such file or directory Dec 6 04:19:34 localhost systemd[1]: 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: Failed to open /run/systemd/transient/41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007.service: No such file or directory Dec 6 04:19:34 localhost podman[109225]: 2025-12-06 09:19:34.34945265 +0000 UTC m=+0.057562006 container cleanup 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step5, name=rhosp17/openstack-nova-compute, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, vendor=Red Hat, Inc., release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, version=17.1.12, container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 
'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, distribution-scope=public, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 6 04:19:34 localhost podman[109225]: nova_compute Dec 6 04:19:34 localhost systemd[1]: 
tripleo_nova_compute.service: Deactivated successfully. Dec 6 04:19:34 localhost systemd[1]: Stopped nova_compute container. Dec 6 04:19:34 localhost systemd[1]: tripleo_nova_compute.service: Consumed 1.075s CPU time, no IO. Dec 6 04:19:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20151 DF PROTO=TCP SPT=45332 DPT=9102 SEQ=2909602329 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C218EF0000000001030307) Dec 6 04:19:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. Dec 6 04:19:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. Dec 6 04:19:34 localhost podman[109331]: 2025-12-06 09:19:34.916580615 +0000 UTC m=+0.090290428 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 
'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, release=1761123044, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, tcib_managed=true, batch=17.1_20251118.1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 6 04:19:34 localhost podman[109332]: 2025-12-06 09:19:34.963109522 +0000 UTC m=+0.135640880 container 
health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, container_name=ovn_controller, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, build-date=2025-11-18T23:34:05Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://www.redhat.com, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, 
batch=17.1_20251118.1, architecture=x86_64, io.buildah.version=1.41.4, release=1761123044, vcs-type=git) Dec 6 04:19:34 localhost podman[109331]: 2025-12-06 09:19:34.990322716 +0000 UTC m=+0.164032549 container exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, io.buildah.version=1.41.4, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, distribution-scope=public, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, vendor=Red Hat, Inc., architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 
'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, vcs-type=git) Dec 6 04:19:34 localhost podman[109331]: unhealthy Dec 6 04:19:35 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Main process exited, code=exited, status=1/FAILURE Dec 6 04:19:35 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Failed with result 'exit-code'. 
Dec 6 04:19:35 localhost podman[109332]: 2025-12-06 09:19:35.002472259 +0000 UTC m=+0.175003587 container exec_died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, architecture=x86_64, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, name=rhosp17/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., 
config_id=tripleo_step4, io.buildah.version=1.41.4, release=1761123044, container_name=ovn_controller, distribution-scope=public, io.openshift.expose-services=, vcs-type=git) Dec 6 04:19:35 localhost podman[109332]: unhealthy Dec 6 04:19:35 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Main process exited, code=exited, status=1/FAILURE Dec 6 04:19:35 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Failed with result 'exit-code'. Dec 6 04:19:35 localhost python3.9[109330]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 6 04:19:36 localhost systemd[1]: Reloading. Dec 6 04:19:36 localhost sshd[109374]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:19:36 localhost systemd-rc-local-generator[109396]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 6 04:19:36 localhost systemd-sysv-generator[109399]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 6 04:19:36 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 6 04:19:36 localhost systemd[1]: Stopping nova_migration_target container... Dec 6 04:19:36 localhost systemd[1]: tmp-crun.UaOSug.mount: Deactivated successfully. Dec 6 04:19:36 localhost systemd[1]: libpod-38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.scope: Deactivated successfully. Dec 6 04:19:36 localhost systemd[1]: libpod-38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.scope: Consumed 33.642s CPU time. 
Dec 6 04:19:36 localhost podman[109413]: 2025-12-06 09:19:36.662342474 +0000 UTC m=+0.082212841 container died 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, container_name=nova_migration_target, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, config_id=tripleo_step4, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, distribution-scope=public, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute) Dec 6 04:19:36 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.timer: Deactivated successfully. Dec 6 04:19:36 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b. Dec 6 04:19:36 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: Failed to open /run/systemd/transient/38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: No such file or directory Dec 6 04:19:36 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b-userdata-shm.mount: Deactivated successfully. 
Dec 6 04:19:36 localhost podman[109413]: 2025-12-06 09:19:36.72191383 +0000 UTC m=+0.141784147 container cleanup 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, version=17.1.12, config_id=tripleo_step4, release=1761123044, io.buildah.version=1.41.4, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, build-date=2025-11-19T00:36:58Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, architecture=x86_64, summary=Red Hat OpenStack 
Platform 17.1 nova-compute, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, tcib_managed=true) Dec 6 04:19:36 localhost podman[109413]: nova_migration_target Dec 6 04:19:36 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.timer: Failed to open /run/systemd/transient/38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.timer: No such file or directory Dec 6 04:19:36 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: Failed to open /run/systemd/transient/38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: No such file or directory Dec 6 04:19:36 localhost podman[109426]: 2025-12-06 09:19:36.751027293 +0000 UTC m=+0.076991912 container cleanup 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.openshift.expose-services=, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, release=1761123044, config_id=tripleo_step4, version=17.1.12, tcib_managed=true, url=https://www.redhat.com, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Dec 6 04:19:36 localhost systemd[1]: libpod-conmon-38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.scope: Deactivated successfully. 
Dec 6 04:19:36 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.timer: Failed to open /run/systemd/transient/38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.timer: No such file or directory Dec 6 04:19:36 localhost systemd[1]: 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: Failed to open /run/systemd/transient/38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b.service: No such file or directory Dec 6 04:19:36 localhost podman[109442]: 2025-12-06 09:19:36.855639789 +0000 UTC m=+0.067304584 container cleanup 38053a2512c651e0a8d3b0f4b541280b1332690a87434e686da8738a0222dd0b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, config_id=tripleo_step4, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack 
Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}) Dec 6 04:19:36 localhost podman[109442]: nova_migration_target Dec 6 04:19:36 localhost systemd[1]: tripleo_nova_migration_target.service: Deactivated successfully. Dec 6 04:19:36 localhost systemd[1]: Stopped nova_migration_target container. Dec 6 04:19:37 localhost python3.9[109545]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 6 04:19:37 localhost systemd[1]: Reloading. Dec 6 04:19:37 localhost systemd-sysv-generator[109571]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Dec 6 04:19:37 localhost systemd-rc-local-generator[109567]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 6 04:19:37 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 6 04:19:37 localhost systemd[1]: var-lib-containers-storage-overlay-16e4342af8bf5958b38bc295034feee0dd1522d1c796e48d3acbadb880cc49ff-merged.mount: Deactivated successfully. Dec 6 04:19:37 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 6 04:19:37 localhost systemd[1]: Stopping nova_virtlogd_wrapper container... Dec 6 04:19:37 localhost recover_tripleo_nova_virtqemud[109588]: 61814 Dec 6 04:19:37 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Dec 6 04:19:37 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Dec 6 04:19:38 localhost systemd[1]: libpod-c55a3fa9476956f37d3ecfbe7a06aced3ea8b321c934918e4f504b9cf2d8fc82.scope: Deactivated successfully. 
Dec 6 04:19:38 localhost podman[109587]: 2025-12-06 09:19:38.018389155 +0000 UTC m=+0.050608483 container stop c55a3fa9476956f37d3ecfbe7a06aced3ea8b321c934918e4f504b9cf2d8fc82 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', 
'/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, build-date=2025-11-19T00:35:22Z, container_name=nova_virtlogd_wrapper, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, com.redhat.component=openstack-nova-libvirt-container, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, managed_by=tripleo_ansible) Dec 6 04:19:38 localhost podman[109587]: 2025-12-06 09:19:38.052108538 +0000 UTC m=+0.084327906 container died c55a3fa9476956f37d3ecfbe7a06aced3ea8b321c934918e4f504b9cf2d8fc82 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, vcs-type=git, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, 
org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., distribution-scope=public, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, build-date=2025-11-19T00:35:22Z, com.redhat.component=openstack-nova-libvirt-container, container_name=nova_virtlogd_wrapper, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', 
'/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, io.openshift.expose-services=, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 6 04:19:38 localhost podman[109587]: 2025-12-06 09:19:38.089813175 +0000 UTC m=+0.122032503 container cleanup c55a3fa9476956f37d3ecfbe7a06aced3ea8b321c934918e4f504b9cf2d8fc82 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, architecture=x86_64, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, io.openshift.expose-services=, version=17.1.12, vcs-type=git, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step3, container_name=nova_virtlogd_wrapper, io.buildah.version=1.41.4, release=1761123044, build-date=2025-11-19T00:35:22Z, name=rhosp17/openstack-nova-libvirt, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, 
com.redhat.component=openstack-nova-libvirt-container, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 6 04:19:38 localhost podman[109587]: nova_virtlogd_wrapper Dec 6 04:19:38 localhost podman[109603]: 2025-12-06 09:19:38.154547429 +0000 UTC m=+0.119989379 container cleanup c55a3fa9476956f37d3ecfbe7a06aced3ea8b321c934918e4f504b9cf2d8fc82 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:35:22Z, maintainer=OpenStack TripleO Team, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step3, url=https://www.redhat.com, name=rhosp17/openstack-nova-libvirt, managed_by=tripleo_ansible, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, batch=17.1_20251118.1, architecture=x86_64, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, description=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtlogd_wrapper, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, tcib_managed=true) Dec 6 04:19:38 localhost systemd[1]: tmp-crun.2g2Tyh.mount: Deactivated successfully. Dec 6 04:19:38 localhost systemd[1]: var-lib-containers-storage-overlay-ef97a8ce410352459d2cd2d839f0c4f3d007fd27d6a886085c43dfe3ff9df394-merged.mount: Deactivated successfully. 
Dec 6 04:19:38 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c55a3fa9476956f37d3ecfbe7a06aced3ea8b321c934918e4f504b9cf2d8fc82-userdata-shm.mount: Deactivated successfully. Dec 6 04:19:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60842 DF PROTO=TCP SPT=46442 DPT=9882 SEQ=1051052648 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C22BF00000000001030307) Dec 6 04:19:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20153 DF PROTO=TCP SPT=45332 DPT=9102 SEQ=2909602329 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C230AF0000000001030307) Dec 6 04:19:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45909 DF PROTO=TCP SPT=52054 DPT=9101 SEQ=2336290836 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C23E600000000001030307) Dec 6 04:19:44 localhost systemd[1]: Stopping User Manager for UID 0... Dec 6 04:19:44 localhost systemd[84400]: Activating special unit Exit the Session... Dec 6 04:19:44 localhost systemd[84400]: Removed slice User Background Tasks Slice. Dec 6 04:19:44 localhost systemd[84400]: Stopped target Main User Target. Dec 6 04:19:44 localhost systemd[84400]: Stopped target Basic System. Dec 6 04:19:44 localhost systemd[84400]: Stopped target Paths. Dec 6 04:19:44 localhost systemd[84400]: Stopped target Sockets. Dec 6 04:19:44 localhost systemd[84400]: Stopped target Timers. Dec 6 04:19:44 localhost systemd[84400]: Stopped Daily Cleanup of User's Temporary Directories. Dec 6 04:19:44 localhost systemd[84400]: Closed D-Bus User Message Bus Socket. 
Dec 6 04:19:44 localhost systemd[84400]: Stopped Create User's Volatile Files and Directories. Dec 6 04:19:44 localhost systemd[84400]: Removed slice User Application Slice. Dec 6 04:19:44 localhost systemd[84400]: Reached target Shutdown. Dec 6 04:19:44 localhost systemd[84400]: Finished Exit the Session. Dec 6 04:19:44 localhost systemd[84400]: Reached target Exit the Session. Dec 6 04:19:44 localhost systemd[1]: user@0.service: Deactivated successfully. Dec 6 04:19:44 localhost systemd[1]: Stopped User Manager for UID 0. Dec 6 04:19:44 localhost systemd[1]: user@0.service: Consumed 3.449s CPU time, no IO. Dec 6 04:19:44 localhost systemd[1]: Stopping User Runtime Directory /run/user/0... Dec 6 04:19:44 localhost systemd[1]: run-user-0.mount: Deactivated successfully. Dec 6 04:19:44 localhost systemd[1]: user-runtime-dir@0.service: Deactivated successfully. Dec 6 04:19:44 localhost systemd[1]: Stopped User Runtime Directory /run/user/0. Dec 6 04:19:44 localhost systemd[1]: Removed slice User Slice of UID 0. Dec 6 04:19:44 localhost systemd[1]: user-0.slice: Consumed 4.412s CPU time. 
Dec 6 04:19:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45911 DF PROTO=TCP SPT=52054 DPT=9101 SEQ=2336290836 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C24A6F0000000001030307) Dec 6 04:19:49 localhost sshd[109619]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:19:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15920 DF PROTO=TCP SPT=47802 DPT=9105 SEQ=1421469768 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C2536F0000000001030307) Dec 6 04:19:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15921 DF PROTO=TCP SPT=47802 DPT=9105 SEQ=1421469768 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C2632F0000000001030307) Dec 6 04:19:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60843 DF PROTO=TCP SPT=46442 DPT=9882 SEQ=1051052648 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C26BEF0000000001030307) Dec 6 04:19:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45913 DF PROTO=TCP SPT=52054 DPT=9101 SEQ=2336290836 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C279EF0000000001030307) Dec 6 04:20:02 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15922 DF PROTO=TCP SPT=47802 DPT=9105 
SEQ=1421469768 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C283EF0000000001030307) Dec 6 04:20:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55772 DF PROTO=TCP SPT=59922 DPT=9102 SEQ=2564949314 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C28E2F0000000001030307) Dec 6 04:20:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. Dec 6 04:20:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. Dec 6 04:20:05 localhost podman[109621]: 2025-12-06 09:20:05.188972662 +0000 UTC m=+0.098596924 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, build-date=2025-11-18T23:34:05Z, tcib_managed=true, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, io.buildah.version=1.41.4, name=rhosp17/openstack-ovn-controller, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_controller, io.openshift.expose-services=, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., config_id=tripleo_step4, maintainer=OpenStack TripleO Team, release=1761123044) Dec 6 04:20:05 localhost podman[109622]: 2025-12-06 09:20:05.232683182 +0000 UTC m=+0.139605661 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, architecture=x86_64, build-date=2025-11-19T00:14:25Z, vcs-type=git, io.buildah.version=1.41.4, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, release=1761123044, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, tcib_managed=true, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn) Dec 6 04:20:05 
localhost podman[109622]: 2025-12-06 09:20:05.253090527 +0000 UTC m=+0.160013036 container exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, config_id=tripleo_step4, release=1761123044, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., url=https://www.redhat.com, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, distribution-scope=public, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, build-date=2025-11-19T00:14:25Z, managed_by=tripleo_ansible) Dec 6 04:20:05 localhost podman[109622]: unhealthy Dec 6 04:20:05 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Main process exited, code=exited, status=1/FAILURE Dec 6 04:20:05 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Failed with result 'exit-code'. 
Dec 6 04:20:05 localhost podman[109621]: 2025-12-06 09:20:05.308808386 +0000 UTC m=+0.218432668 container exec_died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, container_name=ovn_controller, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, vcs-type=git, release=1761123044, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, batch=17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, 
io.openshift.expose-services=, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:34:05Z, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272) Dec 6 04:20:05 localhost podman[109621]: unhealthy Dec 6 04:20:05 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Main process exited, code=exited, status=1/FAILURE Dec 6 04:20:05 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Failed with result 'exit-code'. Dec 6 04:20:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41238 DF PROTO=TCP SPT=37402 DPT=9102 SEQ=1076870088 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C299EF0000000001030307) Dec 6 04:20:09 localhost ceph-osd[31726]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Dec 6 04:20:09 localhost ceph-osd[31726]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4800.1 total, 600.0 interval#012Cumulative writes: 5761 writes, 25K keys, 5761 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5761 writes, 760 syncs, 7.58 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Dec 6 04:20:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55774 DF PROTO=TCP SPT=59922 DPT=9102 SEQ=2564949314 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C2A6180000000001030307) Dec 6 04:20:12 localhost 
ceph-osd[32665]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Dec 6 04:20:12 localhost ceph-osd[32665]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4800.2 total, 600.0 interval#012Cumulative writes: 4879 writes, 21K keys, 4879 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 4879 writes, 669 syncs, 7.29 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Dec 6 04:20:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57534 DF PROTO=TCP SPT=48814 DPT=9101 SEQ=595854149 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C2B3900000000001030307) Dec 6 04:20:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57536 DF PROTO=TCP SPT=48814 DPT=9101 SEQ=595854149 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C2BFB00000000001030307) Dec 6 04:20:17 localhost sshd[109737]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:20:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30377 DF PROTO=TCP SPT=43298 DPT=9105 SEQ=2476358954 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C2C8AF0000000001030307) Dec 6 04:20:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 
TOS=0x00 PREC=0x00 TTL=62 ID=30378 DF PROTO=TCP SPT=43298 DPT=9105 SEQ=2476358954 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C2D8700000000001030307) Dec 6 04:20:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31339 DF PROTO=TCP SPT=46294 DPT=9882 SEQ=296207786 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C2DFEF0000000001030307) Dec 6 04:20:26 localhost sshd[109739]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:20:27 localhost sshd[109741]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:20:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57538 DF PROTO=TCP SPT=48814 DPT=9101 SEQ=595854149 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C2EFEF0000000001030307) Dec 6 04:20:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30379 DF PROTO=TCP SPT=43298 DPT=9105 SEQ=2476358954 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C2F7EF0000000001030307) Dec 6 04:20:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30919 DF PROTO=TCP SPT=44688 DPT=9102 SEQ=3698087328 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C303700000000001030307) Dec 6 04:20:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. Dec 6 04:20:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. 
Dec 6 04:20:35 localhost podman[109744]: 2025-12-06 09:20:35.68440968 +0000 UTC m=+0.088747322 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20251118.1, container_name=ovn_metadata_agent, architecture=x86_64, release=1761123044, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, build-date=2025-11-19T00:14:25Z, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., tcib_managed=true, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 6 04:20:35 localhost podman[109744]: 2025-12-06 09:20:35.698983926 +0000 UTC m=+0.103321568 container exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 
'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, io.buildah.version=1.41.4, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, batch=17.1_20251118.1, io.openshift.expose-services=, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:14:25Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, 
version=17.1.12, tcib_managed=true, url=https://www.redhat.com) Dec 6 04:20:35 localhost podman[109744]: unhealthy Dec 6 04:20:35 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Main process exited, code=exited, status=1/FAILURE Dec 6 04:20:35 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Failed with result 'exit-code'. Dec 6 04:20:35 localhost podman[109743]: 2025-12-06 09:20:35.785945792 +0000 UTC m=+0.190238523 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ovn-controller, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, distribution-scope=public, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, managed_by=tripleo_ansible, config_id=tripleo_step4, io.buildah.version=1.41.4, version=17.1.12, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 ovn-controller, tcib_managed=true, build-date=2025-11-18T23:34:05Z, io.openshift.expose-services=, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, batch=17.1_20251118.1, container_name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 6 04:20:35 localhost podman[109743]: 2025-12-06 09:20:35.831191239 +0000 UTC m=+0.235483920 container exec_died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, tcib_managed=true, vendor=Red Hat, Inc., io.buildah.version=1.41.4, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, build-date=2025-11-18T23:34:05Z, url=https://www.redhat.com, name=rhosp17/openstack-ovn-controller, vcs-type=git) Dec 6 04:20:35 localhost podman[109743]: unhealthy Dec 6 04:20:35 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Main process exited, code=exited, status=1/FAILURE Dec 6 04:20:35 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Failed with result 'exit-code'. 
Dec 6 04:20:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20156 DF PROTO=TCP SPT=45332 DPT=9102 SEQ=2909602329 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C30FEF0000000001030307) Dec 6 04:20:38 localhost sshd[109782]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:20:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30921 DF PROTO=TCP SPT=44688 DPT=9102 SEQ=3698087328 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C31B2F0000000001030307) Dec 6 04:20:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33984 DF PROTO=TCP SPT=53444 DPT=9101 SEQ=497035705 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C328C00000000001030307) Dec 6 04:20:45 localhost sshd[109784]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:20:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33986 DF PROTO=TCP SPT=53444 DPT=9101 SEQ=497035705 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C334AF0000000001030307) Dec 6 04:20:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14537 DF PROTO=TCP SPT=57488 DPT=9105 SEQ=3212763973 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C33DF00000000001030307) Dec 6 04:20:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 
LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14538 DF PROTO=TCP SPT=57488 DPT=9105 SEQ=3212763973 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C34DAF0000000001030307) Dec 6 04:20:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41178 DF PROTO=TCP SPT=56252 DPT=9882 SEQ=3025860389 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C355EF0000000001030307) Dec 6 04:20:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33988 DF PROTO=TCP SPT=53444 DPT=9101 SEQ=497035705 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C363F00000000001030307) Dec 6 04:21:02 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14539 DF PROTO=TCP SPT=57488 DPT=9105 SEQ=3212763973 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C36DEF0000000001030307) Dec 6 04:21:02 localhost systemd[1]: tripleo_nova_virtlogd_wrapper.service: State 'stop-sigterm' timed out. Killing. Dec 6 04:21:02 localhost systemd[1]: tripleo_nova_virtlogd_wrapper.service: Killing process 61031 (conmon) with signal SIGKILL. Dec 6 04:21:02 localhost systemd[1]: tripleo_nova_virtlogd_wrapper.service: Main process exited, code=killed, status=9/KILL Dec 6 04:21:02 localhost systemd[1]: libpod-conmon-c55a3fa9476956f37d3ecfbe7a06aced3ea8b321c934918e4f504b9cf2d8fc82.scope: Deactivated successfully. 
Dec 6 04:21:02 localhost podman[109798]: error opening file `/run/crun/c55a3fa9476956f37d3ecfbe7a06aced3ea8b321c934918e4f504b9cf2d8fc82/status`: No such file or directory Dec 6 04:21:02 localhost podman[109786]: 2025-12-06 09:21:02.384332619 +0000 UTC m=+0.040545184 container cleanup c55a3fa9476956f37d3ecfbe7a06aced3ea8b321c934918e4f504b9cf2d8fc82 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, release=1761123044, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_virtlogd_wrapper, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', 
'/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, build-date=2025-11-19T00:35:22Z, url=https://www.redhat.com, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-libvirt-container, tcib_managed=true, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4) Dec 6 04:21:02 localhost podman[109786]: nova_virtlogd_wrapper Dec 6 04:21:02 localhost systemd[1]: tripleo_nova_virtlogd_wrapper.service: Failed with result 'timeout'. Dec 6 04:21:02 localhost systemd[1]: Stopped nova_virtlogd_wrapper container. 
Dec 6 04:21:03 localhost python3.9[109891]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 6 04:21:03 localhost systemd[1]: Reloading. Dec 6 04:21:03 localhost systemd-rc-local-generator[109918]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 6 04:21:03 localhost systemd-sysv-generator[109921]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 6 04:21:03 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 6 04:21:03 localhost systemd[1]: Stopping nova_virtnodedevd container... Dec 6 04:21:03 localhost systemd[1]: libpod-77cac28f3c09b9f832ae0c5e203a7ac268e6e556a22ddb4e08fa5fd08b32fce5.scope: Deactivated successfully. Dec 6 04:21:03 localhost systemd[1]: libpod-77cac28f3c09b9f832ae0c5e203a7ac268e6e556a22ddb4e08fa5fd08b32fce5.scope: Consumed 1.535s CPU time. 
Dec 6 04:21:03 localhost podman[109932]: 2025-12-06 09:21:03.557459693 +0000 UTC m=+0.077343973 container died 77cac28f3c09b9f832ae0c5e203a7ac268e6e556a22ddb4e08fa5fd08b32fce5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, com.redhat.component=openstack-nova-libvirt-container, build-date=2025-11-19T00:35:22Z, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', 
'/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtnodedevd, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, tcib_managed=true, name=rhosp17/openstack-nova-libvirt, io.buildah.version=1.41.4, managed_by=tripleo_ansible, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt) Dec 6 04:21:03 localhost systemd[1]: tmp-crun.wsNh1F.mount: Deactivated successfully. 
Dec 6 04:21:03 localhost podman[109932]: 2025-12-06 09:21:03.592867268 +0000 UTC m=+0.112751518 container cleanup 77cac28f3c09b9f832ae0c5e203a7ac268e6e556a22ddb4e08fa5fd08b32fce5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-11-19T00:35:22Z, architecture=x86_64, distribution-scope=public, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-libvirt, container_name=nova_virtnodedevd, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-libvirt-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', 
'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, io.openshift.expose-services=, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1761123044, vendor=Red Hat, Inc., vcs-type=git) Dec 6 04:21:03 localhost podman[109932]: nova_virtnodedevd Dec 6 04:21:03 localhost podman[109946]: 2025-12-06 09:21:03.629271494 +0000 UTC m=+0.062705943 container cleanup 77cac28f3c09b9f832ae0c5e203a7ac268e6e556a22ddb4e08fa5fd08b32fce5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_virtnodedevd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, vendor=Red Hat, 
Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1761123044, com.redhat.component=openstack-nova-libvirt-container, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20251118.1, name=rhosp17/openstack-nova-libvirt, config_id=tripleo_step3, architecture=x86_64, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', 
'/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, io.openshift.expose-services=, build-date=2025-11-19T00:35:22Z, distribution-scope=public, tcib_managed=true, managed_by=tripleo_ansible, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt) Dec 6 04:21:03 localhost systemd[1]: libpod-conmon-77cac28f3c09b9f832ae0c5e203a7ac268e6e556a22ddb4e08fa5fd08b32fce5.scope: Deactivated successfully. Dec 6 04:21:03 localhost podman[109975]: error opening file `/run/crun/77cac28f3c09b9f832ae0c5e203a7ac268e6e556a22ddb4e08fa5fd08b32fce5/status`: No such file or directory Dec 6 04:21:03 localhost podman[109962]: 2025-12-06 09:21:03.719456908 +0000 UTC m=+0.056140332 container cleanup 77cac28f3c09b9f832ae0c5e203a7ac268e6e556a22ddb4e08fa5fd08b32fce5 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:35:22Z, container_name=nova_virtnodedevd, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, 
com.redhat.component=openstack-nova-libvirt-container, managed_by=tripleo_ansible, config_id=tripleo_step3, distribution-scope=public, vcs-type=git, name=rhosp17/openstack-nova-libvirt, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', 
'/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20251118.1) Dec 6 04:21:03 localhost podman[109962]: nova_virtnodedevd Dec 6 04:21:03 localhost systemd[1]: tripleo_nova_virtnodedevd.service: Deactivated successfully. Dec 6 04:21:03 localhost systemd[1]: Stopped nova_virtnodedevd container. Dec 6 04:21:04 localhost systemd[1]: var-lib-containers-storage-overlay-29aef16efd0d0d4913740c423de7a8c374d7bce415829f4c5401764f17811d20-merged.mount: Deactivated successfully. Dec 6 04:21:04 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-77cac28f3c09b9f832ae0c5e203a7ac268e6e556a22ddb4e08fa5fd08b32fce5-userdata-shm.mount: Deactivated successfully. Dec 6 04:21:04 localhost python3.9[110068]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 6 04:21:04 localhost systemd[1]: Reloading. Dec 6 04:21:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39352 DF PROTO=TCP SPT=36072 DPT=9102 SEQ=2593195835 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C378AF0000000001030307) Dec 6 04:21:04 localhost systemd-rc-local-generator[110097]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 6 04:21:04 localhost systemd-sysv-generator[110101]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. 
Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 6 04:21:04 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 6 04:21:05 localhost systemd[1]: Stopping nova_virtproxyd container... Dec 6 04:21:05 localhost systemd[1]: libpod-abf33a7ce64d174f5aeca10ae9ef2b118248dbae4e0fd3e1c43527aa9d26cefa.scope: Deactivated successfully. Dec 6 04:21:05 localhost podman[110109]: 2025-12-06 09:21:05.170591175 +0000 UTC m=+0.086522844 container died abf33a7ce64d174f5aeca10ae9ef2b118248dbae4e0fd3e1c43527aa9d26cefa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, name=rhosp17/openstack-nova-libvirt, build-date=2025-11-19T00:35:22Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, managed_by=tripleo_ansible, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtproxyd, vendor=Red Hat, Inc., release=1761123044, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-libvirt-container, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 
'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, config_id=tripleo_step3, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt) Dec 6 04:21:05 localhost podman[110109]: 2025-12-06 09:21:05.219592337 +0000 UTC m=+0.135524006 container cleanup 
abf33a7ce64d174f5aeca10ae9ef2b118248dbae4e0fd3e1c43527aa9d26cefa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, container_name=nova_virtproxyd, distribution-scope=public, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-libvirt-container, vcs-type=git, build-date=2025-11-19T00:35:22Z, managed_by=tripleo_ansible, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.4, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, release=1761123044) Dec 6 04:21:05 localhost podman[110109]: nova_virtproxyd Dec 6 04:21:05 localhost podman[110122]: 2025-12-06 09:21:05.251846856 +0000 UTC m=+0.069477091 container cleanup abf33a7ce64d174f5aeca10ae9ef2b118248dbae4e0fd3e1c43527aa9d26cefa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, 
url=https://www.redhat.com, config_id=tripleo_step3, name=rhosp17/openstack-nova-libvirt, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, architecture=x86_64, container_name=nova_virtproxyd, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-libvirt-container, version=17.1.12, release=1761123044, batch=17.1_20251118.1, build-date=2025-11-19T00:35:22Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt) Dec 6 04:21:05 localhost systemd[1]: libpod-conmon-abf33a7ce64d174f5aeca10ae9ef2b118248dbae4e0fd3e1c43527aa9d26cefa.scope: Deactivated successfully. Dec 6 04:21:05 localhost podman[110151]: error opening file `/run/crun/abf33a7ce64d174f5aeca10ae9ef2b118248dbae4e0fd3e1c43527aa9d26cefa/status`: No such file or directory Dec 6 04:21:05 localhost podman[110139]: 2025-12-06 09:21:05.337605184 +0000 UTC m=+0.058928687 container cleanup abf33a7ce64d174f5aeca10ae9ef2b118248dbae4e0fd3e1c43527aa9d26cefa (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20251118.1, config_id=tripleo_step3, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 
'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, io.buildah.version=1.41.4, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_virtproxyd, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-libvirt-container, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:35:22Z, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
vcs-type=git, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=) Dec 6 04:21:05 localhost podman[110139]: nova_virtproxyd Dec 6 04:21:05 localhost systemd[1]: tripleo_nova_virtproxyd.service: Deactivated successfully. Dec 6 04:21:05 localhost systemd[1]: Stopped nova_virtproxyd container. Dec 6 04:21:05 localhost systemd[1]: tmp-crun.oyeqKz.mount: Deactivated successfully. Dec 6 04:21:05 localhost systemd[1]: var-lib-containers-storage-overlay-1c3c28666d804509c2b20602368c8cd77799587988a2d1a1789a75cb16c60a47-merged.mount: Deactivated successfully. Dec 6 04:21:05 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-abf33a7ce64d174f5aeca10ae9ef2b118248dbae4e0fd3e1c43527aa9d26cefa-userdata-shm.mount: Deactivated successfully. Dec 6 04:21:05 localhost sshd[110200]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:21:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. Dec 6 04:21:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. 
Dec 6 04:21:05 localhost podman[110248]: 2025-12-06 09:21:05.919131302 +0000 UTC m=+0.085925935 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, release=1761123044, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_metadata_agent, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, build-date=2025-11-19T00:14:25Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, config_id=tripleo_step4, maintainer=OpenStack TripleO Team) Dec 6 04:21:05 localhost podman[110248]: 2025-12-06 09:21:05.937126793 +0000 UTC m=+0.103921416 container exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vendor=Red Hat, Inc., managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.4, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, architecture=x86_64, version=17.1.12, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, url=https://www.redhat.com, build-date=2025-11-19T00:14:25Z, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, 
vcs-type=git, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team) Dec 6 04:21:05 localhost podman[110248]: unhealthy Dec 6 04:21:05 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Main process exited, code=exited, status=1/FAILURE Dec 6 04:21:05 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Failed with result 'exit-code'. Dec 6 04:21:06 localhost podman[110264]: 2025-12-06 09:21:06.008533753 +0000 UTC m=+0.081788939 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, build-date=2025-11-18T23:34:05Z, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, vendor=Red Hat, Inc., version=17.1.12, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, 
io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, batch=17.1_20251118.1, container_name=ovn_controller, tcib_managed=true) Dec 6 04:21:06 localhost podman[110264]: 2025-12-06 09:21:06.026158213 +0000 UTC m=+0.099413389 container exec_died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, config_id=tripleo_step4, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, release=1761123044, distribution-scope=public, name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, build-date=2025-11-18T23:34:05Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., version=17.1.12) Dec 6 04:21:06 localhost podman[110264]: unhealthy Dec 6 04:21:06 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Main process exited, code=exited, status=1/FAILURE Dec 6 04:21:06 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Failed with result 'exit-code'. Dec 6 04:21:06 localhost python3.9[110247]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 6 04:21:07 localhost systemd[1]: Reloading. Dec 6 04:21:07 localhost systemd-sysv-generator[110360]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 6 04:21:07 localhost systemd-rc-local-generator[110352]: /etc/rc.d/rc.local is not marked executable, skipping. 
Dec 6 04:21:07 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 6 04:21:07 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Dec 6 04:21:07 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Main process exited, code=killed, status=15/TERM Dec 6 04:21:07 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Failed with result 'signal'. Dec 6 04:21:07 localhost systemd[1]: Stopped Check and recover tripleo_nova_virtqemud. Dec 6 04:21:07 localhost systemd[1]: tripleo_nova_virtqemud_recover.timer: Deactivated successfully. Dec 6 04:21:07 localhost systemd[1]: Stopped Check and recover tripleo_nova_virtqemud every 10m. Dec 6 04:21:07 localhost systemd[1]: Stopping nova_virtqemud container... Dec 6 04:21:07 localhost systemd[1]: libpod-e444006757a84d45c953d1ef31bc530b8507f3f86e52f8ba7761eaf744cfae6a.scope: Deactivated successfully. Dec 6 04:21:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55777 DF PROTO=TCP SPT=59922 DPT=9102 SEQ=2564949314 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C383EF0000000001030307) Dec 6 04:21:07 localhost systemd[1]: libpod-e444006757a84d45c953d1ef31bc530b8507f3f86e52f8ba7761eaf744cfae6a.scope: Consumed 2.878s CPU time. 
Dec 6 04:21:07 localhost podman[110392]: 2025-12-06 09:21:07.637258903 +0000 UTC m=+0.069287755 container died e444006757a84d45c953d1ef31bc530b8507f3f86e52f8ba7761eaf744cfae6a (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, container_name=nova_virtqemud, com.redhat.component=openstack-nova-libvirt-container, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, build-date=2025-11-19T00:35:22Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, version=17.1.12, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.buildah.version=1.41.4, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, managed_by=tripleo_ansible) Dec 6 04:21:07 localhost podman[110392]: 2025-12-06 09:21:07.660543497 +0000 UTC m=+0.092572329 container cleanup e444006757a84d45c953d1ef31bc530b8507f3f86e52f8ba7761eaf744cfae6a (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, 
build-date=2025-11-19T00:35:22Z, vcs-type=git, com.redhat.component=openstack-nova-libvirt-container, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, tcib_managed=true, architecture=x86_64, container_name=nova_virtqemud, distribution-scope=public, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', 
'/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, version=17.1.12, batch=17.1_20251118.1) Dec 6 04:21:07 localhost podman[110392]: nova_virtqemud Dec 6 04:21:07 localhost podman[110407]: 2025-12-06 09:21:07.711691176 +0000 UTC m=+0.060338922 container cleanup e444006757a84d45c953d1ef31bc530b8507f3f86e52f8ba7761eaf744cfae6a (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, tcib_managed=true, architecture=x86_64, build-date=2025-11-19T00:35:22Z, version=17.1.12, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 
'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, com.redhat.component=openstack-nova-libvirt-container, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_virtqemud, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-libvirt, release=1761123044, 
io.buildah.version=1.41.4, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=) Dec 6 04:21:08 localhost systemd[1]: var-lib-containers-storage-overlay-9893e3b6825fe8589fe9eca74d23476479efd735e739d26cb203759d2b267e35-merged.mount: Deactivated successfully. Dec 6 04:21:08 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e444006757a84d45c953d1ef31bc530b8507f3f86e52f8ba7761eaf744cfae6a-userdata-shm.mount: Deactivated successfully. Dec 6 04:21:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39354 DF PROTO=TCP SPT=36072 DPT=9102 SEQ=2593195835 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C3906F0000000001030307) Dec 6 04:21:11 localhost sshd[110438]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:21:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31921 DF PROTO=TCP SPT=38962 DPT=9101 SEQ=2796151943 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C39DF00000000001030307) Dec 6 04:21:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31923 DF PROTO=TCP SPT=38962 DPT=9101 SEQ=2796151943 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C3A9F00000000001030307) Dec 6 04:21:17 localhost sshd[110440]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:21:19 localhost sshd[110442]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:21:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15708 DF PROTO=TCP 
SPT=57760 DPT=9105 SEQ=1755571789 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C3B32F0000000001030307) Dec 6 04:21:19 localhost sshd[110444]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:21:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15709 DF PROTO=TCP SPT=57760 DPT=9105 SEQ=1755571789 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C3C2EF0000000001030307) Dec 6 04:21:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46996 DF PROTO=TCP SPT=40800 DPT=9882 SEQ=4001645799 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C3CFAF0000000001030307) Dec 6 04:21:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31925 DF PROTO=TCP SPT=38962 DPT=9101 SEQ=2796151943 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C3D9F00000000001030307) Dec 6 04:21:32 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15710 DF PROTO=TCP SPT=57760 DPT=9105 SEQ=1755571789 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C3E3EF0000000001030307) Dec 6 04:21:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60336 DF PROTO=TCP SPT=48272 DPT=9102 SEQ=419265507 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C3EDAF0000000001030307) Dec 6 04:21:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. Dec 6 04:21:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. Dec 6 04:21:36 localhost systemd[1]: tmp-crun.0EjixX.mount: Deactivated successfully. Dec 6 04:21:36 localhost podman[110446]: 2025-12-06 09:21:36.369489425 +0000 UTC m=+0.279884741 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, vcs-type=git, version=17.1.12, vendor=Red Hat, Inc., container_name=ovn_controller, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, architecture=x86_64, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, release=1761123044, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, build-date=2025-11-18T23:34:05Z, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': 
'/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}) Dec 6 04:21:36 localhost podman[110446]: 2025-12-06 09:21:36.382026449 +0000 UTC m=+0.292421735 container exec_died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_controller, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, batch=17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, release=1761123044, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, vcs-type=git, distribution-scope=public, url=https://www.redhat.com, io.buildah.version=1.41.4, build-date=2025-11-18T23:34:05Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller) Dec 6 04:21:36 localhost podman[110446]: unhealthy Dec 6 04:21:36 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Main process exited, code=exited, status=1/FAILURE Dec 6 04:21:36 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Failed with result 'exit-code'. 
Dec 6 04:21:36 localhost podman[110447]: 2025-12-06 09:21:36.427178103 +0000 UTC m=+0.334608088 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.4, architecture=x86_64, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, io.openshift.expose-services=, release=1761123044, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git) Dec 6 04:21:36 localhost podman[110447]: 2025-12-06 09:21:36.466651394 +0000 UTC m=+0.374081439 container exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, io.openshift.expose-services=, release=1761123044, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
version=17.1.12, config_id=tripleo_step4, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, maintainer=OpenStack TripleO Team, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, architecture=x86_64) Dec 6 04:21:36 localhost podman[110447]: unhealthy Dec 6 04:21:36 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Main process exited, code=exited, status=1/FAILURE Dec 6 04:21:36 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Failed with result 'exit-code'. Dec 6 04:21:37 localhost sshd[110485]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:21:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30924 DF PROTO=TCP SPT=44688 DPT=9102 SEQ=3698087328 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C3F9EF0000000001030307) Dec 6 04:21:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60338 DF PROTO=TCP SPT=48272 DPT=9102 SEQ=419265507 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C4056F0000000001030307) Dec 6 04:21:40 localhost sshd[110487]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:21:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14973 DF PROTO=TCP SPT=60808 DPT=9101 SEQ=2516561203 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C413210000000001030307) Dec 6 04:21:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14975 DF PROTO=TCP SPT=60808 DPT=9101 SEQ=2516561203 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C41F2F0000000001030307) Dec 6 04:21:49 
localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32181 DF PROTO=TCP SPT=55830 DPT=9105 SEQ=3315581072 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C4282F0000000001030307) Dec 6 04:21:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32182 DF PROTO=TCP SPT=55830 DPT=9105 SEQ=3315581072 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C437F00000000001030307) Dec 6 04:21:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46999 DF PROTO=TCP SPT=40800 DPT=9882 SEQ=4001645799 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C43FEF0000000001030307) Dec 6 04:21:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14977 DF PROTO=TCP SPT=60808 DPT=9101 SEQ=2516561203 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C44FEF0000000001030307) Dec 6 04:22:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32183 DF PROTO=TCP SPT=55830 DPT=9105 SEQ=3315581072 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C457EF0000000001030307) Dec 6 04:22:01 localhost sshd[110489]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:22:02 localhost sshd[110491]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:22:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 
PREC=0x00 TTL=62 ID=4883 DF PROTO=TCP SPT=56360 DPT=9102 SEQ=336027206 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C462EF0000000001030307) Dec 6 04:22:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. Dec 6 04:22:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. Dec 6 04:22:06 localhost systemd[1]: tmp-crun.0U0JEP.mount: Deactivated successfully. Dec 6 04:22:06 localhost podman[110493]: 2025-12-06 09:22:06.937208396 +0000 UTC m=+0.093970832 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, tcib_managed=true, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, config_id=tripleo_step4, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller, architecture=x86_64, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1761123044, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, container_name=ovn_controller) Dec 6 04:22:06 localhost podman[110493]: 2025-12-06 09:22:06.980312197 +0000 UTC m=+0.137074583 container exec_died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, architecture=x86_64, container_name=ovn_controller, build-date=2025-11-18T23:34:05Z, version=17.1.12, com.redhat.component=openstack-ovn-controller-container, 
config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4) Dec 6 04:22:06 localhost systemd[1]: tmp-crun.Iejkwv.mount: Deactivated successfully. 
Dec 6 04:22:06 localhost podman[110493]: unhealthy Dec 6 04:22:06 localhost podman[110494]: 2025-12-06 09:22:06.994183473 +0000 UTC m=+0.141737906 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., tcib_managed=true, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, io.openshift.expose-services=, version=17.1.12, name=rhosp17/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.4, config_id=tripleo_step4, architecture=x86_64, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, build-date=2025-11-19T00:14:25Z, batch=17.1_20251118.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 6 04:22:07 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Main process exited, code=exited, status=1/FAILURE Dec 6 04:22:07 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Failed with result 'exit-code'. 
Dec 6 04:22:07 localhost podman[110494]: 2025-12-06 09:22:07.015217278 +0000 UTC m=+0.162771751 container exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, distribution-scope=public, managed_by=tripleo_ansible, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, url=https://www.redhat.com, architecture=x86_64, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 6 04:22:07 localhost podman[110494]: unhealthy Dec 6 04:22:07 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Main process exited, code=exited, status=1/FAILURE Dec 6 04:22:07 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Failed with result 'exit-code'. 
Dec 6 04:22:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50985 DF PROTO=TCP SPT=35922 DPT=9882 SEQ=755790178 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C473F00000000001030307) Dec 6 04:22:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4885 DF PROTO=TCP SPT=56360 DPT=9102 SEQ=336027206 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C47AAF0000000001030307) Dec 6 04:22:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40564 DF PROTO=TCP SPT=50504 DPT=9101 SEQ=166074965 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C488520000000001030307) Dec 6 04:22:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40566 DF PROTO=TCP SPT=50504 DPT=9101 SEQ=166074965 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C4946F0000000001030307) Dec 6 04:22:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44567 DF PROTO=TCP SPT=51520 DPT=9105 SEQ=2758505337 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C49D6F0000000001030307) Dec 6 04:22:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44568 DF PROTO=TCP SPT=51520 DPT=9105 SEQ=2758505337 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT 
(020405500402080A52C4AD2F0000000001030307) Dec 6 04:22:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9738 DF PROTO=TCP SPT=39894 DPT=9882 SEQ=2425240548 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C4B9EF0000000001030307) Dec 6 04:22:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40568 DF PROTO=TCP SPT=50504 DPT=9101 SEQ=166074965 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C4C3EF0000000001030307) Dec 6 04:22:29 localhost sshd[110662]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:22:31 localhost systemd[1]: tripleo_nova_virtqemud.service: State 'stop-sigterm' timed out. Killing. Dec 6 04:22:31 localhost systemd[1]: tripleo_nova_virtqemud.service: Killing process 61810 (conmon) with signal SIGKILL. Dec 6 04:22:31 localhost systemd[1]: tripleo_nova_virtqemud.service: Main process exited, code=killed, status=9/KILL Dec 6 04:22:31 localhost systemd[1]: libpod-conmon-e444006757a84d45c953d1ef31bc530b8507f3f86e52f8ba7761eaf744cfae6a.scope: Deactivated successfully. Dec 6 04:22:31 localhost podman[110675]: error opening file `/run/crun/e444006757a84d45c953d1ef31bc530b8507f3f86e52f8ba7761eaf744cfae6a/status`: No such file or directory Dec 6 04:22:31 localhost systemd[1]: tmp-crun.yip6c4.mount: Deactivated successfully. 
Dec 6 04:22:31 localhost podman[110664]: 2025-12-06 09:22:31.918061813 +0000 UTC m=+0.071046418 container cleanup e444006757a84d45c953d1ef31bc530b8507f3f86e52f8ba7761eaf744cfae6a (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, architecture=x86_64, distribution-scope=public, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, managed_by=tripleo_ansible, container_name=nova_virtqemud, build-date=2025-11-19T00:35:22Z, name=rhosp17/openstack-nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, com.redhat.component=openstack-nova-libvirt-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, config_id=tripleo_step3) Dec 6 04:22:31 localhost podman[110664]: nova_virtqemud Dec 6 04:22:31 localhost systemd[1]: tripleo_nova_virtqemud.service: Failed with result 'timeout'. Dec 6 04:22:31 localhost systemd[1]: Stopped nova_virtqemud container. 
Dec 6 04:22:32 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44569 DF PROTO=TCP SPT=51520 DPT=9105 SEQ=2758505337 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C4CDEF0000000001030307) Dec 6 04:22:32 localhost python3.9[110769]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud_recover.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 6 04:22:32 localhost systemd[1]: Reloading. Dec 6 04:22:32 localhost systemd-rc-local-generator[110800]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 6 04:22:32 localhost systemd-sysv-generator[110803]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 6 04:22:32 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 6 04:22:33 localhost sshd[110809]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:22:33 localhost python3.9[110902]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 6 04:22:33 localhost systemd[1]: Reloading. Dec 6 04:22:33 localhost systemd-rc-local-generator[110929]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 6 04:22:33 localhost systemd-sysv-generator[110934]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. 
Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 6 04:22:33 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 6 04:22:34 localhost systemd[1]: Stopping nova_virtsecretd container... Dec 6 04:22:34 localhost systemd[1]: libpod-2914dfad5be61e80048556735bb44e4f1907a2e2df52ff8faede941ddfde7367.scope: Deactivated successfully. Dec 6 04:22:34 localhost podman[110943]: 2025-12-06 09:22:34.170845255 +0000 UTC m=+0.064836689 container died 2914dfad5be61e80048556735bb44e4f1907a2e2df52ff8faede941ddfde7367 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, version=17.1.12, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', 
'/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, distribution-scope=public, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, container_name=nova_virtsecretd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:35:22Z, config_id=tripleo_step3, com.redhat.component=openstack-nova-libvirt-container, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, name=rhosp17/openstack-nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, tcib_managed=true, release=1761123044, url=https://www.redhat.com, architecture=x86_64) Dec 6 04:22:34 localhost podman[110943]: 2025-12-06 09:22:34.215603127 +0000 UTC m=+0.109594521 container cleanup 
2914dfad5be61e80048556735bb44e4f1907a2e2df52ff8faede941ddfde7367 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, url=https://www.redhat.com, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, container_name=nova_virtsecretd, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, release=1761123044, com.redhat.component=openstack-nova-libvirt-container, tcib_managed=true, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', 
'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, architecture=x86_64, managed_by=tripleo_ansible, vcs-type=git, build-date=2025-11-19T00:35:22Z, name=rhosp17/openstack-nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Dec 6 04:22:34 localhost podman[110943]: nova_virtsecretd Dec 6 04:22:34 localhost podman[110957]: 2025-12-06 09:22:34.252785088 +0000 UTC m=+0.073851656 container cleanup 2914dfad5be61e80048556735bb44e4f1907a2e2df52ff8faede941ddfde7367 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, release=1761123044, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 
'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, name=rhosp17/openstack-nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-libvirt-container, version=17.1.12, config_id=tripleo_step3, distribution-scope=public, tcib_managed=true, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, vcs-type=git, container_name=nova_virtsecretd, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-11-19T00:35:22Z, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4) Dec 6 04:22:34 localhost systemd[1]: libpod-conmon-2914dfad5be61e80048556735bb44e4f1907a2e2df52ff8faede941ddfde7367.scope: Deactivated successfully. Dec 6 04:22:34 localhost podman[110985]: error opening file `/run/crun/2914dfad5be61e80048556735bb44e4f1907a2e2df52ff8faede941ddfde7367/status`: No such file or directory Dec 6 04:22:34 localhost podman[110974]: 2025-12-06 09:22:34.34649375 +0000 UTC m=+0.067903602 container cleanup 2914dfad5be61e80048556735bb44e4f1907a2e2df52ff8faede941ddfde7367 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, release=1761123044, version=17.1.12, architecture=x86_64, vcs-type=git, container_name=nova_virtsecretd, name=rhosp17/openstack-nova-libvirt, build-date=2025-11-19T00:35:22Z, io.openshift.expose-services=, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, tcib_managed=true, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-libvirt-container, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt) Dec 6 04:22:34 localhost podman[110974]: nova_virtsecretd Dec 6 04:22:34 localhost systemd[1]: tripleo_nova_virtsecretd.service: Deactivated successfully. Dec 6 04:22:34 localhost systemd[1]: Stopped nova_virtsecretd container. Dec 6 04:22:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23535 DF PROTO=TCP SPT=58814 DPT=9102 SEQ=2706882849 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C4D8300000000001030307) Dec 6 04:22:35 localhost python3.9[111078]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 6 04:22:35 localhost systemd[1]: Reloading. Dec 6 04:22:35 localhost systemd-rc-local-generator[111106]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 6 04:22:35 localhost systemd-sysv-generator[111109]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 6 04:22:35 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 6 04:22:35 localhost systemd[1]: var-lib-containers-storage-overlay-5fc0aec4e92feba574efc8a2831ff4547cdf669c9ba74f30ce1106436a335beb-merged.mount: Deactivated successfully. 
Dec 6 04:22:35 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2914dfad5be61e80048556735bb44e4f1907a2e2df52ff8faede941ddfde7367-userdata-shm.mount: Deactivated successfully. Dec 6 04:22:35 localhost systemd[1]: Stopping nova_virtstoraged container... Dec 6 04:22:35 localhost systemd[1]: libpod-92a0134fb6ae7fa0506c791c4569a09e7c0cdb7fcb636d7ea6233a4978e1275d.scope: Deactivated successfully. Dec 6 04:22:35 localhost podman[111118]: 2025-12-06 09:22:35.560497387 +0000 UTC m=+0.075031021 container died 92a0134fb6ae7fa0506c791c4569a09e7c0cdb7fcb636d7ea6233a4978e1275d (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, vendor=Red Hat, Inc., config_id=tripleo_step3, com.redhat.component=openstack-nova-libvirt-container, container_name=nova_virtstoraged, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1761123044, batch=17.1_20251118.1, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-11-19T00:35:22Z, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, distribution-scope=public, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team) Dec 6 04:22:35 localhost podman[111118]: 2025-12-06 09:22:35.602889737 +0000 UTC m=+0.117423351 container cleanup 92a0134fb6ae7fa0506c791c4569a09e7c0cdb7fcb636d7ea6233a4978e1275d 
(image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, distribution-scope=public, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_virtstoraged, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, architecture=x86_64, vendor=Red Hat, Inc., version=17.1.12, config_id=tripleo_step3, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', 
'/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, com.redhat.component=openstack-nova-libvirt-container, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-11-19T00:35:22Z, tcib_managed=true, name=rhosp17/openstack-nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20251118.1) Dec 6 04:22:35 localhost podman[111118]: nova_virtstoraged Dec 6 04:22:35 localhost podman[111132]: 2025-12-06 09:22:35.677261626 +0000 UTC m=+0.105543806 container cleanup 92a0134fb6ae7fa0506c791c4569a09e7c0cdb7fcb636d7ea6233a4978e1275d (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, batch=17.1_20251118.1, container_name=nova_virtstoraged, vendor=Red Hat, Inc., io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 
'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, name=rhosp17/openstack-nova-libvirt, 
tcib_managed=true, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, version=17.1.12, config_id=tripleo_step3, build-date=2025-11-19T00:35:22Z, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, distribution-scope=public, url=https://www.redhat.com) Dec 6 04:22:35 localhost systemd[1]: libpod-conmon-92a0134fb6ae7fa0506c791c4569a09e7c0cdb7fcb636d7ea6233a4978e1275d.scope: Deactivated successfully. Dec 6 04:22:35 localhost podman[111159]: error opening file `/run/crun/92a0134fb6ae7fa0506c791c4569a09e7c0cdb7fcb636d7ea6233a4978e1275d/status`: No such file or directory Dec 6 04:22:35 localhost podman[111148]: 2025-12-06 09:22:35.785265878 +0000 UTC m=+0.069543874 container cleanup 92a0134fb6ae7fa0506c791c4569a09e7c0cdb7fcb636d7ea6233a4978e1275d (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, com.redhat.component=openstack-nova-libvirt-container, container_name=nova_virtstoraged, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '179caa3982511c1fd3314b961771f96c'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 
'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, tcib_managed=true, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, 
build-date=2025-11-19T00:35:22Z, config_id=tripleo_step3, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1) Dec 6 04:22:35 localhost podman[111148]: nova_virtstoraged Dec 6 04:22:35 localhost systemd[1]: tripleo_nova_virtstoraged.service: Deactivated successfully. Dec 6 04:22:35 localhost systemd[1]: Stopped nova_virtstoraged container. Dec 6 04:22:36 localhost systemd[1]: var-lib-containers-storage-overlay-c3718780a2d803326e6f6ad6743b607a5e1167128961c2a398f3fd685de43043-merged.mount: Deactivated successfully. Dec 6 04:22:36 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-92a0134fb6ae7fa0506c791c4569a09e7c0cdb7fcb636d7ea6233a4978e1275d-userdata-shm.mount: Deactivated successfully. Dec 6 04:22:36 localhost python3.9[111254]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ovn_controller.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 6 04:22:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. Dec 6 04:22:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. Dec 6 04:22:37 localhost systemd[1]: Reloading. Dec 6 04:22:37 localhost systemd-rc-local-generator[111314]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 6 04:22:37 localhost systemd-sysv-generator[111317]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Dec 6 04:22:37 localhost podman[111258]: 2025-12-06 09:22:37.734650538 +0000 UTC m=+0.148505723 container health_status 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:14:25Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, config_id=tripleo_step4, release=1761123044, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, vendor=Red Hat, Inc., version=17.1.12, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent) Dec 6 04:22:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60341 DF PROTO=TCP SPT=48272 DPT=9102 SEQ=419265507 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C4E3EF0000000001030307) Dec 6 04:22:37 localhost podman[111258]: 2025-12-06 09:22:37.748089859 +0000 UTC m=+0.161945074 container exec_died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, release=1761123044, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, distribution-scope=public, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, config_id=tripleo_step4, vendor=Red Hat, Inc., url=https://www.redhat.com, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1) Dec 6 04:22:37 localhost podman[111258]: unhealthy Dec 6 04:22:37 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 6 04:22:37 localhost podman[111257]: 2025-12-06 09:22:37.68319786 +0000 UTC m=+0.098096178 container health_status 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, url=https://www.redhat.com, distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, version=17.1.12, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, 
vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272) Dec 6 04:22:37 localhost podman[111257]: 2025-12-06 09:22:37.815456984 +0000 UTC m=+0.230355272 container exec_died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, vendor=Red Hat, Inc., container_name=ovn_controller, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:34:05Z, name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, 
vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, architecture=x86_64, batch=17.1_20251118.1, version=17.1.12, com.redhat.component=openstack-ovn-controller-container, release=1761123044, distribution-scope=public, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Dec 6 04:22:37 localhost podman[111257]: unhealthy Dec 6 04:22:37 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Main process exited, code=exited, status=1/FAILURE Dec 6 04:22:37 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Failed with result 'exit-code'. Dec 6 04:22:37 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Main process exited, code=exited, status=1/FAILURE Dec 6 04:22:37 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Failed with result 'exit-code'. Dec 6 04:22:37 localhost systemd[1]: Stopping ovn_controller container... 
Dec 6 04:22:38 localhost systemd[1]: libpod-1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.scope: Deactivated successfully. Dec 6 04:22:38 localhost systemd[1]: libpod-1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.scope: Consumed 2.722s CPU time. Dec 6 04:22:38 localhost podman[111333]: 2025-12-06 09:22:38.009067631 +0000 UTC m=+0.075275489 container died 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_id=tripleo_step4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, release=1761123044, version=17.1.12, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, container_name=ovn_controller, name=rhosp17/openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., build-date=2025-11-18T23:34:05Z, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1) Dec 6 04:22:38 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.timer: Deactivated successfully. Dec 6 04:22:38 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076. Dec 6 04:22:38 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Failed to open /run/systemd/transient/1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: No such file or directory Dec 6 04:22:38 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076-userdata-shm.mount: Deactivated successfully. 
Dec 6 04:22:38 localhost podman[111333]: 2025-12-06 09:22:38.041321509 +0000 UTC m=+0.107529327 container cleanup 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, container_name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, io.openshift.expose-services=, config_id=tripleo_step4, vcs-type=git, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, maintainer=OpenStack TripleO Team, tcib_managed=true, batch=17.1_20251118.1, vendor=Red Hat, Inc.) Dec 6 04:22:38 localhost podman[111333]: ovn_controller Dec 6 04:22:38 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.timer: Failed to open /run/systemd/transient/1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.timer: No such file or directory Dec 6 04:22:38 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Failed to open /run/systemd/transient/1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: No such file or directory Dec 6 04:22:38 localhost podman[111347]: 2025-12-06 09:22:38.067434659 +0000 UTC m=+0.052116478 container cleanup 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, url=https://www.redhat.com, tcib_managed=true, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, 
vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, build-date=2025-11-18T23:34:05Z, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_controller, vendor=Red Hat, Inc., release=1761123044, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4) Dec 6 04:22:38 localhost systemd[1]: libpod-conmon-1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.scope: Deactivated successfully. 
Dec 6 04:22:38 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.timer: Failed to open /run/systemd/transient/1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.timer: No such file or directory Dec 6 04:22:38 localhost systemd[1]: 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: Failed to open /run/systemd/transient/1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076.service: No such file or directory Dec 6 04:22:38 localhost podman[111362]: 2025-12-06 09:22:38.157440859 +0000 UTC m=+0.066306154 container cleanup 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', 
'/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://www.redhat.com, build-date=2025-11-18T23:34:05Z, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, vcs-type=git, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, batch=17.1_20251118.1, name=rhosp17/openstack-ovn-controller, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, version=17.1.12, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272) Dec 6 04:22:38 localhost podman[111362]: ovn_controller Dec 6 04:22:38 localhost systemd[1]: tripleo_ovn_controller.service: Deactivated successfully. Dec 6 04:22:38 localhost systemd[1]: Stopped ovn_controller container. Dec 6 04:22:38 localhost python3.9[111466]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ovn_metadata_agent.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 6 04:22:38 localhost systemd[1]: var-lib-containers-storage-overlay-5c9f3dee13341691faa8d64763052b5a13a4b5ff224f1a26e18c82d56bd99001-merged.mount: Deactivated successfully. Dec 6 04:22:38 localhost systemd[1]: Reloading. Dec 6 04:22:39 localhost systemd-rc-local-generator[111492]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 6 04:22:39 localhost systemd-sysv-generator[111495]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 6 04:22:39 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Dec 6 04:22:39 localhost systemd[1]: Stopping ovn_metadata_agent container... Dec 6 04:22:39 localhost systemd[1]: libpod-87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.scope: Deactivated successfully. Dec 6 04:22:39 localhost systemd[1]: libpod-87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.scope: Consumed 11.209s CPU time. Dec 6 04:22:39 localhost podman[111508]: 2025-12-06 09:22:39.806689078 +0000 UTC m=+0.550984441 container died 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, io.buildah.version=1.41.4, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, version=17.1.12, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, release=1761123044, distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., batch=17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, build-date=2025-11-19T00:14:25Z, 
config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}) Dec 6 04:22:39 localhost systemd[1]: tmp-crun.0VaXjp.mount: Deactivated successfully. Dec 6 04:22:39 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.timer: Deactivated successfully. Dec 6 04:22:39 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54. 
Dec 6 04:22:39 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Failed to open /run/systemd/transient/87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: No such file or directory Dec 6 04:22:39 localhost systemd[1]: var-lib-containers-storage-overlay-31be4bfa33f1fd50cae755746783d85d8683e10cd2caa7fdf7edb677e543b7f9-merged.mount: Deactivated successfully. Dec 6 04:22:39 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54-userdata-shm.mount: Deactivated successfully. Dec 6 04:22:39 localhost podman[111508]: 2025-12-06 09:22:39.936489088 +0000 UTC m=+0.680784461 container cleanup 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.expose-services=, architecture=x86_64, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 
'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.4, container_name=ovn_metadata_agent, batch=17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 6 04:22:39 localhost podman[111508]: ovn_metadata_agent Dec 6 04:22:39 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.timer: Failed to open /run/systemd/transient/87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.timer: No such file or directory Dec 6 04:22:39 localhost systemd[1]: 
87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Failed to open /run/systemd/transient/87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: No such file or directory Dec 6 04:22:39 localhost podman[111521]: 2025-12-06 09:22:39.963431164 +0000 UTC m=+0.143125599 container cleanup 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, architecture=x86_64, batch=17.1_20251118.1, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, io.buildah.version=1.41.4, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, version=17.1.12, release=1761123044, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 6 04:22:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23537 DF PROTO=TCP SPT=58814 DPT=9102 SEQ=2706882849 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C4EFEF0000000001030307) Dec 6 04:22:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21841 DF PROTO=TCP SPT=58354 DPT=9101 SEQ=2676575745 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C4FD800000000001030307) Dec 6 04:22:44 localhost sshd[111538]: main: sshd: ssh-rsa algorithm is disabled Dec 6 
04:22:47 localhost sshd[111540]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:22:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21843 DF PROTO=TCP SPT=58354 DPT=9101 SEQ=2676575745 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C5096F0000000001030307) Dec 6 04:22:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26066 DF PROTO=TCP SPT=41034 DPT=9105 SEQ=1222753639 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C512B00000000001030307) Dec 6 04:22:50 localhost sshd[111542]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:22:50 localhost sshd[111544]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:22:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26067 DF PROTO=TCP SPT=41034 DPT=9105 SEQ=1222753639 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C5226F0000000001030307) Dec 6 04:22:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9741 DF PROTO=TCP SPT=39894 DPT=9882 SEQ=2425240548 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C529EF0000000001030307) Dec 6 04:22:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21845 DF PROTO=TCP SPT=58354 DPT=9101 SEQ=2676575745 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C539EF0000000001030307) Dec 6 04:23:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 
MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26068 DF PROTO=TCP SPT=41034 DPT=9105 SEQ=1222753639 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C541F00000000001030307) Dec 6 04:23:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62057 DF PROTO=TCP SPT=55046 DPT=9102 SEQ=1063964906 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C54D6F0000000001030307) Dec 6 04:23:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4888 DF PROTO=TCP SPT=56360 DPT=9102 SEQ=336027206 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C559EF0000000001030307) Dec 6 04:23:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62059 DF PROTO=TCP SPT=55046 DPT=9102 SEQ=1063964906 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C5652F0000000001030307) Dec 6 04:23:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63284 DF PROTO=TCP SPT=54782 DPT=9101 SEQ=1038312291 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C572B00000000001030307) Dec 6 04:23:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63286 DF PROTO=TCP SPT=54782 DPT=9101 SEQ=1038312291 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C57EAF0000000001030307) Dec 6 04:23:19 localhost kernel: DROPPING: IN=br-ex OUT= 
MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32693 DF PROTO=TCP SPT=59226 DPT=9105 SEQ=3382669033 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C587F00000000001030307) Dec 6 04:23:19 localhost sshd[111624]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:23:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32694 DF PROTO=TCP SPT=59226 DPT=9105 SEQ=3382669033 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C597AF0000000001030307) Dec 6 04:23:24 localhost sshd[111626]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:23:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61831 DF PROTO=TCP SPT=47160 DPT=9882 SEQ=2720142757 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C59FEF0000000001030307) Dec 6 04:23:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63288 DF PROTO=TCP SPT=54782 DPT=9101 SEQ=1038312291 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C5ADEF0000000001030307) Dec 6 04:23:32 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32695 DF PROTO=TCP SPT=59226 DPT=9105 SEQ=3382669033 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C5B7F00000000001030307) Dec 6 04:23:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45721 DF PROTO=TCP 
SPT=53764 DPT=9102 SEQ=922920744 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C5C2700000000001030307) Dec 6 04:23:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23540 DF PROTO=TCP SPT=58814 DPT=9102 SEQ=2706882849 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C5CDEF0000000001030307) Dec 6 04:23:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45723 DF PROTO=TCP SPT=53764 DPT=9102 SEQ=922920744 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C5DA2F0000000001030307) Dec 6 04:23:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25076 DF PROTO=TCP SPT=48722 DPT=9101 SEQ=2915572739 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C5E7E00000000001030307) Dec 6 04:23:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25078 DF PROTO=TCP SPT=48722 DPT=9101 SEQ=2915572739 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C5F3EF0000000001030307) Dec 6 04:23:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23654 DF PROTO=TCP SPT=44542 DPT=9105 SEQ=1476196052 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C5FCEF0000000001030307) Dec 6 04:23:50 localhost sshd[111628]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:23:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 
SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23655 DF PROTO=TCP SPT=44542 DPT=9105 SEQ=1476196052 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C60CAF0000000001030307) Dec 6 04:23:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1316 DF PROTO=TCP SPT=40114 DPT=9882 SEQ=4058485434 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C619700000000001030307) Dec 6 04:23:58 localhost sshd[111630]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:23:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25080 DF PROTO=TCP SPT=48722 DPT=9101 SEQ=2915572739 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C623F00000000001030307) Dec 6 04:24:02 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23656 DF PROTO=TCP SPT=44542 DPT=9105 SEQ=1476196052 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C62DEF0000000001030307) Dec 6 04:24:03 localhost sshd[111632]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:24:04 localhost systemd[1]: tripleo_ovn_metadata_agent.service: State 'stop-sigterm' timed out. Killing. Dec 6 04:24:04 localhost systemd[1]: tripleo_ovn_metadata_agent.service: Killing process 69389 (conmon) with signal SIGKILL. Dec 6 04:24:04 localhost systemd[1]: tripleo_ovn_metadata_agent.service: Main process exited, code=killed, status=9/KILL Dec 6 04:24:04 localhost systemd[1]: libpod-conmon-87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.scope: Deactivated successfully. 
Dec 6 04:24:04 localhost podman[111646]: error opening file `/run/crun/87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54/status`: No such file or directory Dec 6 04:24:04 localhost systemd[1]: tmp-crun.NBN1rV.mount: Deactivated successfully. Dec 6 04:24:04 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.timer: Failed to open /run/systemd/transient/87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.timer: No such file or directory Dec 6 04:24:04 localhost systemd[1]: 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: Failed to open /run/systemd/transient/87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54.service: No such file or directory Dec 6 04:24:04 localhost podman[111634]: 2025-12-06 09:24:04.183065716 +0000 UTC m=+0.092260549 container cleanup 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, release=1761123044, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, url=https://www.redhat.com, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., batch=17.1_20251118.1, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, build-date=2025-11-19T00:14:25Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 6 04:24:04 localhost podman[111634]: ovn_metadata_agent Dec 6 04:24:04 localhost systemd[1]: tripleo_ovn_metadata_agent.service: Failed with result 'timeout'. 
Dec 6 04:24:04 localhost systemd[1]: Stopped ovn_metadata_agent container. Dec 6 04:24:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57763 DF PROTO=TCP SPT=45836 DPT=9102 SEQ=3722726399 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C637B00000000001030307) Dec 6 04:24:04 localhost python3.9[111741]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_rsyslog.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 6 04:24:04 localhost systemd[1]: Reloading. Dec 6 04:24:05 localhost systemd-rc-local-generator[111767]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 6 04:24:05 localhost systemd-sysv-generator[111773]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 6 04:24:05 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Dec 6 04:24:06 localhost python3.9[111872]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:24:07 localhost python3.9[111964]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:24:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62062 DF PROTO=TCP SPT=55046 DPT=9102 SEQ=1063964906 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C643EF0000000001030307) Dec 6 04:24:07 localhost python3.9[112056]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_collectd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:24:08 localhost python3.9[112148]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_iscsid.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S 
access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:24:09 localhost python3.9[112240]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_logrotate_crond.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:24:09 localhost python3.9[112332]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_metrics_qdr.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:24:09 localhost sshd[112392]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:24:10 localhost python3.9[112426]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_neutron_dhcp.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:24:10 localhost python3.9[112518]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_neutron_l3_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S 
access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:24:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57765 DF PROTO=TCP SPT=45836 DPT=9102 SEQ=3722726399 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C64F700000000001030307) Dec 6 04:24:11 localhost python3.9[112610]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_neutron_ovs_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:24:11 localhost python3.9[112702]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:24:12 localhost python3.9[112794]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None 
attributes=None Dec 6 04:24:12 localhost python3.9[112886]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:24:13 localhost python3.9[112978]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:24:13 localhost sshd[113056]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:24:14 localhost python3.9[113072]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:24:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12344 DF PROTO=TCP SPT=56144 DPT=9101 SEQ=1440162264 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C65D100000000001030307) Dec 6 04:24:14 localhost python3.9[113164]: ansible-ansible.builtin.file Invoked with 
path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:24:15 localhost python3.9[113308]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud_recover.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:24:15 localhost podman[113385]: 2025-12-06 09:24:15.389613375 +0000 UTC m=+0.079787567 container exec ecefb99ff092b207f760d60e8fd97152c624df0e1016549024435ae641ee2e06 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548789, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , io.openshift.expose-services=, release=1763362218, GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, RELEASE=main, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, distribution-scope=public, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, name=rhceph, 
io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Dec 6 04:24:15 localhost podman[113385]: 2025-12-06 09:24:15.488498506 +0000 UTC m=+0.178672678 container exec_died ecefb99ff092b207f760d60e8fd97152c624df0e1016549024435ae641ee2e06 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548789, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, release=1763362218, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, GIT_BRANCH=main, ceph=True, vendor=Red Hat, Inc., distribution-scope=public, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container) Dec 6 04:24:15 localhost python3.9[113486]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None 
modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:24:16 localhost python3.9[113638]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:24:16 localhost python3.9[113761]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ovn_controller.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:24:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12346 DF PROTO=TCP SPT=56144 DPT=9101 SEQ=1440162264 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C6692F0000000001030307) Dec 6 04:24:17 localhost python3.9[113868]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ovn_metadata_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:24:18 localhost python3.9[113960]: ansible-ansible.builtin.file Invoked with 
path=/usr/lib/systemd/system/tripleo_rsyslog.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:24:19 localhost python3.9[114052]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:24:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25514 DF PROTO=TCP SPT=55834 DPT=9100 SEQ=2732588667 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C671EF0000000001030307) Dec 6 04:24:20 localhost python3.9[114144]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:24:20 localhost python3.9[114236]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_collectd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None 
src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:24:21 localhost python3.9[114328]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_iscsid.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:24:21 localhost python3.9[114420]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_logrotate_crond.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:24:22 localhost python3.9[114512]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_metrics_qdr.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:24:22 localhost python3.9[114604]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_neutron_dhcp.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None 
setype=None attributes=None Dec 6 04:24:23 localhost python3.9[114696]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_neutron_l3_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:24:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22459 DF PROTO=TCP SPT=52630 DPT=9105 SEQ=3910658383 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C681EF0000000001030307) Dec 6 04:24:24 localhost python3.9[114788]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_neutron_ovs_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:24:24 localhost python3.9[114880]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:24:25 localhost python3.9[114972]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True 
modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:24:25 localhost sshd[115065]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:24:25 localhost python3.9[115064]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:24:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1319 DF PROTO=TCP SPT=40114 DPT=9882 SEQ=4058485434 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C689F00000000001030307) Dec 6 04:24:26 localhost python3.9[115158]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:24:26 localhost python3.9[115250]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None 
modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:24:27 localhost python3.9[115342]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:24:28 localhost python3.9[115434]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud_recover.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:24:28 localhost python3.9[115526]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:24:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12348 DF PROTO=TCP SPT=56144 DPT=9101 SEQ=1440162264 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C699EF0000000001030307) Dec 6 04:24:30 localhost python3.9[115618]: ansible-ansible.builtin.file Invoked with 
path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:24:30 localhost python3.9[115710]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ovn_controller.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:24:31 localhost python3.9[115802]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ovn_metadata_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:24:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22460 DF PROTO=TCP SPT=52630 DPT=9105 SEQ=3910658383 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C6A1EF0000000001030307) Dec 6 04:24:32 localhost python3.9[115894]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_rsyslog.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None 
modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:24:33 localhost python3.9[115986]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012 systemctl disable --now certmonger.service#012 test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 04:24:34 localhost python3.9[116078]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None Dec 6 04:24:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39987 DF PROTO=TCP SPT=53286 DPT=9102 SEQ=3523987682 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C6ACF00000000001030307) Dec 6 04:24:35 localhost python3.9[116170]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Dec 6 04:24:35 localhost systemd[1]: Reloading. Dec 6 04:24:35 localhost sshd[116171]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:24:35 localhost systemd-rc-local-generator[116193]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 6 04:24:35 localhost systemd-sysv-generator[116199]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. 
Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 6 04:24:35 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 6 04:24:37 localhost python3.9[116300]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 04:24:37 localhost sshd[116348]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:24:38 localhost python3.9[116395]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_ipmi.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 04:24:38 localhost python3.9[116488]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_collectd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 04:24:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6232 DF PROTO=TCP SPT=43008 DPT=9882 SEQ=843075189 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C6BDEF0000000001030307) Dec 6 04:24:40 localhost python3.9[116581]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_iscsid.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None 
chdir=None executable=None creates=None removes=None stdin=None Dec 6 04:24:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39989 DF PROTO=TCP SPT=53286 DPT=9102 SEQ=3523987682 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C6C4AF0000000001030307) Dec 6 04:24:41 localhost python3.9[116674]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_logrotate_crond.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 04:24:41 localhost python3.9[116767]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_metrics_qdr.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 04:24:42 localhost python3.9[116860]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_neutron_dhcp.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 04:24:42 localhost python3.9[116953]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_neutron_l3_agent.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 04:24:43 localhost python3.9[117046]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_neutron_ovs_agent.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True 
_raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 04:24:43 localhost python3.9[117139]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 04:24:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8017 DF PROTO=TCP SPT=33954 DPT=9101 SEQ=1281717090 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C6D2400000000001030307) Dec 6 04:24:44 localhost python3.9[117232]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 04:24:45 localhost python3.9[117325]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 04:24:45 localhost python3.9[117418]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 04:24:46 localhost python3.9[117511]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True 
stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 04:24:46 localhost python3.9[117604]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 04:24:47 localhost sshd[117620]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:24:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8019 DF PROTO=TCP SPT=33954 DPT=9101 SEQ=1281717090 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C6DE300000000001030307) Dec 6 04:24:47 localhost python3.9[117699]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud_recover.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 04:24:48 localhost python3.9[117792]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 04:24:48 localhost python3.9[117885]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 04:24:49 localhost python3.9[117978]: ansible-ansible.legacy.command Invoked with 
cmd=/usr/bin/systemctl reset-failed tripleo_ovn_controller.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 04:24:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55083 DF PROTO=TCP SPT=41472 DPT=9105 SEQ=338037815 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C6E76F0000000001030307) Dec 6 04:24:49 localhost python3.9[118071]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ovn_metadata_agent.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 04:24:50 localhost python3.9[118164]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_rsyslog.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 04:24:50 localhost systemd[1]: session-38.scope: Deactivated successfully. Dec 6 04:24:50 localhost systemd[1]: session-38.scope: Consumed 48.016s CPU time. Dec 6 04:24:50 localhost systemd-logind[766]: Session 38 logged out. Waiting for processes to exit. Dec 6 04:24:50 localhost systemd-logind[766]: Removed session 38. 
Dec 6 04:24:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55084 DF PROTO=TCP SPT=41472 DPT=9105 SEQ=338037815 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C6F72F0000000001030307) Dec 6 04:24:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40289 DF PROTO=TCP SPT=54604 DPT=9882 SEQ=3151936849 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C703EF0000000001030307) Dec 6 04:24:58 localhost sshd[118180]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:24:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8021 DF PROTO=TCP SPT=33954 DPT=9101 SEQ=1281717090 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C70DF00000000001030307) Dec 6 04:25:02 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55085 DF PROTO=TCP SPT=41472 DPT=9105 SEQ=338037815 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C717EF0000000001030307) Dec 6 04:25:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62322 DF PROTO=TCP SPT=51760 DPT=9102 SEQ=1466230694 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C7222F0000000001030307) Dec 6 04:25:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57768 DF PROTO=TCP SPT=45836 DPT=9102 
SEQ=3722726399 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C72DEF0000000001030307) Dec 6 04:25:08 localhost sshd[118182]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:25:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62324 DF PROTO=TCP SPT=51760 DPT=9102 SEQ=1466230694 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C739F00000000001030307) Dec 6 04:25:11 localhost sshd[118184]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:25:11 localhost systemd-logind[766]: New session 39 of user zuul. Dec 6 04:25:11 localhost systemd[1]: Started Session 39 of User zuul. Dec 6 04:25:12 localhost python3.9[118277]: ansible-ansible.legacy.ping Invoked with data=pong Dec 6 04:25:13 localhost python3.9[118381]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Dec 6 04:25:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44462 DF PROTO=TCP SPT=41906 DPT=9101 SEQ=641563132 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C747700000000001030307) Dec 6 04:25:14 localhost python3.9[118473]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 04:25:15 localhost python3.9[118566]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 6 04:25:16 localhost python3.9[118658]: 
ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:25:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9670 DF PROTO=TCP SPT=35906 DPT=9105 SEQ=3358184643 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C750960000000001030307) Dec 6 04:25:17 localhost python3.9[118750]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:25:17 localhost python3.9[118853]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1765013116.419006-178-186212007016195/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:25:18 localhost python3.9[118978]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Dec 6 04:25:19 localhost python3.9[119089]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S 
unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 6 04:25:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9672 DF PROTO=TCP SPT=35906 DPT=9105 SEQ=3358184643 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C75CAF0000000001030307) Dec 6 04:25:20 localhost python3.9[119181]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 6 04:25:20 localhost python3.9[119271]: ansible-ansible.builtin.service_facts Invoked Dec 6 04:25:21 localhost network[119288]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Dec 6 04:25:21 localhost network[119289]: 'network-scripts' will be removed from distribution in near future. Dec 6 04:25:21 localhost network[119290]: It is advised to switch to 'NetworkManager' instead for network management. Dec 6 04:25:22 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Dec 6 04:25:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9673 DF PROTO=TCP SPT=35906 DPT=9105 SEQ=3358184643 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C76C6F0000000001030307) Dec 6 04:25:25 localhost python3.9[119487]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:25:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40292 DF PROTO=TCP SPT=54604 DPT=9882 SEQ=3151936849 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C773EF0000000001030307) Dec 6 04:25:25 localhost python3.9[119577]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Dec 6 04:25:26 localhost python3.9[119673]: ansible-ansible.legacy.command Invoked with _raw_params=# This is a hack to deploy RDO Delorean repos to RHEL as if it were Centos 9 Stream#012set -euxo pipefail#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./repo-setup-main#012# This is required for FIPS enabled until trunk.rdoproject.org#012# is not being served from a centos7 host, tracked by#012# https://issues.redhat.com/browse/RHOSZUUL-1517#012dnf -y install crypto-policies#012update-crypto-policies --set FIPS:NO-ENFORCE-EMS#012./venv/bin/repo-setup current-podified -b antelope -d 
centos9 --stream#012#012# Exclude ceph-common-18.2.7 as it's pulling newer openssl not compatible#012# with rhel 9.2 openssh#012dnf config-manager --setopt centos9-storage.exclude="ceph-common-18.2.7" --save#012# FIXME: perform dnf upgrade for other packages in EDPM ansible#012# here we only ensuring that decontainerized libvirt can start#012dnf -y upgrade openstack-selinux#012rm -f /run/virtlogd.pid#012#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 04:25:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44466 DF PROTO=TCP SPT=41906 DPT=9101 SEQ=641563132 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C783F00000000001030307) Dec 6 04:25:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9674 DF PROTO=TCP SPT=35906 DPT=9105 SEQ=3358184643 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C78BEF0000000001030307) Dec 6 04:25:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60545 DF PROTO=TCP SPT=44536 DPT=9102 SEQ=2235910770 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C7972F0000000001030307) Dec 6 04:25:35 localhost sshd[119704]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:25:36 localhost systemd[1]: Stopping OpenSSH server daemon... Dec 6 04:25:36 localhost systemd[1]: sshd.service: Deactivated successfully. Dec 6 04:25:36 localhost systemd[1]: sshd.service: Unit process 119704 (sshd) remains running after unit stopped. 
Dec 6 04:25:36 localhost systemd[1]: sshd.service: Unit process 119705 (sshd) remains running after unit stopped. Dec 6 04:25:36 localhost systemd[1]: Stopped OpenSSH server daemon. Dec 6 04:25:36 localhost systemd[1]: sshd.service: Consumed 8.991s CPU time, read 0B from disk, written 72.0K to disk. Dec 6 04:25:36 localhost systemd[1]: Stopped target sshd-keygen.target. Dec 6 04:25:36 localhost systemd[1]: Stopping sshd-keygen.target... Dec 6 04:25:36 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). Dec 6 04:25:36 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). Dec 6 04:25:36 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). Dec 6 04:25:36 localhost systemd[1]: Reached target sshd-keygen.target. Dec 6 04:25:36 localhost systemd[1]: Starting OpenSSH server daemon... Dec 6 04:25:36 localhost sshd[119718]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:25:36 localhost systemd[1]: Started OpenSSH server daemon. Dec 6 04:25:36 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Dec 6 04:25:36 localhost systemd[1]: Starting man-db-cache-update.service... Dec 6 04:25:36 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Dec 6 04:25:36 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully. Dec 6 04:25:36 localhost systemd[1]: Finished man-db-cache-update.service. Dec 6 04:25:36 localhost systemd[1]: run-ra9a8bdac78f142938721a263baf7b7ca.service: Deactivated successfully. 
Dec 6 04:25:36 localhost systemd[1]: run-r6956e17b585c4974a9e5f0de9a738586.service: Deactivated successfully. Dec 6 04:25:37 localhost systemd[1]: Stopping OpenSSH server daemon... Dec 6 04:25:37 localhost systemd[1]: sshd.service: Deactivated successfully. Dec 6 04:25:37 localhost systemd[1]: Stopped OpenSSH server daemon. Dec 6 04:25:37 localhost systemd[1]: Stopped target sshd-keygen.target. Dec 6 04:25:37 localhost systemd[1]: Stopping sshd-keygen.target... Dec 6 04:25:37 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). Dec 6 04:25:37 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). Dec 6 04:25:37 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). Dec 6 04:25:37 localhost systemd[1]: Reached target sshd-keygen.target. Dec 6 04:25:37 localhost systemd[1]: Starting OpenSSH server daemon... Dec 6 04:25:37 localhost sshd[119889]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:25:37 localhost systemd[1]: Started OpenSSH server daemon. 
Dec 6 04:25:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39992 DF PROTO=TCP SPT=53286 DPT=9102 SEQ=3523987682 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C7A3EF0000000001030307) Dec 6 04:25:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60547 DF PROTO=TCP SPT=44536 DPT=9102 SEQ=2235910770 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C7AEF00000000001030307) Dec 6 04:25:42 localhost sshd[119895]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:25:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20795 DF PROTO=TCP SPT=39928 DPT=9101 SEQ=1490110567 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C7BCA00000000001030307) Dec 6 04:25:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20797 DF PROTO=TCP SPT=39928 DPT=9101 SEQ=1490110567 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C7C8AF0000000001030307) Dec 6 04:25:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33320 DF PROTO=TCP SPT=57988 DPT=9105 SEQ=3101264450 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C7D1AF0000000001030307) Dec 6 04:25:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33321 DF PROTO=TCP SPT=57988 DPT=9105 
SEQ=3101264450 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C7E16F0000000001030307) Dec 6 04:25:54 localhost sshd[119992]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:25:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16256 DF PROTO=TCP SPT=58812 DPT=9882 SEQ=829556207 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C7E9EF0000000001030307) Dec 6 04:25:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20799 DF PROTO=TCP SPT=39928 DPT=9101 SEQ=1490110567 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C7F7EF0000000001030307) Dec 6 04:26:02 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33322 DF PROTO=TCP SPT=57988 DPT=9105 SEQ=3101264450 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C801F00000000001030307) Dec 6 04:26:04 localhost sshd[120026]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:26:04 localhost sshd[120028]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:26:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5459 DF PROTO=TCP SPT=45040 DPT=9102 SEQ=3298331109 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C80C6F0000000001030307) Dec 6 04:26:04 localhost sshd[120030]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:26:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62327 DF PROTO=TCP SPT=51760 DPT=9102 
SEQ=1466230694 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C817EF0000000001030307) Dec 6 04:26:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5461 DF PROTO=TCP SPT=45040 DPT=9102 SEQ=3298331109 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C8242F0000000001030307) Dec 6 04:26:14 localhost sshd[120037]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:26:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25799 DF PROTO=TCP SPT=51538 DPT=9101 SEQ=2217985075 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C831D00000000001030307) Dec 6 04:26:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25801 DF PROTO=TCP SPT=51538 DPT=9101 SEQ=2217985075 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C83DEF0000000001030307) Dec 6 04:26:18 localhost sshd[120039]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:26:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49986 DF PROTO=TCP SPT=45616 DPT=9105 SEQ=2957778523 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C846F00000000001030307) Dec 6 04:26:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49987 DF PROTO=TCP SPT=45616 DPT=9105 SEQ=2957778523 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C856AF0000000001030307) Dec 6 04:26:26 localhost kernel: DROPPING: IN=br-ex OUT= 
MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49351 DF PROTO=TCP SPT=53212 DPT=9882 SEQ=3580497107 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C8636F0000000001030307)
Dec 6 04:26:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25803 DF PROTO=TCP SPT=51538 DPT=9101 SEQ=2217985075 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C86DEF0000000001030307)
Dec 6 04:26:32 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49988 DF PROTO=TCP SPT=45616 DPT=9105 SEQ=2957778523 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C877EF0000000001030307)
Dec 6 04:26:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8903 DF PROTO=TCP SPT=33538 DPT=9102 SEQ=2493667773 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C881AF0000000001030307)
Dec 6 04:26:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60550 DF PROTO=TCP SPT=44536 DPT=9102 SEQ=2235910770 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C88DEF0000000001030307)
Dec 6 04:26:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8905 DF PROTO=TCP SPT=33538 DPT=9102 SEQ=2493667773 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C8996F0000000001030307)
Dec 6 04:26:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33152 DF PROTO=TCP SPT=48004 DPT=9101 SEQ=3197119332 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C8A7000000000001030307)
Dec 6 04:26:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33154 DF PROTO=TCP SPT=48004 DPT=9101 SEQ=3197119332 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C8B2F00000000001030307)
Dec 6 04:26:49 localhost kernel: SELinux: Converting 2754 SID table entries...
Dec 6 04:26:49 localhost kernel: SELinux: policy capability network_peer_controls=1
Dec 6 04:26:49 localhost kernel: SELinux: policy capability open_perms=1
Dec 6 04:26:49 localhost kernel: SELinux: policy capability extended_socket_class=1
Dec 6 04:26:49 localhost kernel: SELinux: policy capability always_check_network=0
Dec 6 04:26:49 localhost kernel: SELinux: policy capability cgroup_seclabel=1
Dec 6 04:26:49 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 6 04:26:49 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1
Dec 6 04:26:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33185 DF PROTO=TCP SPT=43664 DPT=9100 SEQ=1086256800 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C8BBEF0000000001030307)
Dec 6 04:26:51 localhost sshd[120466]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:26:53 localhost dbus-broker-launch[756]: avc: op=load_policy lsm=selinux seqno=17 res=1
Dec 6 04:26:53 localhost python3.9[120545]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 04:26:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4096 DF PROTO=TCP SPT=55170 DPT=9105 SEQ=1281169954 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C8CBEF0000000001030307)
Dec 6 04:26:54 localhost python3.9[120637]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/edpm.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 6 04:26:54 localhost python3.9[120710]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/edpm.fact mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765013213.6757448-427-138852573395214/.source.fact _original_basename=.7hkiu422 follow=False checksum=03aee63dcf9b49b0ac4473b2f1a1b5d3783aa639 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 04:26:55 localhost python3.9[120800]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 6 04:26:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49354 DF PROTO=TCP SPT=53212 DPT=9882 SEQ=3580497107 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C8D3EF0000000001030307)
Dec 6 04:26:56 localhost python3.9[120898]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 6 04:26:57 localhost python3.9[120952]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 6 04:26:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33156 DF PROTO=TCP SPT=48004 DPT=9101 SEQ=3197119332 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C8E3EF0000000001030307)
Dec 6 04:27:00 localhost sshd[120957]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:27:01 localhost systemd[1]: Reloading.
Dec 6 04:27:01 localhost systemd-rc-local-generator[120987]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 6 04:27:01 localhost systemd-sysv-generator[120990]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 6 04:27:01 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 6 04:27:01 localhost systemd[1]: Queuing reload/restart jobs for marked units…
Dec 6 04:27:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4097 DF PROTO=TCP SPT=55170 DPT=9105 SEQ=1281169954 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C8EBEF0000000001030307)
Dec 6 04:27:03 localhost python3.9[121095]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 6 04:27:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60754 DF PROTO=TCP SPT=35742 DPT=9102 SEQ=3388254926 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C8F6EF0000000001030307)
Dec 6 04:27:06 localhost python3.9[121334]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Dec 6 04:27:07 localhost python3.9[121426]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Dec 6 04:27:08 localhost python3.9[121519]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 04:27:09 localhost python3.9[121611]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Dec 6 04:27:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18284 DF PROTO=TCP SPT=45648 DPT=9882 SEQ=3869491977 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C907EF0000000001030307)
Dec 6 04:27:10 localhost python3.9[121703]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 6 04:27:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60756 DF PROTO=TCP SPT=35742 DPT=9102 SEQ=3388254926 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C90EAF0000000001030307)
Dec 6 04:27:11 localhost python3.9[121795]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 6 04:27:11 localhost python3.9[121868]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765013230.659795-751-215459042769421/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=31b29ee7333177b2eb4f4f85549af35c3d0ec3b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 04:27:11 localhost sshd[121883]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:27:12 localhost python3.9[121962]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 6 04:27:14 localhost sshd[122024]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:27:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63011 DF PROTO=TCP SPT=35002 DPT=9101 SEQ=9154961 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C91C300000000001030307)
Dec 6 04:27:14 localhost python3.9[122058]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Dec 6 04:27:15 localhost python3.9[122151]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Dec 6 04:27:16 localhost sshd[122168]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:27:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1709 DF PROTO=TCP SPT=59634 DPT=9105 SEQ=3366381085 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C925560000000001030307)
Dec 6 04:27:16 localhost python3.9[122246]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 6 04:27:17 localhost python3.9[122344]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Dec 6 04:27:18 localhost python3.9[122436]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 6 04:27:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1711 DF PROTO=TCP SPT=59634 DPT=9105 SEQ=3366381085 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C9316F0000000001030307)
Dec 6 04:27:22 localhost python3.9[122589]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 6 04:27:22 localhost python3.9[122696]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 6 04:27:23 localhost python3.9[122769]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765013242.2870924-1024-49473799356540/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 6 04:27:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1712 DF PROTO=TCP SPT=59634 DPT=9105 SEQ=3366381085 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C9412F0000000001030307)
Dec 6 04:27:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3035 DF PROTO=TCP SPT=57632 DPT=9882 SEQ=2170270327 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C94DF00000000001030307)
Dec 6 04:27:27 localhost sshd[122784]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:27:28 localhost python3.9[122863]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 6 04:27:28 localhost systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 6 04:27:28 localhost systemd[1]: Stopped Load Kernel Modules.
Dec 6 04:27:28 localhost systemd[1]: Stopping Load Kernel Modules...
Dec 6 04:27:28 localhost systemd[1]: Starting Load Kernel Modules...
Dec 6 04:27:28 localhost systemd-modules-load[122867]: Module 'msr' is built in
Dec 6 04:27:28 localhost systemd[1]: Finished Load Kernel Modules.
Dec 6 04:27:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63015 DF PROTO=TCP SPT=35002 DPT=9101 SEQ=9154961 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C957EF0000000001030307)
Dec 6 04:27:30 localhost python3.9[122959]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 6 04:27:30 localhost python3.9[123032]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765013250.0094578-1093-62300419915202/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 6 04:27:32 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1713 DF PROTO=TCP SPT=59634 DPT=9105 SEQ=3366381085 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C961EF0000000001030307)
Dec 6 04:27:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51167 DF PROTO=TCP SPT=54836 DPT=9102 SEQ=3373859316 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C96BF00000000001030307)
Dec 6 04:27:34 localhost sshd[123047]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:27:36 localhost python3.9[123126]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 6 04:27:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8908 DF PROTO=TCP SPT=33538 DPT=9102 SEQ=2493667773 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C977EF0000000001030307)
Dec 6 04:27:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51169 DF PROTO=TCP SPT=54836 DPT=9102 SEQ=3373859316 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C983AF0000000001030307)
Dec 6 04:27:40 localhost sshd[123129]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:27:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8657 DF PROTO=TCP SPT=45986 DPT=9101 SEQ=2014193925 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C991600000000001030307)
Dec 6 04:27:44 localhost sshd[123220]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:27:45 localhost python3.9[123221]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 6 04:27:45 localhost python3.9[123314]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Dec 6 04:27:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8659 DF PROTO=TCP SPT=45986 DPT=9101 SEQ=2014193925 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C99D6F0000000001030307)
Dec 6 04:27:47 localhost python3.9[123404]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 6 04:27:48 localhost python3.9[123496]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 6 04:27:48 localhost systemd[1]: Stopping Dynamic System Tuning Daemon...
Dec 6 04:27:48 localhost systemd[1]: tuned.service: Deactivated successfully.
Dec 6 04:27:48 localhost systemd[1]: Stopped Dynamic System Tuning Daemon.
Dec 6 04:27:48 localhost systemd[1]: tuned.service: Consumed 1.810s CPU time, no IO.
Dec 6 04:27:48 localhost systemd[1]: Starting Dynamic System Tuning Daemon...
Dec 6 04:27:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15776 DF PROTO=TCP SPT=52430 DPT=9105 SEQ=2927662920 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C9A66F0000000001030307)
Dec 6 04:27:50 localhost systemd[1]: Started Dynamic System Tuning Daemon.
Dec 6 04:27:52 localhost sshd[123568]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:27:52 localhost python3.9[123600]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Dec 6 04:27:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15777 DF PROTO=TCP SPT=52430 DPT=9105 SEQ=2927662920 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C9B62F0000000001030307)
Dec 6 04:27:53 localhost sshd[123615]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:27:55 localhost sshd[123617]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:27:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3038 DF PROTO=TCP SPT=57632 DPT=9882 SEQ=2170270327 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C9BDF00000000001030307)
Dec 6 04:27:56 localhost python3.9[123696]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 6 04:27:56 localhost systemd[1]: Reloading.
Dec 6 04:27:56 localhost systemd-rc-local-generator[123719]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 6 04:27:56 localhost systemd-sysv-generator[123723]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 6 04:27:56 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 6 04:27:56 localhost sshd[123735]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:27:57 localhost python3.9[123828]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 6 04:27:57 localhost systemd[1]: Reloading.
Dec 6 04:27:57 localhost systemd-rc-local-generator[123852]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 6 04:27:57 localhost systemd-sysv-generator[123858]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 6 04:27:57 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 6 04:27:58 localhost sshd[123881]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:27:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8661 DF PROTO=TCP SPT=45986 DPT=9101 SEQ=2014193925 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C9CDF00000000001030307)
Dec 6 04:27:59 localhost python3.9[123960]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 6 04:28:00 localhost sshd[124009]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:28:00 localhost python3.9[124055]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 6 04:28:00 localhost kernel: Adding 1048572k swap on /swap. Priority:-2 extents:1 across:1048572k FS
Dec 6 04:28:01 localhost python3.9[124148]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 6 04:28:01 localhost sshd[124153]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:28:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15778 DF PROTO=TCP SPT=52430 DPT=9105 SEQ=2927662920 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C9D5EF0000000001030307)
Dec 6 04:28:03 localhost sshd[124217]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:28:03 localhost python3.9[124251]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 6 04:28:04 localhost python3.9[124344]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 6 04:28:04 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 6 04:28:04 localhost systemd[1]: Stopped Apply Kernel Variables.
Dec 6 04:28:04 localhost systemd[1]: Stopping Apply Kernel Variables...
Dec 6 04:28:04 localhost systemd[1]: Starting Apply Kernel Variables...
Dec 6 04:28:04 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 6 04:28:04 localhost systemd[1]: Finished Apply Kernel Variables.
Dec 6 04:28:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41929 DF PROTO=TCP SPT=36116 DPT=9102 SEQ=2957430391 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C9E12F0000000001030307)
Dec 6 04:28:04 localhost sshd[124366]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:28:05 localhost systemd[1]: session-39.scope: Deactivated successfully.
Dec 6 04:28:05 localhost systemd[1]: session-39.scope: Consumed 1min 55.959s CPU time.
Dec 6 04:28:05 localhost systemd-logind[766]: Session 39 logged out. Waiting for processes to exit.
Dec 6 04:28:05 localhost systemd-logind[766]: Removed session 39.
Dec 6 04:28:06 localhost sshd[124369]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:28:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60759 DF PROTO=TCP SPT=35742 DPT=9102 SEQ=3388254926 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C9EDEF0000000001030307)
Dec 6 04:28:08 localhost sshd[124371]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:28:09 localhost sshd[124373]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:28:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41931 DF PROTO=TCP SPT=36116 DPT=9102 SEQ=2957430391 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52C9F8EF0000000001030307)
Dec 6 04:28:11 localhost sshd[124375]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:28:12 localhost sshd[124377]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:28:13 localhost sshd[124379]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:28:13 localhost systemd-logind[766]: New session 40 of user zuul.
Dec 6 04:28:13 localhost systemd[1]: Started Session 40 of User zuul.
Dec 6 04:28:14 localhost python3.9[124472]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 6 04:28:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41231 DF PROTO=TCP SPT=60810 DPT=9101 SEQ=476356393 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CA06900000000001030307)
Dec 6 04:28:14 localhost sshd[124477]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:28:15 localhost python3.9[124568]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 6 04:28:15 localhost sshd[124573]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:28:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41233 DF PROTO=TCP SPT=60810 DPT=9101 SEQ=476356393 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CA12AF0000000001030307)
Dec 6 04:28:17 localhost sshd[124589]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:28:18 localhost sshd[124591]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:28:19 localhost sshd[124671]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:28:19 localhost python3.9[124670]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 6 04:28:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13460 DF PROTO=TCP SPT=39288 DPT=9105 SEQ=3189584631 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CA1BB00000000001030307)
Dec 6 04:28:20 localhost python3.9[124763]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 6 04:28:20 localhost sshd[124796]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:28:21 localhost python3.9[124861]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 6 04:28:22 localhost sshd[124935]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:28:22 localhost python3.9[124915]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 6 04:28:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13461 DF PROTO=TCP SPT=39288 DPT=9105 SEQ=3189584631 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CA2B6F0000000001030307)
Dec 6 04:28:23 localhost sshd[124983]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:28:25 localhost sshd[125000]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:28:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53129 DF PROTO=TCP SPT=36238 DPT=9882 SEQ=998623718 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CA33F00000000001030307)
Dec 6 04:28:26 localhost python3.9[125093]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 6 04:28:26 localhost sshd[125203]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:28:27 localhost python3.9[125250]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 04:28:28 localhost sshd[125343]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:28:28 localhost python3.9[125342]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 6 04:28:29 localhost python3.9[125447]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 6 04:28:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41235 DF PROTO=TCP SPT=60810 DPT=9101 SEQ=476356393 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CA41EF0000000001030307)
Dec 6 04:28:29 localhost python3.9[125495]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 04:28:30 localhost sshd[125511]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:28:30 localhost python3.9[125589]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 6 04:28:31 localhost python3.9[125662]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765013310.1091344-324-60315192490320/.source.conf follow=False _original_basename=registries.conf.j2 checksum=804a0d01b832e60d20f779a331306df708c87b02 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 6 04:28:31 localhost sshd[125709]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:28:32 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13462 DF PROTO=TCP SPT=39288 DPT=9105 SEQ=3189584631 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CA4BEF0000000001030307)
Dec 6 04:28:32 localhost python3.9[125756]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 6 04:28:32 localhost systemd-journald[47810]: Field hash table of /run/log/journal/4b30904fc4748c16d0c72dbebcabab49/system.journal has a fill level at 75.4 (251 of 333 items), suggesting rotation.
Dec 6 04:28:32 localhost systemd-journald[47810]: /run/log/journal/4b30904fc4748c16d0c72dbebcabab49/system.journal: Journal header limits reached or header out-of-date, rotating.
Dec 6 04:28:32 localhost rsyslogd[760]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Dec 6 04:28:32 localhost rsyslogd[760]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Dec 6 04:28:32 localhost rsyslogd[760]: imjournal: journal files changed, reloading...
[v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Dec 6 04:28:32 localhost python3.9[125849]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None Dec 6 04:28:32 localhost auditd[725]: Audit daemon rotating log files Dec 6 04:28:33 localhost sshd[125942]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:28:33 localhost python3.9[125941]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None Dec 6 04:28:33 localhost python3.9[126035]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None Dec 6 04:28:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36607 DF PROTO=TCP SPT=32972 DPT=9102 SEQ=3652515637 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT 
(020405500402080A52CA566F0000000001030307) Dec 6 04:28:34 localhost sshd[126050]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:28:35 localhost sshd[126052]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:28:36 localhost sshd[126113]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:28:37 localhost python3.9[126131]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Dec 6 04:28:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51172 DF PROTO=TCP SPT=54836 DPT=9102 SEQ=3373859316 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CA61F00000000001030307) Dec 6 04:28:37 localhost python3.9[126225]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None Dec 6 04:28:37 localhost sshd[126227]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:28:39 localhost sshd[126230]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:28:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 
TTL=62 ID=36609 DF PROTO=TCP SPT=32972 DPT=9102 SEQ=3652515637 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CA6E2F0000000001030307) Dec 6 04:28:41 localhost sshd[126246]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:28:41 localhost sshd[126326]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:28:41 localhost python3.9[126325]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openstack-network-scripts'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None Dec 6 04:28:42 localhost sshd[126330]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:28:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60358 DF PROTO=TCP SPT=54274 DPT=9101 SEQ=1618947386 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CA7BC00000000001030307) Dec 6 04:28:44 localhost sshd[126332]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:28:45 localhost python3.9[126425]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['podman', 'buildah'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None 
releasever=None state=None Dec 6 04:28:45 localhost sshd[126427]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:28:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60360 DF PROTO=TCP SPT=54274 DPT=9101 SEQ=1618947386 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CA87B00000000001030307) Dec 6 04:28:47 localhost sshd[126430]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:28:49 localhost sshd[126452]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:28:49 localhost python3.9[126531]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['tuned', 'tuned-profiles-cpu-partitioning'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None Dec 6 04:28:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11914 DF PROTO=TCP SPT=43406 DPT=9105 SEQ=512472526 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CA90F00000000001030307) Dec 6 04:28:50 localhost sshd[126534]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:28:50 localhost sshd[126536]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:28:52 localhost sshd[126538]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:28:53 localhost python3.9[126631]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['os-net-config'] allow_downgrade=False allowerasing=False autoremove=False 
bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None Dec 6 04:28:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11915 DF PROTO=TCP SPT=43406 DPT=9105 SEQ=512472526 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CAA0AF0000000001030307) Dec 6 04:28:53 localhost sshd[126633]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:28:55 localhost sshd[126636]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:28:55 localhost sshd[126638]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:28:56 localhost sshd[126654]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:28:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38103 DF PROTO=TCP SPT=44998 DPT=9882 SEQ=629359971 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CAAD700000000001030307) Dec 6 04:28:57 localhost python3.9[126733]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openssh-server'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None 
state=None Dec 6 04:28:58 localhost sshd[126736]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:28:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60362 DF PROTO=TCP SPT=54274 DPT=9101 SEQ=1618947386 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CAB7EF0000000001030307) Dec 6 04:29:00 localhost sshd[126738]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:29:01 localhost sshd[126754]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:29:02 localhost python3.9[126833]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None Dec 6 04:29:02 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11916 DF PROTO=TCP SPT=43406 DPT=9105 SEQ=512472526 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CAC1F00000000001030307) Dec 6 04:29:02 localhost sshd[126836]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:29:03 localhost sshd[126838]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:29:04 localhost sshd[126840]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:29:04 localhost kernel: DROPPING: IN=br-ex OUT= 
MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26227 DF PROTO=TCP SPT=44978 DPT=9102 SEQ=2002269931 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CACBAF0000000001030307) Dec 6 04:29:05 localhost sshd[126848]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:29:06 localhost sshd[126850]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:29:07 localhost sshd[126853]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:29:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41934 DF PROTO=TCP SPT=36116 DPT=9102 SEQ=2957430391 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CAD7EF0000000001030307) Dec 6 04:29:09 localhost sshd[126856]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:29:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26229 DF PROTO=TCP SPT=44978 DPT=9102 SEQ=2002269931 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CAE36F0000000001030307) Dec 6 04:29:10 localhost sshd[126904]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:29:11 localhost sshd[126925]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:29:12 localhost sshd[126941]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:29:13 localhost python3.9[127020]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:29:13 localhost 
sshd[127106]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:29:13 localhost python3.9[127126]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:29:14 localhost sshd[127168]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:29:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14834 DF PROTO=TCP SPT=39114 DPT=9101 SEQ=167283131 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CAF0F00000000001030307) Dec 6 04:29:14 localhost python3.9[127202]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1765013353.443281-723-221557040268258/.source.json _original_basename=.bi8r63uj follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:29:15 localhost python3.9[127294]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None 
password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Dec 6 04:29:15 localhost sshd[127305]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:29:15 localhost sshd[127319]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:29:17 localhost sshd[127323]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:29:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14836 DF PROTO=TCP SPT=39114 DPT=9101 SEQ=167283131 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CAFCEF0000000001030307) Dec 6 04:29:18 localhost sshd[127331]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:29:18 localhost sshd[127338]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:29:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14840 DF PROTO=TCP SPT=34580 DPT=9105 SEQ=105657166 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CB062F0000000001030307) Dec 6 04:29:19 localhost sshd[127341]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:29:20 localhost sshd[127355]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:29:21 localhost sshd[127375]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:29:21 localhost podman[127308]: 2025-12-06 09:29:15.691597694 +0000 UTC m=+0.043209802 image pull quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified Dec 6 04:29:21 localhost sshd[127431]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:29:23 localhost sshd[127514]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:29:23 localhost python3.9[127526]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json 
name=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Dec 6 04:29:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14841 DF PROTO=TCP SPT=34580 DPT=9105 SEQ=105657166 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CB15EF0000000001030307) Dec 6 04:29:23 localhost sshd[127551]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:29:24 localhost sshd[127608]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:29:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38106 DF PROTO=TCP SPT=44998 DPT=9882 SEQ=629359971 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CB1DEF0000000001030307) Dec 6 04:29:26 localhost sshd[127670]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:29:28 localhost sshd[127684]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:29:29 localhost sshd[127686]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:29:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14838 DF PROTO=TCP 
SPT=39114 DPT=9101 SEQ=167283131 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CB2DEF0000000001030307) Dec 6 04:29:30 localhost podman[127538]: 2025-12-06 09:29:23.810098684 +0000 UTC m=+0.036690083 image pull quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified Dec 6 04:29:31 localhost sshd[127781]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:29:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14842 DF PROTO=TCP SPT=34580 DPT=9105 SEQ=105657166 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CB35F00000000001030307) Dec 6 04:29:32 localhost python3.9[127874]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Dec 6 04:29:32 localhost sshd[127887]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:29:34 localhost sshd[127914]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:29:34 localhost podman[127888]: 2025-12-06 09:29:32.816493596 +0000 UTC m=+0.044544664 image pull quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified Dec 6 04:29:34 localhost kernel: DROPPING: IN=br-ex OUT= 
MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46070 DF PROTO=TCP SPT=43424 DPT=9102 SEQ=3390634261 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CB40AF0000000001030307) Dec 6 04:29:35 localhost python3.9[128054]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Dec 6 04:29:35 localhost sshd[128079]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:29:36 localhost podman[128066]: 2025-12-06 09:29:35.763366962 +0000 UTC m=+0.029144342 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 6 04:29:37 localhost sshd[128157]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:29:37 localhost python3.9[128236]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 
'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Dec 6 04:29:38 localhost sshd[128262]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:29:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52445 DF PROTO=TCP SPT=39700 DPT=9882 SEQ=3386255856 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CB51F00000000001030307) Dec 6 04:29:40 localhost sshd[128276]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:29:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46072 DF PROTO=TCP SPT=43424 DPT=9102 SEQ=3390634261 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CB586F0000000001030307) Dec 6 04:29:41 localhost podman[128248]: 2025-12-06 09:29:38.0137414 +0000 UTC m=+0.044833372 image pull quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified Dec 6 04:29:41 localhost python3.9[128432]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} 
arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Dec 6 04:29:41 localhost sshd[128443]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:29:42 localhost sshd[128448]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:29:42 localhost sshd[128462]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:29:43 localhost podman[128447]: 2025-12-06 09:29:42.054575691 +0000 UTC m=+0.040404606 image pull quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c Dec 6 04:29:43 localhost sshd[128509]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:29:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22771 DF PROTO=TCP SPT=43078 DPT=9101 SEQ=4011535923 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CB66200000000001030307) Dec 6 04:29:45 localhost sshd[128560]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:29:45 localhost systemd[1]: session-40.scope: Deactivated successfully. Dec 6 04:29:45 localhost systemd[1]: session-40.scope: Consumed 1min 25.599s CPU time. Dec 6 04:29:45 localhost systemd-logind[766]: Session 40 logged out. Waiting for processes to exit. Dec 6 04:29:45 localhost systemd-logind[766]: Removed session 40. 
Dec 6 04:29:46 localhost sshd[128563]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:29:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22773 DF PROTO=TCP SPT=43078 DPT=9101 SEQ=4011535923 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CB722F0000000001030307) Dec 6 04:29:48 localhost sshd[128565]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:29:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8048 DF PROTO=TCP SPT=36620 DPT=9105 SEQ=3809635373 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CB7B2F0000000001030307) Dec 6 04:29:49 localhost sshd[128567]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:29:50 localhost sshd[128569]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:29:51 localhost systemd-logind[766]: New session 41 of user zuul. Dec 6 04:29:51 localhost systemd[1]: Started Session 41 of User zuul. 
Dec 6 04:29:51 localhost sshd[128605]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:29:52 localhost python3.9[128664]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 6 04:29:52 localhost sshd[128715]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:29:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8049 DF PROTO=TCP SPT=36620 DPT=9105 SEQ=3809635373 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CB8AEF0000000001030307)
Dec 6 04:29:54 localhost python3.9[128888]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Dec 6 04:29:54 localhost sshd[129025]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:29:55 localhost python3.9[129105]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 6 04:29:55 localhost sshd[129127]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:29:56 localhost python3.9[129161]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch3.3'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 6 04:29:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28777 DF PROTO=TCP SPT=51968 DPT=9882 SEQ=3764173009 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CB97AF0000000001030307)
Dec 6 04:29:57 localhost sshd[129164]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:29:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22775 DF PROTO=TCP SPT=43078 DPT=9101 SEQ=4011535923 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CBA1F00000000001030307)
Dec 6 04:29:59 localhost sshd[129422]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:30:00 localhost python3.9[129515]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch3.3'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 6 04:30:01 localhost sshd[129518]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:30:02 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8050 DF PROTO=TCP SPT=36620 DPT=9105 SEQ=3809635373 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CBABEF0000000001030307)
Dec 6 04:30:03 localhost sshd[129520]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:30:04 localhost sshd[129534]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:30:04 localhost sshd[129582]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:30:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52667 DF PROTO=TCP SPT=37752 DPT=9102 SEQ=1262336413 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CBB5EF0000000001030307)
Dec 6 04:30:05 localhost python3.9[129629]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 6 04:30:06 localhost sshd[129632]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:30:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26232 DF PROTO=TCP SPT=44978 DPT=9102 SEQ=2002269931 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CBC1EF0000000001030307)
Dec 6 04:30:08 localhost sshd[129725]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:30:08 localhost python3.9[129724]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 6 04:30:09 localhost ceph-osd[31726]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 6 04:30:09 localhost ceph-osd[31726]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5400.1 total, 600.0 interval#012Cumulative writes: 5761 writes, 25K keys, 5761 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5761 writes, 760 syncs, 7.58 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 6 04:30:09 localhost python3.9[129818]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Dec 6 04:30:10 localhost sshd[129819]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:30:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52669 DF PROTO=TCP SPT=37752 DPT=9102 SEQ=1262336413 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CBCDB00000000001030307)
Dec 6 04:30:11 localhost sshd[129825]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:30:11 localhost kernel: SELinux: Converting 2756 SID table entries...
Dec 6 04:30:11 localhost kernel: SELinux: policy capability network_peer_controls=1
Dec 6 04:30:11 localhost kernel: SELinux: policy capability open_perms=1
Dec 6 04:30:11 localhost kernel: SELinux: policy capability extended_socket_class=1
Dec 6 04:30:11 localhost kernel: SELinux: policy capability always_check_network=0
Dec 6 04:30:11 localhost kernel: SELinux: policy capability cgroup_seclabel=1
Dec 6 04:30:11 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 6 04:30:11 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1
Dec 6 04:30:12 localhost sshd[129878]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:30:12 localhost python3.9[129919]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 6 04:30:12 localhost ceph-osd[32665]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 6 04:30:12 localhost ceph-osd[32665]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5400.2 total, 600.0 interval#012Cumulative writes: 4879 writes, 21K keys, 4879 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 4879 writes, 669 syncs, 7.29 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 6 04:30:13 localhost sshd[129940]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:30:13 localhost dbus-broker-launch[756]: avc: op=load_policy lsm=selinux seqno=18 res=1
Dec 6 04:30:13 localhost python3.9[130019]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 6 04:30:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63456 DF PROTO=TCP SPT=46218 DPT=9101 SEQ=4105108448 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CBDB500000000001030307)
Dec 6 04:30:14 localhost sshd[130022]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:30:16 localhost sshd[130024]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:30:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63458 DF PROTO=TCP SPT=46218 DPT=9101 SEQ=4105108448 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CBE76F0000000001030307)
Dec 6 04:30:17 localhost sshd[130040]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:30:18 localhost python3.9[130119]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 6 04:30:19 localhost sshd[130273]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:30:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40440 DF PROTO=TCP SPT=56598 DPT=9105 SEQ=3260359937 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CBF06F0000000001030307)
Dec 6 04:30:20 localhost python3.9[130366]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec 6 04:30:20 localhost python3.9[130456]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 6 04:30:20 localhost sshd[130459]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:30:21 localhost python3.9[130552]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 6 04:30:22 localhost sshd[130555]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:30:22 localhost sshd[130557]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:30:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40441 DF PROTO=TCP SPT=56598 DPT=9105 SEQ=3260359937 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CC002F0000000001030307)
Dec 6 04:30:24 localhost sshd[130559]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:30:24 localhost sshd[130561]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:30:25 localhost sshd[130655]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:30:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28780 DF PROTO=TCP SPT=51968 DPT=9882 SEQ=3764173009 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CC07EF0000000001030307)
Dec 6 04:30:25 localhost python3.9[130654]: ansible-ansible.legacy.dnf Invoked with name=['openstack-network-scripts'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 6 04:30:27 localhost sshd[130659]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:30:27 localhost sshd[130661]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:30:28 localhost sshd[130663]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:30:29 localhost python3.9[130756]: ansible-ansible.builtin.systemd Invoked with enabled=True name=network daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec 6 04:30:29 localhost systemd[1]: Reloading.
Dec 6 04:30:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63460 DF PROTO=TCP SPT=46218 DPT=9101 SEQ=4105108448 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CC17EF0000000001030307)
Dec 6 04:30:29 localhost systemd-rc-local-generator[130784]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 6 04:30:29 localhost systemd-sysv-generator[130789]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 6 04:30:29 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 6 04:30:30 localhost sshd[130810]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:30:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40442 DF PROTO=TCP SPT=56598 DPT=9105 SEQ=3260359937 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CC1FEF0000000001030307)
Dec 6 04:30:32 localhost python3.9[130937]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 6 04:30:32 localhost sshd[130998]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:30:32 localhost python3.9[131045]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 04:30:33 localhost python3.9[131139]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 04:30:34 localhost python3.9[131231]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 04:30:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35371 DF PROTO=TCP SPT=42664 DPT=9102 SEQ=2431858908 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CC2B2F0000000001030307)
Dec 6 04:30:35 localhost python3.9[131323]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 6 04:30:35 localhost python3.9[131411]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1765013434.746427-564-252547989457630/.source _original_basename=.1xvvien5 follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 04:30:36 localhost python3.9[131503]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 04:30:37 localhost python3.9[131595]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Dec 6 04:30:37 localhost python3.9[131687]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 04:30:38 localhost python3.9[131779]: ansible-ansible.legacy.stat Invoked with path=/etc/os-net-config/config.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 6 04:30:39 localhost python3.9[131852]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/os-net-config/config.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765013438.4440038-690-203314695616937/.source.yaml _original_basename=.kv_jn06l follow=False checksum=06d744ebe702728c19f6d1a8f97158d086012058 force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 04:30:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62219 DF PROTO=TCP SPT=50060 DPT=9882 SEQ=6242818 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CC3DF00000000001030307)
Dec 6 04:30:40 localhost python3.9[131944]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Dec 6 04:30:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35373 DF PROTO=TCP SPT=42664 DPT=9102 SEQ=2431858908 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CC42EF0000000001030307)
Dec 6 04:30:42 localhost ansible-async_wrapper.py[132049]: Invoked with j406301226221 300 /home/zuul/.ansible/tmp/ansible-tmp-1765013441.1552155-762-249806775900804/AnsiballZ_edpm_os_net_config.py _
Dec 6 04:30:42 localhost ansible-async_wrapper.py[132052]: Starting module and watcher
Dec 6 04:30:42 localhost ansible-async_wrapper.py[132052]: Start watching 132053 (300)
Dec 6 04:30:42 localhost ansible-async_wrapper.py[132053]: Start module (132053)
Dec 6 04:30:42 localhost ansible-async_wrapper.py[132049]: Return async_wrapper task started.
Dec 6 04:30:42 localhost python3.9[132054]: ansible-edpm_os_net_config Invoked with cleanup=False config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=False
Dec 6 04:30:42 localhost ansible-async_wrapper.py[132053]: Module complete (132053)
Dec 6 04:30:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50038 DF PROTO=TCP SPT=55042 DPT=9101 SEQ=3105452605 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CC50800000000001030307)
Dec 6 04:30:45 localhost python3.9[132146]: ansible-ansible.legacy.async_status Invoked with jid=j406301226221.132049 mode=status _async_dir=/root/.ansible_async
Dec 6 04:30:46 localhost sshd[132206]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:30:46 localhost python3.9[132205]: ansible-ansible.legacy.async_status Invoked with jid=j406301226221.132049 mode=cleanup _async_dir=/root/.ansible_async
Dec 6 04:30:47 localhost python3.9[132299]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 6 04:30:47 localhost ansible-async_wrapper.py[132052]: Done in kid B.
Dec 6 04:30:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50040 DF PROTO=TCP SPT=55042 DPT=9101 SEQ=3105452605 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CC5C6F0000000001030307)
Dec 6 04:30:47 localhost python3.9[132372]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765013446.5525827-828-211742844102186/.source.returncode _original_basename=.483a2nfj follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 04:30:48 localhost python3.9[132464]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 6 04:30:48 localhost python3.9[132537]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765013447.8015974-876-142619129736780/.source.cfg _original_basename=.jttoc4pf follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 04:30:49 localhost python3.9[132629]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 6 04:30:49 localhost systemd[1]: Reloading Network Manager...
Dec 6 04:30:49 localhost NetworkManager[5973]: [1765013449.5901] audit: op="reload" arg="0" pid=132633 uid=0 result="success"
Dec 6 04:30:49 localhost NetworkManager[5973]: [1765013449.5911] config: signal: SIGHUP (no changes from disk)
Dec 6 04:30:49 localhost systemd[1]: Reloaded Network Manager.
Dec 6 04:30:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61084 DF PROTO=TCP SPT=57008 DPT=9105 SEQ=58946200 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CC65AF0000000001030307)
Dec 6 04:30:49 localhost systemd[1]: session-41.scope: Deactivated successfully.
Dec 6 04:30:49 localhost systemd[1]: session-41.scope: Consumed 34.557s CPU time.
Dec 6 04:30:49 localhost systemd-logind[766]: Session 41 logged out. Waiting for processes to exit.
Dec 6 04:30:49 localhost systemd-logind[766]: Removed session 41.
Dec 6 04:30:52 localhost sshd[132648]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:30:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61085 DF PROTO=TCP SPT=57008 DPT=9105 SEQ=58946200 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CC75700000000001030307)
Dec 6 04:30:53 localhost sshd[132650]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:30:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62220 DF PROTO=TCP SPT=50060 DPT=9882 SEQ=6242818 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CC7DEF0000000001030307)
Dec 6 04:30:56 localhost sshd[132652]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:30:56 localhost systemd-logind[766]: New session 42 of user zuul.
Dec 6 04:30:56 localhost systemd[1]: Started Session 42 of User zuul.
Dec 6 04:30:57 localhost python3.9[132745]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 6 04:30:58 localhost python3.9[132839]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 6 04:30:59 localhost sshd[132917]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:30:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50042 DF PROTO=TCP SPT=55042 DPT=9101 SEQ=3105452605 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CC8BEF0000000001030307)
Dec 6 04:30:59 localhost sshd[132918]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:31:01 localhost python3.9[132994]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 6 04:31:01 localhost systemd[1]: session-42.scope: Deactivated successfully.
Dec 6 04:31:01 localhost systemd[1]: session-42.scope: Consumed 2.043s CPU time.
Dec 6 04:31:01 localhost systemd-logind[766]: Session 42 logged out. Waiting for processes to exit.
Dec 6 04:31:01 localhost systemd-logind[766]: Removed session 42.
Dec 6 04:31:02 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61086 DF PROTO=TCP SPT=57008 DPT=9105 SEQ=58946200 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CC95F00000000001030307)
Dec 6 04:31:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24569 DF PROTO=TCP SPT=43864 DPT=9102 SEQ=3374735870 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CCA0700000000001030307)
Dec 6 04:31:06 localhost sshd[133010]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:31:06 localhost sshd[133012]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:31:07 localhost systemd-logind[766]: New session 43 of user zuul.
Dec 6 04:31:07 localhost systemd[1]: Started Session 43 of User zuul.
Dec 6 04:31:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52672 DF PROTO=TCP SPT=37752 DPT=9102 SEQ=1262336413 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CCABF00000000001030307)
Dec 6 04:31:08 localhost python3.9[133105]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 6 04:31:09 localhost python3.9[133199]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 6 04:31:10 localhost python3.9[133295]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 6 04:31:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24571 DF PROTO=TCP SPT=43864 DPT=9102 SEQ=3374735870 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CCB82F0000000001030307)
Dec 6 04:31:11 localhost python3.9[133349]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 6 04:31:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12430 DF PROTO=TCP SPT=38062 DPT=9101 SEQ=2424013065 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CCC5B00000000001030307)
Dec 6 04:31:15 localhost python3.9[133443]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 6 04:31:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12432 DF PROTO=TCP SPT=38062 DPT=9101 SEQ=2424013065 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CCD1AF0000000001030307)
Dec 6 04:31:17 localhost python3.9[133598]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 04:31:18 localhost python3.9[133690]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 6 04:31:19 localhost python3.9[133795]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 6 04:31:19 localhost python3.9[133843]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 04:31:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42092 DF PROTO=TCP SPT=46140 DPT=9105 SEQ=3322961043 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CCDAEF0000000001030307)
Dec 6 04:31:20 localhost python3.9[133935]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 6 04:31:20 localhost python3.9[133983]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 6 04:31:21 localhost python3.9[134075]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 6 04:31:22 localhost python3.9[134167]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 6 04:31:22 localhost python3.9[134259]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 6 04:31:23 localhost python3.9[134351]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 6 04:31:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42093 DF PROTO=TCP SPT=46140 DPT=9105 SEQ=3322961043 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CCEAAF0000000001030307)
Dec 6 04:31:24 localhost python3.9[134443]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 6 04:31:26 localhost sshd[134446]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:31:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28301 DF PROTO=TCP SPT=45950 DPT=9882 SEQ=1914039869 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CCF7700000000001030307)
Dec 6 04:31:28 localhost sshd[134507]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:31:28 localhost python3.9[134541]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version',
'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Dec 6 04:31:29 localhost python3.9[134635]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 6 04:31:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12434 DF PROTO=TCP SPT=38062 DPT=9101 SEQ=2424013065 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CD01F00000000001030307) Dec 6 04:31:29 localhost python3.9[134727]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 6 04:31:30 localhost python3.9[134819]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 04:31:31 localhost python3.9[134912]: ansible-service_facts Invoked Dec 6 04:31:31 localhost sshd[134915]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:31:32 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42094 DF PROTO=TCP SPT=46140 DPT=9105 SEQ=3322961043 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CD0BF00000000001030307) Dec 6 04:31:32 localhost network[134931]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Dec 6 04:31:32 localhost network[134932]: 'network-scripts' will be removed from distribution in near future. 
Dec 6 04:31:32 localhost network[134933]: It is advised to switch to 'NetworkManager' instead for network management. Dec 6 04:31:33 localhost sshd[134960]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:31:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53742 DF PROTO=TCP SPT=32768 DPT=9102 SEQ=4197401112 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CD156F0000000001030307) Dec 6 04:31:34 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 6 04:31:37 localhost podman[135193]: Dec 6 04:31:37 localhost podman[135193]: 2025-12-06 09:31:37.03302051 +0000 UTC m=+0.078648836 container create f7dea120b01e5da8ca5fc0c0f656130ff391ea6f4ac52f14f31e2b6fc7790976 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jolly_booth, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, architecture=x86_64, GIT_BRANCH=main, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, ceph=True, name=rhceph, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, vcs-type=git, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, GIT_CLEAN=True, release=1763362218, 
maintainer=Guillaume Abrioux , org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 6 04:31:37 localhost systemd[1]: Started libpod-conmon-f7dea120b01e5da8ca5fc0c0f656130ff391ea6f4ac52f14f31e2b6fc7790976.scope. Dec 6 04:31:37 localhost systemd[1]: Started libcrun container. Dec 6 04:31:37 localhost podman[135193]: 2025-12-06 09:31:36.999713061 +0000 UTC m=+0.045341397 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 6 04:31:37 localhost podman[135193]: 2025-12-06 09:31:37.105019142 +0000 UTC m=+0.150647468 container init f7dea120b01e5da8ca5fc0c0f656130ff391ea6f4ac52f14f31e2b6fc7790976 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jolly_booth, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , io.buildah.version=1.41.4, io.openshift.expose-services=, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, CEPH_POINT_RELEASE=, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph) Dec 6 04:31:37 localhost podman[135193]: 2025-12-06 09:31:37.113984556 +0000 UTC m=+0.159612882 container start f7dea120b01e5da8ca5fc0c0f656130ff391ea6f4ac52f14f31e2b6fc7790976 
(image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jolly_booth, name=rhceph, build-date=2025-11-26T19:44:28Z, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, io.openshift.expose-services=, vcs-type=git, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, distribution-scope=public, architecture=x86_64, com.redhat.component=rhceph-container, RELEASE=main, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux ) Dec 6 04:31:37 localhost podman[135193]: 2025-12-06 09:31:37.114295096 +0000 UTC m=+0.159923482 container attach f7dea120b01e5da8ca5fc0c0f656130ff391ea6f4ac52f14f31e2b6fc7790976 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jolly_booth, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, io.buildah.version=1.41.4, vcs-type=git, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, version=7, maintainer=Guillaume Abrioux , distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, RELEASE=main) Dec 6 04:31:37 localhost jolly_booth[135208]: 167 167 Dec 6 04:31:37 localhost systemd[1]: libpod-f7dea120b01e5da8ca5fc0c0f656130ff391ea6f4ac52f14f31e2b6fc7790976.scope: Deactivated successfully. Dec 6 04:31:37 localhost podman[135193]: 2025-12-06 09:31:37.118483104 +0000 UTC m=+0.164111460 container died f7dea120b01e5da8ca5fc0c0f656130ff391ea6f4ac52f14f31e2b6fc7790976 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jolly_booth, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux , vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, GIT_BRANCH=main, GIT_CLEAN=True, CEPH_POINT_RELEASE=, version=7, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, ceph=True, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, 
io.buildah.version=1.41.4, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Dec 6 04:31:37 localhost podman[135213]: 2025-12-06 09:31:37.22953654 +0000 UTC m=+0.099018689 container remove f7dea120b01e5da8ca5fc0c0f656130ff391ea6f4ac52f14f31e2b6fc7790976 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jolly_booth, architecture=x86_64, name=rhceph, io.openshift.tags=rhceph ceph, release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, GIT_CLEAN=True, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, distribution-scope=public, version=7, io.openshift.expose-services=) Dec 6 04:31:37 localhost systemd[1]: libpod-conmon-f7dea120b01e5da8ca5fc0c0f656130ff391ea6f4ac52f14f31e2b6fc7790976.scope: Deactivated successfully. 
Dec 6 04:31:37 localhost podman[135268]: Dec 6 04:31:37 localhost podman[135268]: 2025-12-06 09:31:37.436824348 +0000 UTC m=+0.064920366 container create 54aa923a29c9494ae51712763b0891f5557af30c5c5bfc7c39c5a9a586c4691d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=practical_joliot, version=7, release=1763362218, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, GIT_BRANCH=main, architecture=x86_64, name=rhceph, io.buildah.version=1.41.4, GIT_CLEAN=True, distribution-scope=public, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 6 04:31:37 localhost systemd[1]: Started libpod-conmon-54aa923a29c9494ae51712763b0891f5557af30c5c5bfc7c39c5a9a586c4691d.scope. Dec 6 04:31:37 localhost systemd[1]: Started libcrun container. 
Dec 6 04:31:37 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dda23162c6a2b4a6cfbe7e1aa75e3125b9693ec8c2c2e213eee44327d1862ee1/merged/rootfs supports timestamps until 2038 (0x7fffffff) Dec 6 04:31:37 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dda23162c6a2b4a6cfbe7e1aa75e3125b9693ec8c2c2e213eee44327d1862ee1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Dec 6 04:31:37 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dda23162c6a2b4a6cfbe7e1aa75e3125b9693ec8c2c2e213eee44327d1862ee1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Dec 6 04:31:37 localhost podman[135268]: 2025-12-06 09:31:37.491346076 +0000 UTC m=+0.119442094 container init 54aa923a29c9494ae51712763b0891f5557af30c5c5bfc7c39c5a9a586c4691d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=practical_joliot, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, version=7, distribution-scope=public, io.openshift.expose-services=, GIT_CLEAN=True, name=rhceph, ceph=True, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, 
GIT_BRANCH=main, maintainer=Guillaume Abrioux ) Dec 6 04:31:37 localhost podman[135268]: 2025-12-06 09:31:37.501221018 +0000 UTC m=+0.129317026 container start 54aa923a29c9494ae51712763b0891f5557af30c5c5bfc7c39c5a9a586c4691d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=practical_joliot, com.redhat.component=rhceph-container, io.openshift.expose-services=, vcs-type=git, name=rhceph, RELEASE=main, build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, version=7, vendor=Red Hat, Inc., io.buildah.version=1.41.4, ceph=True, GIT_BRANCH=main, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public) Dec 6 04:31:37 localhost podman[135268]: 2025-12-06 09:31:37.501372072 +0000 UTC m=+0.129468120 container attach 54aa923a29c9494ae51712763b0891f5557af30c5c5bfc7c39c5a9a586c4691d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=practical_joliot, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 
7, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, io.openshift.expose-services=, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, version=7, io.openshift.tags=rhceph ceph, distribution-scope=public, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, vcs-type=git, RELEASE=main) Dec 6 04:31:37 localhost podman[135268]: 2025-12-06 09:31:37.403932033 +0000 UTC m=+0.032028121 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 6 04:31:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35376 DF PROTO=TCP SPT=42664 DPT=9102 SEQ=2431858908 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CD21F00000000001030307) Dec 6 04:31:38 localhost systemd[1]: var-lib-containers-storage-overlay-051e186c773fa279ab709de94d6a2c9095ecda4c868e7aaf9dbef01413e6f953-merged.mount: Deactivated successfully. 
Dec 6 04:31:38 localhost practical_joliot[135312]: [ Dec 6 04:31:38 localhost practical_joliot[135312]: { Dec 6 04:31:38 localhost practical_joliot[135312]: "available": false, Dec 6 04:31:38 localhost practical_joliot[135312]: "ceph_device": false, Dec 6 04:31:38 localhost practical_joliot[135312]: "device_id": "QEMU_DVD-ROM_QM00001", Dec 6 04:31:38 localhost practical_joliot[135312]: "lsm_data": {}, Dec 6 04:31:38 localhost practical_joliot[135312]: "lvs": [], Dec 6 04:31:38 localhost practical_joliot[135312]: "path": "/dev/sr0", Dec 6 04:31:38 localhost practical_joliot[135312]: "rejected_reasons": [ Dec 6 04:31:38 localhost practical_joliot[135312]: "Has a FileSystem", Dec 6 04:31:38 localhost practical_joliot[135312]: "Insufficient space (<5GB)" Dec 6 04:31:38 localhost practical_joliot[135312]: ], Dec 6 04:31:38 localhost practical_joliot[135312]: "sys_api": { Dec 6 04:31:38 localhost practical_joliot[135312]: "actuators": null, Dec 6 04:31:38 localhost practical_joliot[135312]: "device_nodes": "sr0", Dec 6 04:31:38 localhost practical_joliot[135312]: "human_readable_size": "482.00 KB", Dec 6 04:31:38 localhost practical_joliot[135312]: "id_bus": "ata", Dec 6 04:31:38 localhost practical_joliot[135312]: "model": "QEMU DVD-ROM", Dec 6 04:31:38 localhost practical_joliot[135312]: "nr_requests": "2", Dec 6 04:31:38 localhost practical_joliot[135312]: "partitions": {}, Dec 6 04:31:38 localhost practical_joliot[135312]: "path": "/dev/sr0", Dec 6 04:31:38 localhost practical_joliot[135312]: "removable": "1", Dec 6 04:31:38 localhost practical_joliot[135312]: "rev": "2.5+", Dec 6 04:31:38 localhost practical_joliot[135312]: "ro": "0", Dec 6 04:31:38 localhost practical_joliot[135312]: "rotational": "1", Dec 6 04:31:38 localhost practical_joliot[135312]: "sas_address": "", Dec 6 04:31:38 localhost practical_joliot[135312]: "sas_device_handle": "", Dec 6 04:31:38 localhost practical_joliot[135312]: "scheduler_mode": "mq-deadline", Dec 6 04:31:38 localhost 
practical_joliot[135312]: "sectors": 0, Dec 6 04:31:38 localhost practical_joliot[135312]: "sectorsize": "2048", Dec 6 04:31:38 localhost practical_joliot[135312]: "size": 493568.0, Dec 6 04:31:38 localhost practical_joliot[135312]: "support_discard": "0", Dec 6 04:31:38 localhost practical_joliot[135312]: "type": "disk", Dec 6 04:31:38 localhost practical_joliot[135312]: "vendor": "QEMU" Dec 6 04:31:38 localhost practical_joliot[135312]: } Dec 6 04:31:38 localhost practical_joliot[135312]: } Dec 6 04:31:38 localhost practical_joliot[135312]: ] Dec 6 04:31:38 localhost systemd[1]: libpod-54aa923a29c9494ae51712763b0891f5557af30c5c5bfc7c39c5a9a586c4691d.scope: Deactivated successfully. Dec 6 04:31:38 localhost podman[135268]: 2025-12-06 09:31:38.357094691 +0000 UTC m=+0.985190719 container died 54aa923a29c9494ae51712763b0891f5557af30c5c5bfc7c39c5a9a586c4691d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=practical_joliot, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, ceph=True, maintainer=Guillaume Abrioux , GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, version=7, architecture=x86_64, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, name=rhceph, release=1763362218, build-date=2025-11-26T19:44:28Z, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, 
GIT_REPO=https://github.com/ceph/ceph-container.git) Dec 6 04:31:38 localhost systemd[1]: tmp-crun.xyT7md.mount: Deactivated successfully. Dec 6 04:31:38 localhost systemd[1]: var-lib-containers-storage-overlay-dda23162c6a2b4a6cfbe7e1aa75e3125b9693ec8c2c2e213eee44327d1862ee1-merged.mount: Deactivated successfully. Dec 6 04:31:38 localhost podman[136777]: 2025-12-06 09:31:38.439019737 +0000 UTC m=+0.071123586 container remove 54aa923a29c9494ae51712763b0891f5557af30c5c5bfc7c39c5a9a586c4691d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=practical_joliot, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, version=7, com.redhat.component=rhceph-container, RELEASE=main, maintainer=Guillaume Abrioux , io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, release=1763362218, GIT_CLEAN=True) Dec 6 04:31:38 localhost systemd[1]: libpod-conmon-54aa923a29c9494ae51712763b0891f5557af30c5c5bfc7c39c5a9a586c4691d.scope: Deactivated successfully. 
Dec 6 04:31:39 localhost python3.9[136869]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Dec 6 04:31:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53744 DF PROTO=TCP SPT=32768 DPT=9102 SEQ=4197401112 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CD2D300000000001030307) Dec 6 04:31:41 localhost sshd[136887]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:31:43 localhost python3.9[136980]: ansible-package_facts Invoked with manager=['auto'] strategy=first Dec 6 04:31:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35408 DF PROTO=TCP SPT=45486 DPT=9101 SEQ=4065589046 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CD3AE00000000001030307) Dec 6 04:31:45 localhost python3.9[137072]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:31:45 localhost python3.9[137147]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765013504.6597707-658-179656407375413/.source.conf follow=False _original_basename=chrony.conf.j2 
checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:31:46 localhost python3.9[137241]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:31:47 localhost python3.9[137316]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765013506.1301033-703-231410338939100/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:31:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35410 DF PROTO=TCP SPT=45486 DPT=9101 SEQ=4065589046 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CD46F00000000001030307) Dec 6 04:31:48 localhost python3.9[137410]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:31:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 
PREC=0x00 TTL=62 ID=33707 DF PROTO=TCP SPT=55974 DPT=9105 SEQ=3389028362 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CD4FEF0000000001030307) Dec 6 04:31:50 localhost python3.9[137504]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d Dec 6 04:31:52 localhost python3.9[137558]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 6 04:31:53 localhost sshd[137575]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:31:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33708 DF PROTO=TCP SPT=55974 DPT=9105 SEQ=3389028362 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CD5FB00000000001030307) Dec 6 04:31:54 localhost sshd[137609]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:31:54 localhost python3.9[137656]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d Dec 6 04:31:55 localhost python3.9[137710]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Dec 6 04:31:55 localhost chronyd[25988]: chronyd exiting Dec 6 04:31:55 localhost systemd[1]: Stopping NTP client/server... Dec 6 04:31:55 localhost systemd[1]: chronyd.service: Deactivated successfully. Dec 6 04:31:55 localhost systemd[1]: Stopped NTP client/server. Dec 6 04:31:55 localhost systemd[1]: Starting NTP client/server... 
Dec 6 04:31:55 localhost chronyd[137718]: chronyd version 4.3 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG) Dec 6 04:31:55 localhost chronyd[137718]: Frequency -30.244 +/- 0.187 ppm read from /var/lib/chrony/drift Dec 6 04:31:55 localhost chronyd[137718]: Loaded seccomp filter (level 2) Dec 6 04:31:55 localhost systemd[1]: Started NTP client/server. Dec 6 04:31:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28304 DF PROTO=TCP SPT=45950 DPT=9882 SEQ=1914039869 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CD67F00000000001030307) Dec 6 04:31:56 localhost systemd[1]: session-43.scope: Deactivated successfully. Dec 6 04:31:56 localhost systemd[1]: session-43.scope: Consumed 27.337s CPU time. Dec 6 04:31:56 localhost systemd-logind[766]: Session 43 logged out. Waiting for processes to exit. Dec 6 04:31:56 localhost systemd-logind[766]: Removed session 43. Dec 6 04:31:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35412 DF PROTO=TCP SPT=45486 DPT=9101 SEQ=4065589046 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CD77F00000000001030307) Dec 6 04:32:01 localhost sshd[137734]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:32:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33709 DF PROTO=TCP SPT=55974 DPT=9105 SEQ=3389028362 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CD7FEF0000000001030307) Dec 6 04:32:02 localhost systemd-logind[766]: New session 44 of user zuul. Dec 6 04:32:02 localhost systemd[1]: Started Session 44 of User zuul. 
Dec 6 04:32:03 localhost python3.9[137827]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Dec 6 04:32:04 localhost python3.9[137923]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:32:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55749 DF PROTO=TCP SPT=32946 DPT=9102 SEQ=4129418666 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CD8AAF0000000001030307) Dec 6 04:32:05 localhost python3.9[138028]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:32:05 localhost python3.9[138076]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.3lnedvcb recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:32:06 localhost python3.9[138168]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:32:07 localhost 
python3.9[138243]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765013526.2429948-144-32310357103542/.source _original_basename=.fol479re follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:32:08 localhost python3.9[138335]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 6 04:32:08 localhost python3.9[138427]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:32:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17450 DF PROTO=TCP SPT=58168 DPT=9882 SEQ=21909413 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CD9BEF0000000001030307) Dec 6 04:32:09 localhost python3.9[138500]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765013528.2452762-216-5857502528035/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False 
unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None Dec 6 04:32:09 localhost python3.9[138592]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:32:10 localhost python3.9[138665]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765013529.329488-216-48221136956556/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None Dec 6 04:32:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55751 DF PROTO=TCP SPT=32946 DPT=9102 SEQ=4129418666 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CDA26F0000000001030307) Dec 6 04:32:10 localhost python3.9[138757]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:32:12 localhost python3.9[138849]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False 
checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:32:12 localhost python3.9[138922]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765013531.0994515-327-41990860339714/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:32:13 localhost python3.9[139014]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:32:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1368 DF PROTO=TCP SPT=49982 DPT=9101 SEQ=225342361 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CDB0100000000001030307) Dec 6 04:32:14 localhost python3.9[139087]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765013532.8076408-372-59111087322722/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:32:15 localhost python3.9[139179]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True 
enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None Dec 6 04:32:15 localhost systemd[1]: Reloading. Dec 6 04:32:15 localhost systemd-sysv-generator[139208]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 6 04:32:15 localhost systemd-rc-local-generator[139204]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 6 04:32:15 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 6 04:32:15 localhost systemd[1]: Reloading. Dec 6 04:32:15 localhost systemd-rc-local-generator[139239]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 6 04:32:15 localhost systemd-sysv-generator[139243]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 6 04:32:15 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 6 04:32:16 localhost systemd[1]: Starting EDPM Container Shutdown... Dec 6 04:32:16 localhost systemd[1]: Finished EDPM Container Shutdown. 
Dec 6 04:32:16 localhost python3.9[139347]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:32:17 localhost python3.9[139420]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765013536.3184662-441-48610634782922/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:32:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1370 DF PROTO=TCP SPT=49982 DPT=9101 SEQ=225342361 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CDBC300000000001030307) Dec 6 04:32:18 localhost python3.9[139512]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:32:18 localhost python3.9[139585]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765013537.5362775-486-247914366815403/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None 
selevel=None setype=None attributes=None Dec 6 04:32:19 localhost python3.9[139677]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None Dec 6 04:32:19 localhost systemd[1]: Reloading. Dec 6 04:32:19 localhost systemd-rc-local-generator[139699]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 6 04:32:19 localhost systemd-sysv-generator[139704]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 6 04:32:19 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 6 04:32:19 localhost systemd[1]: Starting Create netns directory... Dec 6 04:32:19 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. Dec 6 04:32:19 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Dec 6 04:32:19 localhost systemd[1]: Finished Create netns directory. Dec 6 04:32:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=812 DF PROTO=TCP SPT=38992 DPT=9105 SEQ=2343405215 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CDC52F0000000001030307) Dec 6 04:32:20 localhost python3.9[139808]: ansible-ansible.builtin.service_facts Invoked Dec 6 04:32:20 localhost network[139825]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Dec 6 04:32:20 localhost network[139826]: 'network-scripts' will be removed from distribution in near future. 
Dec 6 04:32:20 localhost network[139827]: It is advised to switch to 'NetworkManager' instead for network management. Dec 6 04:32:21 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 6 04:32:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=813 DF PROTO=TCP SPT=38992 DPT=9105 SEQ=2343405215 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CDD4EF0000000001030307) Dec 6 04:32:24 localhost python3.9[140028]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:32:25 localhost python3.9[140103]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765013544.2972224-609-19980025123247/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:32:26 localhost python3.9[140196]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Dec 6 04:32:26 localhost systemd[1]: Reloading OpenSSH server daemon... Dec 6 04:32:26 localhost systemd[1]: Reloaded OpenSSH server daemon. 
Dec 6 04:32:26 localhost sshd[119889]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:32:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41129 DF PROTO=TCP SPT=50666 DPT=9882 SEQ=2894400643 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CDE1AF0000000001030307) Dec 6 04:32:28 localhost python3.9[140292]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:32:29 localhost sshd[140385]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:32:29 localhost python3.9[140384]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:32:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1372 DF PROTO=TCP SPT=49982 DPT=9101 SEQ=225342361 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CDEBEF0000000001030307) Dec 6 04:32:29 localhost python3.9[140459]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765013548.794004-702-261466426779047/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False 
content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:32:31 localhost python3.9[140551]: ansible-community.general.timezone Invoked with name=UTC hwclock=None Dec 6 04:32:31 localhost systemd[1]: Starting Time & Date Service... Dec 6 04:32:31 localhost systemd[1]: Started Time & Date Service. Dec 6 04:32:32 localhost python3.9[140647]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:32:32 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=814 DF PROTO=TCP SPT=38992 DPT=9105 SEQ=2343405215 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CDF5F00000000001030307) Dec 6 04:32:32 localhost python3.9[140739]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:32:33 localhost python3.9[140812]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765013552.3764398-807-128530496483931/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None 
selevel=None setype=None attributes=None Dec 6 04:32:33 localhost python3.9[140904]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:32:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3009 DF PROTO=TCP SPT=56102 DPT=9102 SEQ=3789048056 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CDFFEF0000000001030307) Dec 6 04:32:34 localhost python3.9[140977]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765013553.5411372-852-221618421100880/.source.yaml _original_basename=.ng53fc6z follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:32:35 localhost python3.9[141069]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:32:36 localhost python3.9[141144]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765013555.2275996-898-180336099374098/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None 
setype=None attributes=None Dec 6 04:32:36 localhost sshd[141145]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:32:37 localhost python3.9[141238]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 04:32:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53747 DF PROTO=TCP SPT=32768 DPT=9102 SEQ=4197401112 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CE0BEF0000000001030307) Dec 6 04:32:38 localhost python3.9[141331]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 04:32:39 localhost python3[141424]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall Dec 6 04:32:39 localhost python3.9[141546]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:32:40 localhost python3.9[141670]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765013559.3508754-1014-220633779897061/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:32:40 localhost kernel: 
DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3011 DF PROTO=TCP SPT=56102 DPT=9102 SEQ=3789048056 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CE17AF0000000001030307) Dec 6 04:32:41 localhost python3.9[141796]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:32:41 localhost python3.9[141884]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765013560.601365-1059-153563874272434/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:32:42 localhost sshd[141977]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:32:42 localhost python3.9[141976]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:32:42 localhost python3.9[142051]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765013561.9440055-1105-203600522259937/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 
04:32:43 localhost python3.9[142143]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 6 04:32:44 localhost python3.9[142216]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765013563.1466417-1149-54972388325639/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 04:32:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56704 DF PROTO=TCP SPT=52330 DPT=9101 SEQ=1489643712 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CE25410000000001030307)
Dec 6 04:32:45 localhost python3.9[142308]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 6 04:32:45 localhost python3.9[142381]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765013564.683013-1195-128072970491707/.source.nft follow=False _original_basename=ruleset.j2 checksum=15a82a0dc61abfd6aa593407582b5b950437eb80 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 04:32:46 localhost python3.9[142473]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 04:32:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38454 DF PROTO=TCP SPT=54342 DPT=9100 SEQ=289234418 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CE32380000000001030307)
Dec 6 04:32:47 localhost python3.9[142565]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 6 04:32:48 localhost python3.9[142660]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 04:32:49 localhost python3.9[142753]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 04:32:49 localhost python3.9[142845]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 04:32:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33711 DF PROTO=TCP SPT=55974 DPT=9105 SEQ=3389028362 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CE3DEF0000000001030307)
Dec 6 04:32:50 localhost python3.9[142937]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec 6 04:32:51 localhost python3.9[143030]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec 6 04:32:52 localhost systemd-logind[766]: Session 44 logged out. Waiting for processes to exit.
Dec 6 04:32:52 localhost systemd[1]: session-44.scope: Deactivated successfully.
Dec 6 04:32:52 localhost systemd[1]: session-44.scope: Consumed 27.557s CPU time.
Dec 6 04:32:52 localhost systemd-logind[766]: Removed session 44.
Dec 6 04:32:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27453 DF PROTO=TCP SPT=59210 DPT=9882 SEQ=1642654696 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CE4AD90000000001030307)
Dec 6 04:32:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17452 DF PROTO=TCP SPT=58168 DPT=9882 SEQ=21909413 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CE59EF0000000001030307)
Dec 6 04:32:58 localhost sshd[143046]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:32:58 localhost systemd-logind[766]: New session 45 of user zuul.
Dec 6 04:32:58 localhost systemd[1]: Started Session 45 of User zuul.
Dec 6 04:32:59 localhost python3.9[143141]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Dec 6 04:33:00 localhost python3.9[143233]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 6 04:33:01 localhost systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec 6 04:33:02 localhost python3.9[143329]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Dec 6 04:33:02 localhost sshd[143344]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:33:03 localhost python3.9[143423]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.u691m5zd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 6 04:33:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14146 DF PROTO=TCP SPT=54290 DPT=9102 SEQ=4089443986 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CE71070000000001030307)
Dec 6 04:33:04 localhost sshd[143499]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:33:04 localhost python3.9[143498]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.u691m5zd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765013582.9401598-190-217070317977165/.source.u691m5zd _original_basename=.cpktzjzx follow=False checksum=3e842c629948eb11ff005810a7264dbaf8a6d16e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 04:33:06 localhost python3.9[143592]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 6 04:33:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55754 DF PROTO=TCP SPT=32946 DPT=9102 SEQ=4129418666 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CE7FF00000000001030307)
Dec 6 04:33:08 localhost sshd[143593]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:33:10 localhost python3.9[143686]: ansible-ansible.builtin.blockinfile Invoked with block=np0005548785.localdomain,192.168.122.103,np0005548785* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC89JzJHuRLDUgmU66VPdPVwYLrvslBwa5i2QfiUzrnpt1lKz8ayq6QMRy5y5GgfjQQhX/YZiAjUSoogVsYDkoDaImXdtfQHFlFMLTlJPiYcA/cGAwMAE/vifpWoztBHUXkJ5YWUojkXzGoR8d7ESx/tTLG/9QrQDsW6JcV18mcFCQZdeWYWGWdLn6ynmQOZ0N4U6mYK1FqE+GKgP6L9PEjkC1ePo81AnYcdQ5Z1IETdcCcJytdvvxH/Zie1PiAaMAgMYhsqu7+DZRRTvg+cEMw3mRVuodIyQEbpZs8MjR3itViRfZ+UqYi6uKDnz1viLL0aACaYhOLzrE7bQ6Sl4j1MnMrWncUOv3Sq2fus+Y6oYmed84E6HUNljte7vVP9jwPclbCAmj5WuC/Av9dSqqHEpPRbKJ4tAuBrO2LBKS7J62FjRYiY807V1viyxUgjK5FmsQyfVr3/YOirluSx54e4XwxxDrAjtrd0x68H7/Mt6HP/79cWKaVbC7XUckYRmE=#012np0005548785.localdomain,192.168.122.103,np0005548785* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPnHRGHw2U3XDUZBfS69ZpwocvZ2haE6Sebzf3BV40dJ#012np0005548785.localdomain,192.168.122.103,np0005548785* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMgorOAtIXk7BOknkR82ERwiBlDoAcpTTo8DwXwOeKFxueIG2AzGwqy/M3AlognMpbS9bigTSmXKYzfS5SNcGD8=#012np0005548786.localdomain,192.168.122.104,np0005548786* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDURzBA/aIGrwPgaIApy0UCTi4wdQhfDEx0QfkSAIn0ZptZcOkaR8BWtl9GijRPEp++Ep4qU04JcwHO1ZULd2UnCdDeg1Imwnf7x9HQBjAr0mH+tE0t4MBLtBbrk8Ep5ggyKATK1CvEl3NuGIS4gSSUWxzkR74Iju/GtrEMuVnMSsOw+auBofiv1ne4zyXqQWZORiK32DSolw1KyXGLyqG+JOpl3Kza5o79S1KUghfRzskZMm/AxFYciPmg4EQK/jL9Izj7qq3v8MaL8baeyqNlPaaRKCh+pkZlYtoPzDhe+vn/jwnDmQgqC1Bh+dkNiKEVlWz3mxoiMoeLY3jP/tMF2M4M8puGakPc2sqJxk1++Tv/lFRO3zBS+V2kECKI5DtQI6XThfLYXxIQl5SHr4yGEoxhMNt6YNQPLp6lg30kHO24YyNNA7LPFYYoOGUCaq5ZVUCF9lagMxcgkN0Bs+ZZqeni+53RqxoutiRZ0m9pIiqxGjrJjbNFXmofgfDBcUE=#012np0005548786.localdomain,192.168.122.104,np0005548786* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILIgwHZ/0Q8K6t9dlBCQwEO6OABCR0J0IF6hfmA44GBM#012np0005548786.localdomain,192.168.122.104,np0005548786* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBItDJKfsljV78XBJL8EuwSxDvfxuZ9Jz6PgjXVap/GJqsza+9ApDVkNpmAVhdxO9qX1PPD9KOxQjcrD2A8MXQ10=#012np0005548787.localdomain,192.168.122.105,np0005548787* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDXe0UZ2kJKcvYaHSnjIOf3QqkGhArLo32nvDm8Pl8ZVNWfdRV8R+e17etAicDq//fxWC+U9jiHp4qI6/0Jm64rPocmJKaA+r79sNpv+598NlGtVUfTYQ34Ze9bgaPkjAwKfPNrzjSDChyfkys4Hm0J7ttog5rvMcuRelxkFmoonOcuzBC+9ufI6qld7br5w4WDookwamkefbMCiwAZxrw2bSjoTu7/TEFbt7SM0lUIdqP5WvxpWK52OkjnakQ0BL4QHdRYz1kBx/vS0TFxXb2pMO291dfkxDl3H2oXXZZYK/LWy3nZyJEX+mD5J6WOEs5HC5GQQ+CNEV0wa2e/gJA7KBsyL5T6RBtH8id22sBHZkzcaDhUz1ZABGAiOx4rdrr4YFFFy/u00nX3ZCuRBPXYh37Pafl7GXcSKyhTmkCZI0591RdNmb1duh9ZIObRmPVp2+WIheAFvS7EU4B0+ZjAEbDJgiSa9VlUrlRFX0ajcFHR8FnwNRcoERO3A3h4/Tc=#012np0005548787.localdomain,192.168.122.105,np0005548787* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGClV/UHC6wrHLH6ofPCeG9Z3WpaSbH42qD4AsTbywke#012np0005548787.localdomain,192.168.122.105,np0005548787* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBD+VBma5zUGbc6C8yvVJH1yH01D2HwvgMwJZ3Ew/fQ9uangWsK7hoczIcWgUhEN67mue6bMYPNkv+zbE5QDlLqA=#012np0005548789.localdomain,192.168.122.107,np0005548789* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCwH3rhRTvOINLmLdbeRXeXOiMzz+IXEuW2cXYAe50Wcc3ikH2RVGirWQrwLc8hAoA7UFCXADqEMxPg6/fLsQkbP7kLOpUtam8nuXvgt8VHM4RFl5wh9EOgZ7DWgjA7s3r2eQMcBhv82CjVMLY/YjnLuRNXCsJAqeG32qcKedKH/huEFvkb49U/UnNlxi5BfNrMlY9n5UQXE2rd6EKwP58aP/qQ1ie3p8nwHc36/MJcfEIABlLaoHK/LxnadOFTh93OkqVi7A0VQsKSmKD64nABiN7ML0NReoyRIQI5r3Dawe8v2K9jCBh5jY88TVsYUJqgwoZSSU73sYGHX4uF+PY8wL7qwn6mCzA17GGYeB8Dy0N8qwDqah6kUjpcLwGp7YaKf0FIZPBKcLVMrX6Tnwxer1j3kOIt3tgLZoz3mMfstWfCyvt9t+GEW5MCE+MBkY4Eree3uK7pI+wJ3vFQS9XVP00hjNiLWYmoaaW6rl8xtw7QtGhzmjcWbOxaZvHWE5E=#012np0005548789.localdomain,192.168.122.107,np0005548789* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIM7zsgz8o1LOsRIDgDJ0j4aB+gvG7QE4PuIS5gi3px2U#012np0005548789.localdomain,192.168.122.107,np0005548789* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNB22R613xD5iIn21fw712bqcytUxBHAFZPMSjpWL8XVTi6taleS2y8rpYqGoN21DgQgwO1SxmcqZLfwlh7T5/4=#012np0005548790.localdomain,192.168.122.108,np0005548790* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDmdMCy44p73Ui+o09YQitqR9FILqoJ6AGYYutFVH6wn5m1j6oEoI4XgVFPR3UpG3SXdoiG7m0DRxC/WZZMpZbaQ3ZHbJJioRh1hV5uQtK5k2gtmS8uePng5UprbLncMXf+HIxNRvirU3r6zdgNGAroK0rN0nWESi/FNb2flu9Aw9JAsgIAAouW4IUoeyMGZ1AflhRhsWsQMstM9UEeGU+iTqV7al1URVCSq1finY99m+QC+Pftpd2C/+agboOIiVa63+D/RqqfYqh4C/PYfDbssYjcZzk3P90+HQ6uMKexX3HRnFbyje4eLSBHC0pjr/4pNfk/eSpdHeyMAPsP+QlBztdcPj9OnjcmT9ymeJRKF7GwNIWg3Pn9L2yY50d8l9Zu6rNIDW786XNcbm88yHdCHA5FE1A8XTWQRQ3eUSUsmsvf03pExAouRM4Fj8dvCu6wzG2SuyWqmdT5yCNrUG0e1CeE6PcfTLBeS5CJAwn5HM8aUndQQldWmaUbMPL5Jis=#012np0005548790.localdomain,192.168.122.108,np0005548790* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPoyxTI8+8n9PWFBkZatum98GfJRQMd2qn9CijEFzfEz#012np0005548790.localdomain,192.168.122.108,np0005548790* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNOlnHgYu82mRZ1QroLe1BG6rymOGDqDJGz5MpHZnXnhJ6iIwC87em0cGHiSKgU+UZ4DpWQTIlxwKsn9Jp9Hl1Y=#012np0005548788.localdomain,192.168.122.106,np0005548788* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCxIoAQH9YZnGrAxYR5prFQwo6HY5mwdDjndb+bp2pwvtVLM4ABIdCi+K1wpbhOpoO7BsYOf/tdBqemvSDleNo/ZLh3v3MmoVtoTtQZqLWsAQWFgJCjcGUGB+H3CHhtbp706coVQMlGD+UQqpCBy8WamMB/Ldy+hSHbLHwzuMzj8tO90vUbEyuKgOuu/X3ZFa+Yjo/asQ+PTrVfirh1QvRQ9aK22xH89KbThA/1an4OjnNGLCP752auSQ894B21QLKfqaMGPlpbjU8Wr6MP4zKV9lUzpQiFr6IU6cd4CeIsJDj7FnAZuBSmi8ewgm/r4ZWkmCSlqw8OpMC5soJnm8Q4PJTIFvT9eyyFCh9xmQkMhzE8P332LtYjZ+vXhYFU14e04mOQx5UrtHN8uWJVbOAwtLNAcenHyRtCQGkAZ6f9q0OvSuYr+o3FhHhN5ABu32AKAD8YpkjLypi+PbaiKNQW8XzPAHHbV8CGZ4B09ZWeQY49VA0bPxIYBXd1mEBlXSE=#012np0005548788.localdomain,192.168.122.106,np0005548788* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJBkIOjRpLl815RvOqIZSSNUu/CGLqucfCRUist+ERWP#012np0005548788.localdomain,192.168.122.106,np0005548788* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNyEL9+sMn9BF0LnCanz9jbKQTm6FNV71J4qGFTonom0KXHpLL1p0eyrgFY0iwGH2UtwJ6VWm5bm2RaQJmObwZI=#012 create=True mode=0644 path=/tmp/ansible.u691m5zd state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 04:33:11 localhost python3.9[143778]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.u691m5zd' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 6 04:33:12 localhost python3.9[143872]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.u691m5zd state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 04:33:13 localhost systemd[1]: session-45.scope: Deactivated successfully.
Dec 6 04:33:13 localhost systemd[1]: session-45.scope: Consumed 4.162s CPU time.
Dec 6 04:33:13 localhost systemd-logind[766]: Session 45 logged out. Waiting for processes to exit.
Dec 6 04:33:13 localhost systemd-logind[766]: Removed session 45.
Dec 6 04:33:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24703 DF PROTO=TCP SPT=39358 DPT=9101 SEQ=2857460432 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CE9A730000000001030307)
Dec 6 04:33:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8041 DF PROTO=TCP SPT=38842 DPT=9105 SEQ=4216625217 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CEA3960000000001030307)
Dec 6 04:33:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25227 DF PROTO=TCP SPT=32874 DPT=9100 SEQ=2169528005 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CEA7680000000001030307)
Dec 6 04:33:19 localhost sshd[143888]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:33:19 localhost sshd[143890]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:33:20 localhost systemd-logind[766]: New session 46 of user zuul.
Dec 6 04:33:20 localhost systemd[1]: Started Session 46 of User zuul.
Dec 6 04:33:21 localhost python3.9[143983]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 6 04:33:22 localhost sshd[144080]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:33:22 localhost python3.9[144079]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec 6 04:33:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61947 DF PROTO=TCP SPT=37094 DPT=9882 SEQ=2579427058 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CEC0090000000001030307)
Dec 6 04:33:24 localhost python3.9[144175]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 6 04:33:25 localhost python3.9[144268]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 6 04:33:26 localhost python3.9[144361]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 6 04:33:26 localhost python3.9[144455]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 6 04:33:27 localhost python3.9[144550]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 04:33:28 localhost systemd-logind[766]: Session 46 logged out. Waiting for processes to exit.
Dec 6 04:33:28 localhost systemd[1]: session-46.scope: Deactivated successfully.
Dec 6 04:33:28 localhost systemd[1]: session-46.scope: Consumed 3.725s CPU time.
Dec 6 04:33:28 localhost systemd-logind[766]: Removed session 46.
Dec 6 04:33:33 localhost sshd[144566]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:33:33 localhost systemd-logind[766]: New session 47 of user zuul.
Dec 6 04:33:33 localhost systemd[1]: Started Session 47 of User zuul.
Dec 6 04:33:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27948 DF PROTO=TCP SPT=42274 DPT=9102 SEQ=3809184943 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CEE6360000000001030307)
Dec 6 04:33:34 localhost python3.9[144659]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 6 04:33:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27949 DF PROTO=TCP SPT=42274 DPT=9102 SEQ=3809184943 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CEEA2F0000000001030307)
Dec 6 04:33:35 localhost python3.9[144755]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 6 04:33:36 localhost python3.9[144809]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 6 04:33:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27950 DF PROTO=TCP SPT=42274 DPT=9102 SEQ=3809184943 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CEF22F0000000001030307)
Dec 6 04:33:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27951 DF PROTO=TCP SPT=42274 DPT=9102 SEQ=3809184943 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CF01EF0000000001030307)
Dec 6 04:33:41 localhost python3.9[144901]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 6 04:33:42 localhost sshd[145073]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:33:42 localhost python3.9[145057]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/reboot_required/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 04:33:43 localhost python3.9[145166]: ansible-ansible.builtin.file Invoked with mode=0600 path=/var/lib/openstack/reboot_required/needs_restarting state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 04:33:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53827 DF PROTO=TCP SPT=58770 DPT=9101 SEQ=1433347702 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CF0FA00000000001030307)
Dec 6 04:33:44 localhost python3.9[145258]: ansible-ansible.builtin.lineinfile Invoked with dest=/var/lib/openstack/reboot_required/needs_restarting line=Not root, Subscription Management repositories not updated#012Core libraries or services have been updated since boot-up:#012 * systemd#012#012Reboot is required to fully utilize these updates.#012More information: https://access.redhat.com/solutions/27943 path=/var/lib/openstack/reboot_required/needs_restarting state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 04:33:45 localhost python3.9[145348]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 6 04:33:45 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53828 DF PROTO=TCP SPT=58770 DPT=9101 SEQ=1433347702 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CF13AF0000000001030307)
Dec 6 04:33:46 localhost python3.9[145438]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 6 04:33:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9032 DF PROTO=TCP SPT=41690 DPT=9105 SEQ=2344888170 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CF18C40000000001030307)
Dec 6 04:33:46 localhost python3.9[145530]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 6 04:33:47 localhost systemd[1]: session-47.scope: Deactivated successfully.
Dec 6 04:33:47 localhost systemd[1]: session-47.scope: Consumed 8.803s CPU time.
Dec 6 04:33:47 localhost systemd-logind[766]: Session 47 logged out. Waiting for processes to exit.
Dec 6 04:33:47 localhost systemd-logind[766]: Removed session 47.
Dec 6 04:33:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53829 DF PROTO=TCP SPT=58770 DPT=9101 SEQ=1433347702 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CF1BAF0000000001030307)
Dec 6 04:33:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19904 DF PROTO=TCP SPT=60528 DPT=9100 SEQ=1211413549 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CF1C980000000001030307)
Dec 6 04:33:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9033 DF PROTO=TCP SPT=41690 DPT=9105 SEQ=2344888170 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CF1CB00000000001030307)
Dec 6 04:33:48 localhost sshd[145545]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:33:48 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19905 DF PROTO=TCP SPT=60528 DPT=9100 SEQ=1211413549 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CF20AF0000000001030307)
Dec 6 04:33:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9034 DF PROTO=TCP SPT=41690 DPT=9105 SEQ=2344888170 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CF24AF0000000001030307)
Dec 6 04:33:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9035 DF PROTO=TCP SPT=41690 DPT=9105 SEQ=2344888170 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CF346F0000000001030307)
Dec 6 04:33:54 localhost sshd[145547]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:33:54 localhost systemd-logind[766]: New session 48 of user zuul.
Dec 6 04:33:54 localhost systemd[1]: Started Session 48 of User zuul.
Dec 6 04:33:55 localhost python3.9[145640]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 6 04:33:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36175 DF PROTO=TCP SPT=59930 DPT=9882 SEQ=3639643898 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CF412F0000000001030307)
Dec 6 04:33:57 localhost python3.9[145736]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 6 04:33:58 localhost python3.9[145828]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 6 04:33:58 localhost sshd[145890]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:33:59 localhost python3.9[145903]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765013637.5897832-182-160766225013724/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=31b29ee7333177b2eb4f4f85549af35c3d0ec3b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 04:33:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53831 DF PROTO=TCP SPT=58770 DPT=9101 SEQ=1433347702 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CF4BEF0000000001030307)
Dec 6 04:33:59 localhost python3.9[145995]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-sriov setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 6 04:34:00 localhost python3.9[146087]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 6 04:34:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9036 DF PROTO=TCP SPT=41690 DPT=9105 SEQ=2344888170 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CF53EF0000000001030307)
Dec 6 04:34:02 localhost python3.9[146160]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765013640.0716984-253-139812967846941/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=31b29ee7333177b2eb4f4f85549af35c3d0ec3b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 04:34:02 localhost python3.9[146252]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-dhcp setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 6 04:34:04 localhost chronyd[137718]: Selected source 158.69.193.108 (pool.ntp.org)
Dec 6 04:34:04 localhost python3.9[146344]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 6 04:34:04 localhost python3.9[146417]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765013643.613141-341-259875819668189/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=31b29ee7333177b2eb4f4f85549af35c3d0ec3b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 04:34:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48060 DF PROTO=TCP SPT=56898 DPT=9102 SEQ=2227287577 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CF5F700000000001030307)
Dec 6 04:34:05 localhost python3.9[146509]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 6 04:34:06 localhost python3.9[146601]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 6 04:34:06 localhost sshd[146675]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:34:06 localhost python3.9[146674]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765013645.6995184-409-199463690800059/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=31b29ee7333177b2eb4f4f85549af35c3d0ec3b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 04:34:07 localhost python3.9[146768]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 6 04:34:07 localhost python3.9[146860]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 6 04:34:08 localhost python3.9[146933]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765013647.459166-476-14489716808621/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=31b29ee7333177b2eb4f4f85549af35c3d0ec3b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 04:34:09 localhost python3.9[147025]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 6 04:34:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36177 DF PROTO=TCP SPT=59930 DPT=9882 SEQ=3639643898 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CF71EF0000000001030307)
Dec 6 04:34:09 localhost python3.9[147117]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 6 04:34:10 localhost python3.9[147190]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765013649.2545688-548-11558135951373/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=31b29ee7333177b2eb4f4f85549af35c3d0ec3b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 04:34:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48062 DF PROTO=TCP SPT=56898 DPT=9102 SEQ=2227287577 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CF772F0000000001030307)
Dec 6 04:34:10 localhost python3.9[147282]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 6 04:34:11 localhost python3.9[147374]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 6 04:34:12 localhost python3.9[147447]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765013651.0013497-618-155047141128834/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=31b29ee7333177b2eb4f4f85549af35c3d0ec3b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None
local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:34:13 localhost python3.9[147539]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 6 04:34:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58534 DF PROTO=TCP SPT=45046 DPT=9101 SEQ=31981411 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CF84D00000000001030307) Dec 6 04:34:14 localhost python3.9[147631]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:34:15 localhost python3.9[147704]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765013653.197245-683-153019413141103/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=31b29ee7333177b2eb4f4f85549af35c3d0ec3b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:34:15 localhost sshd[147705]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:34:16 localhost systemd[1]: session-48.scope: Deactivated successfully. 
Dec 6 04:34:16 localhost systemd[1]: session-48.scope: Consumed 11.388s CPU time.
Dec 6 04:34:16 localhost systemd-logind[766]: Session 48 logged out. Waiting for processes to exit.
Dec 6 04:34:16 localhost systemd-logind[766]: Removed session 48.
Dec 6 04:34:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58536 DF PROTO=TCP SPT=45046 DPT=9101 SEQ=31981411 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CF90EF0000000001030307)
Dec 6 04:34:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21556 DF PROTO=TCP SPT=33844 DPT=9105 SEQ=1331080246 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CF99F00000000001030307)
Dec 6 04:34:21 localhost sshd[147721]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:34:21 localhost systemd-logind[766]: New session 49 of user zuul.
Dec 6 04:34:21 localhost systemd[1]: Started Session 49 of User zuul.
Dec 6 04:34:22 localhost python3.9[147816]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 04:34:23 localhost sshd[147868]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:34:23 localhost python3.9[147910]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 6 04:34:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21557 DF PROTO=TCP SPT=33844 DPT=9105 SEQ=1331080246 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CFA9AF0000000001030307)
Dec 6 04:34:24 localhost python3.9[147983]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765013662.9781508-63-116259745451274/.source.conf _original_basename=ceph.conf follow=False checksum=74b6793c28400fa0a16ce9abdc4efa82feeb961d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 04:34:24 localhost python3.9[148075]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 6 04:34:25 localhost python3.9[148148]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765013664.3649476-63-279652171569212/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=9d631b6552ddeaa0e75a39b18f2bdb583e0e85e3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 04:34:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36178 DF PROTO=TCP SPT=59930 DPT=9882 SEQ=3639643898 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CFB1EF0000000001030307)
Dec 6 04:34:25 localhost systemd-logind[766]: Session 49 logged out. Waiting for processes to exit.
Dec 6 04:34:25 localhost systemd[1]: session-49.scope: Deactivated successfully.
Dec 6 04:34:25 localhost systemd[1]: session-49.scope: Consumed 2.250s CPU time.
Dec 6 04:34:25 localhost systemd-logind[766]: Removed session 49.
Dec 6 04:34:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58538 DF PROTO=TCP SPT=45046 DPT=9101 SEQ=31981411 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CFC1EF0000000001030307)
Dec 6 04:34:31 localhost sshd[148163]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:34:31 localhost systemd-logind[766]: New session 50 of user zuul.
Dec 6 04:34:31 localhost systemd[1]: Started Session 50 of User zuul.
Dec 6 04:34:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21558 DF PROTO=TCP SPT=33844 DPT=9105 SEQ=1331080246 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CFC9EF0000000001030307)
Dec 6 04:34:32 localhost python3.9[148256]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 6 04:34:33 localhost python3.9[148352]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 6 04:34:34 localhost sshd[148444]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:34:34 localhost python3.9[148446]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 6 04:34:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51952 DF PROTO=TCP SPT=57376 DPT=9102 SEQ=1622180186 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CFD4AF0000000001030307)
Dec 6 04:34:34 localhost python3.9[148536]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 6 04:34:36 localhost python3.9[148628]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Dec 6 04:34:37 localhost python3.9[148720]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 6 04:34:38 localhost python3.9[148774]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch3.3'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 6 04:34:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42254 DF PROTO=TCP SPT=48326 DPT=9882 SEQ=995512109 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CFE5EF0000000001030307)
Dec 6 04:34:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51954 DF PROTO=TCP SPT=57376 DPT=9102 SEQ=1622180186 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CFEC6F0000000001030307)
Dec 6 04:34:43 localhost python3.9[148868]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 6 04:34:43 localhost podman[148974]: 2025-12-06 09:34:43.766349425 +0000 UTC m=+0.075854292 container exec ecefb99ff092b207f760d60e8fd97152c624df0e1016549024435ae641ee2e06 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548789, io.buildah.version=1.41.4, architecture=x86_64, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , RELEASE=main, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, vendor=Red Hat, Inc., version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 6 04:34:43 localhost podman[148974]: 2025-12-06 09:34:43.898246831 +0000 UTC m=+0.207751728 container exec_died ecefb99ff092b207f760d60e8fd97152c624df0e1016549024435ae641ee2e06 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548789, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, ceph=True, maintainer=Guillaume Abrioux , version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, release=1763362218, architecture=x86_64, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, io.openshift.expose-services=, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, vcs-type=git)
Dec 6 04:34:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2759 DF PROTO=TCP SPT=33812 DPT=9101 SEQ=3519393 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52CFFA000000000001030307)
Dec 6 04:34:44 localhost python3[149180]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks#012 rule:#012 proto: udp#012 dport: 4789#012- rule_name: 119 neutron geneve networks#012 rule:#012 proto: udp#012 dport: 6081#012 state: ["UNTRACKED"]#012- rule_name: 120 neutron geneve networks no conntrack#012 rule:#012 proto: udp#012 dport: 6081#012 table: raw#012 chain: OUTPUT#012 jump: NOTRACK#012 action: append#012 state: []#012- rule_name: 121 neutron geneve networks no conntrack#012 rule:#012 proto: udp#012 dport: 6081#012 table: raw#012 chain: PREROUTING#012 jump: NOTRACK#012 action: append#012 state: []#012 dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Dec 6 04:34:45 localhost python3.9[149304]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 04:34:46 localhost python3.9[149396]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 6 04:34:47 localhost python3.9[149444]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 04:34:47 localhost sshd[149445]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:34:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2761 DF PROTO=TCP SPT=33812 DPT=9101 SEQ=3519393 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D005EF0000000001030307)
Dec 6 04:34:48 localhost python3.9[149538]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 6 04:34:48 localhost python3.9[149586]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.gmuk7otn recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 04:34:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31193 DF PROTO=TCP SPT=49592 DPT=9105 SEQ=1936642673 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D00F2F0000000001030307)
Dec 6 04:34:50 localhost python3.9[149678]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 6 04:34:50 localhost python3.9[149727]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 04:34:51 localhost python3.9[149819]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 6 04:34:51 localhost sshd[149835]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:34:52 localhost python3[149914]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 6 04:34:52 localhost python3.9[150006]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 6 04:34:53 localhost python3.9[150081]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765013692.4487917-432-909690626717/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 04:34:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31194 DF PROTO=TCP SPT=49592 DPT=9105 SEQ=1936642673 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D01EEF0000000001030307)
Dec 6 04:34:54 localhost python3.9[150173]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 6 04:34:54 localhost python3.9[150248]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765013693.8711767-477-21938630296334/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 04:34:55 localhost python3.9[150340]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 6 04:34:56 localhost python3.9[150415]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765013695.1787555-522-212090974891440/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 04:34:56 localhost python3.9[150507]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 6 04:34:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56313 DF PROTO=TCP SPT=42750 DPT=9882 SEQ=627142310 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D02BAF0000000001030307)
Dec 6 04:34:57 localhost python3.9[150582]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765013696.401759-567-237851580408851/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 04:34:58 localhost python3.9[150674]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 6 04:34:58 localhost python3.9[150749]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765013697.5907106-612-157607836935768/.source.nft follow=False _original_basename=ruleset.j2 checksum=eb691bdb7d792c5f8ff0d719e807fe1c95b09438 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 04:34:59 localhost python3.9[150841]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 04:34:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2763 DF PROTO=TCP SPT=33812 DPT=9101 SEQ=3519393 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D035EF0000000001030307)
Dec 6 04:35:00 localhost python3.9[150933]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 6 04:35:02 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31195 DF PROTO=TCP SPT=49592 DPT=9105 SEQ=1936642673 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D03FEF0000000001030307)
Dec 6 04:35:02 localhost python3.9[151028]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 04:35:03 localhost python3.9[151120]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 6 04:35:04 localhost python3.9[151213]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 6 04:35:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17632 DF PROTO=TCP SPT=60550 DPT=9102 SEQ=3977837278 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D049EF0000000001030307)
Dec 6 04:35:04 localhost python3.9[151307]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 6 04:35:05 localhost python3.9[151402]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 04:35:07 localhost python3.9[151492]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 6 04:35:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48065 DF PROTO=TCP SPT=56898 DPT=9102 SEQ=2227287577 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D055EF0000000001030307)
Dec 6 04:35:08 localhost python3.9[151585]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=np0005548789.localdomain external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:3e:0a:a2:0d:dc:1c" external_ids:ovn-encap-ip=172.19.0.107 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=tcp:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 6 04:35:08 localhost ovs-vsctl[151586]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=np0005548789.localdomain external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:3e:0a:a2:0d:dc:1c external_ids:ovn-encap-ip=172.19.0.107 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=tcp:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Dec 6 04:35:09 localhost python3.9[151678]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 6 04:35:09 localhost python3.9[151771]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 6 04:35:10 localhost python3.9[151865]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 6 04:35:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17634 DF PROTO=TCP SPT=60550 DPT=9102 SEQ=3977837278 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D061AF0000000001030307) Dec 6 04:35:11 localhost python3.9[151957]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:35:11 localhost python3.9[152005]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 6 04:35:12 localhost python3.9[152097]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:35:12 localhost python3.9[152145]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t 
dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 6 04:35:13 localhost python3.9[152238]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:35:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62160 DF PROTO=TCP SPT=50974 DPT=9101 SEQ=3447700527 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D06F310000000001030307) Dec 6 04:35:14 localhost python3.9[152330]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:35:15 localhost sshd[152377]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:35:15 localhost python3.9[152380]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None 
modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:35:15 localhost python3.9[152472]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:35:16 localhost python3.9[152520]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:35:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62162 DF PROTO=TCP SPT=50974 DPT=9101 SEQ=3447700527 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D07B2F0000000001030307) Dec 6 04:35:17 localhost python3.9[152612]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None Dec 6 04:35:17 localhost systemd[1]: Reloading. Dec 6 04:35:17 localhost systemd-sysv-generator[152643]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 6 04:35:17 localhost systemd-rc-local-generator[152637]: /etc/rc.d/rc.local is not marked executable, skipping. 
Dec 6 04:35:17 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 6 04:35:18 localhost python3.9[152743]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:35:19 localhost python3.9[152791]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:35:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26543 DF PROTO=TCP SPT=41638 DPT=9105 SEQ=2004128370 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D084700000000001030307) Dec 6 04:35:20 localhost python3.9[152883]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:35:20 localhost python3.9[152931]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S 
access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:35:21 localhost python3.9[153023]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None Dec 6 04:35:21 localhost systemd[1]: Reloading. Dec 6 04:35:21 localhost systemd-sysv-generator[153050]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 6 04:35:21 localhost systemd-rc-local-generator[153046]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 6 04:35:21 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 6 04:35:21 localhost systemd[1]: Starting Create netns directory... Dec 6 04:35:21 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. Dec 6 04:35:21 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Dec 6 04:35:21 localhost systemd[1]: Finished Create netns directory. 
Dec 6 04:35:23 localhost sshd[153160]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:35:23 localhost python3.9[153159]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 6 04:35:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26544 DF PROTO=TCP SPT=41638 DPT=9105 SEQ=2004128370 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D094300000000001030307) Dec 6 04:35:24 localhost python3.9[153253]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:35:24 localhost python3.9[153326]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765013723.7786076-1344-86884193586755/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None Dec 6 04:35:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56316 DF PROTO=TCP SPT=42750 DPT=9882 SEQ=627142310 ACK=0 WINDOW=32640 
RES=0x00 SYN URGP=0 OPT (020405500402080A52D09BF00000000001030307) Dec 6 04:35:26 localhost python3.9[153418]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 6 04:35:26 localhost python3.9[153510]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:35:28 localhost python3.9[153585]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765013726.4736302-1419-168476942466128/.source.json _original_basename=.4r3uqdxk follow=False checksum=38f75f59f5c2ef6b5da12297bfd31cd1e97012ac backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:35:28 localhost python3.9[153677]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:35:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 
DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62164 DF PROTO=TCP SPT=50974 DPT=9101 SEQ=3447700527 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D0ABEF0000000001030307) Dec 6 04:35:31 localhost python3.9[153934]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False Dec 6 04:35:31 localhost python3.9[154026]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Dec 6 04:35:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26545 DF PROTO=TCP SPT=41638 DPT=9105 SEQ=2004128370 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D0B3EF0000000001030307) Dec 6 04:35:32 localhost python3.9[154118]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None Dec 6 04:35:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17804 DF PROTO=TCP SPT=45458 DPT=9102 SEQ=3201948792 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D0BEF00000000001030307) Dec 6 04:35:37 localhost python3[154237]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False Dec 6 04:35:37 localhost python3[154237]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012 {#012 "Id": "3a37a52861b2e44ebd2a63ca2589a7c9d8e4119e5feace9d19c6312ed9b8421c",#012 "Digest": "sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c",#012 "RepoTags": [#012 
"quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified"#012 ],#012 "RepoDigests": [#012 "quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c"#012 ],#012 "Parent": "",#012 "Comment": "",#012 "Created": "2025-12-01T06:38:47.246477714Z",#012 "Config": {#012 "User": "root",#012 "Env": [#012 "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012 "LANG=en_US.UTF-8",#012 "TZ=UTC",#012 "container=oci"#012 ],#012 "Entrypoint": [#012 "dumb-init",#012 "--single-child",#012 "--"#012 ],#012 "Cmd": [#012 "kolla_start"#012 ],#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251125",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",#012 "tcib_managed": "true"#012 },#012 "StopSignal": "SIGTERM"#012 },#012 "Version": "",#012 "Author": "",#012 "Architecture": "amd64",#012 "Os": "linux",#012 "Size": 345722821,#012 "VirtualSize": 345722821,#012 "GraphDriver": {#012 "Name": "overlay",#012 "Data": {#012 "LowerDir": "/var/lib/containers/storage/overlay/06baa34adcac19ffd1cac321f0c14e5e32037c7b357d2eb54e065b4d177d72fd/diff:/var/lib/containers/storage/overlay/ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9/diff:/var/lib/containers/storage/overlay/cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa/diff",#012 "UpperDir": "/var/lib/containers/storage/overlay/0dae0ae2501f0b947a8e64948b264823feec8c7ddb8b7849cb102fbfe0c75da8/diff",#012 "WorkDir": "/var/lib/containers/storage/overlay/0dae0ae2501f0b947a8e64948b264823feec8c7ddb8b7849cb102fbfe0c75da8/work"#012 }#012 },#012 "RootFS": {#012 "Type": "layers",#012 "Layers": [#012 
"sha256:cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa",#012 "sha256:d26dbee55abfd9d572bfbbd4b765c5624affd9ef117ad108fb34be41e199a619",#012 "sha256:ba9362d2aeb297e34b0679b2fc8168350c70a5b0ec414daf293bf2bc013e9088",#012 "sha256:aae3b8a85314314b9db80a043fdf3f3b1d0b69927faca0303c73969a23dddd0f"#012 ]#012 },#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251125",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",#012 "tcib_managed": "true"#012 },#012 "Annotations": {},#012 "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012 "User": "root",#012 "History": [#012 {#012 "created": "2025-11-25T04:02:36.223494528Z",#012 "created_by": "/bin/sh -c #(nop) ADD file:cacf1a97b4abfca5db2db22f7ddbca8fd7daa5076a559639c109f09aaf55871d in / ",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-25T04:02:36.223562059Z",#012 "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\" org.label-schema.name=\"CentOS Stream 9 Base Image\" org.label-schema.vendor=\"CentOS\" org.label-schema.license=\"GPLv2\" org.label-schema.build-date=\"20251125\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-25T04:02:39.054452717Z",#012 "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"#012 },#012 {#012 "created": "2025-12-01T06:09:28.025707917Z",#012 "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",#012 "comment": "FROM quay.io/centos/centos:stream9",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025744608Z",#012 "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025767729Z",#012 
"created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025791379Z",#012 "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.02581523Z",#012 "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025867611Z",#012 "created_by": "/bin/sh -c #(nop) USER root",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.469442331Z",#012 "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:10:02.029095017Z",#012 "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:10:05.672474685Z",#012 "created_by": "/bin/sh -c dnf install -y ca-certificates dumb-init glibc-langpack-en procps-ng python3 sudo util-l Dec 6 04:35:37 localhost podman[154288]: 2025-12-06 09:35:37.897191792 +0000 UTC m=+0.069944319 container remove 1083479e63c7b72a6fd8d844623912e360922ce61d1a1bd165c8ac307f9bd076 
(image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, version=17.1.12, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, io.buildah.version=1.41.4, architecture=x86_64, release=1761123044, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ovn-controller, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vendor=Red Hat, Inc., io.openshift.expose-services=, container_name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, managed_by=tripleo_ansible, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Dec 6 04:35:37 
localhost python3[154237]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman rm --force ovn_controller Dec 6 04:35:37 localhost podman[154302]: Dec 6 04:35:37 localhost podman[154302]: 2025-12-06 09:35:37.975594533 +0000 UTC m=+0.065211091 container create 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Dec 6 04:35:37 localhost podman[154302]: 2025-12-06 09:35:37.936408573 +0000 UTC m=+0.026025171 image pull quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified Dec 6 04:35:37 localhost python3[154237]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS 
--healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified Dec 6 04:35:38 localhost python3.9[154432]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 6 04:35:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2003 DF PROTO=TCP SPT=40512 DPT=9882 SEQ=469531224 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT 
(020405500402080A52D0D1F00000000001030307) Dec 6 04:35:39 localhost python3.9[154526]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:35:40 localhost python3.9[154572]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 6 04:35:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17806 DF PROTO=TCP SPT=45458 DPT=9102 SEQ=3201948792 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D0D6B00000000001030307) Dec 6 04:35:40 localhost python3.9[154663]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765013740.2090127-1683-273520237774633/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:35:41 localhost sshd[154710]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:35:41 localhost python3.9[154709]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Dec 6 04:35:41 localhost systemd[1]: Reloading. 
Dec 6 04:35:41 localhost systemd-rc-local-generator[154737]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 6 04:35:41 localhost systemd-sysv-generator[154743]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 6 04:35:41 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 6 04:35:42 localhost sshd[154794]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:35:42 localhost python3.9[154793]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 6 04:35:42 localhost systemd[1]: Reloading. Dec 6 04:35:42 localhost systemd-rc-local-generator[154824]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 6 04:35:42 localhost systemd-sysv-generator[154827]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 6 04:35:42 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 6 04:35:43 localhost systemd[1]: Starting ovn_controller container... Dec 6 04:35:43 localhost systemd[1]: Started libcrun container. 
Dec 6 04:35:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c05aff0a12864d1bd5bcddcfda0418c2fac87ac5e10778af1cef421189be2d3/merged/run/ovn supports timestamps until 2038 (0x7fffffff) Dec 6 04:35:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5. Dec 6 04:35:43 localhost podman[154836]: 2025-12-06 09:35:43.321816754 +0000 UTC m=+0.151528859 container init 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ovn_controller, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Dec 6 04:35:43 localhost systemd[1]: tmp-crun.97ZgYg.mount: Deactivated successfully. 
Dec 6 04:35:43 localhost ovn_controller[154851]: + sudo -E kolla_set_configs Dec 6 04:35:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5. Dec 6 04:35:43 localhost podman[154836]: 2025-12-06 09:35:43.364308608 +0000 UTC m=+0.194020673 container start 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 04:35:43 localhost edpm-start-podman-container[154836]: ovn_controller Dec 6 04:35:43 localhost systemd[1]: Created slice User Slice of UID 0. Dec 6 04:35:43 localhost systemd[1]: Starting User Runtime Directory /run/user/0... Dec 6 04:35:43 localhost systemd[1]: Finished User Runtime Directory /run/user/0. 
Dec 6 04:35:43 localhost systemd[1]: Starting User Manager for UID 0... Dec 6 04:35:43 localhost podman[154859]: 2025-12-06 09:35:43.454688992 +0000 UTC m=+0.087352541 container health_status 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, managed_by=edpm_ansible) Dec 6 04:35:43 localhost podman[154859]: 2025-12-06 09:35:43.546832641 +0000 UTC m=+0.179496180 container exec_died 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, 
org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, managed_by=edpm_ansible, container_name=ovn_controller) Dec 6 04:35:43 localhost podman[154859]: unhealthy Dec 6 04:35:43 localhost systemd[1]: 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.service: Main process exited, code=exited, status=1/FAILURE Dec 6 04:35:43 localhost systemd[1]: 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.service: Failed with result 'exit-code'. Dec 6 04:35:43 localhost edpm-start-podman-container[154835]: Creating additional drop-in dependency for "ovn_controller" (0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5) Dec 6 04:35:43 localhost systemd[154885]: Queued start job for default target Main User Target. Dec 6 04:35:43 localhost systemd[1]: Reloading. Dec 6 04:35:43 localhost systemd-journald[47810]: Field hash table of /run/log/journal/4b30904fc4748c16d0c72dbebcabab49/system.journal has a fill level at 75.1 (250 of 333 items), suggesting rotation. 
Dec 6 04:35:43 localhost systemd-journald[47810]: /run/log/journal/4b30904fc4748c16d0c72dbebcabab49/system.journal: Journal header limits reached or header out-of-date, rotating. Dec 6 04:35:43 localhost rsyslogd[760]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Dec 6 04:35:43 localhost systemd[154885]: Created slice User Application Slice. Dec 6 04:35:43 localhost systemd[154885]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system). Dec 6 04:35:43 localhost systemd[154885]: Started Daily Cleanup of User's Temporary Directories. Dec 6 04:35:43 localhost systemd[154885]: Reached target Paths. Dec 6 04:35:43 localhost systemd[154885]: Reached target Timers. Dec 6 04:35:43 localhost systemd[154885]: Starting D-Bus User Message Bus Socket... Dec 6 04:35:43 localhost systemd[154885]: Starting Create User's Volatile Files and Directories... Dec 6 04:35:43 localhost systemd[154885]: Listening on D-Bus User Message Bus Socket. Dec 6 04:35:43 localhost systemd[154885]: Reached target Sockets. Dec 6 04:35:43 localhost systemd[154885]: Finished Create User's Volatile Files and Directories. Dec 6 04:35:43 localhost systemd[154885]: Reached target Basic System. Dec 6 04:35:43 localhost systemd[154885]: Reached target Main User Target. Dec 6 04:35:43 localhost systemd[154885]: Startup finished in 141ms. Dec 6 04:35:43 localhost rsyslogd[760]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Dec 6 04:35:43 localhost systemd-sysv-generator[154942]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 6 04:35:43 localhost systemd-rc-local-generator[154937]: /etc/rc.d/rc.local is not marked executable, skipping. 
Dec 6 04:35:43 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 6 04:35:43 localhost systemd[1]: Started User Manager for UID 0. Dec 6 04:35:43 localhost systemd[1]: Started ovn_controller container. Dec 6 04:35:43 localhost systemd[1]: Started Session c12 of User root. Dec 6 04:35:43 localhost ovn_controller[154851]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Dec 6 04:35:43 localhost ovn_controller[154851]: INFO:__main__:Validating config file Dec 6 04:35:43 localhost ovn_controller[154851]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Dec 6 04:35:43 localhost ovn_controller[154851]: INFO:__main__:Writing out command to execute Dec 6 04:35:43 localhost systemd[1]: session-c12.scope: Deactivated successfully. Dec 6 04:35:43 localhost ovn_controller[154851]: ++ cat /run_command Dec 6 04:35:43 localhost ovn_controller[154851]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock ' Dec 6 04:35:43 localhost ovn_controller[154851]: + ARGS= Dec 6 04:35:43 localhost ovn_controller[154851]: + sudo kolla_copy_cacerts Dec 6 04:35:43 localhost systemd[1]: Started Session c13 of User root. Dec 6 04:35:43 localhost systemd[1]: session-c13.scope: Deactivated successfully. Dec 6 04:35:43 localhost ovn_controller[154851]: + [[ ! -n '' ]] Dec 6 04:35:43 localhost ovn_controller[154851]: + . 
kolla_extend_start Dec 6 04:35:43 localhost ovn_controller[154851]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock ' Dec 6 04:35:43 localhost ovn_controller[154851]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock '\''' Dec 6 04:35:43 localhost ovn_controller[154851]: + umask 0022 Dec 6 04:35:43 localhost ovn_controller[154851]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock Dec 6 04:35:43 localhost ovn_controller[154851]: 2025-12-06T09:35:43Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting... Dec 6 04:35:43 localhost ovn_controller[154851]: 2025-12-06T09:35:43Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected Dec 6 04:35:44 localhost ovn_controller[154851]: 2025-12-06T09:35:44Z|00003|main|INFO|OVN internal version is : [24.03.8-20.33.0-76.8] Dec 6 04:35:44 localhost ovn_controller[154851]: 2025-12-06T09:35:44Z|00004|main|INFO|OVS IDL reconnected, force recompute. Dec 6 04:35:44 localhost ovn_controller[154851]: 2025-12-06T09:35:44Z|00005|reconnect|INFO|tcp:ovsdbserver-sb.openstack.svc:6642: connecting... Dec 6 04:35:44 localhost ovn_controller[154851]: 2025-12-06T09:35:44Z|00006|main|INFO|OVNSB IDL reconnected, force recompute. Dec 6 04:35:44 localhost ovn_controller[154851]: 2025-12-06T09:35:44Z|00007|reconnect|INFO|tcp:ovsdbserver-sb.openstack.svc:6642: connected Dec 6 04:35:44 localhost ovn_controller[154851]: 2025-12-06T09:35:44Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch Dec 6 04:35:44 localhost ovn_controller[154851]: 2025-12-06T09:35:44Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting... 
Dec 6 04:35:44 localhost ovn_controller[154851]: 2025-12-06T09:35:44Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported Dec 6 04:35:44 localhost ovn_controller[154851]: 2025-12-06T09:35:44Z|00011|features|INFO|OVS Feature: ct_flush, state: supported Dec 6 04:35:44 localhost ovn_controller[154851]: 2025-12-06T09:35:44Z|00012|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting... Dec 6 04:35:44 localhost ovn_controller[154851]: 2025-12-06T09:35:44Z|00013|main|INFO|OVS feature set changed, force recompute. Dec 6 04:35:44 localhost ovn_controller[154851]: 2025-12-06T09:35:44Z|00014|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch Dec 6 04:35:44 localhost ovn_controller[154851]: 2025-12-06T09:35:44Z|00015|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting... Dec 6 04:35:44 localhost ovn_controller[154851]: 2025-12-06T09:35:44Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected Dec 6 04:35:44 localhost ovn_controller[154851]: 2025-12-06T09:35:44Z|00017|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms) Dec 6 04:35:44 localhost ovn_controller[154851]: 2025-12-06T09:35:44Z|00018|main|INFO|OVS OpenFlow connection reconnected,force recompute. Dec 6 04:35:44 localhost ovn_controller[154851]: 2025-12-06T09:35:44Z|00019|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected Dec 6 04:35:44 localhost ovn_controller[154851]: 2025-12-06T09:35:44Z|00020|reconnect|INFO|unix:/run/openvswitch/db.sock: connected Dec 6 04:35:44 localhost ovn_controller[154851]: 2025-12-06T09:35:44Z|00021|main|INFO|OVS feature set changed, force recompute. 
Dec 6 04:35:44 localhost ovn_controller[154851]: 2025-12-06T09:35:44Z|00022|ovn_bfd|INFO|Disabled BFD on interface ovn-bd2a75-0 Dec 6 04:35:44 localhost ovn_controller[154851]: 2025-12-06T09:35:44Z|00023|ovn_bfd|INFO|Disabled BFD on interface ovn-ca3c1f-0 Dec 6 04:35:44 localhost ovn_controller[154851]: 2025-12-06T09:35:44Z|00024|ovn_bfd|INFO|Disabled BFD on interface ovn-afa07b-0 Dec 6 04:35:44 localhost ovn_controller[154851]: 2025-12-06T09:35:44Z|00025|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4 Dec 6 04:35:44 localhost ovn_controller[154851]: 2025-12-06T09:35:44Z|00026|binding|INFO|Claiming lport 86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b for this chassis. Dec 6 04:35:44 localhost ovn_controller[154851]: 2025-12-06T09:35:44Z|00027|binding|INFO|86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b: Claiming fa:16:3e:64:77:f3 192.168.0.162 Dec 6 04:35:44 localhost ovn_controller[154851]: 2025-12-06T09:35:44Z|00028|binding|INFO|Releasing lport 4fb81ffd-e198-4628-9bd0-0c0f0c89c33a from this chassis (sb_readonly=0) Dec 6 04:35:44 localhost ovn_controller[154851]: 2025-12-06T09:35:44Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch Dec 6 04:35:44 localhost ovn_controller[154851]: 2025-12-06T09:35:44Z|00029|binding|INFO|Removing lport 86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b ovn-installed in OVS Dec 6 04:35:44 localhost ovn_controller[154851]: 2025-12-06T09:35:44Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch Dec 6 04:35:44 localhost ovn_controller[154851]: 2025-12-06T09:35:44Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting... Dec 6 04:35:44 localhost ovn_controller[154851]: 2025-12-06T09:35:44Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting... 
Dec 6 04:35:44 localhost ovn_controller[154851]: 2025-12-06T09:35:44Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected Dec 6 04:35:44 localhost ovn_controller[154851]: 2025-12-06T09:35:44Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected Dec 6 04:35:44 localhost ovn_controller[154851]: 2025-12-06T09:35:44Z|00030|ovn_bfd|INFO|Enabled BFD on interface ovn-bd2a75-0 Dec 6 04:35:44 localhost ovn_controller[154851]: 2025-12-06T09:35:44Z|00031|ovn_bfd|INFO|Enabled BFD on interface ovn-ca3c1f-0 Dec 6 04:35:44 localhost ovn_controller[154851]: 2025-12-06T09:35:44Z|00032|ovn_bfd|INFO|Enabled BFD on interface ovn-afa07b-0 Dec 6 04:35:44 localhost ovn_controller[154851]: 2025-12-06T09:35:44Z|00033|binding|INFO|Releasing lport 4fb81ffd-e198-4628-9bd0-0c0f0c89c33a from this chassis (sb_readonly=0) Dec 6 04:35:44 localhost ovn_controller[154851]: 2025-12-06T09:35:44Z|00034|binding|INFO|Releasing lport 4fb81ffd-e198-4628-9bd0-0c0f0c89c33a from this chassis (sb_readonly=0) Dec 6 04:35:44 localhost ovn_controller[154851]: 2025-12-06T09:35:44Z|00035|binding|INFO|Releasing lport 4fb81ffd-e198-4628-9bd0-0c0f0c89c33a from this chassis (sb_readonly=0) Dec 6 04:35:44 localhost ovn_controller[154851]: 2025-12-06T09:35:44Z|00036|binding|INFO|Releasing lport 4fb81ffd-e198-4628-9bd0-0c0f0c89c33a from this chassis (sb_readonly=0) Dec 6 04:35:44 localhost ovn_controller[154851]: 2025-12-06T09:35:44Z|00037|binding|INFO|Releasing lport 4fb81ffd-e198-4628-9bd0-0c0f0c89c33a from this chassis (sb_readonly=0) Dec 6 04:35:44 localhost ovn_controller[154851]: 2025-12-06T09:35:44Z|00038|binding|INFO|Releasing lport 4fb81ffd-e198-4628-9bd0-0c0f0c89c33a from this chassis (sb_readonly=0) Dec 6 04:35:44 localhost sshd[154993]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:35:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 
PREC=0x00 TTL=62 ID=52518 DF PROTO=TCP SPT=57100 DPT=9101 SEQ=4215202201 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D0E4600000000001030307) Dec 6 04:35:44 localhost python3.9[155055]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 04:35:44 localhost ovs-vsctl[155056]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload Dec 6 04:35:45 localhost ovn_controller[154851]: 2025-12-06T09:35:45Z|00039|binding|INFO|Releasing lport 4fb81ffd-e198-4628-9bd0-0c0f0c89c33a from this chassis (sb_readonly=0) Dec 6 04:35:45 localhost ovn_controller[154851]: 2025-12-06T09:35:45Z|00040|binding|INFO|Releasing lport 4fb81ffd-e198-4628-9bd0-0c0f0c89c33a from this chassis (sb_readonly=0) Dec 6 04:35:45 localhost python3.9[155148]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 04:35:45 localhost ovs-vsctl[155150]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids Dec 6 04:35:45 localhost ovn_controller[154851]: 2025-12-06T09:35:45Z|00041|binding|INFO|Releasing lport 4fb81ffd-e198-4628-9bd0-0c0f0c89c33a from this chassis (sb_readonly=0) Dec 6 04:35:46 localhost python3.9[155293]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . 
external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 04:35:46 localhost ovs-vsctl[155306]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options Dec 6 04:35:47 localhost systemd[1]: session-50.scope: Deactivated successfully. Dec 6 04:35:47 localhost systemd[1]: session-50.scope: Consumed 39.693s CPU time. Dec 6 04:35:47 localhost systemd-logind[766]: Session 50 logged out. Waiting for processes to exit. Dec 6 04:35:47 localhost systemd-logind[766]: Removed session 50. Dec 6 04:35:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52520 DF PROTO=TCP SPT=57100 DPT=9101 SEQ=4215202201 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D0F06F0000000001030307) Dec 6 04:35:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15410 DF PROTO=TCP SPT=45386 DPT=9105 SEQ=486432858 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D0F9AF0000000001030307) Dec 6 04:35:52 localhost ovn_controller[154851]: 2025-12-06T09:35:52Z|00042|binding|INFO|Setting lport 86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b ovn-installed in OVS Dec 6 04:35:52 localhost ovn_controller[154851]: 2025-12-06T09:35:52Z|00043|binding|INFO|Setting lport 86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b up in Southbound Dec 6 04:35:52 localhost sshd[155336]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:35:52 localhost systemd-logind[766]: New session 52 of user zuul. Dec 6 04:35:52 localhost systemd[1]: Started Session 52 of User zuul. 
Dec 6 04:35:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15411 DF PROTO=TCP SPT=45386 DPT=9105 SEQ=486432858 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D109700000000001030307) Dec 6 04:35:53 localhost python3.9[155429]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Dec 6 04:35:54 localhost systemd[1]: Stopping User Manager for UID 0... Dec 6 04:35:54 localhost systemd[154885]: Activating special unit Exit the Session... Dec 6 04:35:54 localhost systemd[154885]: Stopped target Main User Target. Dec 6 04:35:54 localhost systemd[154885]: Stopped target Basic System. Dec 6 04:35:54 localhost systemd[154885]: Stopped target Paths. Dec 6 04:35:54 localhost systemd[154885]: Stopped target Sockets. Dec 6 04:35:54 localhost systemd[154885]: Stopped target Timers. Dec 6 04:35:54 localhost systemd[154885]: Stopped Daily Cleanup of User's Temporary Directories. Dec 6 04:35:54 localhost systemd[154885]: Closed D-Bus User Message Bus Socket. Dec 6 04:35:54 localhost systemd[154885]: Stopped Create User's Volatile Files and Directories. Dec 6 04:35:54 localhost systemd[154885]: Removed slice User Application Slice. Dec 6 04:35:54 localhost systemd[154885]: Reached target Shutdown. Dec 6 04:35:54 localhost systemd[154885]: Finished Exit the Session. Dec 6 04:35:54 localhost systemd[154885]: Reached target Exit the Session. Dec 6 04:35:54 localhost systemd[1]: user@0.service: Deactivated successfully. Dec 6 04:35:54 localhost systemd[1]: Stopped User Manager for UID 0. Dec 6 04:35:54 localhost systemd[1]: Stopping User Runtime Directory /run/user/0... Dec 6 04:35:54 localhost systemd[1]: run-user-0.mount: Deactivated successfully. Dec 6 04:35:54 localhost systemd[1]: user-runtime-dir@0.service: Deactivated successfully. 
Dec 6 04:35:54 localhost systemd[1]: Stopped User Runtime Directory /run/user/0. Dec 6 04:35:54 localhost systemd[1]: Removed slice User Slice of UID 0. Dec 6 04:35:54 localhost sshd[155449]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:35:55 localhost python3.9[155528]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Dec 6 04:35:55 localhost python3.9[155620]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 6 04:35:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2004 DF PROTO=TCP SPT=40512 DPT=9882 SEQ=469531224 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D111EF0000000001030307) Dec 6 04:35:56 localhost python3.9[155712]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 6 04:35:56 
localhost sshd[155805]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:35:56 localhost python3.9[155804]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 6 04:35:57 localhost python3.9[155898]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 6 04:35:58 localhost python3.9[155988]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Dec 6 04:35:59 localhost python3.9[156080]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False Dec 6 04:35:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52522 DF PROTO=TCP SPT=57100 DPT=9101 SEQ=4215202201 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D11FEF0000000001030307) Dec 6 04:36:00 localhost python3.9[156170]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:36:00 localhost 
python3.9[156243]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765013759.4515567-219-15692034506086/.source follow=False _original_basename=haproxy.j2 checksum=95c62e64c8f82dd9393a560d1b052dc98d38f810 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 6 04:36:01 localhost python3.9[156333]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:36:02 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15412 DF PROTO=TCP SPT=45386 DPT=9105 SEQ=486432858 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D129EF0000000001030307) Dec 6 04:36:02 localhost python3.9[156407]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765013760.890937-264-80620613485043/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 6 04:36:03 localhost python3.9[156499]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Dec 6 04:36:03 localhost ovn_controller[154851]: 2025-12-06T09:36:03Z|00044|memory|INFO|17148 kB peak 
resident set size after 19.6 seconds Dec 6 04:36:03 localhost ovn_controller[154851]: 2025-12-06T09:36:03Z|00045|memory|INFO|idl-cells-OVN_Southbound:4028 idl-cells-Open_vSwitch:1045 if_status_mgr_ifaces_state_usage-KB:1 if_status_mgr_ifaces_usage-KB:1 lflow-cache-entries-cache-expr:76 lflow-cache-entries-cache-matches:195 lflow-cache-size-KB:289 local_datapath_usage-KB:1 ofctrl_desired_flow_usage-KB:154 ofctrl_installed_flow_usage-KB:111 ofctrl_sb_flow_ref_usage-KB:67 Dec 6 04:36:04 localhost sshd[156554]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:36:04 localhost python3.9[156553]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch3.3'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Dec 6 04:36:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62977 DF PROTO=TCP SPT=55758 DPT=9102 SEQ=1114809180 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D134300000000001030307) Dec 6 04:36:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17637 DF PROTO=TCP SPT=60550 DPT=9102 SEQ=3977837278 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D13FEF0000000001030307) Dec 6 04:36:08 localhost python3.9[156649]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started 
daemon_reload=False daemon_reexec=False scope=system no_block=False force=None Dec 6 04:36:09 localhost python3.9[156742]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:36:09 localhost python3.9[156813]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765013768.8289354-375-253776304143022/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 6 04:36:10 localhost python3.9[156903]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:36:10 localhost python3.9[156974]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765013769.83361-375-181523447643519/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 6 04:36:10 localhost 
kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62979 DF PROTO=TCP SPT=55758 DPT=9102 SEQ=1114809180 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D14BF00000000001030307) Dec 6 04:36:12 localhost python3.9[157064]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:36:12 localhost python3.9[157135]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765013771.6072345-507-171255576961655/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=aa9e89725fbcebf7a5c773d7b97083445b7b7759 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 6 04:36:13 localhost python3.9[157225]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:36:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5. Dec 6 04:36:13 localhost systemd[1]: tmp-crun.O4f3va.mount: Deactivated successfully. 
Dec 6 04:36:13 localhost podman[157297]: 2025-12-06 09:36:13.931720814 +0000 UTC m=+0.083535455 container health_status 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 04:36:13 localhost podman[157297]: 2025-12-06 09:36:13.998398188 +0000 UTC m=+0.150212829 container exec_died 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller) Dec 6 04:36:14 localhost systemd[1]: 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.service: Deactivated successfully. 
Dec 6 04:36:14 localhost python3.9[157296]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765013772.5851123-507-272127474319406/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=979187b925479d81d0609f4188e5b95fe1f92c18 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 6 04:36:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50508 DF PROTO=TCP SPT=33158 DPT=9101 SEQ=2459792528 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D159900000000001030307) Dec 6 04:36:14 localhost python3.9[157411]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 6 04:36:16 localhost python3.9[157505]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 6 04:36:16 localhost python3.9[157597]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:36:17 localhost python3.9[157645]: 
ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 6 04:36:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50510 DF PROTO=TCP SPT=33158 DPT=9101 SEQ=2459792528 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D165AF0000000001030307) Dec 6 04:36:17 localhost python3.9[157737]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:36:18 localhost python3.9[157785]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 6 04:36:18 localhost python3.9[157877]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None 
access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:36:19 localhost python3.9[157969]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:36:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14800 DF PROTO=TCP SPT=40724 DPT=9105 SEQ=2893121207 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D16EAF0000000001030307) Dec 6 04:36:20 localhost python3.9[158017]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:36:20 localhost python3.9[158109]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:36:21 localhost python3.9[158157]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False 
_diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:36:22 localhost python3.9[158249]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None Dec 6 04:36:22 localhost systemd[1]: Reloading. Dec 6 04:36:22 localhost ovn_controller[154851]: 2025-12-06T09:36:22Z|00046|memory_trim|INFO|Detected inactivity (last active 30005 ms ago): trimming memory Dec 6 04:36:22 localhost systemd-sysv-generator[158273]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 6 04:36:22 localhost systemd-rc-local-generator[158270]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 6 04:36:22 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Dec 6 04:36:23 localhost python3.9[158378]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:36:23 localhost python3.9[158426]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:36:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14801 DF PROTO=TCP SPT=40724 DPT=9105 SEQ=2893121207 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D17E700000000001030307) Dec 6 04:36:24 localhost python3.9[158518]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:36:24 localhost python3.9[158566]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:36:25 localhost 
python3.9[158658]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None Dec 6 04:36:25 localhost systemd[1]: Reloading. Dec 6 04:36:25 localhost systemd-sysv-generator[158683]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 6 04:36:25 localhost systemd-rc-local-generator[158679]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 6 04:36:25 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 6 04:36:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=375 DF PROTO=TCP SPT=42346 DPT=9882 SEQ=4209804336 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D185EF0000000001030307) Dec 6 04:36:25 localhost systemd[1]: Starting Create netns directory... Dec 6 04:36:25 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. Dec 6 04:36:25 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Dec 6 04:36:25 localhost systemd[1]: Finished Create netns directory. 
Dec 6 04:36:26 localhost sshd[158716]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:36:27 localhost sshd[158795]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:36:27 localhost python3.9[158794]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 6 04:36:27 localhost python3.9[158888]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:36:28 localhost python3.9[158961]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765013787.351369-960-86414425805267/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None Dec 6 04:36:28 localhost sshd[158963]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:36:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50512 DF PROTO=TCP SPT=33158 DPT=9101 SEQ=2459792528 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D195EF0000000001030307) Dec 6 04:36:30 localhost python3.9[159056]: ansible-ansible.builtin.file Invoked with 
path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 6 04:36:30 localhost python3.9[159148]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:36:31 localhost python3.9[159223]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765013790.2731814-1035-196142432322869/.source.json _original_basename=.zkyi8gbk follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:36:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14802 DF PROTO=TCP SPT=40724 DPT=9105 SEQ=2893121207 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D19DEF0000000001030307) Dec 6 04:36:31 localhost python3.9[159315]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None 
selevel=None setype=None attributes=None Dec 6 04:36:34 localhost python3.9[159572]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False Dec 6 04:36:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31750 DF PROTO=TCP SPT=34236 DPT=9102 SEQ=1521893675 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D1A96F0000000001030307) Dec 6 04:36:35 localhost python3.9[159664]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Dec 6 04:36:36 localhost python3.9[159756]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None Dec 6 04:36:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17809 DF PROTO=TCP SPT=45458 DPT=9102 SEQ=3201948792 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D1B5EF0000000001030307) Dec 6 04:36:40 localhost python3[159874]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False Dec 6 04:36:40 localhost python3[159874]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012 {#012 "Id": "014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9",#012 "Digest": "sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3",#012 "RepoTags": [#012 "quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified"#012 ],#012 "RepoDigests": [#012 
"quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3"#012 ],#012 "Parent": "",#012 "Comment": "",#012 "Created": "2025-12-01T06:29:20.327314945Z",#012 "Config": {#012 "User": "neutron",#012 "Env": [#012 "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012 "LANG=en_US.UTF-8",#012 "TZ=UTC",#012 "container=oci"#012 ],#012 "Entrypoint": [#012 "dumb-init",#012 "--single-child",#012 "--"#012 ],#012 "Cmd": [#012 "kolla_start"#012 ],#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251125",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",#012 "tcib_managed": "true"#012 },#012 "StopSignal": "SIGTERM"#012 },#012 "Version": "",#012 "Author": "",#012 "Architecture": "amd64",#012 "Os": "linux",#012 "Size": 784141054,#012 "VirtualSize": 784141054,#012 "GraphDriver": {#012 "Name": "overlay",#012 "Data": {#012 "LowerDir": "/var/lib/containers/storage/overlay/c229f79c70cf5be9a27371d03399d655b2b0280f5e9159c8f223d964c49a7e53/diff:/var/lib/containers/storage/overlay/2bd01f86bd06174222a9d55fe041ff06edb278c28aedc59c96738054f88e995d/diff:/var/lib/containers/storage/overlay/11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60/diff:/var/lib/containers/storage/overlay/ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9/diff:/var/lib/containers/storage/overlay/cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa/diff",#012 "UpperDir": "/var/lib/containers/storage/overlay/70249a3a7715ea2081744d13dd83fad2e62b9b24ab69f2af1c4f45ccd311c7a7/diff",#012 "WorkDir": 
"/var/lib/containers/storage/overlay/70249a3a7715ea2081744d13dd83fad2e62b9b24ab69f2af1c4f45ccd311c7a7/work"#012 }#012 },#012 "RootFS": {#012 "Type": "layers",#012 "Layers": [#012 "sha256:cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa",#012 "sha256:d26dbee55abfd9d572bfbbd4b765c5624affd9ef117ad108fb34be41e199a619",#012 "sha256:86c2cd3987225f8a9bf38cc88e9c24b56bdf4a194f2301186519b4a7571b0c92",#012 "sha256:75abaaa40a93c0e2bba524b6f8d4eb5f1c4c9a33db70c892c7582ec5b0827e5e",#012 "sha256:01f43f620d1ea2a9e584abe0cc14c336bedcf55765127c000d743f536dd36f25",#012 "sha256:0bf5bd378602f28be423f5e84abddff3b103396fae3c167031b6e3fcfcf6f120"#012 ]#012 },#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251125",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",#012 "tcib_managed": "true"#012 },#012 "Annotations": {},#012 "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012 "User": "neutron",#012 "History": [#012 {#012 "created": "2025-11-25T04:02:36.223494528Z",#012 "created_by": "/bin/sh -c #(nop) ADD file:cacf1a97b4abfca5db2db22f7ddbca8fd7daa5076a559639c109f09aaf55871d in / ",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-25T04:02:36.223562059Z",#012 "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\" org.label-schema.name=\"CentOS Stream 9 Base Image\" org.label-schema.vendor=\"CentOS\" org.label-schema.license=\"GPLv2\" org.label-schema.build-date=\"20251125\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-25T04:02:39.054452717Z",#012 "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"#012 },#012 {#012 "created": "2025-12-01T06:09:28.025707917Z",#012 "created_by": "/bin/sh -c #(nop) LABEL 
maintainer=\"OpenStack Kubernetes Operator team\"",#012 "comment": "FROM quay.io/centos/centos:stream9",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025744608Z",#012 "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025767729Z",#012 "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025791379Z",#012 "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.02581523Z",#012 "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025867611Z",#012 "created_by": "/bin/sh -c #(nop) USER root",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.469442331Z",#012 "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:10:02.029095017Z",#012 "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf Dec 6 04:36:40 localhost podman[159923]: 2025-12-06 09:36:40.495160274 +0000 UTC m=+0.084974068 container remove 87083982c89f1c3d9da67a4aa7fe7496e6609bad80280d2bae3a3e4e828efe54 
(image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., architecture=x86_64, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, vcs-type=git, io.openshift.expose-services=, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.12, container_name=ovn_metadata_agent, batch=17.1_20251118.1, config_id=tripleo_step4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '270cf6e6b67cba1ef197c7fa89d5bb20'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, build-date=2025-11-19T00:14:25Z, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 6 04:36:40 localhost python3[159874]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman rm --force ovn_metadata_agent Dec 6 04:36:40 localhost podman[159937]: Dec 6 04:36:40 localhost podman[159937]: 2025-12-06 09:36:40.600868441 +0000 UTC m=+0.088418625 container create 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', 
'/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent) Dec 6 04:36:40 localhost podman[159937]: 2025-12-06 09:36:40.557558037 +0000 UTC m=+0.045108231 image pull quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified Dec 6 04:36:40 localhost python3[159874]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311 --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified Dec 6 04:36:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31752 DF PROTO=TCP SPT=34236 DPT=9102 SEQ=1521893675 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT 
(020405500402080A52D1C12F0000000001030307) Dec 6 04:36:41 localhost python3.9[160067]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 6 04:36:42 localhost python3.9[160161]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:36:42 localhost python3.9[160207]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 6 04:36:43 localhost python3.9[160298]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765013802.9471653-1299-237582816213639/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:36:44 localhost python3.9[160344]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Dec 6 04:36:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5. Dec 6 04:36:44 localhost systemd[1]: Reloading. 
Dec 6 04:36:44 localhost podman[160346]: 2025-12-06 09:36:44.178011986 +0000 UTC m=+0.083978288 container health_status 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0) Dec 6 04:36:44 localhost systemd-rc-local-generator[160387]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 6 04:36:44 localhost systemd-sysv-generator[160390]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Dec 6 04:36:44 localhost podman[160346]: 2025-12-06 09:36:44.223670403 +0000 UTC m=+0.129636725 container exec_died 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller) Dec 6 04:36:44 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Dec 6 04:36:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59073 DF PROTO=TCP SPT=40368 DPT=9101 SEQ=3006578736 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D1CEC00000000001030307) Dec 6 04:36:44 localhost systemd[1]: 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.service: Deactivated successfully. Dec 6 04:36:45 localhost python3.9[160450]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 6 04:36:45 localhost systemd[1]: Reloading. Dec 6 04:36:45 localhost systemd-sysv-generator[160480]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 6 04:36:45 localhost systemd-rc-local-generator[160474]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 6 04:36:45 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 6 04:36:45 localhost systemd[1]: Starting ovn_metadata_agent container... Dec 6 04:36:45 localhost systemd[1]: Started libcrun container. 
Dec 6 04:36:45 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec60694536734bdc4f05abf8c315b77759f80d4c5e7f43137384cbac97f56aea/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff) Dec 6 04:36:45 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec60694536734bdc4f05abf8c315b77759f80d4c5e7f43137384cbac97f56aea/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 6 04:36:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999. Dec 6 04:36:45 localhost podman[160491]: 2025-12-06 09:36:45.529283373 +0000 UTC m=+0.126740005 container init 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent) Dec 6 04:36:45 localhost ovn_metadata_agent[160504]: + sudo -E kolla_set_configs Dec 6 04:36:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999. Dec 6 04:36:45 localhost podman[160491]: 2025-12-06 09:36:45.565046325 +0000 UTC m=+0.162502927 container start 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125) Dec 6 04:36:45 localhost edpm-start-podman-container[160491]: ovn_metadata_agent Dec 6 04:36:45 localhost ovn_metadata_agent[160504]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Dec 6 04:36:45 localhost ovn_metadata_agent[160504]: INFO:__main__:Validating config file Dec 6 04:36:45 localhost ovn_metadata_agent[160504]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Dec 6 04:36:45 localhost ovn_metadata_agent[160504]: INFO:__main__:Copying service configuration files Dec 6 04:36:45 localhost ovn_metadata_agent[160504]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf Dec 6 04:36:45 localhost ovn_metadata_agent[160504]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf Dec 6 04:36:45 localhost ovn_metadata_agent[160504]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf Dec 6 04:36:45 localhost ovn_metadata_agent[160504]: INFO:__main__:Writing out command to execute Dec 6 04:36:45 localhost ovn_metadata_agent[160504]: INFO:__main__:Setting permission for /var/lib/neutron Dec 6 04:36:45 localhost ovn_metadata_agent[160504]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts Dec 6 04:36:45 localhost ovn_metadata_agent[160504]: INFO:__main__:Setting permission for /var/lib/neutron/.cache Dec 6 04:36:45 localhost 
ovn_metadata_agent[160504]: INFO:__main__:Setting permission for /var/lib/neutron/external Dec 6 04:36:45 localhost ovn_metadata_agent[160504]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy Dec 6 04:36:45 localhost ovn_metadata_agent[160504]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper Dec 6 04:36:45 localhost ovn_metadata_agent[160504]: INFO:__main__:Setting permission for /var/lib/neutron/metadata_proxy Dec 6 04:36:45 localhost ovn_metadata_agent[160504]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill Dec 6 04:36:45 localhost ovn_metadata_agent[160504]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints Dec 6 04:36:45 localhost ovn_metadata_agent[160504]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/adac9f827fd7fb11fb07020ef60ee06a1fede4feab743856dc8fb3266181d934 Dec 6 04:36:45 localhost ovn_metadata_agent[160504]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids Dec 6 04:36:45 localhost ovn_metadata_agent[160504]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids/652b6bdc-40ce-45b7-8aa5-3bca79987993.pid.haproxy Dec 6 04:36:45 localhost ovn_metadata_agent[160504]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy/652b6bdc-40ce-45b7-8aa5-3bca79987993.conf Dec 6 04:36:45 localhost ovn_metadata_agent[160504]: ++ cat /run_command Dec 6 04:36:45 localhost ovn_metadata_agent[160504]: + CMD=neutron-ovn-metadata-agent Dec 6 04:36:45 localhost ovn_metadata_agent[160504]: + ARGS= Dec 6 04:36:45 localhost ovn_metadata_agent[160504]: + sudo kolla_copy_cacerts Dec 6 04:36:45 localhost ovn_metadata_agent[160504]: + [[ ! -n '' ]] Dec 6 04:36:45 localhost ovn_metadata_agent[160504]: + . 
kolla_extend_start Dec 6 04:36:45 localhost ovn_metadata_agent[160504]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\''' Dec 6 04:36:45 localhost ovn_metadata_agent[160504]: Running command: 'neutron-ovn-metadata-agent' Dec 6 04:36:45 localhost ovn_metadata_agent[160504]: + umask 0022 Dec 6 04:36:45 localhost ovn_metadata_agent[160504]: + exec neutron-ovn-metadata-agent Dec 6 04:36:45 localhost podman[160512]: 2025-12-06 09:36:45.636409224 +0000 UTC m=+0.068187813 container health_status 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=starting, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Dec 6 04:36:45 localhost edpm-start-podman-container[160490]: Creating additional drop-in dependency for "ovn_metadata_agent" (5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999) Dec 6 04:36:45 localhost systemd[1]: Reloading. Dec 6 04:36:45 localhost podman[160512]: 2025-12-06 09:36:45.719207684 +0000 UTC m=+0.150986303 container exec_died 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent) Dec 6 04:36:45 localhost systemd-rc-local-generator[160575]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 6 04:36:45 localhost systemd-sysv-generator[160578]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 6 04:36:45 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 6 04:36:45 localhost systemd[1]: 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.service: Deactivated successfully. Dec 6 04:36:45 localhost systemd[1]: Started ovn_metadata_agent container. 
Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.219 160509 INFO neutron.common.config [-] Logging enabled!#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.220 160509 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.220 160509 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.220 160509 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.221 160509 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.221 160509 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.221 160509 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.221 160509 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.221 160509 DEBUG neutron.agent.ovn.metadata_agent [-] 
agent_down_time = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.221 160509 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.221 160509 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.222 160509 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.222 160509 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.222 160509 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.222 160509 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.222 160509 DEBUG neutron.agent.ovn.metadata_agent [-] backlog = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.222 160509 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 
2025-12-06 09:36:47.222 160509 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.223 160509 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.223 160509 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.223 160509 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.223 160509 DEBUG neutron.agent.ovn.metadata_agent [-] config_file = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.223 160509 DEBUG neutron.agent.ovn.metadata_agent [-] config_source = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.223 160509 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.223 160509 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.224 160509 DEBUG neutron.agent.ovn.metadata_agent [-] debug = True log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.224 160509 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.224 160509 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.224 160509 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.224 160509 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.224 160509 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.224 160509 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain = openstacklocal 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.225 160509 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.225 160509 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.225 160509 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.225 160509 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.225 160509 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.225 160509 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.226 160509 DEBUG neutron.agent.ovn.metadata_agent [-] host = np0005548789.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.226 160509 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:47 localhost 
ovn_metadata_agent[160504]: 2025-12-06 09:36:47.226 160509 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.226 160509 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.226 160509 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.226 160509 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.226 160509 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.227 160509 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.227 160509 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.227 160509 DEBUG neutron.agent.ovn.metadata_agent [-] log_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.227 160509 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval 
= 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.227 160509 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.227 160509 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.227 160509 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.228 160509 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.228 160509 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.228 160509 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.228 160509 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format = %(user)s 
%(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.228 160509 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.228 160509 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.228 160509 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.229 160509 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.229 160509 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.229 160509 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.229 160509 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.229 160509 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret = **** log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.229 160509 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.230 160509 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.230 160509 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.230 160509 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.230 160509 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.230 160509 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.230 160509 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.230 160509 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:47 
localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.231 160509 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.231 160509 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.231 160509 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.231 160509 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.231 160509 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol = http log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.231 160509 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.232 160509 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.232 160509 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.232 160509 DEBUG 
neutron.agent.ovn.metadata_agent [-] publish_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.232 160509 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.232 160509 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.232 160509 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.233 160509 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.233 160509 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.233 160509 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.233 160509 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.233 160509 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.233 160509 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.234 160509 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.234 160509 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.234 160509 DEBUG neutron.agent.ovn.metadata_agent [-] state_path = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.234 160509 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.234 160509 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.234 160509 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.234 160509 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.235 160509 DEBUG 
neutron.agent.ovn.metadata_agent [-] use_journal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.235 160509 DEBUG neutron.agent.ovn.metadata_agent [-] use_json = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.235 160509 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.235 160509 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.235 160509 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.235 160509 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.235 160509 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.235 160509 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.236 160509 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:47 localhost 
ovn_metadata_agent[160504]: 2025-12-06 09:36:47.236 160509 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.236 160509 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.236 160509 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.236 160509 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.236 160509 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.237 160509 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.237 160509 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.237 160509 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 
6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.237 160509 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.237 160509 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.237 160509 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.237 160509 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.238 160509 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.238 160509 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.238 160509 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.238 160509 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 
09:36:47.238 160509 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.238 160509 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.239 160509 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.239 160509 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.239 160509 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.239 160509 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.239 160509 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.239 160509 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost 
ovn_metadata_agent[160504]: 2025-12-06 09:36:47.239 160509 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.240 160509 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.240 160509 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.240 160509 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.240 160509 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.241 160509 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.241 160509 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.241 160509 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities = [21, 12, 1, 2, 19] log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.241 160509 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.241 160509 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.241 160509 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.242 160509 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.242 160509 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.242 160509 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.242 160509 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.242 160509 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 
localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.242 160509 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.242 160509 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.243 160509 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.243 160509 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.243 160509 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.243 160509 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.243 160509 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.243 160509 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost 
ovn_metadata_agent[160504]: 2025-12-06 09:36:47.243 160509 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.244 160509 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.244 160509 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.244 160509 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.244 160509 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.244 160509 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.244 160509 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.244 160509 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 
09:36:47.245 160509 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.245 160509 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.245 160509 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.245 160509 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.245 160509 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.245 160509 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.245 160509 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.246 160509 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.246 160509 DEBUG 
neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.246 160509 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.246 160509 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.246 160509 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.246 160509 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.247 160509 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.247 160509 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.247 160509 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.247 160509 DEBUG 
neutron.agent.ovn.metadata_agent [-] AGENT.root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.247 160509 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.247 160509 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.247 160509 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.247 160509 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.248 160509 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.248 160509 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.248 160509 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.248 
160509 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.248 160509 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.248 160509 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.249 160509 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.249 160509 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.249 160509 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.249 160509 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.249 160509 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.249 160509 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.249 160509 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.250 160509 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.250 160509 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.250 160509 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.250 160509 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.250 160509 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.250 160509 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.250 160509 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.251 
160509 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.251 160509 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.251 160509 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.251 160509 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.251 160509 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.251 160509 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.251 160509 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.252 160509 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.252 160509 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.252 160509 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.252 160509 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.252 160509 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.252 160509 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.253 160509 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.253 160509 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.253 160509 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.253 160509 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 
09:36:47.253 160509 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.253 160509 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.253 160509 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.254 160509 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.254 160509 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.254 160509 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.254 160509 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.254 160509 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.254 160509 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.254 160509 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.255 160509 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.255 160509 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.255 160509 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.255 160509 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.255 160509 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.255 160509 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.255 160509 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost 
ovn_metadata_agent[160504]: 2025-12-06 09:36:47.256 160509 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.256 160509 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.256 160509 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.256 160509 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.256 160509 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.256 160509 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.256 160509 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.257 160509 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.257 
160509 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.257 160509 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.257 160509 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.257 160509 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.258 160509 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.258 160509 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.258 160509 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.258 160509 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.258 160509 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert = log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.258 160509 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.258 160509 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection = tcp:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.259 160509 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.259 160509 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.259 160509 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.259 160509 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.259 160509 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.259 160509 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir = /var/run/openvswitch log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.259 160509 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.260 160509 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.260 160509 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.260 160509 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.260 160509 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.260 160509 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.260 160509 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.261 160509 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.261 160509 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.261 160509 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.261 160509 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.261 160509 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.261 160509 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.261 160509 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.262 160509 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.262 160509 DEBUG neutron.agent.ovn.metadata_agent [-] 
oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.262 160509 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.262 160509 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.262 160509 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.262 160509 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.263 160509 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.263 160509 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.263 160509 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost 
ovn_metadata_agent[160504]: 2025-12-06 09:36:47.263 160509 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.263 160509 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.263 160509 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.263 160509 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.264 160509 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.264 160509 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.264 160509 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.264 160509 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.264 160509 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.264 160509 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.264 160509 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.265 160509 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.265 160509 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.265 160509 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.265 160509 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.265 160509 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.265 160509 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.265 160509 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.266 160509 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.277 160509 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.277 160509 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.277 160509 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.278 160509 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.278 160509 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected#033[00m Dec 6 04:36:47 localhost 
ovn_metadata_agent[160504]: 2025-12-06 09:36:47.296 160509 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name b142a5ef-fbed-4e92-aa78-e3ad080c6370 (UUID: b142a5ef-fbed-4e92-aa78-e3ad080c6370) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.316 160509 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.317 160509 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.317 160509 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.317 160509 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.319 160509 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.321 160509 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:ovsdbserver-sb.openstack.svc:6642: connected#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.329 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: PortBindingCreateWithChassis(events=('create',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:64:77:f3 192.168.0.162'], 
port_security=['fa:16:3e:64:77:f3 192.168.0.162'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.0.162/24', 'neutron:device_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'neutron:device_owner': 'compute:nova', 'neutron:host_id': 'np0005548789.localdomain', 'neutron:mtu': '', 'neutron:network_name': 'neutron-652b6bdc-40ce-45b7-8aa5-3bca79987993', 'neutron:port_capabilities': '', 'neutron:port_fip': '192.168.122.20', 'neutron:port_name': '', 'neutron:project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'neutron:revision_number': '7', 'neutron:security_group_ids': '65e67ecb-ffcf-41e6-8b8b-ed491f2580ec 7ce08e20-be94-4509-a371-aa5c036416af', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7872d306-938e-4ee0-be61-57ba3983d747, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.330 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', 'b142a5ef-fbed-4e92-aa78-e3ad080c6370'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[], external_ids={'neutron:ovn-metadata-id': 'ebeaa3f7-4a1f-5fad-955a-c95905ca8ce8', 'neutron:ovn-metadata-sb-cfg': '1'}, name=b142a5ef-fbed-4e92-aa78-e3ad080c6370, nb_cfg_timestamp=1765013752689, nb_cfg=4) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.331 160509 INFO 
neutron.agent.ovn.metadata.agent [-] Port 86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b in datapath 652b6bdc-40ce-45b7-8aa5-3bca79987993 bound to our chassis on insert#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.331 160509 DEBUG neutron_lib.callbacks.manager [-] Subscribe: > process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.332 160509 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.332 160509 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.332 160509 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.333 160509 INFO oslo_service.service [-] Starting 1 workers#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.335 160509 DEBUG oslo_service.service [-] Started child 160637 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.337 160637 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-152989'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.337 160509 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 
652b6bdc-40ce-45b7-8aa5-3bca79987993#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.339 160509 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpw60da7x0/privsep.sock']#033[00m Dec 6 04:36:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59075 DF PROTO=TCP SPT=40368 DPT=9101 SEQ=3006578736 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D1DAB00000000001030307) Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.357 160637 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.357 160637 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.358 160637 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.360 160637 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.362 160637 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:ovsdbserver-sb.openstack.svc:6642: connected#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.371 160637 INFO eventlet.wsgi.server [-] (160637) wsgi 
starting up on http:/var/lib/neutron/metadata_proxy#033[00m Dec 6 04:36:47 localhost systemd[1]: session-52.scope: Deactivated successfully. Dec 6 04:36:47 localhost systemd[1]: session-52.scope: Consumed 30.495s CPU time. Dec 6 04:36:47 localhost systemd-logind[766]: Session 52 logged out. Waiting for processes to exit. Dec 6 04:36:47 localhost systemd-logind[766]: Removed session 52. Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.926 160509 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.927 160509 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpw60da7x0/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.839 160674 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.842 160674 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.845 160674 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.845 160674 INFO oslo.privsep.daemon [-] privsep daemon running as pid 160674#033[00m Dec 6 04:36:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:47.931 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[3bb2c9c0-38f2-4ab6-8c69-2e0c4b560f07]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 04:36:48 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:48.373 160674 DEBUG oslo_concurrency.lockutils [-] Acquiring lock 
"context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 04:36:48 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:48.373 160674 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 04:36:48 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:48.373 160674 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 04:36:48 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:48.832 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[ca83f95c-ad91-44d2-9e3f-74d3093b6a5b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 04:36:48 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:48.834 160509 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmp67br8jf_/privsep.sock']#033[00m Dec 6 04:36:49 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:49.381 160509 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m Dec 6 04:36:49 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:49.382 160509 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp67br8jf_/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m Dec 6 04:36:49 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:49.298 160700 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m Dec 6 04:36:49 localhost 
ovn_metadata_agent[160504]: 2025-12-06 09:36:49.301 160700 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m Dec 6 04:36:49 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:49.303 160700 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m Dec 6 04:36:49 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:49.304 160700 INFO oslo.privsep.daemon [-] privsep daemon running as pid 160700#033[00m Dec 6 04:36:49 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:49.385 160700 DEBUG oslo.privsep.daemon [-] privsep: reply[8ad8ad8a-bcb7-43d4-8ea4-d160d5895ea6]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 04:36:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8259 DF PROTO=TCP SPT=54056 DPT=9105 SEQ=1085199004 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D1E3EF0000000001030307) Dec 6 04:36:49 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:49.822 160700 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 04:36:49 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:49.822 160700 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 04:36:49 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:49.822 160700 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 04:36:50 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:50.302 160700 DEBUG oslo.privsep.daemon [-] privsep: reply[a6de2c49-8b0d-452d-bad1-c67929586341]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 04:36:50 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:50.305 160700 DEBUG oslo.privsep.daemon [-] privsep: reply[72902aee-8fe7-4cab-aec8-27ff3ae1e0ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 04:36:50 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:50.327 160700 DEBUG oslo.privsep.daemon [-] privsep: reply[8c18e598-90dc-40c3-b7d9-f847d966b2bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 04:36:50 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:50.345 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[a1f6f77a-cb3d-4d66-93d8-ba825139df2b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap652b6bdc-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:b4:a7:0c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 104, 'tx_packets': 69, 'rx_bytes': 8926, 'tx_bytes': 7209, 'rx_errors': 0, 'tx_errors': 0, 
'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 104, 'tx_packets': 69, 'rx_bytes': 8926, 'tx_bytes': 7209, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 14], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483664], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 710085, 'reachable_time': 41718, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 
'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 17, 'outoctets': 1164, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 17, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 1164, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 17, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 160710, 'error': None, 'target': 'ovnmeta-652b6bdc-40ce-45b7-8aa5-3bca79987993', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 04:36:50 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:50.360 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[5e1dafa5-946e-4d6c-8bcf-ab0f9a984f45]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', 
'169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap652b6bdc-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 710094, 'tstamp': 710094}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 160711, 'error': None, 'target': 'ovnmeta-652b6bdc-40ce-45b7-8aa5-3bca79987993', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap652b6bdc-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 710097, 'tstamp': 710097}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 160711, 'error': None, 'target': 'ovnmeta-652b6bdc-40ce-45b7-8aa5-3bca79987993', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 10, 'prefixlen': 64, 'flags': 128, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::a9fe:a9fe'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 710099, 'tstamp': 710099}], ['IFA_FLAGS', 128]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 160711, 'error': None, 'target': 'ovnmeta-652b6bdc-40ce-45b7-8aa5-3bca79987993', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 10, 'prefixlen': 64, 'flags': 128, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb4:a70c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 710085, 'tstamp': 710085}], ['IFA_FLAGS', 128]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 160711, 'error': None, 'target': 'ovnmeta-652b6bdc-40ce-45b7-8aa5-3bca79987993', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 04:36:50 
localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:50.410 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[9ffcdce4-f6a7-4f49-916e-250ccfcd1f33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 04:36:50 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:50.411 160509 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap652b6bdc-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 6 04:36:50 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:50.415 160509 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap652b6bdc-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 6 04:36:50 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:50.416 160509 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Dec 6 04:36:50 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:50.417 160509 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap652b6bdc-40, col_values=(('external_ids', {'iface-id': '4fb81ffd-e198-4628-9bd0-0c0f0c89c33a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 6 04:36:50 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:50.417 160509 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Dec 6 04:36:50 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:50.421 160509 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 
'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmppe3r5ww7/privsep.sock']#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.061 160509 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.062 160509 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmppe3r5ww7/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:50.976 160720 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:50.982 160720 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:50.985 160720 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:50.986 160720 INFO oslo.privsep.daemon [-] privsep daemon running as pid 160720#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.065 160720 DEBUG oslo.privsep.daemon [-] privsep: reply[332fc036-0489-4972-97ad-d51c31f71629]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.481 160720 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.482 160720 DEBUG 
oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.482 160720 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.940 160720 DEBUG oslo.privsep.daemon [-] privsep: reply[f231409a-c5e6-4288-9792-fbc804639319]: (4, ['ovnmeta-652b6bdc-40ce-45b7-8aa5-3bca79987993']) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.943 160509 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=b142a5ef-fbed-4e92-aa78-e3ad080c6370, column=external_ids, values=({'neutron:ovn-metadata-id': 'ebeaa3f7-4a1f-5fad-955a-c95905ca8ce8'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.944 160509 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.945 160509 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b142a5ef-fbed-4e92-aa78-e3ad080c6370, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 6 04:36:51 localhost 
ovn_metadata_agent[160504]: 2025-12-06 09:36:51.958 160509 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.958 160509 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.959 160509 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.959 160509 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.959 160509 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.959 160509 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.960 160509 DEBUG oslo_service.service [-] agent_down_time = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.960 160509 DEBUG oslo_service.service [-] allow_bulk = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.960 160509 DEBUG oslo_service.service [-] api_extensions_path = 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.960 160509 DEBUG oslo_service.service [-] api_paste_config = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.961 160509 DEBUG oslo_service.service [-] api_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.961 160509 DEBUG oslo_service.service [-] auth_ca_cert = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.961 160509 DEBUG oslo_service.service [-] auth_strategy = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.962 160509 DEBUG oslo_service.service [-] backlog = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.962 160509 DEBUG oslo_service.service [-] base_mac = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.962 160509 DEBUG oslo_service.service [-] bind_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.962 160509 DEBUG oslo_service.service [-] bind_port = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.963 160509 DEBUG oslo_service.service [-] client_socket_timeout = 900 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.963 160509 DEBUG oslo_service.service [-] config_dir = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.963 160509 DEBUG oslo_service.service [-] config_file = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.963 160509 DEBUG oslo_service.service [-] config_source = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.964 160509 DEBUG oslo_service.service [-] control_exchange = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.964 160509 DEBUG oslo_service.service [-] core_plugin = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.964 160509 DEBUG oslo_service.service [-] debug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.964 160509 DEBUG oslo_service.service [-] default_availability_zones = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.965 160509 DEBUG oslo_service.service [-] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 
'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.965 160509 DEBUG oslo_service.service [-] dhcp_agent_notification = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.965 160509 DEBUG oslo_service.service [-] dhcp_lease_duration = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.965 160509 DEBUG oslo_service.service [-] dhcp_load_type = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.966 160509 DEBUG oslo_service.service [-] dns_domain = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.966 160509 DEBUG oslo_service.service [-] enable_new_agents = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.966 160509 DEBUG oslo_service.service [-] enable_traditional_dhcp = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.966 160509 DEBUG oslo_service.service [-] external_dns_driver = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.967 160509 DEBUG oslo_service.service [-] external_pids = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.967 160509 DEBUG oslo_service.service [-] filter_validation = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.967 160509 DEBUG oslo_service.service [-] global_physnet_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.967 160509 DEBUG oslo_service.service [-] graceful_shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.968 160509 DEBUG oslo_service.service [-] host = np0005548789.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.968 160509 DEBUG oslo_service.service [-] http_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.968 160509 DEBUG oslo_service.service [-] instance_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.969 160509 DEBUG oslo_service.service [-] instance_uuid_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.969 160509 DEBUG oslo_service.service 
[-] ipam_driver = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.969 160509 DEBUG oslo_service.service [-] ipv6_pd_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.970 160509 DEBUG oslo_service.service [-] log_config_append = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.970 160509 DEBUG oslo_service.service [-] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.970 160509 DEBUG oslo_service.service [-] log_dir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.970 160509 DEBUG oslo_service.service [-] log_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.971 160509 DEBUG oslo_service.service [-] log_options = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.971 160509 DEBUG oslo_service.service [-] log_rotate_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.971 160509 DEBUG oslo_service.service [-] log_rotate_interval_type = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.971 160509 DEBUG oslo_service.service [-] log_rotation_type = none 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.972 160509 DEBUG oslo_service.service [-] logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.972 160509 DEBUG oslo_service.service [-] logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.972 160509 DEBUG oslo_service.service [-] logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.972 160509 DEBUG oslo_service.service [-] logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.973 160509 DEBUG oslo_service.service [-] logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.973 160509 DEBUG oslo_service.service [-] max_dns_nameservers = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.973 160509 DEBUG oslo_service.service [-] max_header_line = 16384 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.973 160509 DEBUG oslo_service.service [-] max_logfile_count = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.974 160509 DEBUG oslo_service.service [-] max_logfile_size_mb = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.974 160509 DEBUG oslo_service.service [-] max_subnet_host_routes = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.974 160509 DEBUG oslo_service.service [-] metadata_backlog = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.974 160509 DEBUG oslo_service.service [-] metadata_proxy_group = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.975 160509 DEBUG oslo_service.service [-] metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.975 160509 DEBUG oslo_service.service [-] metadata_proxy_socket = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.975 160509 DEBUG oslo_service.service [-] metadata_proxy_socket_mode = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.975 160509 DEBUG oslo_service.service [-] 
metadata_proxy_user = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.976 160509 DEBUG oslo_service.service [-] metadata_workers = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.976 160509 DEBUG oslo_service.service [-] network_link_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.976 160509 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.976 160509 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.977 160509 DEBUG oslo_service.service [-] nova_client_cert = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.977 160509 DEBUG oslo_service.service [-] nova_client_priv_key = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.977 160509 DEBUG oslo_service.service [-] nova_metadata_host = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.977 160509 DEBUG oslo_service.service [-] nova_metadata_insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.978 
160509 DEBUG oslo_service.service [-] nova_metadata_port = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.978 160509 DEBUG oslo_service.service [-] nova_metadata_protocol = http log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.978 160509 DEBUG oslo_service.service [-] pagination_max_limit = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.978 160509 DEBUG oslo_service.service [-] periodic_fuzzy_delay = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.979 160509 DEBUG oslo_service.service [-] periodic_interval = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.979 160509 DEBUG oslo_service.service [-] publish_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.979 160509 DEBUG oslo_service.service [-] rate_limit_burst = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.979 160509 DEBUG oslo_service.service [-] rate_limit_except_level = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.980 160509 DEBUG oslo_service.service [-] rate_limit_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.980 160509 
DEBUG oslo_service.service [-] retry_until_window = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.980 160509 DEBUG oslo_service.service [-] rpc_resources_processing_step = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.980 160509 DEBUG oslo_service.service [-] rpc_response_max_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.980 160509 DEBUG oslo_service.service [-] rpc_state_report_workers = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.981 160509 DEBUG oslo_service.service [-] rpc_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.981 160509 DEBUG oslo_service.service [-] send_events_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.981 160509 DEBUG oslo_service.service [-] service_plugins = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.982 160509 DEBUG oslo_service.service [-] setproctitle = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.982 160509 DEBUG oslo_service.service [-] state_path = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.982 160509 DEBUG 
oslo_service.service [-] syslog_log_facility = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.982 160509 DEBUG oslo_service.service [-] tcp_keepidle = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.983 160509 DEBUG oslo_service.service [-] transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.983 160509 DEBUG oslo_service.service [-] use_eventlog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.983 160509 DEBUG oslo_service.service [-] use_journal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.983 160509 DEBUG oslo_service.service [-] use_json = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.984 160509 DEBUG oslo_service.service [-] use_ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.984 160509 DEBUG oslo_service.service [-] use_stderr = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.984 160509 DEBUG oslo_service.service [-] use_syslog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.984 160509 DEBUG oslo_service.service [-] vlan_transparent = False 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.984 160509 DEBUG oslo_service.service [-] watch_log_file = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.985 160509 DEBUG oslo_service.service [-] wsgi_default_pool_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.985 160509 DEBUG oslo_service.service [-] wsgi_keep_alive = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.985 160509 DEBUG oslo_service.service [-] wsgi_log_format = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.985 160509 DEBUG oslo_service.service [-] wsgi_server_debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.985 160509 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.986 160509 DEBUG oslo_service.service [-] oslo_concurrency.lock_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.986 160509 DEBUG oslo_service.service [-] profiler.connection_string = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 
04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.986 160509 DEBUG oslo_service.service [-] profiler.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.986 160509 DEBUG oslo_service.service [-] profiler.es_doc_type = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.987 160509 DEBUG oslo_service.service [-] profiler.es_scroll_size = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.987 160509 DEBUG oslo_service.service [-] profiler.es_scroll_time = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.987 160509 DEBUG oslo_service.service [-] profiler.filter_error_trace = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.987 160509 DEBUG oslo_service.service [-] profiler.hmac_keys = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.988 160509 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.988 160509 DEBUG oslo_service.service [-] profiler.socket_timeout = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.988 160509 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.988 160509 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.988 160509 DEBUG oslo_service.service [-] oslo_policy.enforce_scope = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.989 160509 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.989 160509 DEBUG oslo_service.service [-] oslo_policy.policy_dirs = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.989 160509 DEBUG oslo_service.service [-] oslo_policy.policy_file = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.989 160509 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.990 160509 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.990 160509 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:51 localhost 
ovn_metadata_agent[160504]: 2025-12-06 09:36:51.990 160509 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.990 160509 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.991 160509 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.991 160509 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.991 160509 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.991 160509 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.992 160509 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.992 160509 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:51 localhost 
ovn_metadata_agent[160504]: 2025-12-06 09:36:51.992 160509 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.992 160509 DEBUG oslo_service.service [-] privsep.capabilities = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.993 160509 DEBUG oslo_service.service [-] privsep.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.993 160509 DEBUG oslo_service.service [-] privsep.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.993 160509 DEBUG oslo_service.service [-] privsep.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.993 160509 DEBUG oslo_service.service [-] privsep.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.993 160509 DEBUG oslo_service.service [-] privsep.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.994 160509 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.994 160509 DEBUG oslo_service.service [-] privsep_dhcp_release.group = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.994 160509 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.995 160509 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.995 160509 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.995 160509 DEBUG oslo_service.service [-] privsep_dhcp_release.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.995 160509 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.996 160509 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.996 160509 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.996 160509 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:51 localhost 
ovn_metadata_agent[160504]: 2025-12-06 09:36:51.996 160509 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.996 160509 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.997 160509 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.997 160509 DEBUG oslo_service.service [-] privsep_namespace.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.997 160509 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.997 160509 DEBUG oslo_service.service [-] privsep_namespace.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.998 160509 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.998 160509 DEBUG oslo_service.service [-] privsep_namespace.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.998 160509 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.998 160509 DEBUG oslo_service.service [-] privsep_conntrack.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.998 160509 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.999 160509 DEBUG oslo_service.service [-] privsep_conntrack.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.999 160509 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:51 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.999 160509 DEBUG oslo_service.service [-] privsep_conntrack.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:51.999 160509 DEBUG oslo_service.service [-] privsep_link.capabilities = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.000 160509 DEBUG oslo_service.service [-] privsep_link.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.000 160509 DEBUG oslo_service.service [-] privsep_link.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 
09:36:52.000 160509 DEBUG oslo_service.service [-] privsep_link.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.000 160509 DEBUG oslo_service.service [-] privsep_link.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.000 160509 DEBUG oslo_service.service [-] privsep_link.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.001 160509 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.001 160509 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.001 160509 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.001 160509 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.002 160509 DEBUG oslo_service.service [-] AGENT.kill_scripts_path = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.002 160509 DEBUG oslo_service.service [-] AGENT.root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.002 160509 DEBUG oslo_service.service [-] AGENT.root_helper_daemon = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.002 160509 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.002 160509 DEBUG oslo_service.service [-] AGENT.use_random_fully = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.002 160509 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.002 160509 DEBUG oslo_service.service [-] QUOTAS.default_quota = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.003 160509 DEBUG oslo_service.service [-] QUOTAS.quota_driver = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.003 160509 DEBUG oslo_service.service [-] QUOTAS.quota_network = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.003 160509 DEBUG oslo_service.service [-] QUOTAS.quota_port = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost 
ovn_metadata_agent[160504]: 2025-12-06 09:36:52.003 160509 DEBUG oslo_service.service [-] QUOTAS.quota_security_group = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.003 160509 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.003 160509 DEBUG oslo_service.service [-] QUOTAS.quota_subnet = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.004 160509 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.004 160509 DEBUG oslo_service.service [-] nova.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.004 160509 DEBUG oslo_service.service [-] nova.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.004 160509 DEBUG oslo_service.service [-] nova.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.004 160509 DEBUG oslo_service.service [-] nova.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.004 160509 DEBUG oslo_service.service [-] nova.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost 
ovn_metadata_agent[160504]: 2025-12-06 09:36:52.004 160509 DEBUG oslo_service.service [-] nova.endpoint_type = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.005 160509 DEBUG oslo_service.service [-] nova.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.005 160509 DEBUG oslo_service.service [-] nova.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.005 160509 DEBUG oslo_service.service [-] nova.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.005 160509 DEBUG oslo_service.service [-] nova.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.005 160509 DEBUG oslo_service.service [-] nova.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.005 160509 DEBUG oslo_service.service [-] placement.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.005 160509 DEBUG oslo_service.service [-] placement.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.006 160509 DEBUG oslo_service.service [-] placement.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost 
ovn_metadata_agent[160504]: 2025-12-06 09:36:52.006 160509 DEBUG oslo_service.service [-] placement.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.006 160509 DEBUG oslo_service.service [-] placement.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.006 160509 DEBUG oslo_service.service [-] placement.endpoint_type = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.006 160509 DEBUG oslo_service.service [-] placement.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.006 160509 DEBUG oslo_service.service [-] placement.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.006 160509 DEBUG oslo_service.service [-] placement.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.006 160509 DEBUG oslo_service.service [-] placement.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.007 160509 DEBUG oslo_service.service [-] placement.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.007 160509 DEBUG oslo_service.service [-] ironic.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 
04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.007 160509 DEBUG oslo_service.service [-] ironic.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.007 160509 DEBUG oslo_service.service [-] ironic.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.007 160509 DEBUG oslo_service.service [-] ironic.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.007 160509 DEBUG oslo_service.service [-] ironic.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.007 160509 DEBUG oslo_service.service [-] ironic.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.008 160509 DEBUG oslo_service.service [-] ironic.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.008 160509 DEBUG oslo_service.service [-] ironic.enable_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.008 160509 DEBUG oslo_service.service [-] ironic.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.008 160509 DEBUG oslo_service.service [-] ironic.insecure = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.008 160509 DEBUG oslo_service.service [-] ironic.interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.008 160509 DEBUG oslo_service.service [-] ironic.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.008 160509 DEBUG oslo_service.service [-] ironic.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.009 160509 DEBUG oslo_service.service [-] ironic.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.009 160509 DEBUG oslo_service.service [-] ironic.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.009 160509 DEBUG oslo_service.service [-] ironic.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.009 160509 DEBUG oslo_service.service [-] ironic.service_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.009 160509 DEBUG oslo_service.service [-] ironic.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.009 160509 DEBUG oslo_service.service [-] ironic.status_code_retries = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.009 160509 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.010 160509 DEBUG oslo_service.service [-] ironic.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.010 160509 DEBUG oslo_service.service [-] ironic.valid_interfaces = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.010 160509 DEBUG oslo_service.service [-] ironic.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.010 160509 DEBUG oslo_service.service [-] cli_script.dry_run = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.010 160509 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.010 160509 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.010 160509 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.011 160509 DEBUG 
oslo_service.service [-] ovn.dns_servers = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.011 160509 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.011 160509 DEBUG oslo_service.service [-] ovn.neutron_sync_mode = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.011 160509 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.011 160509 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.011 160509 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.011 160509 DEBUG oslo_service.service [-] ovn.ovn_l3_mode = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.012 160509 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.012 160509 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost 
ovn_metadata_agent[160504]: 2025-12-06 09:36:52.012 160509 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.012 160509 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.012 160509 DEBUG oslo_service.service [-] ovn.ovn_nb_connection = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.012 160509 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.012 160509 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.013 160509 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.013 160509 DEBUG oslo_service.service [-] ovn.ovn_sb_connection = tcp:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.013 160509 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.013 160509 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout = 180 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.013 160509 DEBUG oslo_service.service [-] ovn.ovsdb_log_level = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.013 160509 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.013 160509 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.014 160509 DEBUG oslo_service.service [-] ovn.vhost_sock_dir = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.014 160509 DEBUG oslo_service.service [-] ovn.vif_type = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.014 160509 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.014 160509 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.014 160509 DEBUG oslo_service.service [-] OVS.ovsdb_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.014 160509 DEBUG oslo_service.service [-] 
ovs.ovsdb_connection = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.014 160509 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.015 160509 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.015 160509 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.015 160509 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.015 160509 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.015 160509 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.015 160509 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.015 160509 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.016 160509 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.016 160509 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.016 160509 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.016 160509 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.016 160509 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.016 160509 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.016 160509 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.017 160509 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.017 160509 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.017 160509 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.017 160509 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.017 160509 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.017 160509 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.018 160509 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.018 160509 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.018 160509 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.018 160509 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.018 160509 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.019 160509 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.019 160509 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.019 160509 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.019 160509 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.019 160509 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.019 160509 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost 
ovn_metadata_agent[160504]: 2025-12-06 09:36:52.020 160509 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.020 160509 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.020 160509 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.020 160509 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:36:52 localhost ovn_metadata_agent[160504]: 2025-12-06 09:36:52.020 160509 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m Dec 6 04:36:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8260 DF PROTO=TCP SPT=54056 DPT=9105 SEQ=1085199004 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D1F3AF0000000001030307) Dec 6 04:36:53 localhost sshd[160725]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:36:54 localhost systemd-logind[766]: New session 53 of user zuul. Dec 6 04:36:54 localhost systemd[1]: Started Session 53 of User zuul. 
Dec 6 04:36:55 localhost python3.9[160818]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Dec 6 04:36:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47394 DF PROTO=TCP SPT=39376 DPT=9882 SEQ=2275117447 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D1FBEF0000000001030307) Dec 6 04:36:56 localhost python3.9[160914]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 04:36:57 localhost python3.9[161019]: ansible-ansible.legacy.command Invoked with _raw_params=podman stop nova_virtlogd _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 04:36:57 localhost systemd[1]: tmp-crun.VHTygU.mount: Deactivated successfully. Dec 6 04:36:57 localhost systemd[1]: libpod-5519229fe01370c92f47b0e8e46cddb6cb973cc807c8388a053612d951244964.scope: Deactivated successfully. 
Dec 6 04:36:57 localhost podman[161020]: 2025-12-06 09:36:57.102830882 +0000 UTC m=+0.085940579 container died 5519229fe01370c92f47b0e8e46cddb6cb973cc807c8388a053612d951244964 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-libvirt-container, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, distribution-scope=public, version=17.1.12, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:35:22Z, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-libvirt, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-libvirt)
Dec 6 04:36:57 localhost podman[161020]: 2025-12-06 09:36:57.136113868 +0000 UTC m=+0.119223525 container cleanup 5519229fe01370c92f47b0e8e46cddb6cb973cc807c8388a053612d951244964 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, com.redhat.component=openstack-nova-libvirt-container, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1761123044, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, tcib_managed=true, vcs-type=git, name=rhosp17/openstack-nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, vendor=Red Hat, Inc., build-date=2025-11-19T00:35:22Z)
Dec 6 04:36:57 localhost podman[161033]: 2025-12-06 09:36:57.198109667 +0000 UTC m=+0.086494545 container remove 5519229fe01370c92f47b0e8e46cddb6cb973cc807c8388a053612d951244964 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, release=1761123044, tcib_managed=true, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:35:22Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, vcs-type=git, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.expose-services=, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt)
Dec 6 04:36:57 localhost systemd[1]: libpod-conmon-5519229fe01370c92f47b0e8e46cddb6cb973cc807c8388a053612d951244964.scope: Deactivated successfully.
Dec 6 04:36:58 localhost systemd[1]: var-lib-containers-storage-overlay-8a1de60586d08fb5b92518366abbb713eb2f96c306cc36f6078e1ea96b940056-merged.mount: Deactivated successfully.
Dec 6 04:36:58 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5519229fe01370c92f47b0e8e46cddb6cb973cc807c8388a053612d951244964-userdata-shm.mount: Deactivated successfully.
Dec 6 04:36:58 localhost python3.9[161140]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 6 04:36:58 localhost systemd[1]: Reloading.
Dec 6 04:36:58 localhost systemd-rc-local-generator[161169]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 6 04:36:58 localhost systemd-sysv-generator[161172]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 6 04:36:58 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 6 04:36:59 localhost sshd[161237]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:36:59 localhost python3.9[161269]: ansible-ansible.builtin.service_facts Invoked
Dec 6 04:36:59 localhost network[161286]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 6 04:36:59 localhost network[161287]: 'network-scripts' will be removed from distribution in near future.
Dec 6 04:36:59 localhost network[161288]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 6 04:36:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59077 DF PROTO=TCP SPT=40368 DPT=9101 SEQ=3006578736 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D209EF0000000001030307)
Dec 6 04:37:00 localhost sshd[161296]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:37:02 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8261 DF PROTO=TCP SPT=54056 DPT=9105 SEQ=1085199004 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D213EF0000000001030307)
Dec 6 04:37:02 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 6 04:37:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16580 DF PROTO=TCP SPT=46134 DPT=9102 SEQ=2267506048 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D21E6F0000000001030307)
Dec 6 04:37:05 localhost python3.9[161491]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 6 04:37:05 localhost systemd[1]: Reloading.
Dec 6 04:37:05 localhost systemd-rc-local-generator[161515]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 6 04:37:05 localhost systemd-sysv-generator[161519]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 6 04:37:05 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 6 04:37:06 localhost systemd[1]: Stopped target tripleo_nova_libvirt.target.
Dec 6 04:37:06 localhost python3.9[161623]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 6 04:37:07 localhost python3.9[161716]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 6 04:37:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62982 DF PROTO=TCP SPT=55758 DPT=9102 SEQ=1114809180 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D229F00000000001030307)
Dec 6 04:37:09 localhost python3.9[161809]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 6 04:37:09 localhost python3.9[161902]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 6 04:37:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16582 DF PROTO=TCP SPT=46134 DPT=9102 SEQ=2267506048 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D2362F0000000001030307)
Dec 6 04:37:11 localhost python3.9[161995]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 6 04:37:12 localhost python3.9[162088]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 6 04:37:13 localhost python3.9[162181]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 04:37:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48154 DF PROTO=TCP SPT=34958 DPT=9101 SEQ=2103639703 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D243EF0000000001030307)
Dec 6 04:37:14 localhost python3.9[162273]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 04:37:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.
Dec 6 04:37:14 localhost podman[162365]: 2025-12-06 09:37:14.870332949 +0000 UTC m=+0.091354486 container health_status 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec 6 04:37:14 localhost podman[162365]: 2025-12-06 09:37:14.909876317 +0000 UTC m=+0.130897844 container exec_died 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_id=ovn_controller, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 6 04:37:14 localhost systemd[1]: 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.service: Deactivated successfully.
Dec 6 04:37:14 localhost python3.9[162366]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 04:37:15 localhost python3.9[162482]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 04:37:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.
Dec 6 04:37:16 localhost systemd[1]: tmp-crun.Fpjkgk.mount: Deactivated successfully.
Dec 6 04:37:16 localhost podman[162575]: 2025-12-06 09:37:16.423007399 +0000 UTC m=+0.077875690 container health_status 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 6 04:37:16 localhost podman[162575]: 2025-12-06 09:37:16.433197534 +0000 UTC m=+0.088065835 container exec_died 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 6 04:37:16 localhost systemd[1]: 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.service: Deactivated successfully.
Dec 6 04:37:16 localhost sshd[162593]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:37:16 localhost python3.9[162574]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 04:37:17 localhost python3.9[162686]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 04:37:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48156 DF PROTO=TCP SPT=34958 DPT=9101 SEQ=2103639703 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D24FEF0000000001030307)
Dec 6 04:37:17 localhost python3.9[162778]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 04:37:18 localhost python3.9[162870]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 04:37:19 localhost python3.9[162962]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 04:37:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64689 DF PROTO=TCP SPT=59664 DPT=9105 SEQ=313070873 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D259300000000001030307)
Dec 6 04:37:19 localhost python3.9[163054]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 04:37:20 localhost python3.9[163146]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 04:37:21 localhost python3.9[163238]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 04:37:21 localhost sshd[163298]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:37:21 localhost python3.9[163332]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 04:37:22 localhost python3.9[163424]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 04:37:23 localhost python3.9[163516]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012 systemctl disable --now certmonger.service#012 test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 6 04:37:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64690 DF PROTO=TCP SPT=59664 DPT=9105 SEQ=313070873 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D268EF0000000001030307)
Dec 6 04:37:23 localhost python3.9[163608]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 6 04:37:24 localhost python3.9[163700]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 6 04:37:24 localhost systemd[1]: Reloading.
Dec 6 04:37:24 localhost systemd-rc-local-generator[163728]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 6 04:37:24 localhost systemd-sysv-generator[163732]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 6 04:37:24 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 6 04:37:26 localhost python3.9[163828]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 6 04:37:26 localhost python3.9[163921]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 6 04:37:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21093 DF PROTO=TCP SPT=53914 DPT=9882 SEQ=907222399 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D275AF0000000001030307)
Dec 6 04:37:27 localhost python3.9[164014]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 6 04:37:29 localhost python3.9[164107]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 6 04:37:29 localhost python3.9[164200]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 6 04:37:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48158 DF PROTO=TCP SPT=34958 DPT=9101 SEQ=2103639703 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D27FF00000000001030307)
Dec 6 04:37:30 localhost python3.9[164293]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 6 04:37:30 localhost python3.9[164386]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 6 04:37:31 localhost python3.9[164479]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Dec 6 04:37:32 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64691 DF PROTO=TCP SPT=59664 DPT=9105 SEQ=313070873 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D289F00000000001030307)
Dec 6 04:37:32 localhost python3.9[164572]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 6 04:37:33 localhost python3.9[164670]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005548789.localdomain update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec 6 04:37:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23207 DF PROTO=TCP SPT=56162 DPT=9102 SEQ=3353457552 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D293B00000000001030307)
Dec 6 04:37:34 localhost python3.9[164770]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 6 04:37:35 localhost python3.9[164824]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 6 04:37:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31755 DF PROTO=TCP SPT=34236 DPT=9102 SEQ=1521893675 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D29FF00000000001030307)
Dec 6 04:37:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23209 DF PROTO=TCP SPT=56162 DPT=9102 SEQ=3353457552 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D2AB6F0000000001030307)
Dec 6 04:37:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19439 DF PROTO=TCP SPT=44680 DPT=9101 SEQ=343523345 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D2B9210000000001030307)
Dec 6 04:37:44 localhost sshd[164896]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:37:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.
Dec 6 04:37:45 localhost podman[164898]: 2025-12-06 09:37:45.930389365 +0000 UTC m=+0.085342022 container health_status 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 6 04:37:45 localhost systemd[1]: tmp-crun.tOUMtJ.mount: Deactivated successfully.
Dec 6 04:37:46 localhost podman[164898]: 2025-12-06 09:37:46.012392329 +0000 UTC m=+0.167344936 container exec_died 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller) Dec 6 04:37:46 localhost systemd[1]: 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.service: Deactivated successfully. Dec 6 04:37:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999. 
Dec 6 04:37:46 localhost podman[164922]: 2025-12-06 09:37:46.917532343 +0000 UTC m=+0.078269627 container health_status 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible) Dec 6 04:37:46 localhost podman[164922]: 2025-12-06 09:37:46.920869089 +0000 UTC 
m=+0.081606413 container exec_died 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 6 04:37:46 localhost systemd[1]: 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.service: Deactivated successfully. 
Dec 6 04:37:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:37:47.267 160509 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 04:37:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:37:47.267 160509 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 04:37:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:37:47.269 160509 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 04:37:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19441 DF PROTO=TCP SPT=44680 DPT=9101 SEQ=343523345 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D2C52F0000000001030307) Dec 6 04:37:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21272 DF PROTO=TCP SPT=45938 DPT=9105 SEQ=4088601586 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D2CE300000000001030307) Dec 6 04:37:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21273 DF PROTO=TCP SPT=45938 DPT=9105 SEQ=4088601586 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT 
(020405500402080A52D2DDEF0000000001030307) Dec 6 04:37:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21096 DF PROTO=TCP SPT=53914 DPT=9882 SEQ=907222399 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D2E5EF0000000001030307) Dec 6 04:37:55 localhost sshd[165046]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:37:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19443 DF PROTO=TCP SPT=44680 DPT=9101 SEQ=343523345 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D2F5EF0000000001030307) Dec 6 04:38:01 localhost kernel: SELinux: Converting 2759 SID table entries... Dec 6 04:38:01 localhost kernel: SELinux: Context system_u:object_r:insights_client_cache_t:s0 became invalid (unmapped). Dec 6 04:38:01 localhost kernel: SELinux: policy capability network_peer_controls=1 Dec 6 04:38:01 localhost kernel: SELinux: policy capability open_perms=1 Dec 6 04:38:01 localhost kernel: SELinux: policy capability extended_socket_class=1 Dec 6 04:38:01 localhost kernel: SELinux: policy capability always_check_network=0 Dec 6 04:38:01 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Dec 6 04:38:01 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 6 04:38:01 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Dec 6 04:38:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21274 DF PROTO=TCP SPT=45938 DPT=9105 SEQ=4088601586 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D2FDEF0000000001030307) Dec 6 04:38:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 
MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33932 DF PROTO=TCP SPT=58314 DPT=9102 SEQ=3077742811 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D308EF0000000001030307) Dec 6 04:38:05 localhost sshd[166074]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:38:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64469 DF PROTO=TCP SPT=33104 DPT=9882 SEQ=1567678979 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D319F00000000001030307) Dec 6 04:38:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33934 DF PROTO=TCP SPT=58314 DPT=9102 SEQ=3077742811 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D320B00000000001030307) Dec 6 04:38:12 localhost kernel: SELinux: Converting 2762 SID table entries... 
Dec 6 04:38:12 localhost kernel: SELinux: policy capability network_peer_controls=1 Dec 6 04:38:12 localhost kernel: SELinux: policy capability open_perms=1 Dec 6 04:38:12 localhost kernel: SELinux: policy capability extended_socket_class=1 Dec 6 04:38:12 localhost kernel: SELinux: policy capability always_check_network=0 Dec 6 04:38:12 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Dec 6 04:38:12 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 6 04:38:12 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Dec 6 04:38:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26469 DF PROTO=TCP SPT=35112 DPT=9101 SEQ=2682104277 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D32E500000000001030307) Dec 6 04:38:16 localhost dbus-broker-launch[756]: avc: op=load_policy lsm=selinux seqno=20 res=1 Dec 6 04:38:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5. 
Dec 6 04:38:16 localhost podman[166084]: 2025-12-06 09:38:16.936298685 +0000 UTC m=+0.089097530 container health_status 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true) Dec 6 04:38:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999. 
Dec 6 04:38:16 localhost podman[166084]: 2025-12-06 09:38:16.99219916 +0000 UTC m=+0.144998005 container exec_died 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.license=GPLv2) Dec 6 04:38:17 localhost systemd[1]: 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.service: Deactivated successfully. 
Dec 6 04:38:17 localhost podman[166109]: 2025-12-06 09:38:17.065700278 +0000 UTC m=+0.075147456 container health_status 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Dec 6 04:38:17 localhost podman[166109]: 2025-12-06 09:38:17.074518154 +0000 UTC 
m=+0.083965322 container exec_died 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Dec 6 04:38:17 localhost systemd[1]: 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.service: Deactivated successfully. 
Dec 6 04:38:17 localhost sshd[166127]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:38:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26471 DF PROTO=TCP SPT=35112 DPT=9101 SEQ=2682104277 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D33A700000000001030307) Dec 6 04:38:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55750 DF PROTO=TCP SPT=39902 DPT=9105 SEQ=3210868471 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D3436F0000000001030307) Dec 6 04:38:20 localhost sshd[166129]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:38:20 localhost kernel: SELinux: Converting 2762 SID table entries... Dec 6 04:38:20 localhost kernel: SELinux: policy capability network_peer_controls=1 Dec 6 04:38:20 localhost kernel: SELinux: policy capability open_perms=1 Dec 6 04:38:20 localhost kernel: SELinux: policy capability extended_socket_class=1 Dec 6 04:38:20 localhost kernel: SELinux: policy capability always_check_network=0 Dec 6 04:38:20 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Dec 6 04:38:20 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 6 04:38:20 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Dec 6 04:38:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55751 DF PROTO=TCP SPT=39902 DPT=9105 SEQ=3210868471 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D353300000000001030307) Dec 6 04:38:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 
PREC=0x00 TTL=62 ID=17110 DF PROTO=TCP SPT=54638 DPT=9882 SEQ=1117969624 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D35FEF0000000001030307) Dec 6 04:38:28 localhost kernel: SELinux: Converting 2762 SID table entries... Dec 6 04:38:28 localhost kernel: SELinux: policy capability network_peer_controls=1 Dec 6 04:38:28 localhost kernel: SELinux: policy capability open_perms=1 Dec 6 04:38:28 localhost kernel: SELinux: policy capability extended_socket_class=1 Dec 6 04:38:28 localhost kernel: SELinux: policy capability always_check_network=0 Dec 6 04:38:28 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Dec 6 04:38:28 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 6 04:38:28 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Dec 6 04:38:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26473 DF PROTO=TCP SPT=35112 DPT=9101 SEQ=2682104277 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D369EF0000000001030307) Dec 6 04:38:32 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55752 DF PROTO=TCP SPT=39902 DPT=9105 SEQ=3210868471 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D373F00000000001030307) Dec 6 04:38:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23225 DF PROTO=TCP SPT=48930 DPT=9102 SEQ=3124958974 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D37E2F0000000001030307) Dec 6 04:38:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 
PREC=0x00 TTL=62 ID=23212 DF PROTO=TCP SPT=56162 DPT=9102 SEQ=3353457552 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D389EF0000000001030307) Dec 6 04:38:38 localhost kernel: SELinux: Converting 2762 SID table entries... Dec 6 04:38:38 localhost kernel: SELinux: policy capability network_peer_controls=1 Dec 6 04:38:38 localhost kernel: SELinux: policy capability open_perms=1 Dec 6 04:38:38 localhost kernel: SELinux: policy capability extended_socket_class=1 Dec 6 04:38:38 localhost kernel: SELinux: policy capability always_check_network=0 Dec 6 04:38:38 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Dec 6 04:38:38 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 6 04:38:38 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Dec 6 04:38:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23227 DF PROTO=TCP SPT=48930 DPT=9102 SEQ=3124958974 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D395F00000000001030307) Dec 6 04:38:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23203 DF PROTO=TCP SPT=32928 DPT=9101 SEQ=3537228615 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D3A3800000000001030307) Dec 6 04:38:47 localhost kernel: SELinux: Converting 2762 SID table entries... 
Dec 6 04:38:47 localhost kernel: SELinux: policy capability network_peer_controls=1 Dec 6 04:38:47 localhost kernel: SELinux: policy capability open_perms=1 Dec 6 04:38:47 localhost kernel: SELinux: policy capability extended_socket_class=1 Dec 6 04:38:47 localhost kernel: SELinux: policy capability always_check_network=0 Dec 6 04:38:47 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Dec 6 04:38:47 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 6 04:38:47 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Dec 6 04:38:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:38:47.268 160509 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 04:38:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:38:47.270 160509 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 04:38:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:38:47.271 160509 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 04:38:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23205 DF PROTO=TCP SPT=32928 DPT=9101 SEQ=3537228615 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D3AF6F0000000001030307) Dec 6 04:38:47 localhost dbus-broker-launch[756]: avc: op=load_policy lsm=selinux seqno=24 
res=1 Dec 6 04:38:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5. Dec 6 04:38:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999. Dec 6 04:38:47 localhost systemd[1]: Reloading. Dec 6 04:38:47 localhost podman[166176]: 2025-12-06 09:38:47.818794454 +0000 UTC m=+0.100096932 container health_status 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible) Dec 6 04:38:47 localhost systemd-sysv-generator[166230]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. 
Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 6 04:38:47 localhost systemd-rc-local-generator[166224]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 6 04:38:47 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 6 04:38:47 localhost podman[166176]: 2025-12-06 09:38:47.893066326 +0000 UTC m=+0.174368794 container exec_died 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0) Dec 6 04:38:47 localhost podman[166177]: 2025-12-06 09:38:47.906394714 +0000 
UTC m=+0.186526066 container health_status 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Dec 6 04:38:47 localhost podman[166177]: 2025-12-06 09:38:47.91510867 +0000 UTC m=+0.195239972 container exec_died 
5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Dec 6 04:38:47 localhost systemd[1]: 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.service: Deactivated successfully. 
Dec 6 04:38:47 localhost systemd[1]: 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.service: Deactivated successfully. Dec 6 04:38:48 localhost systemd[1]: Reloading. Dec 6 04:38:48 localhost systemd-sysv-generator[166280]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 6 04:38:48 localhost systemd-rc-local-generator[166275]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 6 04:38:48 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 6 04:38:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29804 DF PROTO=TCP SPT=48604 DPT=9105 SEQ=983361846 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D3B8B00000000001030307) Dec 6 04:38:51 localhost sshd[166365]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:38:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29805 DF PROTO=TCP SPT=48604 DPT=9105 SEQ=983361846 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D3C86F0000000001030307) Dec 6 04:38:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17113 DF PROTO=TCP SPT=54638 DPT=9882 SEQ=1117969624 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D3CFEF0000000001030307) Dec 6 04:38:55 localhost sshd[166385]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:38:56 localhost sshd[166387]: main: 
sshd: ssh-rsa algorithm is disabled Dec 6 04:38:57 localhost kernel: SELinux: Converting 2763 SID table entries... Dec 6 04:38:57 localhost kernel: SELinux: policy capability network_peer_controls=1 Dec 6 04:38:57 localhost kernel: SELinux: policy capability open_perms=1 Dec 6 04:38:57 localhost kernel: SELinux: policy capability extended_socket_class=1 Dec 6 04:38:57 localhost kernel: SELinux: policy capability always_check_network=0 Dec 6 04:38:57 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Dec 6 04:38:57 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 6 04:38:57 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Dec 6 04:38:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23207 DF PROTO=TCP SPT=32928 DPT=9101 SEQ=3537228615 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D3DFEF0000000001030307) Dec 6 04:39:01 localhost dbus-broker-launch[752]: Noticed file-system modification, trigger reload. 
Dec 6 04:39:01 localhost dbus-broker-launch[756]: avc: op=load_policy lsm=selinux seqno=25 res=1 Dec 6 04:39:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29806 DF PROTO=TCP SPT=48604 DPT=9105 SEQ=983361846 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D3E7F00000000001030307) Dec 6 04:39:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45655 DF PROTO=TCP SPT=38490 DPT=9102 SEQ=4244599726 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D3F32F0000000001030307) Dec 6 04:39:04 localhost sshd[166458]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:39:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33937 DF PROTO=TCP SPT=58314 DPT=9102 SEQ=3077742811 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D3FFEF0000000001030307) Dec 6 04:39:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45657 DF PROTO=TCP SPT=38490 DPT=9102 SEQ=4244599726 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D40AEF0000000001030307) Dec 6 04:39:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4511 DF PROTO=TCP SPT=58514 DPT=9101 SEQ=826127641 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D418B00000000001030307) Dec 6 04:39:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 
DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4513 DF PROTO=TCP SPT=58514 DPT=9101 SEQ=826127641 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D424AF0000000001030307) Dec 6 04:39:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5. Dec 6 04:39:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999. Dec 6 04:39:18 localhost podman[169320]: 2025-12-06 09:39:18.948958921 +0000 UTC m=+0.100492564 container health_status 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_metadata_agent) Dec 6 04:39:18 localhost podman[169320]: 2025-12-06 09:39:18.98097224 +0000 UTC m=+0.132505883 container exec_died 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Dec 6 04:39:18 localhost systemd[1]: 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.service: Deactivated successfully. Dec 6 04:39:19 localhost podman[169316]: 2025-12-06 09:39:19.032132865 +0000 UTC m=+0.183773341 container health_status 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS) Dec 6 04:39:19 localhost 
podman[169316]: 2025-12-06 09:39:19.070715195 +0000 UTC m=+0.222355671 container exec_died 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller) Dec 6 04:39:19 localhost systemd[1]: 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.service: Deactivated successfully. 
Dec 6 04:39:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56496 DF PROTO=TCP SPT=51976 DPT=9105 SEQ=3791762787 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D42DEF0000000001030307) Dec 6 04:39:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56497 DF PROTO=TCP SPT=51976 DPT=9105 SEQ=3791762787 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D43DAF0000000001030307) Dec 6 04:39:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39788 DF PROTO=TCP SPT=34382 DPT=9882 SEQ=1781381716 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D445EF0000000001030307) Dec 6 04:39:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4515 DF PROTO=TCP SPT=58514 DPT=9101 SEQ=826127641 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D453EF0000000001030307) Dec 6 04:39:29 localhost sshd[177364]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:39:31 localhost sshd[178678]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:39:32 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56498 DF PROTO=TCP SPT=51976 DPT=9105 SEQ=3791762787 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D45DEF0000000001030307) Dec 6 04:39:33 localhost sshd[180375]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:39:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 
MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56639 DF PROTO=TCP SPT=36000 DPT=9102 SEQ=1210804314 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D4686F0000000001030307) Dec 6 04:39:35 localhost sshd[181673]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:39:37 localhost sshd[183296]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:39:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23230 DF PROTO=TCP SPT=48930 DPT=9102 SEQ=3124958974 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D473F00000000001030307) Dec 6 04:39:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56641 DF PROTO=TCP SPT=36000 DPT=9102 SEQ=1210804314 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D4802F0000000001030307) Dec 6 04:39:42 localhost sshd[183690]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:39:43 localhost systemd[1]: Stopping OpenSSH server daemon... Dec 6 04:39:43 localhost systemd[1]: sshd.service: Deactivated successfully. Dec 6 04:39:43 localhost systemd[1]: sshd.service: Unit process 183690 (sshd) remains running after unit stopped. Dec 6 04:39:43 localhost systemd[1]: sshd.service: Unit process 183701 (sshd) remains running after unit stopped. Dec 6 04:39:43 localhost systemd[1]: Stopped OpenSSH server daemon. Dec 6 04:39:43 localhost systemd[1]: sshd.service: Consumed 8.154s CPU time, read 32.0K from disk, written 92.0K to disk. Dec 6 04:39:43 localhost systemd[1]: Stopped target sshd-keygen.target. Dec 6 04:39:43 localhost systemd[1]: Stopping sshd-keygen.target... 
Dec 6 04:39:43 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). Dec 6 04:39:43 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). Dec 6 04:39:43 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). Dec 6 04:39:43 localhost systemd[1]: Reached target sshd-keygen.target. Dec 6 04:39:43 localhost systemd[1]: Starting OpenSSH server daemon... Dec 6 04:39:43 localhost sshd[184255]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:39:43 localhost systemd[1]: Started OpenSSH server daemon. Dec 6 04:39:44 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 6 04:39:44 localhost systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload Dec 6 04:39:44 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 6 04:39:44 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:39:44 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:39:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49590 DF PROTO=TCP SPT=55096 DPT=9101 SEQ=578874230 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D48DE00000000001030307) Dec 6 04:39:44 localhost 
systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:39:44 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:39:44 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:39:44 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:39:44 localhost sshd[184362]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:39:44 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 6 04:39:44 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:39:44 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:39:45 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:39:45 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:39:45 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:39:45 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:39:46 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Dec 6 04:39:46 localhost systemd[1]: Starting man-db-cache-update.service... Dec 6 04:39:46 localhost systemd[1]: Reloading. Dec 6 04:39:46 localhost systemd-sysv-generator[184498]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. 
Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 6 04:39:46 localhost systemd-rc-local-generator[184494]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 6 04:39:46 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 6 04:39:46 localhost systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload Dec 6 04:39:46 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 6 04:39:46 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 6 04:39:46 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:39:46 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:39:46 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:39:46 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:39:46 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:39:46 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:39:46 localhost systemd[1]: Queuing reload/restart jobs for marked units… Dec 6 04:39:46 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. 
Dec 6 04:39:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:39:47.269 160509 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 04:39:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:39:47.270 160509 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 04:39:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:39:47.271 160509 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 04:39:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49592 DF PROTO=TCP SPT=55096 DPT=9101 SEQ=578874230 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D499EF0000000001030307) Dec 6 04:39:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999. Dec 6 04:39:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5. 
Dec 6 04:39:49 localhost podman[188969]: 2025-12-06 09:39:49.178934202 +0000 UTC m=+0.086720285 container health_status 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, io.buildah.version=1.41.3) Dec 6 04:39:49 localhost podman[188969]: 2025-12-06 09:39:49.20885367 +0000 UTC 
m=+0.116639683 container exec_died 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Dec 6 04:39:49 localhost systemd[1]: 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.service: Deactivated successfully. 
Dec 6 04:39:49 localhost podman[189087]: 2025-12-06 09:39:49.296248534 +0000 UTC m=+0.116786618 container health_status 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true) Dec 6 04:39:49 localhost podman[189087]: 2025-12-06 09:39:49.330261484 +0000 UTC m=+0.150799588 container exec_died 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125) Dec 6 04:39:49 localhost systemd[1]: 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.service: Deactivated successfully. Dec 6 04:39:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4181 DF PROTO=TCP SPT=40810 DPT=9105 SEQ=2956413822 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D4A2EF0000000001030307) Dec 6 04:39:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4182 DF PROTO=TCP SPT=40810 DPT=9105 SEQ=2956413822 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D4B2AF0000000001030307) Dec 6 04:39:56 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully. Dec 6 04:39:56 localhost systemd[1]: Finished man-db-cache-update.service. Dec 6 04:39:56 localhost systemd[1]: man-db-cache-update.service: Consumed 12.693s CPU time. 
Dec 6 04:39:56 localhost systemd[1]: run-r58d9d66a043744a1868bfd422b592b96.service: Deactivated successfully.
Dec 6 04:39:56 localhost systemd[1]: run-r5671576059fd4d47ade689acb4dab74f.service: Deactivated successfully.
Dec 6 04:39:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41395 DF PROTO=TCP SPT=56308 DPT=9882 SEQ=2296920758 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D4BF6F0000000001030307)
Dec 6 04:39:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49594 DF PROTO=TCP SPT=55096 DPT=9101 SEQ=578874230 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D4C9EF0000000001030307)
Dec 6 04:40:02 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4183 DF PROTO=TCP SPT=40810 DPT=9105 SEQ=2956413822 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D4D3EF0000000001030307)
Dec 6 04:40:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30788 DF PROTO=TCP SPT=40024 DPT=9102 SEQ=2904579337 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D4DDAF0000000001030307)
Dec 6 04:40:04 localhost python3.9[193205]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 6 04:40:04 localhost systemd[1]: Reloading.
Dec 6 04:40:05 localhost systemd-rc-local-generator[193228]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 6 04:40:05 localhost systemd-sysv-generator[193233]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 6 04:40:05 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 6 04:40:05 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 6 04:40:05 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 6 04:40:05 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:40:05 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:40:05 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:40:05 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:40:05 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:40:05 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:40:06 localhost python3.9[193354]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 6 04:40:06 localhost systemd[1]: Reloading.
Dec 6 04:40:06 localhost systemd-rc-local-generator[193381]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 6 04:40:06 localhost systemd-sysv-generator[193387]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 6 04:40:06 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 6 04:40:06 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 6 04:40:06 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 6 04:40:06 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:40:06 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:40:06 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:40:06 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:40:06 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:40:06 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:40:07 localhost python3.9[193503]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 6 04:40:07 localhost systemd[1]: Reloading.
Dec 6 04:40:07 localhost systemd-rc-local-generator[193532]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 6 04:40:07 localhost systemd-sysv-generator[193537]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 6 04:40:07 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 6 04:40:07 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 6 04:40:07 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 6 04:40:07 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:40:07 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:40:07 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:40:07 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:40:07 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:40:07 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:40:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45660 DF PROTO=TCP SPT=38490 DPT=9102 SEQ=4244599726 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D4E9EF0000000001030307)
Dec 6 04:40:08 localhost python3.9[193652]:
ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 6 04:40:08 localhost systemd[1]: Reloading.
Dec 6 04:40:08 localhost systemd-rc-local-generator[193677]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 6 04:40:08 localhost systemd-sysv-generator[193682]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 6 04:40:09 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 6 04:40:09 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 6 04:40:09 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 6 04:40:09 localhost ceph-osd[31726]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 6 04:40:09 localhost ceph-osd[31726]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6000.1 total, 600.0 interval#012Cumulative writes: 5761 writes, 25K keys, 5761 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5761 writes, 760 syncs, 7.58 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB)
Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 2/0 2.61 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.005 0 0 0.0 0.0#012 Sum 2/0 2.61 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.005 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.2 0.01 0.00 1 0.005 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55d1180b62d0#2 capacity: 1.62 
GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 6.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-0] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 
level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55d1180b62d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 6.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-1] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 
0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_sl
Dec 6 04:40:09 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:40:09 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:40:09 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:40:09 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:40:09 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:40:09 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:40:09 localhost python3.9[193801]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 6 04:40:10 localhost systemd[1]: Reloading.
Dec 6 04:40:10 localhost systemd-sysv-generator[193830]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 6 04:40:10 localhost systemd-rc-local-generator[193827]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 6 04:40:10 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 6 04:40:10 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 6 04:40:10 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 6 04:40:10 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:40:10 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:40:10 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:40:10 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:40:10 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:40:10 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:40:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30790 DF PROTO=TCP SPT=40024 DPT=9102 SEQ=2904579337 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D4F56F0000000001030307)
Dec 6 04:40:11 localhost python3.9[193950]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 6 04:40:11 localhost systemd[1]: Reloading.
Dec 6 04:40:11 localhost systemd-rc-local-generator[193979]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 6 04:40:11 localhost systemd-sysv-generator[193983]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 6 04:40:11 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 6 04:40:11 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:40:11 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 6 04:40:11 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 6 04:40:11 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:40:11 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:40:11 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:40:11 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:40:11 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:40:12 localhost python3.9[194098]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 6 04:40:12 localhost systemd[1]: Reloading.
Dec 6 04:40:12 localhost systemd-rc-local-generator[194127]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 6 04:40:12 localhost systemd-sysv-generator[194131]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 6 04:40:12 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 6 04:40:12 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:40:12 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:40:12 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 6 04:40:12 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 6 04:40:12 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:40:12 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:40:12 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:40:12 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:40:12 localhost ceph-osd[32665]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 6 04:40:12 localhost ceph-osd[32665]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6000.2 total, 600.0 interval#012Cumulative writes: 4879 writes, 21K keys, 4879 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 4879 writes, 669 syncs, 7.29 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 2/0 2.61 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012 Sum 2/0 2.61 KB
0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.2 0.01 0.00 1 0.006 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.2 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55e1468ea2d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level Files Size 
Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-0] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.2 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55e1468ea2d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.6e-05 secs_since: 
0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-1] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.2 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for 
pending_compaction_bytes, 0 memtable_compaction, 0 memtable_sl Dec 6 04:40:13 localhost python3.9[194246]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Dec 6 04:40:14 localhost python3.9[194359]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Dec 6 04:40:14 localhost systemd[1]: Reloading. Dec 6 04:40:14 localhost systemd-rc-local-generator[194391]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 6 04:40:14 localhost systemd-sysv-generator[194394]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 6 04:40:14 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:40:14 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 6 04:40:14 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:40:14 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:40:14 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Dec 6 04:40:14 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 6 04:40:14 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:40:14 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:40:14 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:40:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22094 DF PROTO=TCP SPT=54726 DPT=9101 SEQ=1750144204 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D503100000000001030307) Dec 6 04:40:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22096 DF PROTO=TCP SPT=54726 DPT=9101 SEQ=1750144204 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D50F2F0000000001030307) Dec 6 04:40:18 localhost python3.9[194507]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None Dec 6 04:40:18 localhost systemd[1]: Reloading. Dec 6 04:40:18 localhost systemd-sysv-generator[194540]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 6 04:40:18 localhost systemd-rc-local-generator[194537]: /etc/rc.d/rc.local is not marked executable, skipping. 
Dec 6 04:40:18 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:40:18 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 6 04:40:18 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:40:18 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:40:18 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 6 04:40:18 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 6 04:40:18 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:40:18 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:40:18 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:40:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32175 DF PROTO=TCP SPT=45532 DPT=9105 SEQ=4017140069 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D518300000000001030307) Dec 6 04:40:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5. Dec 6 04:40:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999. 
Dec 6 04:40:19 localhost podman[194613]: 2025-12-06 09:40:19.917653922 +0000 UTC m=+0.072830403 container health_status 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3) Dec 6 04:40:19 localhost podman[194613]: 2025-12-06 09:40:19.956200395 +0000 UTC m=+0.111376836 container exec_died 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 04:40:19 localhost systemd[1]: 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.service: Deactivated successfully. 
Dec 6 04:40:20 localhost podman[194617]: 2025-12-06 09:40:19.966157461 +0000 UTC m=+0.119309403 container health_status 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent) Dec 6 04:40:20 localhost podman[194617]: 2025-12-06 09:40:20.05001307 +0000 UTC 
m=+0.203165012 container exec_died 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent) Dec 6 04:40:20 localhost systemd[1]: 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.service: Deactivated successfully. 
Dec 6 04:40:20 localhost python3.9[194700]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Dec 6 04:40:21 localhost python3.9[194813]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Dec 6 04:40:22 localhost sshd[194927]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:40:22 localhost python3.9[194926]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Dec 6 04:40:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32176 DF PROTO=TCP SPT=45532 DPT=9105 SEQ=4017140069 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D527EF0000000001030307) Dec 6 04:40:23 localhost python3.9[195041]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Dec 6 04:40:25 localhost python3.9[195154]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Dec 6 04:40:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41398 DF PROTO=TCP SPT=56308 DPT=9882 SEQ=2296920758 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D52FEF0000000001030307) Dec 6 04:40:26 localhost sshd[195268]: main: sshd: ssh-rsa algorithm is disabled Dec 6 
04:40:26 localhost python3.9[195267]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Dec 6 04:40:28 localhost python3.9[195382]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Dec 6 04:40:29 localhost python3.9[195495]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Dec 6 04:40:29 localhost python3.9[195608]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Dec 6 04:40:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22098 DF PROTO=TCP SPT=54726 DPT=9101 SEQ=1750144204 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D53FF00000000001030307) Dec 6 04:40:30 localhost python3.9[195721]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Dec 6 04:40:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32177 DF PROTO=TCP SPT=45532 DPT=9105 SEQ=4017140069 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D547EF0000000001030307) Dec 6 04:40:32 localhost python3.9[195834]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False 
scope=system no_block=False state=None force=None Dec 6 04:40:32 localhost python3.9[195947]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Dec 6 04:40:33 localhost python3.9[196060]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Dec 6 04:40:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=179 DF PROTO=TCP SPT=45314 DPT=9102 SEQ=2710989866 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D552EF0000000001030307) Dec 6 04:40:34 localhost python3.9[196173]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Dec 6 04:40:35 localhost python3.9[196286]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Dec 6 04:40:36 localhost sshd[196397]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:40:36 localhost python3.9[196396]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None 
access_time=None mode=None seuser=None serole=None selevel=None attributes=None Dec 6 04:40:37 localhost python3.9[196508]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 6 04:40:37 localhost python3.9[196618]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 6 04:40:38 localhost python3.9[196728]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 6 04:40:38 localhost python3.9[196838]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Dec 6 04:40:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 
MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42484 DF PROTO=TCP SPT=59730 DPT=9882 SEQ=179938619 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D563EF0000000001030307) Dec 6 04:40:39 localhost python3.9[196948]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:40:40 localhost python3.9[197038]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765014039.3518567-1644-243531893126623/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:40:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=181 DF PROTO=TCP SPT=45314 DPT=9102 SEQ=2710989866 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D56AB00000000001030307) Dec 6 04:40:41 localhost python3.9[197149]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:40:41 localhost python3.9[197239]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765014040.800713-1644-215500618630509/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True 
remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:40:42 localhost python3.9[197349]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:40:42 localhost python3.9[197439]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765014041.8903732-1644-47980772668363/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:40:43 localhost python3.9[197549]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:40:44 localhost python3.9[197639]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765014043.0292294-1644-272244161885259/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:40:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45037 DF 
PROTO=TCP SPT=47250 DPT=9101 SEQ=4230461905 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D578410000000001030307) Dec 6 04:40:44 localhost python3.9[197749]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:40:45 localhost python3.9[197839]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765014044.169851-1644-249129172537316/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=8d9b2057482987a531d808ceb2ac4bc7d43bf17c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:40:45 localhost python3.9[197949]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:40:46 localhost python3.9[198039]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765014045.3556237-1644-109866090232975/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:40:46 localhost python3.9[198149]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:40:47 localhost 
ovn_metadata_agent[160504]: 2025-12-06 09:40:47.270 160509 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 04:40:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:40:47.271 160509 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 04:40:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:40:47.272 160509 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 04:40:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45039 DF PROTO=TCP SPT=47250 DPT=9101 SEQ=4230461905 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D5842F0000000001030307) Dec 6 04:40:47 localhost python3.9[198237]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765014046.500331-1644-144119703668803/.source.conf follow=False _original_basename=auth.conf checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:40:48 localhost python3.9[198347]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True 
get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:40:48 localhost python3.9[198437]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765014047.5667634-1644-162433883724825/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:40:49 localhost python3.9[198547]: ansible-ansible.builtin.file Invoked with path=/etc/libvirt/passwd.db state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:40:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62226 DF PROTO=TCP SPT=60330 DPT=9105 SEQ=1203010640 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D58D700000000001030307) Dec 6 04:40:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5. Dec 6 04:40:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999. 
Dec 6 04:40:50 localhost podman[198659]: 2025-12-06 09:40:50.236566766 +0000 UTC m=+0.079256505 container health_status 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_managed=true) Dec 6 04:40:50 localhost podman[198659]: 2025-12-06 09:40:50.272157211 +0000 UTC 
m=+0.114846950 container exec_died 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Dec 6 04:40:50 localhost podman[198657]: 2025-12-06 09:40:50.282191017 +0000 UTC m=+0.125043911 container health_status 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 
(image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Dec 6 04:40:50 localhost systemd[1]: 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.service: Deactivated successfully. 
Dec 6 04:40:50 localhost podman[198657]: 2025-12-06 09:40:50.321109842 +0000 UTC m=+0.163962796 container exec_died 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 04:40:50 localhost systemd[1]: 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.service: Deactivated successfully. 
Dec 6 04:40:50 localhost python3.9[198658]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:40:51 localhost python3.9[198808]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:40:51 localhost python3.9[198918]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:40:52 localhost python3.9[199028]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:40:52 localhost python3.9[199138]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root 
path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:40:53 localhost python3.9[199264]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:40:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62227 DF PROTO=TCP SPT=60330 DPT=9105 SEQ=1203010640 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D59D2F0000000001030307) Dec 6 04:40:54 localhost python3.9[199412]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:40:54 localhost python3.9[199536]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None 
modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:40:55 localhost python3.9[199664]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:40:56 localhost python3.9[199774]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:40:56 localhost python3.9[199884]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:40:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27768 DF PROTO=TCP SPT=56404 DPT=9882 SEQ=532615157 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D5A9EF0000000001030307) Dec 6 04:40:57 localhost python3.9[199994]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d 
state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:40:57 localhost python3.9[200104]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:40:59 localhost python3.9[200214]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:40:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45041 DF PROTO=TCP SPT=47250 DPT=9101 SEQ=4230461905 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D5B3EF0000000001030307) Dec 6 04:40:59 localhost python3.9[200324]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:41:02 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 
DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62228 DF PROTO=TCP SPT=60330 DPT=9105 SEQ=1203010640 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D5BDF00000000001030307) Dec 6 04:41:02 localhost python3.9[200412]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765014059.3268197-2307-260790214299972/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:41:03 localhost python3.9[200522]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:41:03 localhost python3.9[200610]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765014062.7543066-2307-279831913583817/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:41:04 localhost python3.9[200720]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:41:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 
MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56345 DF PROTO=TCP SPT=41422 DPT=9102 SEQ=3406258533 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D5C8300000000001030307) Dec 6 04:41:04 localhost python3.9[200808]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765014063.9284487-2307-91829065378216/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:41:05 localhost python3.9[200918]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:41:06 localhost python3.9[201006]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765014065.1403632-2307-145135552790145/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:41:06 localhost python3.9[201116]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:41:07 
localhost python3.9[201204]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765014066.3170414-2307-152918529932135/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:41:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30793 DF PROTO=TCP SPT=40024 DPT=9102 SEQ=2904579337 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D5D3EF0000000001030307) Dec 6 04:41:07 localhost python3.9[201314]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:41:08 localhost python3.9[201402]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765014067.408739-2307-265050007237985/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:41:08 localhost python3.9[201512]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True 
get_attributes=True get_selinux_context=False Dec 6 04:41:09 localhost python3.9[201600]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765014068.547441-2307-254833164115308/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:41:10 localhost python3.9[201710]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:41:10 localhost python3.9[201798]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765014069.665544-2307-257104182139347/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:41:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56347 DF PROTO=TCP SPT=41422 DPT=9102 SEQ=3406258533 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D5DFEF0000000001030307) Dec 6 04:41:11 localhost python3.9[201908]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False 
get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:41:11 localhost python3.9[201996]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765014070.751107-2307-225721217987632/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:41:12 localhost python3.9[202106]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:41:12 localhost python3.9[202194]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765014071.8027453-2307-51517089730812/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:41:12 localhost sshd[202195]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:41:13 localhost python3.9[202306]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:41:13 localhost python3.9[202394]: ansible-ansible.legacy.copy Invoked 
with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765014072.8748844-2307-15507973832661/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:41:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16008 DF PROTO=TCP SPT=48766 DPT=9101 SEQ=2535142758 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D5ED700000000001030307) Dec 6 04:41:14 localhost python3.9[202504]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:41:14 localhost python3.9[202592]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765014073.9188316-2307-209534536075623/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:41:15 localhost python3.9[202702]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:41:16 
localhost python3.9[202790]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765014075.0138829-2307-267883138721481/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:41:16 localhost python3.9[202900]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:41:17 localhost python3.9[202988]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765014076.3405855-2307-204485640268918/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:41:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16010 DF PROTO=TCP SPT=48766 DPT=9101 SEQ=2535142758 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D5F96F0000000001030307) Dec 6 04:41:18 localhost python3.9[203096]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ls -lRZ /run/libvirt | grep -E ':container_\S+_t'#012 _uses_shell=True expand_argument_vars=True 
stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 04:41:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50792 DF PROTO=TCP SPT=56146 DPT=9105 SEQ=2037736614 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D602AF0000000001030307) Dec 6 04:41:19 localhost python3.9[203209]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False Dec 6 04:41:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5. Dec 6 04:41:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999. Dec 6 04:41:20 localhost podman[203228]: 2025-12-06 09:41:20.949578013 +0000 UTC m=+0.106627260 container health_status 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 04:41:20 localhost podman[203227]: 2025-12-06 09:41:20.991031766 +0000 UTC m=+0.147244348 container health_status 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', 
'/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_controller) Dec 6 04:41:21 localhost podman[203227]: 2025-12-06 09:41:21.023118073 +0000 UTC m=+0.179330665 container exec_died 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, io.buildah.version=1.41.3) Dec 6 04:41:21 localhost systemd[1]: 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.service: Deactivated successfully. 
Dec 6 04:41:21 localhost podman[203228]: 2025-12-06 09:41:21.092179568 +0000 UTC m=+0.249228795 container exec_died 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 04:41:21 localhost systemd[1]: 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.service: 
Deactivated successfully. Dec 6 04:41:21 localhost python3.9[203361]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Dec 6 04:41:21 localhost systemd[1]: Reloading. Dec 6 04:41:21 localhost systemd-sysv-generator[203389]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 6 04:41:21 localhost systemd-rc-local-generator[203383]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 6 04:41:21 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:41:21 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 6 04:41:21 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:41:21 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:41:22 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Dec 6 04:41:22 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 6 04:41:22 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:41:22 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:41:22 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:41:22 localhost systemd[1]: Starting libvirt logging daemon socket... Dec 6 04:41:22 localhost systemd[1]: Listening on libvirt logging daemon socket. Dec 6 04:41:22 localhost systemd[1]: Starting libvirt logging daemon admin socket... Dec 6 04:41:22 localhost systemd[1]: Listening on libvirt logging daemon admin socket. Dec 6 04:41:22 localhost systemd[1]: Starting libvirt logging daemon... Dec 6 04:41:22 localhost systemd[1]: Started libvirt logging daemon. Dec 6 04:41:22 localhost sshd[203420]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:41:23 localhost python3.9[203514]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Dec 6 04:41:23 localhost systemd[1]: Reloading. Dec 6 04:41:23 localhost systemd-rc-local-generator[203541]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 6 04:41:23 localhost systemd-sysv-generator[203544]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Dec 6 04:41:23 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:41:23 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 6 04:41:23 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:41:23 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:41:23 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 6 04:41:23 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 6 04:41:23 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:41:23 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:41:23 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:41:23 localhost systemd[1]: Starting libvirt nodedev daemon socket... Dec 6 04:41:23 localhost systemd[1]: Listening on libvirt nodedev daemon socket. Dec 6 04:41:23 localhost systemd[1]: Starting libvirt nodedev daemon admin socket... Dec 6 04:41:23 localhost systemd[1]: Starting libvirt nodedev daemon read-only socket... Dec 6 04:41:23 localhost systemd[1]: Listening on libvirt nodedev daemon admin socket. Dec 6 04:41:23 localhost systemd[1]: Listening on libvirt nodedev daemon read-only socket. Dec 6 04:41:23 localhost systemd[1]: Started libvirt nodedev daemon. 
Dec 6 04:41:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50793 DF PROTO=TCP SPT=56146 DPT=9105 SEQ=2037736614 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D612700000000001030307) Dec 6 04:41:24 localhost python3.9[203688]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Dec 6 04:41:24 localhost systemd[1]: Reloading. Dec 6 04:41:24 localhost systemd-rc-local-generator[203712]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 6 04:41:24 localhost systemd-sysv-generator[203717]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 6 04:41:24 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:41:24 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 6 04:41:24 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:41:24 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:41:24 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
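The kernel `DROPPING:` entries in this log share a fixed `KEY=VALUE` layout (the nftables/iptables log-prefix format). A small parser can pull out the connection tuple for triage; this is a sketch, the sample line is truncated from the entry above, and `parse_drop` is a hypothetical helper, not part of any tool on the host:

```python
import re

# Truncated sample copied from the kernel DROPPING entry above.
LINE = ("DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 "
        "MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 "
        "TTL=62 ID=50793 DF PROTO=TCP SPT=56146 DPT=9105")

def parse_drop(line: str) -> dict:
    """Extract KEY=VALUE pairs; bare flags like DF and empty values like OUT= are skipped."""
    return dict(re.findall(r"(\w+)=(\S+)", line))

fields = parse_drop(LINE)
print(fields["SRC"], "->", fields["DST"], fields["PROTO"], "dpt", fields["DPT"])
```

Grouping these tuples across the log shows repeated SYNs from 192.168.122.10 to ports 9101/9102/9105/9882 being dropped on br-ex.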
Dec 6 04:41:24 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 6 04:41:24 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:41:24 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:41:24 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:41:24 localhost systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs... Dec 6 04:41:24 localhost systemd[1]: Starting libvirt proxy daemon socket... Dec 6 04:41:24 localhost systemd[1]: Listening on libvirt proxy daemon socket. Dec 6 04:41:24 localhost systemd[1]: Starting libvirt proxy daemon admin socket... Dec 6 04:41:24 localhost systemd[1]: Starting libvirt proxy daemon read-only socket... Dec 6 04:41:24 localhost systemd[1]: Listening on libvirt proxy daemon admin socket. Dec 6 04:41:24 localhost systemd[1]: Listening on libvirt proxy daemon read-only socket. Dec 6 04:41:24 localhost systemd[1]: Started libvirt proxy daemon. Dec 6 04:41:24 localhost systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs. Dec 6 04:41:24 localhost setroubleshoot[203725]: Deleting alert 58e2bb45-d8cf-42a0-b321-404a4f96b4c3, it is allowed in current policy Dec 6 04:41:24 localhost systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@1.service. Dec 6 04:41:25 localhost python3.9[203866]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Dec 6 04:41:25 localhost systemd[1]: Reloading. Dec 6 04:41:25 localhost systemd-rc-local-generator[203893]: /etc/rc.d/rc.local is not marked executable, skipping. 
Dec 6 04:41:25 localhost systemd-sysv-generator[203899]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 6 04:41:25 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:41:25 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 6 04:41:25 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:41:25 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:41:25 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 6 04:41:25 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 6 04:41:25 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:41:25 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:41:25 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:41:25 localhost systemd[1]: Listening on libvirt locking daemon socket. Dec 6 04:41:25 localhost systemd[1]: Starting libvirt QEMU daemon socket... Dec 6 04:41:25 localhost systemd[1]: Listening on libvirt QEMU daemon socket. Dec 6 04:41:25 localhost systemd[1]: Starting libvirt QEMU daemon admin socket... 
Dec 6 04:41:25 localhost systemd[1]: Starting libvirt QEMU daemon read-only socket... Dec 6 04:41:25 localhost systemd[1]: Listening on libvirt QEMU daemon admin socket. Dec 6 04:41:25 localhost systemd[1]: Listening on libvirt QEMU daemon read-only socket. Dec 6 04:41:25 localhost systemd[1]: Started libvirt QEMU daemon. Dec 6 04:41:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27771 DF PROTO=TCP SPT=56404 DPT=9882 SEQ=532615157 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D619EF0000000001030307) Dec 6 04:41:25 localhost setroubleshoot[203725]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l c13484d1-2fe1-4721-ae6c-59ffaec2470f Dec 6 04:41:25 localhost setroubleshoot[203725]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012***** Plugin dac_override (91.4 confidence) suggests **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. 
Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012***** Plugin catchall (9.59 confidence) suggests **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012 Dec 6 04:41:25 localhost setroubleshoot[203725]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l c13484d1-2fe1-4721-ae6c-59ffaec2470f Dec 6 04:41:25 localhost setroubleshoot[203725]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012***** Plugin dac_override (91.4 confidence) suggests **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. 
Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012***** Plugin catchall (9.59 confidence) suggests **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012 Dec 6 04:41:26 localhost python3.9[204051]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Dec 6 04:41:26 localhost systemd[1]: Reloading. Dec 6 04:41:26 localhost systemd-rc-local-generator[204085]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 6 04:41:26 localhost systemd-sysv-generator[204090]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 6 04:41:26 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:41:26 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 6 04:41:26 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:41:26 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:41:26 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. 
Support for MemoryLimit= will be removed soon. Dec 6 04:41:26 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 6 04:41:26 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:41:26 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:41:26 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:41:26 localhost systemd[1]: Starting libvirt secret daemon socket... Dec 6 04:41:26 localhost systemd[1]: Listening on libvirt secret daemon socket. Dec 6 04:41:26 localhost systemd[1]: Starting libvirt secret daemon admin socket... Dec 6 04:41:26 localhost systemd[1]: Starting libvirt secret daemon read-only socket... Dec 6 04:41:26 localhost systemd[1]: Listening on libvirt secret daemon admin socket. Dec 6 04:41:26 localhost systemd[1]: Listening on libvirt secret daemon read-only socket. Dec 6 04:41:26 localhost systemd[1]: Started libvirt secret daemon. 
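The setroubleshoot entries earlier in this log arrive with embedded newlines escaped by rsyslog as `#012` (octal 012 = LF). Restoring them makes the suggested `auditctl`/`audit2allow` remediation readable; a minimal sketch, using a shortened sample of the logged message:

```python
# Shortened sample of the rsyslog-escaped setroubleshoot message above.
raw = ("SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability."
       "#012#012***** Plugin dac_override (91.4 confidence) suggests **********************"
       "#012Turn on full auditing#012# auditctl -w /etc/shadow -p w")

# rsyslog control-character escaping encodes LF as '#012'; undo it for display.
readable = raw.replace("#012", "\n")
print(readable)
```

The same substitution applies to any `#012`-bearing entry in this log, e.g. the `audit2allow -M my-virtlogd` / `semodule -X 300 -i my-virtlogd.pp` suggestion.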
Dec 6 04:41:27 localhost python3.9[204234]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:41:28 localhost python3.9[204344]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None Dec 6 04:41:29 localhost python3.9[204454]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;#012echo ceph#012awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 04:41:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16012 DF PROTO=TCP SPT=48766 DPT=9101 SEQ=2535142758 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D629EF0000000001030307) Dec 6 04:41:29 localhost python3.9[204566]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None Dec 
6 04:41:30 localhost python3.9[204674]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:41:31 localhost python3.9[204760]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765014090.3217163-3171-164138750994441/.source.xml follow=False _original_basename=secret.xml.j2 checksum=9621e6cf70c8e0de93f1c73ff2a387c8c3ac4910 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:41:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50794 DF PROTO=TCP SPT=56146 DPT=9105 SEQ=2037736614 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D631EF0000000001030307) Dec 6 04:41:32 localhost python3.9[204870]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 1939e851-b10c-5c3b-9bb7-8e7f380233e8#012virsh secret-define --file /tmp/secret.xml#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 04:41:33 localhost python3.9[204990]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:41:34 localhost sshd[205101]: main: sshd: ssh-rsa algorithm is disabled 
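The ansible task above runs `awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs` to read the Ceph fsid. An equivalent in Python, for reference; the conf content here is illustrative (the UUID is borrowed from the `virsh secret-undefine` entry above and may not be the actual fsid), and `extract_fsid` is a hypothetical helper:

```python
# Illustrative ceph.conf content; the fsid value is a stand-in, not read from the host.
CEPH_CONF = """\
[global]
fsid = 1939e851-b10c-5c3b-9bb7-8e7f380233e8
mon_host = 192.168.122.10
"""

def extract_fsid(conf_text: str) -> str:
    """Mirror of: awk -F '=' '/fsid/ {print $2}' ceph.conf | xargs"""
    for line in conf_text.splitlines():
        if "fsid" in line and "=" in line:
            # awk prints the text after the first '='; xargs trims the whitespace.
            return line.split("=", 1)[1].strip()
    return ""

print(extract_fsid(CEPH_CONF))
```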
Dec 6 04:41:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1415 DF PROTO=TCP SPT=37706 DPT=9102 SEQ=949871166 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D63D2F0000000001030307) Dec 6 04:41:35 localhost systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@1.service: Deactivated successfully. Dec 6 04:41:35 localhost systemd[1]: setroubleshootd.service: Deactivated successfully. Dec 6 04:41:37 localhost sshd[205332]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:41:37 localhost python3.9[205331]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:41:37 localhost python3.9[205443]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:41:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=184 DF PROTO=TCP SPT=45314 DPT=9102 SEQ=2710989866 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D649F00000000001030307) Dec 6 04:41:38 localhost python3.9[205531]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1765014097.4925287-3337-25413590663113/.source.yaml follow=False _original_basename=firewall.yaml.j2 
checksum=dc5ee7162311c27a6084cbee4052b901d56cb1ba backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:41:39 localhost sshd[205620]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:41:39 localhost sshd[205642]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:41:39 localhost python3.9[205644]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:41:40 localhost python3.9[205755]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:41:40 localhost python3.9[205812]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:41:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1417 DF PROTO=TCP SPT=37706 DPT=9102 SEQ=949871166 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 
OPT (020405500402080A52D654EF0000000001030307) Dec 6 04:41:41 localhost sshd[205888]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:41:41 localhost python3.9[205923]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:41:41 localhost python3.9[205981]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.if4akpn7 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:41:42 localhost python3.9[206091]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:41:42 localhost python3.9[206148]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:41:43 localhost sshd[206259]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:41:43 localhost python3.9[206258]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None 
chdir=None executable=None creates=None removes=None stdin=None Dec 6 04:41:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33004 DF PROTO=TCP SPT=49220 DPT=9101 SEQ=1657222880 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D662A00000000001030307) Dec 6 04:41:44 localhost python3[206371]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall Dec 6 04:41:45 localhost python3.9[206481]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:41:45 localhost python3.9[206538]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:41:46 localhost sshd[206556]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:41:47 localhost python3.9[206650]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:41:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:41:47.271 160509 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 04:41:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:41:47.272 
160509 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 04:41:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:41:47.273 160509 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 04:41:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33006 DF PROTO=TCP SPT=49220 DPT=9101 SEQ=1657222880 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D66EAF0000000001030307) Dec 6 04:41:47 localhost python3.9[206707]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:41:48 localhost python3.9[206817]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:41:49 localhost python3.9[206874]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S 
access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:41:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44721 DF PROTO=TCP SPT=35312 DPT=9105 SEQ=125070986 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D677AF0000000001030307) Dec 6 04:41:50 localhost python3.9[206984]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:41:50 localhost python3.9[207041]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:41:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5. Dec 6 04:41:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999. Dec 6 04:41:51 localhost systemd[1]: tmp-crun.5x0S7M.mount: Deactivated successfully. 
Dec 6 04:41:51 localhost podman[207152]: 2025-12-06 09:41:51.340245432 +0000 UTC m=+0.087269784 container health_status 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Dec 6 04:41:51 localhost podman[207153]: 2025-12-06 09:41:51.394023727 +0000 UTC m=+0.139285565 container health_status 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Dec 6 04:41:51 localhost systemd[1]: tmp-crun.plWUsp.mount: Deactivated successfully.
Dec 6 04:41:51 localhost podman[207152]: 2025-12-06 09:41:51.407188543 +0000 UTC m=+0.154212855 container exec_died 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 6 04:41:51 localhost systemd[1]: 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.service: Deactivated successfully.
Dec 6 04:41:51 localhost podman[207153]: 2025-12-06 09:41:51.427126592 +0000 UTC m=+0.172388440 container exec_died 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent)
Dec 6 04:41:51 localhost systemd[1]: 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.service: Deactivated successfully.
Dec 6 04:41:51 localhost python3.9[207151]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 6 04:41:52 localhost python3.9[207285]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765014110.8487735-3712-137168062748546/.source.nft follow=False _original_basename=ruleset.j2 checksum=e2e2635f27347d386f310e86d2b40c40289835bb backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 04:41:52 localhost python3.9[207395]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 04:41:53 localhost python3.9[207505]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 6 04:41:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44722 DF PROTO=TCP SPT=35312 DPT=9105 SEQ=125070986 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D687700000000001030307)
Dec 6 04:41:54 localhost sshd[207619]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:41:54 localhost python3.9[207618]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 04:41:55 localhost python3.9[207766]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 6 04:41:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26940 DF PROTO=TCP SPT=42488 DPT=9882 SEQ=458478708 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D68FEF0000000001030307)
Dec 6 04:41:56 localhost python3.9[207910]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 6 04:41:56 localhost python3.9[208022]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 6 04:41:57 localhost python3.9[208135]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 04:41:58 localhost python3.9[208245]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 6 04:41:58 localhost python3.9[208351]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765014117.8059533-3927-110576517304984/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 04:41:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33008 DF PROTO=TCP SPT=49220 DPT=9101 SEQ=1657222880 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D69DEF0000000001030307)
Dec 6 04:41:59 localhost python3.9[208461]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 6 04:42:00 localhost python3.9[208549]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765014119.0941432-3972-268370244555265/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 04:42:00 localhost python3.9[208659]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 6 04:42:01 localhost python3.9[208747]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765014120.3321948-4017-219940237060375/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 04:42:02 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44723 DF PROTO=TCP SPT=35312 DPT=9105 SEQ=125070986 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D6A7EF0000000001030307)
Dec 6 04:42:02 localhost python3.9[208857]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 6 04:42:02 localhost systemd[1]: Reloading.
Dec 6 04:42:02 localhost systemd-sysv-generator[208882]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 6 04:42:02 localhost systemd-rc-local-generator[208878]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 6 04:42:02 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:42:02 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 6 04:42:02 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:42:02 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:42:02 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 6 04:42:02 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 6 04:42:02 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:42:02 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:42:02 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:42:02 localhost systemd[1]: Reached target edpm_libvirt.target.
Dec 6 04:42:03 localhost python3.9[209007]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec 6 04:42:03 localhost systemd[1]: Reloading.
Dec 6 04:42:03 localhost systemd-rc-local-generator[209032]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 6 04:42:03 localhost systemd-sysv-generator[209036]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 6 04:42:03 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:42:03 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 6 04:42:03 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:42:03 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:42:03 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 6 04:42:03 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 6 04:42:03 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:42:03 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:42:03 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:42:03 localhost systemd[1]: Reloading.
Dec 6 04:42:03 localhost systemd-sysv-generator[209070]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 6 04:42:04 localhost systemd-rc-local-generator[209067]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 6 04:42:04 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:42:04 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 6 04:42:04 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:42:04 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:42:04 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 6 04:42:04 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 6 04:42:04 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:42:04 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:42:04 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:42:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37729 DF PROTO=TCP SPT=48238 DPT=9102 SEQ=4120414039 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D6B26F0000000001030307)
Dec 6 04:42:04 localhost systemd[1]: session-53.scope: Deactivated successfully.
Dec 6 04:42:04 localhost systemd[1]: session-53.scope: Consumed 3min 34.839s CPU time.
Dec 6 04:42:04 localhost systemd-logind[766]: Session 53 logged out. Waiting for processes to exit.
Dec 6 04:42:04 localhost systemd-logind[766]: Removed session 53.
Dec 6 04:42:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56350 DF PROTO=TCP SPT=41422 DPT=9102 SEQ=3406258533 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D6BDEF0000000001030307)
Dec 6 04:42:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37731 DF PROTO=TCP SPT=48238 DPT=9102 SEQ=4120414039 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D6CA2F0000000001030307)
Dec 6 04:42:11 localhost sshd[209098]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:42:11 localhost systemd-logind[766]: New session 54 of user zuul.
Dec 6 04:42:11 localhost systemd[1]: Started Session 54 of User zuul.
Dec 6 04:42:12 localhost python3.9[209209]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 6 04:42:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59301 DF PROTO=TCP SPT=42230 DPT=9101 SEQ=3380191360 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D6D7D20000000001030307)
Dec 6 04:42:14 localhost python3.9[209321]: ansible-ansible.builtin.service_facts Invoked
Dec 6 04:42:14 localhost network[209338]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 6 04:42:14 localhost network[209339]: 'network-scripts' will be removed from distribution in near future.
Dec 6 04:42:14 localhost network[209340]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 6 04:42:15 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 6 04:42:16 localhost sshd[209419]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:42:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59303 DF PROTO=TCP SPT=42230 DPT=9101 SEQ=3380191360 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D6E3EF0000000001030307)
Dec 6 04:42:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27424 DF PROTO=TCP SPT=60166 DPT=9105 SEQ=2454835699 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D6ECEF0000000001030307)
Dec 6 04:42:21 localhost python3.9[209574]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 6 04:42:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.
Dec 6 04:42:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.
Dec 6 04:42:21 localhost podman[209638]: 2025-12-06 09:42:21.912723313 +0000 UTC m=+0.063887911 container health_status 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 6 04:42:21 localhost podman[209638]: 2025-12-06 09:42:21.954803597 +0000 UTC m=+0.105968275 container exec_died 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec 6 04:42:21 localhost systemd[1]: tmp-crun.EMgViU.mount: Deactivated successfully.
Dec 6 04:42:21 localhost systemd[1]: 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.service: Deactivated successfully.
Dec 6 04:42:21 localhost podman[209637]: 2025-12-06 09:42:21.973228801 +0000 UTC m=+0.125149412 container health_status 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, config_id=ovn_controller)
Dec 6 04:42:22 localhost podman[209637]: 2025-12-06 09:42:21.999975495 +0000 UTC m=+0.151896096 container exec_died 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 6 04:42:22 localhost systemd[1]: 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.service: Deactivated successfully.
Dec 6 04:42:22 localhost python3.9[209649]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 6 04:42:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27425 DF PROTO=TCP SPT=60166 DPT=9105 SEQ=2454835699 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D6FCAF0000000001030307)
Dec 6 04:42:26 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60540 DF PROTO=TCP SPT=38562 DPT=9882 SEQ=4256135414 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D7096F0000000001030307)
Dec 6 04:42:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59305 DF PROTO=TCP SPT=42230 DPT=9101 SEQ=3380191360 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D713F00000000001030307)
Dec 6 04:42:30 localhost python3.9[209792]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 6 04:42:31 localhost python3.9[209904]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi mode=preserve remote_src=True src=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi/ backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 04:42:32 localhost python3.9[210014]: ansible-ansible.legacy.command Invoked with _raw_params=mv "/var/lib/config-data/puppet-generated/iscsid/etc/iscsi" "/var/lib/config-data/puppet-generated/iscsid/etc/iscsi.adopted"#012 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 6 04:42:32 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27426 DF PROTO=TCP SPT=60166 DPT=9105 SEQ=2454835699 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D71DF00000000001030307)
Dec 6 04:42:33 localhost python3.9[210125]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 6 04:42:34 localhost python3.9[210236]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -rF /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 6 04:42:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26719 DF PROTO=TCP SPT=42314 DPT=9102 SEQ=1157572135 ACK=0 WINDOW=32640 RES=0x00
SYN URGP=0 OPT (020405500402080A52D727B00000000001030307) Dec 6 04:42:35 localhost python3.9[210347]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 6 04:42:36 localhost sshd[210460]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:42:36 localhost python3.9[210459]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:42:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1420 DF PROTO=TCP SPT=37706 DPT=9102 SEQ=949871166 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D733EF0000000001030307) Dec 6 04:42:37 localhost python3.9[210571]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 6 04:42:38 localhost systemd[1]: Listening on Open-iSCSI iscsid Socket. Dec 6 04:42:39 localhost python3.9[210685]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 6 04:42:39 localhost systemd[1]: Reloading. Dec 6 04:42:39 localhost systemd-sysv-generator[210712]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. 
Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 6 04:42:39 localhost systemd-rc-local-generator[210709]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 6 04:42:39 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:42:39 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 6 04:42:39 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:42:39 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:42:39 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 6 04:42:39 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 6 04:42:39 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:42:39 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:42:39 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:42:39 localhost systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi). Dec 6 04:42:39 localhost systemd[1]: Starting Open-iSCSI... 
Dec 6 04:42:39 localhost iscsid[210726]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Dec 6 04:42:39 localhost iscsid[210726]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Dec 6 04:42:39 localhost iscsid[210726]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Dec 6 04:42:39 localhost iscsid[210726]: If using hardware iscsi like qla4xxx this message can be ignored. Dec 6 04:42:39 localhost iscsid[210726]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Dec 6 04:42:39 localhost iscsid[210726]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Dec 6 04:42:39 localhost iscsid[210726]: iscsid: can't open iscsid.ipc_auth_uid configuration file /etc/iscsi/iscsid.conf Dec 6 04:42:39 localhost systemd[1]: Started Open-iSCSI. Dec 6 04:42:39 localhost systemd[1]: Starting Logout off all iSCSI sessions on shutdown... Dec 6 04:42:39 localhost systemd[1]: Finished Logout off all iSCSI sessions on shutdown. Dec 6 04:42:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26721 DF PROTO=TCP SPT=42314 DPT=9102 SEQ=1157572135 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D73F6F0000000001030307) Dec 6 04:42:41 localhost systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs... Dec 6 04:42:41 localhost systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs. 
Dec 6 04:42:41 localhost systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@2.service. Dec 6 04:42:41 localhost python3.9[210838]: ansible-ansible.builtin.service_facts Invoked Dec 6 04:42:41 localhost network[210868]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Dec 6 04:42:41 localhost network[210869]: 'network-scripts' will be removed from distribution in near future. Dec 6 04:42:41 localhost network[210870]: It is advised to switch to 'NetworkManager' instead for network management. Dec 6 04:42:42 localhost setroubleshoot[210761]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi. For complete SELinux messages run: sealert -l 43e61b33-f78c-4af8-8283-58c8652417d8 Dec 6 04:42:42 localhost setroubleshoot[210761]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi.#012#012***** Plugin catchall (100. confidence) suggests **************************#012#012If you believe that iscsid should be allowed search access on the iscsi directory by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'iscsid' --raw | audit2allow -M my-iscsid#012# semodule -X 300 -i my-iscsid.pp#012 Dec 6 04:42:42 localhost setroubleshoot[210761]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi. For complete SELinux messages run: sealert -l 43e61b33-f78c-4af8-8283-58c8652417d8 Dec 6 04:42:42 localhost setroubleshoot[210761]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi.#012#012***** Plugin catchall (100. 
confidence) suggests **************************#012#012If you believe that iscsid should be allowed search access on the iscsi directory by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'iscsid' --raw | audit2allow -M my-iscsid#012# semodule -X 300 -i my-iscsid.pp#012 Dec 6 04:42:42 localhost setroubleshoot[210761]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi. For complete SELinux messages run: sealert -l 43e61b33-f78c-4af8-8283-58c8652417d8 Dec 6 04:42:42 localhost setroubleshoot[210761]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi.#012#012***** Plugin catchall (100. confidence) suggests **************************#012#012If you believe that iscsid should be allowed search access on the iscsi directory by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'iscsid' --raw | audit2allow -M my-iscsid#012# semodule -X 300 -i my-iscsid.pp#012 Dec 6 04:42:42 localhost setroubleshoot[210761]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi. For complete SELinux messages run: sealert -l 43e61b33-f78c-4af8-8283-58c8652417d8 Dec 6 04:42:42 localhost setroubleshoot[210761]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi.#012#012***** Plugin catchall (100. 
confidence) suggests **************************#012#012If you believe that iscsid should be allowed search access on the iscsi directory by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'iscsid' --raw | audit2allow -M my-iscsid#012# semodule -X 300 -i my-iscsid.pp#012 Dec 6 04:42:42 localhost setroubleshoot[210761]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi. For complete SELinux messages run: sealert -l 43e61b33-f78c-4af8-8283-58c8652417d8 Dec 6 04:42:42 localhost setroubleshoot[210761]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi.#012#012***** Plugin catchall (100. confidence) suggests **************************#012#012If you believe that iscsid should be allowed search access on the iscsi directory by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'iscsid' --raw | audit2allow -M my-iscsid#012# semodule -X 300 -i my-iscsid.pp#012 Dec 6 04:42:42 localhost setroubleshoot[210761]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi. For complete SELinux messages run: sealert -l 43e61b33-f78c-4af8-8283-58c8652417d8 Dec 6 04:42:42 localhost setroubleshoot[210761]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi.#012#012***** Plugin catchall (100. 
confidence) suggests **************************#012#012If you believe that iscsid should be allowed search access on the iscsi directory by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'iscsid' --raw | audit2allow -M my-iscsid#012# semodule -X 300 -i my-iscsid.pp#012 Dec 6 04:42:42 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 6 04:42:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59714 DF PROTO=TCP SPT=51474 DPT=9101 SEQ=1479808194 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D74D030000000001030307) Dec 6 04:42:46 localhost python3.9[211104]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None Dec 6 04:42:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:42:47.272 160509 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 04:42:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:42:47.273 160509 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 04:42:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:42:47.275 160509 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 04:42:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59716 DF PROTO=TCP SPT=51474 DPT=9101 SEQ=1479808194 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D758EF0000000001030307) Dec 6 04:42:47 localhost python3.9[211214]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled Dec 6 04:42:48 localhost python3.9[211328]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:42:49 localhost python3.9[211416]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765014168.0053067-456-67675844310832/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:42:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61308 DF PROTO=TCP SPT=48238 DPT=9100 SEQ=4031396398 ACK=0 WINDOW=32640 RES=0x00 SYN 
URGP=0 OPT (020405500402080A52D761EF0000000001030307) Dec 6 04:42:49 localhost python3.9[211526]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:42:50 localhost python3.9[211636]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Dec 6 04:42:50 localhost systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 6 04:42:50 localhost systemd[1]: Stopped Load Kernel Modules. Dec 6 04:42:50 localhost systemd[1]: Stopping Load Kernel Modules... Dec 6 04:42:50 localhost systemd[1]: Starting Load Kernel Modules... Dec 6 04:42:51 localhost systemd-modules-load[211640]: Module 'msr' is built in Dec 6 04:42:51 localhost systemd[1]: Finished Load Kernel Modules. Dec 6 04:42:51 localhost sshd[211729]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:42:51 localhost python3.9[211752]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 6 04:42:52 localhost systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@2.service: Deactivated successfully. Dec 6 04:42:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5. 
Dec 6 04:42:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999. Dec 6 04:42:52 localhost systemd[1]: setroubleshootd.service: Deactivated successfully. Dec 6 04:42:52 localhost podman[211770]: 2025-12-06 09:42:52.559699562 +0000 UTC m=+0.064591468 container health_status 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 04:42:52 localhost podman[211771]: 2025-12-06 09:42:52.614931993 +0000 UTC m=+0.119900182 container health_status 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, 
name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Dec 6 04:42:52 localhost podman[211770]: 2025-12-06 09:42:52.641731714 +0000 UTC m=+0.146623640 container exec_died 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Dec 6 04:42:52 localhost systemd[1]: 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.service: Deactivated successfully. 
Dec 6 04:42:52 localhost podman[211771]: 2025-12-06 09:42:52.695778928 +0000 UTC m=+0.200747117 container exec_died 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Dec 6 04:42:52 localhost systemd[1]: 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.service: 
Deactivated successfully. Dec 6 04:42:53 localhost python3.9[211905]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 6 04:42:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41787 DF PROTO=TCP SPT=40548 DPT=9105 SEQ=3123358156 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D771F00000000001030307) Dec 6 04:42:53 localhost python3.9[212015]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 6 04:42:55 localhost python3.9[212125]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:42:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60543 DF PROTO=TCP SPT=38562 DPT=9882 SEQ=4256135414 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D779F00000000001030307) Dec 6 04:42:55 localhost python3.9[212213]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765014174.881062-630-98071389199335/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:42:56 localhost sshd[212231]: main: sshd: ssh-rsa algorithm is 
disabled Dec 6 04:42:56 localhost python3.9[212325]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 04:42:57 localhost python3.9[212436]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:42:58 localhost python3.9[212546]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:42:59 localhost python3.9[212692]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:42:59 localhost python3.9[212858]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line= find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:42:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 
MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59718 DF PROTO=TCP SPT=51474 DPT=9101 SEQ=1479808194 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D789F00000000001030307)
Dec 6 04:43:00 localhost python3.9[213001]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line= recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 04:43:01 localhost python3.9[213129]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line= skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 04:43:01 localhost python3.9[213239]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line= user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 04:43:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41788 DF PROTO=TCP SPT=40548 DPT=9105 SEQ=3123358156 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D791F00000000001030307)
Dec 6 04:43:02 localhost python3.9[213349]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 6 04:43:03 localhost python3.9[213461]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 04:43:04 localhost python3.9[213571]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 6 04:43:04 localhost sshd[213572]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:43:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6895 DF PROTO=TCP SPT=56978 DPT=9102 SEQ=1819866283 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D79CF00000000001030307)
Dec 6 04:43:05 localhost python3.9[213683]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 6 04:43:05 localhost python3.9[213740]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 6 04:43:07 localhost python3.9[213850]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 6 04:43:07 localhost python3.9[213907]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 6 04:43:08 localhost python3.9[214017]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 04:43:09 localhost python3.9[214127]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 6 04:43:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35844 DF PROTO=TCP SPT=33376 DPT=9882 SEQ=1522697681 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D7ADEF0000000001030307)
Dec 6 04:43:09 localhost python3.9[214184]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 04:43:10 localhost python3.9[214294]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 6 04:43:10 localhost python3.9[214351]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 04:43:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6897 DF PROTO=TCP SPT=56978 DPT=9102 SEQ=1819866283 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D7B4AF0000000001030307)
Dec 6 04:43:11 localhost python3.9[214461]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 6 04:43:11 localhost systemd[1]: Reloading.
Dec 6 04:43:11 localhost systemd-rc-local-generator[214484]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 6 04:43:11 localhost systemd-sysv-generator[214488]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 6 04:43:11 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:43:11 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 6 04:43:11 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:43:11 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:43:11 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 6 04:43:11 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 6 04:43:11 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:43:11 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:43:11 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:43:12 localhost python3.9[214609]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 6 04:43:13 localhost python3.9[214666]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 04:43:13 localhost python3.9[214776]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 6 04:43:14 localhost python3.9[214833]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 04:43:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60980 DF PROTO=TCP SPT=41308 DPT=9101 SEQ=302165480 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D7C2310000000001030307)
Dec 6 04:43:15 localhost python3.9[214943]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 6 04:43:15 localhost systemd[1]: Reloading.
Dec 6 04:43:15 localhost systemd-sysv-generator[214975]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 6 04:43:15 localhost systemd-rc-local-generator[214970]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 6 04:43:15 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:43:15 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 6 04:43:15 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:43:15 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:43:15 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 6 04:43:15 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 6 04:43:15 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:43:15 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:43:15 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:43:15 localhost systemd[1]: Starting Create netns directory...
Dec 6 04:43:15 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 6 04:43:15 localhost systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 6 04:43:15 localhost systemd[1]: Finished Create netns directory.
Dec 6 04:43:16 localhost python3.9[215096]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 6 04:43:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60982 DF PROTO=TCP SPT=41308 DPT=9101 SEQ=302165480 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D7CE2F0000000001030307)
Dec 6 04:43:17 localhost python3.9[215206]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 6 04:43:17 localhost python3.9[215294]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765014196.9030704-1251-139307617380406/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 6 04:43:19 localhost python3.9[215404]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 6 04:43:19 localhost python3.9[215514]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 6 04:43:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15154 DF PROTO=TCP SPT=49014 DPT=9105 SEQ=189848899 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D7D76F0000000001030307)
Dec 6 04:43:19 localhost sshd[215575]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:43:20 localhost python3.9[215604]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765014199.2339156-1326-242869659582514/.source.json _original_basename=.fffio8w1 follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 04:43:20 localhost python3.9[215714]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 04:43:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.
Dec 6 04:43:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.
Dec 6 04:43:22 localhost podman[215968]: 2025-12-06 09:43:22.927079526 +0000 UTC m=+0.086384075 container health_status 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller)
Dec 6 04:43:22 localhost podman[215969]: 2025-12-06 09:43:22.979209581 +0000 UTC m=+0.137117648 container health_status 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent)
Dec 6 04:43:22 localhost podman[215968]: 2025-12-06 09:43:22.993315853 +0000 UTC m=+0.152620422 container exec_died 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true)
Dec 6 04:43:23 localhost systemd[1]: 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.service: Deactivated successfully.
Dec 6 04:43:23 localhost podman[215969]: 2025-12-06 09:43:23.009752246 +0000 UTC m=+0.167660323 container exec_died 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Dec 6 04:43:23 localhost systemd[1]: 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.service: Deactivated successfully.
Dec 6 04:43:23 localhost python3.9[216065]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Dec 6 04:43:23 localhost systemd[1]: virtnodedevd.service: Deactivated successfully.
Dec 6 04:43:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15155 DF PROTO=TCP SPT=49014 DPT=9105 SEQ=189848899 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D7E72F0000000001030307)
Dec 6 04:43:24 localhost python3.9[216176]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 6 04:43:24 localhost systemd[1]: virtproxyd.service: Deactivated successfully.
Dec 6 04:43:25 localhost python3.9[216287]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec 6 04:43:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52920 DF PROTO=TCP SPT=56466 DPT=9882 SEQ=1807279947 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D7F3EF0000000001030307)
Dec 6 04:43:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60984 DF PROTO=TCP SPT=41308 DPT=9101 SEQ=302165480 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D7FDEF0000000001030307)
Dec 6 04:43:29 localhost python3[216424]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec 6 04:43:31 localhost podman[216438]: 2025-12-06 09:43:29.957711791 +0000 UTC m=+0.031573557 image pull quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Dec 6 04:43:31 localhost podman[216485]:
Dec 6 04:43:31 localhost podman[216485]: 2025-12-06 09:43:31.875802779 +0000 UTC m=+0.089606674 container create 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, managed_by=edpm_ansible, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec 6 04:43:31 localhost podman[216485]: 2025-12-06 09:43:31.831977907 +0000 UTC m=+0.045781842 image pull quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Dec 6 04:43:31 localhost python3[216424]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Dec 6 04:43:32 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15156 DF PROTO=TCP SPT=49014 DPT=9105 SEQ=189848899 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D807EF0000000001030307)
Dec 6 04:43:33 localhost python3.9[216632]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 6 04:43:34 localhost python3.9[216744]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 04:43:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45753 DF PROTO=TCP SPT=55426 DPT=9102 SEQ=3589475987 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D811EF0000000001030307)
Dec 6 04:43:35 localhost python3.9[216799]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 6 04:43:35 localhost systemd[1]: virtsecretd.service: Deactivated successfully.
Dec 6 04:43:36 localhost python3.9[216909]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765014215.5963624-1590-86490635546436/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 04:43:36 localhost python3.9[216964]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 6 04:43:36 localhost systemd[1]: Reloading.
Dec 6 04:43:36 localhost systemd-sysv-generator[216991]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 6 04:43:36 localhost systemd-rc-local-generator[216987]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 6 04:43:36 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:43:36 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 6 04:43:36 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:43:36 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:43:36 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 6 04:43:36 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 6 04:43:36 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:43:36 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:43:37 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:43:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26724 DF PROTO=TCP SPT=42314 DPT=9102 SEQ=1157572135 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D81DEF0000000001030307)
Dec 6 04:43:37 localhost python3.9[217055]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 6 04:43:38 localhost systemd[1]: Reloading.
Dec 6 04:43:38 localhost systemd-rc-local-generator[217080]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 6 04:43:38 localhost systemd-sysv-generator[217087]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 6 04:43:39 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:43:39 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 6 04:43:39 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:43:39 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:43:39 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 6 04:43:39 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 6 04:43:39 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:43:39 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:43:39 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:43:39 localhost systemd[1]: Starting multipathd container...
Dec 6 04:43:39 localhost systemd[1]: Started libcrun container.
Dec 6 04:43:39 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/998945b2ed3c4b11e3b9ed62edc3a70e401b7792f1cfdc1d9e9e385864b9cbe5/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec 6 04:43:39 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/998945b2ed3c4b11e3b9ed62edc3a70e401b7792f1cfdc1d9e9e385864b9cbe5/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec 6 04:43:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6.
Dec 6 04:43:39 localhost podman[217096]: 2025-12-06 09:43:39.372265836 +0000 UTC m=+0.157640597 container init 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log',
'/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0) Dec 6 04:43:39 localhost multipathd[217111]: + sudo -E kolla_set_configs Dec 6 04:43:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6. Dec 6 04:43:39 localhost podman[217096]: 2025-12-06 09:43:39.420567354 +0000 UTC m=+0.205942085 container start 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3) Dec 6 04:43:39 localhost podman[217096]: multipathd Dec 6 04:43:39 localhost systemd[1]: Started multipathd container. Dec 6 04:43:39 localhost multipathd[217111]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Dec 6 04:43:39 localhost multipathd[217111]: INFO:__main__:Validating config file Dec 6 04:43:39 localhost multipathd[217111]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Dec 6 04:43:39 localhost multipathd[217111]: INFO:__main__:Writing out command to execute Dec 6 04:43:39 localhost multipathd[217111]: ++ cat /run_command Dec 6 04:43:39 localhost multipathd[217111]: + CMD='/usr/sbin/multipathd -d' Dec 6 04:43:39 localhost multipathd[217111]: + ARGS= Dec 6 04:43:39 localhost multipathd[217111]: + sudo kolla_copy_cacerts Dec 6 04:43:39 localhost multipathd[217111]: + [[ ! -n '' ]] Dec 6 04:43:39 localhost multipathd[217111]: + . 
kolla_extend_start Dec 6 04:43:39 localhost multipathd[217111]: Running command: '/usr/sbin/multipathd -d' Dec 6 04:43:39 localhost multipathd[217111]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\''' Dec 6 04:43:39 localhost multipathd[217111]: + umask 0022 Dec 6 04:43:39 localhost multipathd[217111]: + exec /usr/sbin/multipathd -d Dec 6 04:43:39 localhost multipathd[217111]: 10637.748465 | --------start up-------- Dec 6 04:43:39 localhost multipathd[217111]: 10637.748484 | read /etc/multipath.conf Dec 6 04:43:39 localhost multipathd[217111]: 10637.752458 | path checkers start up Dec 6 04:43:39 localhost podman[217120]: 2025-12-06 09:43:39.519030448 +0000 UTC m=+0.092569785 container health_status 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Dec 6 04:43:39 localhost podman[217120]: 2025-12-06 09:43:39.533206762 +0000 UTC m=+0.106746099 container exec_died 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true) Dec 6 04:43:39 localhost systemd[1]: 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6.service: Deactivated successfully. Dec 6 04:43:40 localhost python3.9[217258]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 6 04:43:40 localhost systemd[1]: tmp-crun.D9LIbm.mount: Deactivated successfully. Dec 6 04:43:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45755 DF PROTO=TCP SPT=55426 DPT=9102 SEQ=3589475987 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D829B00000000001030307) Dec 6 04:43:41 localhost python3.9[217370]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 04:43:42 localhost python3.9[217493]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Dec 6 04:43:42 localhost systemd[1]: Stopping multipathd container... 
Dec 6 04:43:42 localhost multipathd[217111]: 10641.158282 | exit (signal) Dec 6 04:43:42 localhost multipathd[217111]: 10641.158377 | --------shut down------- Dec 6 04:43:42 localhost systemd[1]: libpod-44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6.scope: Deactivated successfully. Dec 6 04:43:42 localhost podman[217497]: 2025-12-06 09:43:42.940173058 +0000 UTC m=+0.097429753 container died 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', 
'/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 04:43:42 localhost systemd[1]: 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6.timer: Deactivated successfully. Dec 6 04:43:42 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6. Dec 6 04:43:42 localhost systemd[1]: tmp-crun.GjYfYO.mount: Deactivated successfully. Dec 6 04:43:42 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6-userdata-shm.mount: Deactivated successfully. Dec 6 04:43:42 localhost systemd[1]: var-lib-containers-storage-overlay-998945b2ed3c4b11e3b9ed62edc3a70e401b7792f1cfdc1d9e9e385864b9cbe5-merged.mount: Deactivated successfully. Dec 6 04:43:43 localhost podman[217497]: 2025-12-06 09:43:43.103499468 +0000 UTC m=+0.260756123 container cleanup 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125) Dec 6 04:43:43 localhost podman[217497]: multipathd Dec 6 04:43:43 localhost podman[217524]: 2025-12-06 09:43:43.197899607 +0000 UTC m=+0.065604469 container cleanup 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', 
'/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=multipathd, io.buildah.version=1.41.3) Dec 6 04:43:43 localhost podman[217524]: multipathd Dec 6 04:43:43 localhost systemd[1]: edpm_multipathd.service: Deactivated successfully. Dec 6 04:43:43 localhost systemd[1]: Stopped multipathd container. Dec 6 04:43:43 localhost systemd[1]: Starting multipathd container... Dec 6 04:43:43 localhost systemd[1]: Started libcrun container. Dec 6 04:43:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/998945b2ed3c4b11e3b9ed62edc3a70e401b7792f1cfdc1d9e9e385864b9cbe5/merged/etc/multipath supports timestamps until 2038 (0x7fffffff) Dec 6 04:43:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/998945b2ed3c4b11e3b9ed62edc3a70e401b7792f1cfdc1d9e9e385864b9cbe5/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff) Dec 6 04:43:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6. 
Dec 6 04:43:43 localhost podman[217537]: 2025-12-06 09:43:43.363974982 +0000 UTC m=+0.136942443 container init 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Dec 6 04:43:43 localhost multipathd[217551]: + sudo -E kolla_set_configs Dec 6 04:43:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6. Dec 6 04:43:43 localhost podman[217537]: 2025-12-06 09:43:43.407065951 +0000 UTC m=+0.180033372 container start 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, managed_by=edpm_ansible, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team) Dec 6 04:43:43 localhost podman[217537]: multipathd Dec 6 04:43:43 localhost systemd[1]: Started multipathd container. 
Dec 6 04:43:43 localhost multipathd[217551]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Dec 6 04:43:43 localhost multipathd[217551]: INFO:__main__:Validating config file Dec 6 04:43:43 localhost multipathd[217551]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Dec 6 04:43:43 localhost multipathd[217551]: INFO:__main__:Writing out command to execute Dec 6 04:43:43 localhost multipathd[217551]: ++ cat /run_command Dec 6 04:43:43 localhost multipathd[217551]: + CMD='/usr/sbin/multipathd -d' Dec 6 04:43:43 localhost multipathd[217551]: + ARGS= Dec 6 04:43:43 localhost multipathd[217551]: + sudo kolla_copy_cacerts Dec 6 04:43:43 localhost multipathd[217551]: + [[ ! -n '' ]] Dec 6 04:43:43 localhost multipathd[217551]: + . kolla_extend_start Dec 6 04:43:43 localhost multipathd[217551]: Running command: '/usr/sbin/multipathd -d' Dec 6 04:43:43 localhost multipathd[217551]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\''' Dec 6 04:43:43 localhost multipathd[217551]: + umask 0022 Dec 6 04:43:43 localhost multipathd[217551]: + exec /usr/sbin/multipathd -d Dec 6 04:43:43 localhost multipathd[217551]: 10641.742103 | --------start up-------- Dec 6 04:43:43 localhost podman[217559]: 2025-12-06 09:43:43.49226618 +0000 UTC m=+0.087992116 container health_status 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS) Dec 6 04:43:43 localhost multipathd[217551]: 10641.742124 | read /etc/multipath.conf Dec 6 04:43:43 localhost multipathd[217551]: 10641.745786 | path checkers start up Dec 6 04:43:43 localhost podman[217559]: 2025-12-06 09:43:43.505063611 +0000 UTC m=+0.100789577 container exec_died 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 04:43:43 localhost systemd[1]: 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6.service: Deactivated successfully. 
Dec 6 04:43:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28925 DF PROTO=TCP SPT=48408 DPT=9101 SEQ=2497040735 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D837610000000001030307) Dec 6 04:43:44 localhost python3.9[217699]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:43:45 localhost python3.9[217809]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None Dec 6 04:43:46 localhost python3.9[217919]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled Dec 6 04:43:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:43:47.273 160509 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 04:43:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:43:47.273 160509 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 04:43:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:43:47.274 160509 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 04:43:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28927 DF PROTO=TCP SPT=48408 DPT=9101 SEQ=2497040735 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D8436F0000000001030307) Dec 6 04:43:48 localhost python3.9[218038]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:43:48 localhost python3.9[218126]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765014227.5146396-1830-144475602242227/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:43:49 localhost python3.9[218236]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None 
Dec 6 04:43:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23163 DF PROTO=TCP SPT=50212 DPT=9105 SEQ=2414219979 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D84C6F0000000001030307)
Dec 6 04:43:50 localhost python3.9[218346]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 6 04:43:50 localhost systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 6 04:43:50 localhost systemd[1]: Stopped Load Kernel Modules.
Dec 6 04:43:50 localhost systemd[1]: Stopping Load Kernel Modules...
Dec 6 04:43:50 localhost systemd[1]: Starting Load Kernel Modules...
Dec 6 04:43:50 localhost systemd-modules-load[218350]: Module 'msr' is built in
Dec 6 04:43:50 localhost systemd[1]: Finished Load Kernel Modules.
Dec 6 04:43:51 localhost python3.9[218460]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 6 04:43:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23164 DF PROTO=TCP SPT=50212 DPT=9105 SEQ=2414219979 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D85C2F0000000001030307)
Dec 6 04:43:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.
Dec 6 04:43:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.
Dec 6 04:43:53 localhost podman[218463]: 2025-12-06 09:43:53.917869871 +0000 UTC m=+0.081332415 container health_status 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true)
Dec 6 04:43:54 localhost systemd[1]: tmp-crun.J3FzmN.mount: Deactivated successfully.
Dec 6 04:43:54 localhost podman[218464]: 2025-12-06 09:43:54.015933254 +0000 UTC m=+0.176574730 container health_status 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 6 04:43:54 localhost podman[218464]: 2025-12-06 09:43:54.022055814 +0000 UTC m=+0.182697270 container exec_died 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec 6 04:43:54 localhost systemd[1]: 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.service: Deactivated successfully.
Dec 6 04:43:54 localhost podman[218463]: 2025-12-06 09:43:54.043264873 +0000 UTC m=+0.206727467 container exec_died 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251125, config_id=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 6 04:43:54 localhost systemd[1]: 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.service: Deactivated successfully.
Dec 6 04:43:54 localhost sshd[218505]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:43:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52923 DF PROTO=TCP SPT=56466 DPT=9882 SEQ=1807279947 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D863F00000000001030307)
Dec 6 04:43:55 localhost systemd[1]: Reloading.
Dec 6 04:43:55 localhost systemd-rc-local-generator[218541]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 6 04:43:55 localhost systemd-sysv-generator[218544]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 6 04:43:55 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:43:55 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 6 04:43:55 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:43:55 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:43:55 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 6 04:43:56 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 6 04:43:56 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:43:56 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:43:56 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:43:56 localhost systemd[1]: Reloading.
Dec 6 04:43:56 localhost systemd-rc-local-generator[218574]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 6 04:43:56 localhost systemd-sysv-generator[218579]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 6 04:43:56 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:43:56 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 6 04:43:56 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:43:56 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:43:56 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 6 04:43:56 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 6 04:43:56 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:43:56 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:43:56 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:43:56 localhost systemd-logind[766]: Watching system buttons on /dev/input/event0 (Power Button)
Dec 6 04:43:56 localhost systemd-logind[766]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Dec 6 04:43:56 localhost lvm[218626]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 6 04:43:56 localhost lvm[218626]: VG ceph_vg0 finished
Dec 6 04:43:56 localhost lvm[218627]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 6 04:43:56 localhost lvm[218627]: VG ceph_vg1 finished
Dec 6 04:43:56 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 6 04:43:56 localhost systemd[1]: Starting man-db-cache-update.service...
Dec 6 04:43:56 localhost systemd[1]: Reloading.
Dec 6 04:43:56 localhost systemd-sysv-generator[218679]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 6 04:43:56 localhost systemd-rc-local-generator[218674]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 6 04:43:56 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:43:56 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 6 04:43:56 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:43:56 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:43:56 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 6 04:43:56 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 6 04:43:56 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:43:57 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:43:57 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:43:57 localhost systemd[1]: Queuing reload/restart jobs for marked units…
Dec 6 04:43:57 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 6 04:43:57 localhost systemd[1]: Finished man-db-cache-update.service.
Dec 6 04:43:57 localhost systemd[1]: man-db-cache-update.service: Consumed 1.247s CPU time.
Dec 6 04:43:57 localhost systemd[1]: run-r63cb5f11d6da489d9f9b9756a15b8a69.service: Deactivated successfully.
Dec 6 04:43:59 localhost python3.9[219921]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 6 04:43:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28929 DF PROTO=TCP SPT=48408 DPT=9101 SEQ=2497040735 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D873EF0000000001030307)
Dec 6 04:44:00 localhost python3.9[220035]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 04:44:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23165 DF PROTO=TCP SPT=50212 DPT=9105 SEQ=2414219979 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D87BEF0000000001030307)
Dec 6 04:44:02 localhost python3.9[220213]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 6 04:44:02 localhost systemd[1]: Reloading.
Dec 6 04:44:02 localhost sshd[220233]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:44:02 localhost systemd-sysv-generator[220261]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 6 04:44:02 localhost systemd-rc-local-generator[220258]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 6 04:44:02 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:44:02 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 6 04:44:02 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:44:02 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:44:02 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 6 04:44:02 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 6 04:44:02 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:44:02 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:44:02 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:44:03 localhost python3.9[220377]: ansible-ansible.builtin.service_facts Invoked
Dec 6 04:44:03 localhost network[220394]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 6 04:44:03 localhost network[220395]: 'network-scripts' will be removed from distribution in near future.
Dec 6 04:44:03 localhost network[220396]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 6 04:44:04 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 6 04:44:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24775 DF PROTO=TCP SPT=44052 DPT=9102 SEQ=2062190782 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D8872F0000000001030307)
Dec 6 04:44:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6900 DF PROTO=TCP SPT=56978 DPT=9102 SEQ=1819866283 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D893EF0000000001030307)
Dec 6 04:44:08 localhost python3.9[220631]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 6 04:44:10 localhost python3.9[220742]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 6 04:44:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24777 DF PROTO=TCP SPT=44052 DPT=9102 SEQ=2062190782 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D89EEF0000000001030307)
Dec 6 04:44:10 localhost python3.9[220853]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 6 04:44:11 localhost python3.9[220964]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 6 04:44:12 localhost python3.9[221075]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 6 04:44:13 localhost python3.9[221186]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 6 04:44:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6.
Dec 6 04:44:13 localhost podman[221297]: 2025-12-06 09:44:13.887685338 +0000 UTC m=+0.085119253 container health_status 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 6 04:44:13 localhost podman[221297]: 2025-12-06 09:44:13.900694831 +0000 UTC m=+0.098128756 container exec_died 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 6 04:44:13 localhost systemd[1]: 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6.service: Deactivated successfully.
Dec 6 04:44:14 localhost python3.9[221298]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 6 04:44:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6285 DF PROTO=TCP SPT=51992 DPT=9101 SEQ=2735779511 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D8AC900000000001030307)
Dec 6 04:44:14 localhost python3.9[221429]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 6 04:44:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6287 DF PROTO=TCP SPT=51992 DPT=9101 SEQ=2735779511 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D8B8AF0000000001030307)
Dec 6 04:44:19 localhost python3.9[221540]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 04:44:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56655 DF PROTO=TCP SPT=58192 DPT=9105 SEQ=3867780288 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D8C1AF0000000001030307)
Dec 6 04:44:19 localhost python3.9[221650]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 04:44:20 localhost python3.9[221760]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 04:44:21 localhost python3.9[221870]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 04:44:21 localhost python3.9[221980]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 04:44:22 localhost sshd[222069]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:44:22 localhost python3.9[222092]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 04:44:23 localhost python3.9[222202]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 04:44:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56656 DF PROTO=TCP SPT=58192 DPT=9105 SEQ=3867780288 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D8D16F0000000001030307)
Dec 6 04:44:23 localhost python3.9[222312]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 04:44:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.
Dec 6 04:44:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.
Dec 6 04:44:24 localhost systemd[1]: tmp-crun.Shilia.mount: Deactivated successfully.
Dec 6 04:44:24 localhost podman[222330]: 2025-12-06 09:44:24.379449456 +0000 UTC m=+0.094071081 container health_status 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 6 04:44:24 localhost podman[222330]: 2025-12-06 09:44:24.429896802 +0000 UTC m=+0.144518477 container exec_died 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 6 04:44:24 localhost systemd[1]: 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.service: Deactivated
successfully. Dec 6 04:44:24 localhost podman[222331]: 2025-12-06 09:44:24.432126621 +0000 UTC m=+0.144791065 container health_status 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Dec 6 04:44:24 localhost podman[222331]: 2025-12-06 09:44:24.511951758 
+0000 UTC m=+0.224616192 container exec_died 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team) Dec 6 04:44:24 localhost systemd[1]: 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.service: Deactivated successfully. 
Dec 6 04:44:25 localhost systemd[1]: tmp-crun.wyUxBE.mount: Deactivated successfully. Dec 6 04:44:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8334 DF PROTO=TCP SPT=44462 DPT=9882 SEQ=3116503452 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D8D9EF0000000001030307) Dec 6 04:44:25 localhost python3.9[222466]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:44:26 localhost python3.9[222576]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:44:27 localhost python3.9[222686]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:44:27 localhost sshd[222797]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:44:28 localhost python3.9[222796]: ansible-ansible.builtin.file Invoked 
with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:44:28 localhost python3.9[222908]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:44:29 localhost python3.9[223018]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:44:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6289 DF PROTO=TCP SPT=51992 DPT=9101 SEQ=2735779511 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D8E7EF0000000001030307) Dec 6 04:44:29 localhost python3.9[223128]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None 
modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:44:30 localhost python3.9[223238]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:44:31 localhost python3.9[223348]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012 systemctl disable --now certmonger.service#012 test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 04:44:32 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56657 DF PROTO=TCP SPT=58192 DPT=9105 SEQ=3867780288 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D8F1EF0000000001030307) Dec 6 04:44:32 localhost python3.9[223458]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None Dec 6 04:44:33 localhost python3.9[223568]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None 
enabled=None force=None masked=None Dec 6 04:44:33 localhost systemd[1]: Reloading. Dec 6 04:44:33 localhost systemd-sysv-generator[223595]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 6 04:44:33 localhost systemd-rc-local-generator[223592]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 6 04:44:33 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:44:33 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 6 04:44:33 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:44:33 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:44:33 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Dec 6 04:44:33 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 6 04:44:33 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:44:33 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:44:33 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:44:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3810 DF PROTO=TCP SPT=44092 DPT=9102 SEQ=1703384252 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D8FC6F0000000001030307) Dec 6 04:44:35 localhost python3.9[223714]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 04:44:36 localhost python3.9[223825]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 04:44:36 localhost python3.9[223936]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 04:44:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 
MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45758 DF PROTO=TCP SPT=55426 DPT=9102 SEQ=3589475987 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D907F00000000001030307) Dec 6 04:44:38 localhost python3.9[224047]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 04:44:38 localhost python3.9[224158]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 04:44:39 localhost python3.9[224269]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 04:44:39 localhost python3.9[224380]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 04:44:40 localhost python3.9[224491]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 04:44:40 localhost kernel: DROPPING: IN=br-ex OUT= 
MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3812 DF PROTO=TCP SPT=44092 DPT=9102 SEQ=1703384252 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D9142F0000000001030307) Dec 6 04:44:42 localhost python3.9[224602]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 6 04:44:43 localhost python3.9[224712]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 6 04:44:44 localhost python3.9[224822]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 6 04:44:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37487 DF PROTO=TCP SPT=48462 DPT=9101 SEQ=1261408538 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT 
(020405500402080A52D921C00000000001030307) Dec 6 04:44:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6. Dec 6 04:44:44 localhost sshd[224944]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:44:44 localhost systemd[1]: tmp-crun.qj1j5H.mount: Deactivated successfully. Dec 6 04:44:44 localhost podman[224933]: 2025-12-06 09:44:44.79319465 +0000 UTC m=+0.095000439 container health_status 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, 
managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, io.buildah.version=1.41.3) Dec 6 04:44:44 localhost podman[224933]: 2025-12-06 09:44:44.802945122 +0000 UTC m=+0.104750911 container exec_died 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, 
tcib_managed=true) Dec 6 04:44:44 localhost systemd[1]: 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6.service: Deactivated successfully. Dec 6 04:44:44 localhost python3.9[224932]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 6 04:44:45 localhost python3.9[225063]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 6 04:44:46 localhost python3.9[225173]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 6 04:44:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46888 DF PROTO=TCP SPT=51098 DPT=9105 SEQ=1871906699 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D92AFD0000000001030307) Dec 6 04:44:46 localhost python3.9[225283]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph 
setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 6 04:44:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:44:47.274 160509 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 04:44:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:44:47.275 160509 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 04:44:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:44:47.276 160509 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 04:44:47 localhost python3.9[225393]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Dec 6 04:44:49 localhost python3.9[225503]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S 
access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Dec 6 04:44:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46890 DF PROTO=TCP SPT=51098 DPT=9105 SEQ=1871906699 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D936EF0000000001030307) Dec 6 04:44:49 localhost python3.9[225613]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Dec 6 04:44:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46891 DF PROTO=TCP SPT=51098 DPT=9105 SEQ=1871906699 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D946AF0000000001030307) Dec 6 04:44:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5. Dec 6 04:44:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999. Dec 6 04:44:54 localhost systemd[1]: tmp-crun.TJrHiV.mount: Deactivated successfully. 
Dec 6 04:44:54 localhost podman[225632]: 2025-12-06 09:44:54.93068448 +0000 UTC m=+0.088131834 container health_status 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true) Dec 6 04:44:54 localhost podman[225632]: 2025-12-06 09:44:54.965202589 +0000 UTC 
m=+0.122649893 container exec_died 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 04:44:54 localhost systemd[1]: 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.service: Deactivated successfully. 
Dec 6 04:44:54 localhost podman[225631]: 2025-12-06 09:44:54.978082853 +0000 UTC m=+0.138755900 container health_status 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251125, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller) Dec 6 04:44:55 localhost podman[225631]: 2025-12-06 09:44:55.045538792 +0000 UTC m=+0.206211789 container exec_died 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125) Dec 6 04:44:55 localhost systemd[1]: 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.service: Deactivated successfully. 
Dec 6 04:44:56 localhost python3.9[225767]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None Dec 6 04:44:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50295 DF PROTO=TCP SPT=60554 DPT=9882 SEQ=1031214754 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D9536F0000000001030307) Dec 6 04:44:57 localhost python3.9[225878]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None Dec 6 04:44:58 localhost python3.9[225994]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005548789.localdomain update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None Dec 6 04:44:59 localhost sshd[226020]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:44:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37491 DF PROTO=TCP SPT=48462 DPT=9101 SEQ=1261408538 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D95DF00000000001030307) Dec 6 04:44:59 localhost systemd-logind[766]: New session 55 of user zuul. 
Dec 6 04:44:59 localhost systemd[1]: Started Session 55 of User zuul. Dec 6 04:44:59 localhost systemd[1]: session-55.scope: Deactivated successfully. Dec 6 04:44:59 localhost systemd-logind[766]: Session 55 logged out. Waiting for processes to exit. Dec 6 04:44:59 localhost systemd-logind[766]: Removed session 55. Dec 6 04:45:00 localhost python3.9[226131]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:45:01 localhost python3.9[226217]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765014300.128928-3389-239748245121034/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 6 04:45:02 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46892 DF PROTO=TCP SPT=51098 DPT=9105 SEQ=1871906699 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D967F00000000001030307) Dec 6 04:45:02 localhost python3.9[226361]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:45:03 localhost python3.9[226439]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file 
path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 6 04:45:03 localhost podman[226504]: 2025-12-06 09:45:03.221429453 +0000 UTC m=+0.082208183 container exec ecefb99ff092b207f760d60e8fd97152c624df0e1016549024435ae641ee2e06 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548789, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, com.redhat.component=rhceph-container, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, CEPH_POINT_RELEASE=, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , GIT_BRANCH=main, vendor=Red Hat, Inc., vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, io.buildah.version=1.41.4, version=7) Dec 6 04:45:03 localhost podman[226504]: 2025-12-06 09:45:03.321461288 +0000 UTC m=+0.182240008 container exec_died ecefb99ff092b207f760d60e8fd97152c624df0e1016549024435ae641ee2e06 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548789, 
GIT_CLEAN=True, GIT_BRANCH=main, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, version=7, io.buildah.version=1.41.4, RELEASE=main, ceph=True, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., name=rhceph, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, io.openshift.expose-services=) Dec 6 04:45:04 localhost python3.9[226711]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:45:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24998 DF PROTO=TCP SPT=58212 DPT=9102 SEQ=2104232652 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D971B00000000001030307) Dec 6 04:45:04 localhost python3.9[226816]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765014303.8460283-3389-193099518521270/.source follow=False _original_basename=ssh-config 
checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 6 04:45:05 localhost python3.9[226941]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:45:05 localhost python3.9[227028]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765014304.958637-3389-88115518599741/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=84cd402761cf817a5c030b63eb0a858a413df311 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 6 04:45:06 localhost python3.9[227136]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:45:07 localhost python3.9[227222]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765014306.1094482-3389-222959011683758/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None 
selevel=None attributes=None Dec 6 04:45:07 localhost python3.9[227330]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:45:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24780 DF PROTO=TCP SPT=44052 DPT=9102 SEQ=2062190782 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D97DEF0000000001030307) Dec 6 04:45:08 localhost python3.9[227416]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765014307.2097983-3389-89954502211531/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 6 04:45:09 localhost python3.9[227526]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:45:10 localhost python3.9[227636]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None 
local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:45:10 localhost python3.9[227746]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 6 04:45:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25000 DF PROTO=TCP SPT=58212 DPT=9102 SEQ=2104232652 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D989700000000001030307) Dec 6 04:45:11 localhost python3.9[227858]: ansible-ansible.builtin.file Invoked with group=nova mode=0400 owner=nova path=/var/lib/nova/compute_id state=file recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:45:12 localhost python3.9[227966]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 6 04:45:13 localhost python3.9[228076]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:45:13 localhost python3.9[228162]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765014312.5535722-3765-177223898333608/.source.json follow=False _original_basename=nova_compute.json.j2 
checksum=211ffd0bca4b407eb4de45a749ef70116a7806fd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 6 04:45:14 localhost python3.9[228270]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:45:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19451 DF PROTO=TCP SPT=36780 DPT=9101 SEQ=3422919803 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D996F00000000001030307) Dec 6 04:45:14 localhost python3.9[228356]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765014313.7258832-3809-30460689960347/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 6 04:45:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6. 
Dec 6 04:45:15 localhost podman[228467]: 2025-12-06 09:45:15.684963323 +0000 UTC m=+0.081469819 container health_status 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3) Dec 6 04:45:15 localhost podman[228467]: 2025-12-06 09:45:15.697099433 +0000 UTC m=+0.093605929 container exec_died 
44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125) Dec 6 04:45:15 localhost systemd[1]: 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6.service: Deactivated successfully. 
Dec 6 04:45:15 localhost python3.9[228466]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False Dec 6 04:45:16 localhost python3.9[228593]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Dec 6 04:45:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19453 DF PROTO=TCP SPT=36780 DPT=9101 SEQ=3422919803 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D9A2EF0000000001030307) Dec 6 04:45:17 localhost python3[228703]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False Dec 6 04:45:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23944 DF PROTO=TCP SPT=38432 DPT=9105 SEQ=1457126476 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D9AC2F0000000001030307) Dec 6 04:45:20 localhost sshd[228730]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:45:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23945 DF PROTO=TCP SPT=38432 DPT=9105 SEQ=1457126476 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D9BBEF0000000001030307) Dec 6 04:45:25 localhost sshd[228758]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:45:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50298 DF 
PROTO=TCP SPT=60554 DPT=9882 SEQ=1031214754 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D9C3F00000000001030307) Dec 6 04:45:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5. Dec 6 04:45:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999. Dec 6 04:45:27 localhost podman[228760]: 2025-12-06 09:45:27.669911172 +0000 UTC m=+1.830018975 container health_status 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, io.buildah.version=1.41.3, config_id=ovn_controller) Dec 6 04:45:27 localhost podman[228760]: 2025-12-06 09:45:27.693995126 +0000 UTC m=+1.854102919 container 
exec_died 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 04:45:27 localhost systemd[1]: 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.service: Deactivated successfully. 
Dec 6 04:45:27 localhost podman[228717]: 2025-12-06 09:45:17.807641773 +0000 UTC m=+0.045823724 image pull quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified Dec 6 04:45:27 localhost podman[228761]: 2025-12-06 09:45:27.772435827 +0000 UTC m=+1.927701115 container health_status 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team) Dec 6 04:45:27 localhost podman[228761]: 2025-12-06 09:45:27.803055101 +0000 UTC m=+1.958320389 container exec_died 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Dec 6 04:45:27 localhost systemd[1]: 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.service: Deactivated successfully. Dec 6 04:45:27 localhost podman[228827]: Dec 6 04:45:27 localhost podman[228827]: 2025-12-06 09:45:27.909723029 +0000 UTC m=+0.073781962 container create a90088f7335f0424abe9208b181fba6d3fc6d1408325e575f0ba866a5d87ad9b (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, managed_by=edpm_ansible, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, tcib_managed=true, org.label-schema.schema-version=1.0) Dec 6 04:45:27 localhost podman[228827]: 2025-12-06 09:45:27.870036644 +0000 UTC m=+0.034095617 image pull quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified Dec 6 04:45:27 localhost python3[228703]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env 
NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init Dec 6 04:45:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19455 DF PROTO=TCP SPT=36780 DPT=9101 SEQ=3422919803 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D9D3EF0000000001030307) Dec 6 04:45:30 localhost python3.9[228975]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 6 04:45:31 localhost python3.9[229087]: ansible-container_config_data 
Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False Dec 6 04:45:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23946 DF PROTO=TCP SPT=38432 DPT=9105 SEQ=1457126476 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D9DBEF0000000001030307) Dec 6 04:45:32 localhost python3.9[229197]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Dec 6 04:45:32 localhost sshd[229253]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:45:33 localhost python3[229309]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False Dec 6 04:45:33 localhost python3[229309]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012 {#012 "Id": "5571c1b2140c835f70406e4553b3b44135b9c9b4eb673345cbd571460c5d59a3",#012 "Digest": "sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5",#012 "RepoTags": [#012 "quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified"#012 ],#012 "RepoDigests": [#012 "quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5"#012 ],#012 "Parent": "",#012 "Comment": "",#012 "Created": "2025-12-01T06:31:10.62653219Z",#012 "Config": {#012 "User": "nova",#012 "Env": [#012 "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012 "LANG=en_US.UTF-8",#012 "TZ=UTC",#012 "container=oci"#012 ],#012 "Entrypoint": [#012 "dumb-init",#012 "--single-child",#012 "--"#012 ],#012 "Cmd": [#012 "kolla_start"#012 ],#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 
"org.label-schema.build-date": "20251125",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",#012 "tcib_managed": "true"#012 },#012 "StopSignal": "SIGTERM"#012 },#012 "Version": "",#012 "Author": "",#012 "Architecture": "amd64",#012 "Os": "linux",#012 "Size": 1211779450,#012 "VirtualSize": 1211779450,#012 "GraphDriver": {#012 "Name": "overlay",#012 "Data": {#012 "LowerDir": "/var/lib/containers/storage/overlay/bb270959ea4f0d2c0dd791aa5a80a96b2d6621117349e00f19fca53fc0632a22/diff:/var/lib/containers/storage/overlay/11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60/diff:/var/lib/containers/storage/overlay/ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9/diff:/var/lib/containers/storage/overlay/cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa/diff",#012 "UpperDir": "/var/lib/containers/storage/overlay/45b05c829d68772ce6f113ebe908af5bcf8533af84d5ff30fea8dfca06e71a2d/diff",#012 "WorkDir": "/var/lib/containers/storage/overlay/45b05c829d68772ce6f113ebe908af5bcf8533af84d5ff30fea8dfca06e71a2d/work"#012 }#012 },#012 "RootFS": {#012 "Type": "layers",#012 "Layers": [#012 "sha256:cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa",#012 "sha256:d26dbee55abfd9d572bfbbd4b765c5624affd9ef117ad108fb34be41e199a619",#012 "sha256:86c2cd3987225f8a9bf38cc88e9c24b56bdf4a194f2301186519b4a7571b0c92",#012 "sha256:baa8e0bc73d6b505f07c40d4f69a464312cc41ae2045c7975dd4759c27721a22",#012 "sha256:d0cde44181262e43c105085c32a5af158b232f2e2ce4fe4b50530d7cdc5126cd"#012 ]#012 },#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251125",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 
"org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",#012 "tcib_managed": "true"#012 },#012 "Annotations": {},#012 "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012 "User": "nova",#012 "History": [#012 {#012 "created": "2025-11-25T04:02:36.223494528Z",#012 "created_by": "/bin/sh -c #(nop) ADD file:cacf1a97b4abfca5db2db22f7ddbca8fd7daa5076a559639c109f09aaf55871d in / ",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-25T04:02:36.223562059Z",#012 "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\" org.label-schema.name=\"CentOS Stream 9 Base Image\" org.label-schema.vendor=\"CentOS\" org.label-schema.license=\"GPLv2\" org.label-schema.build-date=\"20251125\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-25T04:02:39.054452717Z",#012 "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"#012 },#012 {#012 "created": "2025-12-01T06:09:28.025707917Z",#012 "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",#012 "comment": "FROM quay.io/centos/centos:stream9",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025744608Z",#012 "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025767729Z",#012 "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025791379Z",#012 "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.02581523Z",#012 "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025867611Z",#012 "created_by": "/bin/sh -c #(nop) USER root",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.469442331Z",#012 "created_by": 
"/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:10:02.029095017Z",#012 "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",#012 "empty_layer": true#012 },#012 {#012 Dec 6 04:45:33 localhost podman[229359]: 2025-12-06 09:45:33.853419821 +0000 UTC m=+0.138638117 container remove 41cf0c2eef405ae219098f5959c425c7035944a28046cea63417ef9b175b7007 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18576754feb36b85b5c8742ad9b5643d-179caa3982511c1fd3314b961771f96c'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red 
Hat, Inc., name=rhosp17/openstack-nova-compute, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, managed_by=tripleo_ansible) Dec 6 04:45:33 localhost python3[229309]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman rm --force nova_compute Dec 6 04:45:33 localhost podman[229372]: Dec 6 04:45:33 localhost podman[229372]: 2025-12-06 09:45:33.953947781 +0000 UTC m=+0.081804170 container create 6674d58fdb9d90e78bfb85f434c919baa1836ad3e98a097c0a64c1152f7163c8 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack 
Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3) Dec 6 04:45:33 localhost podman[229372]: 2025-12-06 09:45:33.917845172 +0000 UTC m=+0.045701481 image pull quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified Dec 6 04:45:33 localhost python3[229309]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume 
/var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start Dec 6 04:45:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=507 DF PROTO=TCP SPT=40508 DPT=9102 SEQ=222705752 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D9E6AF0000000001030307) Dec 6 04:45:35 localhost python3.9[229520]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 6 04:45:36 localhost python3.9[229632]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:45:37 localhost python3.9[229741]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765014336.761537-4085-13462632909343/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:45:37 localhost 
python3.9[229796]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Dec 6 04:45:37 localhost systemd[1]: Reloading. Dec 6 04:45:37 localhost systemd-rc-local-generator[229822]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 6 04:45:37 localhost systemd-sysv-generator[229825]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 6 04:45:38 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:45:38 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 6 04:45:38 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:45:38 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:45:38 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Dec 6 04:45:38 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 6 04:45:38 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:45:38 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:45:38 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:45:38 localhost python3.9[229886]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 6 04:45:38 localhost systemd[1]: Reloading. Dec 6 04:45:39 localhost systemd-rc-local-generator[229909]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 6 04:45:39 localhost systemd-sysv-generator[229913]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 6 04:45:39 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:45:39 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 6 04:45:39 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:45:39 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:45:39 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Dec 6 04:45:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47735 DF PROTO=TCP SPT=46962 DPT=9882 SEQ=1071838655 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D9F7EF0000000001030307) Dec 6 04:45:39 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 6 04:45:39 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:45:39 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:45:39 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:45:39 localhost systemd[1]: Starting nova_compute container... Dec 6 04:45:39 localhost systemd[1]: Started libcrun container. 
Dec 6 04:45:39 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81aea125146c60e1b0ee38b0c4ee8c70ba3c42600b7bfc70695e2bff0e11c0ad/merged/etc/nvme supports timestamps until 2038 (0x7fffffff) Dec 6 04:45:39 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81aea125146c60e1b0ee38b0c4ee8c70ba3c42600b7bfc70695e2bff0e11c0ad/merged/etc/multipath supports timestamps until 2038 (0x7fffffff) Dec 6 04:45:39 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81aea125146c60e1b0ee38b0c4ee8c70ba3c42600b7bfc70695e2bff0e11c0ad/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Dec 6 04:45:39 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81aea125146c60e1b0ee38b0c4ee8c70ba3c42600b7bfc70695e2bff0e11c0ad/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff) Dec 6 04:45:39 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81aea125146c60e1b0ee38b0c4ee8c70ba3c42600b7bfc70695e2bff0e11c0ad/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Dec 6 04:45:39 localhost podman[229927]: 2025-12-06 09:45:39.39188973 +0000 UTC m=+0.119081208 container init 6674d58fdb9d90e78bfb85f434c919baa1836ad3e98a097c0a64c1152f7163c8 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=edpm, container_name=nova_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 
'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.build-date=20251125) Dec 6 04:45:39 localhost podman[229927]: 2025-12-06 09:45:39.401454027 +0000 UTC m=+0.128645505 container start 6674d58fdb9d90e78bfb85f434c919baa1836ad3e98a097c0a64c1152f7163c8 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm, container_name=nova_compute, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', 
'/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Dec 6 04:45:39 localhost podman[229927]: nova_compute Dec 6 04:45:39 localhost systemd[1]: tmp-crun.8RpS1f.mount: Deactivated successfully. Dec 6 04:45:39 localhost nova_compute[229942]: + sudo -E kolla_set_configs Dec 6 04:45:39 localhost systemd[1]: Started nova_compute container. Dec 6 04:45:39 localhost nova_compute[229942]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Dec 6 04:45:39 localhost nova_compute[229942]: INFO:__main__:Validating config file Dec 6 04:45:39 localhost nova_compute[229942]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Dec 6 04:45:39 localhost nova_compute[229942]: INFO:__main__:Copying service configuration files Dec 6 04:45:39 localhost nova_compute[229942]: INFO:__main__:Deleting /etc/nova/nova.conf Dec 6 04:45:39 localhost nova_compute[229942]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf Dec 6 04:45:39 localhost nova_compute[229942]: INFO:__main__:Setting permission for /etc/nova/nova.conf Dec 6 04:45:39 localhost nova_compute[229942]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf Dec 6 04:45:39 localhost nova_compute[229942]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf Dec 6 04:45:39 localhost nova_compute[229942]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf Dec 6 04:45:39 localhost nova_compute[229942]: INFO:__main__:Setting permission for 
/etc/nova/nova.conf.d/03-ceph-nova.conf Dec 6 04:45:39 localhost nova_compute[229942]: INFO:__main__:Copying /var/lib/kolla/config_files/99-nova-compute-cells-workarounds.conf to /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf Dec 6 04:45:39 localhost nova_compute[229942]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf Dec 6 04:45:39 localhost nova_compute[229942]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf Dec 6 04:45:39 localhost nova_compute[229942]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf Dec 6 04:45:39 localhost nova_compute[229942]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf Dec 6 04:45:39 localhost nova_compute[229942]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf Dec 6 04:45:39 localhost nova_compute[229942]: INFO:__main__:Deleting /etc/ceph Dec 6 04:45:39 localhost nova_compute[229942]: INFO:__main__:Creating directory /etc/ceph Dec 6 04:45:39 localhost nova_compute[229942]: INFO:__main__:Setting permission for /etc/ceph Dec 6 04:45:39 localhost nova_compute[229942]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf Dec 6 04:45:39 localhost nova_compute[229942]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf Dec 6 04:45:39 localhost nova_compute[229942]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring Dec 6 04:45:39 localhost nova_compute[229942]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring Dec 6 04:45:39 localhost nova_compute[229942]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey Dec 6 04:45:39 localhost nova_compute[229942]: INFO:__main__:Setting 
permission for /var/lib/nova/.ssh/ssh-privatekey Dec 6 04:45:39 localhost nova_compute[229942]: INFO:__main__:Deleting /var/lib/nova/.ssh/config Dec 6 04:45:39 localhost nova_compute[229942]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config Dec 6 04:45:39 localhost nova_compute[229942]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config Dec 6 04:45:39 localhost nova_compute[229942]: INFO:__main__:Deleting /usr/sbin/iscsiadm Dec 6 04:45:39 localhost nova_compute[229942]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm Dec 6 04:45:39 localhost nova_compute[229942]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm Dec 6 04:45:39 localhost nova_compute[229942]: INFO:__main__:Writing out command to execute Dec 6 04:45:39 localhost nova_compute[229942]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf Dec 6 04:45:39 localhost nova_compute[229942]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring Dec 6 04:45:39 localhost nova_compute[229942]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ Dec 6 04:45:39 localhost nova_compute[229942]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey Dec 6 04:45:39 localhost nova_compute[229942]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config Dec 6 04:45:39 localhost nova_compute[229942]: ++ cat /run_command Dec 6 04:45:39 localhost nova_compute[229942]: + CMD=nova-compute Dec 6 04:45:39 localhost nova_compute[229942]: + ARGS= Dec 6 04:45:39 localhost nova_compute[229942]: + sudo kolla_copy_cacerts Dec 6 04:45:39 localhost nova_compute[229942]: + [[ ! -n '' ]] Dec 6 04:45:39 localhost nova_compute[229942]: + . 
kolla_extend_start Dec 6 04:45:39 localhost nova_compute[229942]: Running command: 'nova-compute' Dec 6 04:45:39 localhost nova_compute[229942]: + echo 'Running command: '\''nova-compute'\''' Dec 6 04:45:39 localhost nova_compute[229942]: + umask 0022 Dec 6 04:45:39 localhost nova_compute[229942]: + exec nova-compute Dec 6 04:45:40 localhost python3.9[230062]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 6 04:45:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=509 DF PROTO=TCP SPT=40508 DPT=9102 SEQ=222705752 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52D9FE6F0000000001030307) Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.183 229946 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.183 229946 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.183 229946 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.183 229946 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.301 229946 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 6 04:45:41 
localhost nova_compute[229942]: 2025-12-06 09:45:41.322 229946 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.021s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.323 229946 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.788 229946 INFO nova.virt.driver [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.906 229946 INFO nova.compute.provider_config [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.918 229946 WARNING nova.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] Current Nova version does not support computes older than Yoga but the minimum compute service level in your cell is 57 and the oldest supported service level is 61.: nova.exception.TooOldComputeService: Current Nova version does not support computes older than Yoga but the minimum compute service level in your cell is 57 and the oldest supported service level is 61.#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.918 229946 DEBUG oslo_concurrency.lockutils [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.918 229946 DEBUG oslo_concurrency.lockutils [None 
req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.919 229946 DEBUG oslo_concurrency.lockutils [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.919 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.919 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.919 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.919 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.920 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.920 229946 DEBUG 
oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.920 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] allow_resize_to_same_host = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.920 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] arq_binding_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.920 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] backdoor_port = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.920 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] backdoor_socket = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.920 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] block_device_allocate_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.920 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.921 229946 DEBUG oslo_service.service 
[None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] cert = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.921 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] compute_driver = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.921 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] compute_monitors = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.921 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] config_dir = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.921 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] config_drive_format = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.921 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] config_file = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.921 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] config_source = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.922 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - 
- - - -] console_host = np0005548789.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.922 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] control_exchange = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.922 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] cpu_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.922 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] daemon = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.922 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] debug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.922 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.922 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] default_availability_zone = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.923 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] default_ephemeral_format = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.923 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.923 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] default_schedule_zone = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.923 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] disk_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.923 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] enable_new_services = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.923 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] enabled_apis = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 
04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.923 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] enabled_ssl_apis = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.924 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] flat_injected = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.924 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] force_config_drive = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.924 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] force_raw_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.924 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] graceful_shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.924 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.924 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] host = np0005548789.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.924 229946 DEBUG 
oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] initial_cpu_allocation_ratio = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.925 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] initial_disk_allocation_ratio = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.925 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] initial_ram_allocation_ratio = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.925 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] injected_network_template = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.925 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] instance_build_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.925 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] instance_delete_interval = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.925 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] instance_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.925 
229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] instance_name_template = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.926 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] instance_usage_audit = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.926 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] instance_usage_audit_period = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.926 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] instance_uuid_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.926 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] instances_path = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.926 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.926 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.926 229946 DEBUG oslo_service.service 
[None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] live_migration_retry_count = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.927 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] log_config_append = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.927 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.927 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] log_dir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.927 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] log_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.927 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] log_options = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.927 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] log_rotate_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.927 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] log_rotate_interval_type = days log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.927 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] log_rotation_type = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.928 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.928 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.928 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.928 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.928 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] logging_user_identity_format = %(user)s %(project)s %(domain)s 
%(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.928 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] long_rpc_timeout = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.928 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] max_concurrent_builds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.929 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.929 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] max_concurrent_snapshots = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.929 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] max_local_block_devices = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.929 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] max_logfile_count = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.929 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] max_logfile_size_mb = 20 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.929 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.929 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] metadata_listen = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.929 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] metadata_listen_port = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.930 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] metadata_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.930 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] migrate_max_retries = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.930 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] mkisofs_cmd = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.930 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] my_block_storage_ip = 192.168.122.107 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 
04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.930 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] my_ip = 192.168.122.107 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.930 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] network_allocate_retries = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.930 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.931 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] osapi_compute_listen = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.931 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] osapi_compute_listen_port = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.931 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] osapi_compute_unique_server_name_scope = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.931 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] osapi_compute_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost 
nova_compute[229942]: 2025-12-06 09:45:41.931 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] password_length = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.931 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] periodic_enable = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.931 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] periodic_fuzzy_delay = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.932 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] pointer_model = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.932 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] preallocate_images = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.932 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] publish_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.932 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] pybasedir = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.932 229946 DEBUG oslo_service.service [None 
req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] ram_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.932 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] rate_limit_burst = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.932 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] rate_limit_except_level = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.932 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] rate_limit_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.933 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] reboot_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.933 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] reclaim_instance_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.933 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] record = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.933 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] reimage_timeout_per_gb = 20 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.933 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] report_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.933 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] rescue_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.933 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] reserved_host_cpus = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.933 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] reserved_host_disk_mb = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.934 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] reserved_host_memory_mb = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.934 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] reserved_huge_pages = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.934 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] resize_confirm_window = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 
2025-12-06 09:45:41.934 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] resize_fs_using_block_device = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.934 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.934 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] rootwrap_config = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.934 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] rpc_response_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.935 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] run_external_periodic_tasks = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.935 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.935 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 
09:45:41.935 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.935 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.935 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] service_down_time = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.935 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] servicegroup_driver = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.936 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] shelved_offload_time = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.936 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] shelved_poll_interval = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.936 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.936 229946 DEBUG oslo_service.service [None 
req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] source_is_ipv6 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.936 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] ssl_only = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.936 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] state_path = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.936 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] sync_power_state_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.937 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] sync_power_state_pool_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.937 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] syslog_log_facility = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.937 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] tempdir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.937 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] timeout_nbd = 10 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.937 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.937 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] update_resources_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.937 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] use_cow_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.938 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] use_eventlog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.938 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] use_journal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.938 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] use_json = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.938 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] use_rootwrap_daemon = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 
09:45:41.938 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] use_stderr = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.938 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] use_syslog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.938 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] vcpu_pin_set = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.938 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] vif_plugging_is_fatal = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.939 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] vif_plugging_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.939 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] virt_mkfs = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.939 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] volume_usage_poll_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.939 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] watch_log_file = 
False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.939 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] web = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.939 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.939 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] oslo_concurrency.lock_path = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.940 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.940 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.940 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] oslo_messaging_metrics.metrics_process_name = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.940 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] 
oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.940 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.940 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] api.auth_strategy = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.940 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] api.compute_link_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.941 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.941 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] api.dhcp_domain = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.941 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] api.enable_instance_password = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.941 229946 DEBUG 
oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] api.glance_link_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.941 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.941 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.941 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.942 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.942 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] api.local_metadata_per_cell = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.942 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] api.max_limit = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.942 229946 DEBUG 
oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] api.metadata_cache_expiration = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.942 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] api.neutron_default_tenant_id = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.942 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] api.use_forwarded_for = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.942 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] api.use_neutron_default_nets = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.943 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.943 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.943 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.943 229946 DEBUG 
oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] api.vendordata_dynamic_ssl_certfile = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.943 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.943 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] api.vendordata_jsonfile_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.943 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] api.vendordata_providers = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.944 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] cache.backend = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.944 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] cache.backend_argument = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.944 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] cache.config_prefix = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.944 229946 DEBUG oslo_service.service [None 
req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] cache.dead_timeout = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.944 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] cache.debug_cache_backend = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.944 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] cache.enable_retry_client = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.944 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] cache.enable_socket_keepalive = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.945 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] cache.enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.945 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] cache.expiration_time = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.945 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.945 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] 
cache.hashclient_retry_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.945 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] cache.memcache_dead_retry = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.945 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] cache.memcache_password = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.945 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.946 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.946 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] cache.memcache_pool_maxsize = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.946 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.946 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] 
cache.memcache_sasl_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.946 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] cache.memcache_servers = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.946 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] cache.memcache_socket_timeout = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.946 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] cache.memcache_username = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.947 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] cache.proxies = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.947 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] cache.retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.947 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] cache.retry_delay = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.947 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] cache.socket_keepalive_count = 1 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.947 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] cache.socket_keepalive_idle = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.947 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.947 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] cache.tls_allowed_ciphers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.948 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] cache.tls_cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.948 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] cache.tls_certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.948 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] cache.tls_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.948 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] cache.tls_keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 
localhost nova_compute[229942]: 2025-12-06 09:45:41.948 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] cinder.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.948 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] cinder.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.948 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] cinder.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.949 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] cinder.catalog_info = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.949 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] cinder.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.949 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] cinder.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.949 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] cinder.cross_az_attach = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.949 229946 DEBUG 
oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] cinder.debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.949 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] cinder.endpoint_template = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.949 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] cinder.http_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.950 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] cinder.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.950 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] cinder.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.950 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] cinder.os_region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.950 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] cinder.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.950 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] cinder.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.950 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.950 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] compute.cpu_dedicated_set = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.951 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] compute.cpu_shared_set = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.951 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.951 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.951 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.951 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.951 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.951 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.952 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.952 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.952 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] compute.vmdk_allowed_types = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.952 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] conductor.workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.952 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] console.allowed_origins = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.952 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] console.ssl_ciphers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.952 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] console.ssl_minimum_version = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.953 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] consoleauth.token_ttl = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.953 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] cyborg.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.953 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] cyborg.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.953 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] cyborg.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.953 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] cyborg.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.953 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] cyborg.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.953 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] cyborg.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.954 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] cyborg.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.954 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] cyborg.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.954 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] cyborg.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.954 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] cyborg.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.954 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] cyborg.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.954 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] cyborg.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.954 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] cyborg.service_type = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.954 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] cyborg.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.955 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] cyborg.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.955 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.955 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] cyborg.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.955 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] cyborg.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.955 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] cyborg.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.955 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] database.backend = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.955 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] database.connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.956 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] database.connection_debug = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.956 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] database.connection_parameters = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.956 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.956 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] database.connection_trace = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.956 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.956 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] database.db_max_retries = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.956 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.957 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.957 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] database.max_overflow = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.957 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] database.max_pool_size = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.957 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] database.max_retries = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.957 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] database.mysql_enable_ndb = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.957 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] database.mysql_sql_mode = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.957 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.958 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] database.pool_timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.958 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] database.retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.958 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] database.slave_connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.958 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.958 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] api_database.backend = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.958 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] api_database.connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.959 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] api_database.connection_debug = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.959 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] api_database.connection_parameters = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.959 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.959 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] api_database.connection_trace = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.959 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.959 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] api_database.db_max_retries = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.959 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.960 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.960 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] api_database.max_overflow = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.960 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] api_database.max_pool_size = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.960 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] api_database.max_retries = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.960 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] api_database.mysql_enable_ndb = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.960 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] api_database.mysql_sql_mode = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.960 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.961 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] api_database.pool_timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.961 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] api_database.retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.961 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] api_database.slave_connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.961 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.961 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] devices.enabled_mdev_types = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.961 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.961 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.961 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.962 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] glance.api_servers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.962 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] glance.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.962 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] glance.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.962 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] glance.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.962 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] glance.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.962 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] glance.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.962 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] glance.debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.963 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.963 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.963 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] glance.enable_rbd_download = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.963 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] glance.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.963 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] glance.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.963 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] glance.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.963 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] glance.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.964 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] glance.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.964 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] glance.num_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.964 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] glance.rbd_ceph_conf = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.964 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] glance.rbd_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.964 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] glance.rbd_pool = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.964 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] glance.rbd_user = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.964 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] glance.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.965 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] glance.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.965 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] glance.service_type = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.965 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] glance.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.965 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] glance.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.965 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.965 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] glance.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.965 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] glance.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.965 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.966 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] glance.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.966 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] guestfs.debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.966 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] hyperv.config_drive_cdrom = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.966 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.966 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] hyperv.dynamic_memory_ratio = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.966 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.966 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] hyperv.enable_remotefx = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.967 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] hyperv.instances_path_share = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.967 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] hyperv.iscsi_initiator_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.967 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] hyperv.limit_cpu_features = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.967 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.967 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.967 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.967 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.968 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] hyperv.qemu_img_cmd = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.968 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] hyperv.use_multipath_io = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.968 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.968 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.968 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] hyperv.vswitch_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.968 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.968 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] mks.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.969 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] mks.mksproxy_base_url = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.969 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] image_cache.manager_interval = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.969 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.969 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.969 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.969 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.970 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] image_cache.subdirectory_name = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.970 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] ironic.api_max_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.970 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] ironic.api_retry_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.970 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] ironic.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.970 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] ironic.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.970 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] ironic.cafile = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.970 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] ironic.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.971 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] ironic.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.971 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] ironic.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.971 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] ironic.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.971 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] ironic.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.971 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] ironic.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.971 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] ironic.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost 
nova_compute[229942]: 2025-12-06 09:45:41.971 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] ironic.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.972 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] ironic.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.972 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] ironic.partition_key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.972 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] ironic.peer_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.972 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] ironic.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.972 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.972 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] ironic.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.972 229946 DEBUG oslo_service.service [None 
req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] ironic.service_type = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.973 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] ironic.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.973 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] ironic.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.973 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.973 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] ironic.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.973 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] ironic.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.973 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] ironic.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.973 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] 
key_manager.backend = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.974 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] key_manager.fixed_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.974 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] barbican.auth_endpoint = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.974 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] barbican.barbican_api_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.974 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] barbican.barbican_endpoint = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.974 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.974 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] barbican.barbican_region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.974 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] barbican.cafile = 
None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.975 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] barbican.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.975 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] barbican.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.975 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] barbican.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.975 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] barbican.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.975 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] barbican.number_of_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.975 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] barbican.retry_delay = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.975 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] barbican.send_service_user_token = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.976 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] barbican.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.976 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] barbican.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.976 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] barbican.verify_ssl = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.976 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] barbican.verify_ssl_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.976 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.976 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.976 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] barbican_service_user.cafile = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.977 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.977 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.977 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.977 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] barbican_service_user.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.977 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.977 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] barbican_service_user.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.977 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] vault.approle_role_id = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.977 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] vault.approle_secret_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.978 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] vault.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.978 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] vault.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.978 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] vault.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.978 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] vault.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.978 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] vault.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.978 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] vault.kv_mountpoint = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost 
nova_compute[229942]: 2025-12-06 09:45:41.978 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] vault.kv_version = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.979 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] vault.namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.979 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] vault.root_token_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.979 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] vault.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.979 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] vault.ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.979 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] vault.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.979 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] vault.use_ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.979 229946 DEBUG oslo_service.service [None 
req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] vault.vault_url = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.980 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] keystone.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.980 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] keystone.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.980 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] keystone.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.980 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] keystone.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.980 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] keystone.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.980 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] keystone.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.980 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] 
keystone.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.981 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] keystone.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.981 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] keystone.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.981 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] keystone.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.981 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] keystone.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.981 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] keystone.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.981 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] keystone.service_type = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.981 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] keystone.split_loggers = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.982 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] keystone.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.982 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.982 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] keystone.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.982 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] keystone.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.982 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] keystone.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.982 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] libvirt.connection_uri = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.982 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] libvirt.cpu_mode = host-model log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.983 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] libvirt.cpu_model_extra_flags = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.983 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] libvirt.cpu_models = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.983 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.983 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.983 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] libvirt.cpu_power_management = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.983 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.983 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.984 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] libvirt.device_detach_timeout = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.984 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] libvirt.disk_cachemodes = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.984 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] libvirt.disk_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.984 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] libvirt.enabled_perf_events = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.984 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] libvirt.file_backed_memory = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.984 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] libvirt.gid_maps = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.984 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] libvirt.hw_disk_discard = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 
04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.985 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] libvirt.hw_machine_type = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.985 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] libvirt.images_rbd_ceph_conf = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.985 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.985 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.985 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.985 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] libvirt.images_rbd_pool = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.985 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] libvirt.images_type = rbd log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.986 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] libvirt.images_volume_group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.986 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] libvirt.inject_key = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.986 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] libvirt.inject_partition = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.986 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] libvirt.inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.986 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] libvirt.iscsi_iface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.986 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] libvirt.iser_use_multipath = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.986 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.986 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.987 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.987 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.987 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.987 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.987 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.987 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] 
libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.987 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] libvirt.live_migration_scheme = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.988 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.988 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.988 229946 WARNING oslo_config.cfg [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal ( Dec 6 04:45:41 localhost nova_compute[229942]: live_migration_uri is deprecated for removal in favor of two other options that Dec 6 04:45:41 localhost nova_compute[229942]: allow to change live migration scheme and target URI: ``live_migration_scheme`` Dec 6 04:45:41 localhost nova_compute[229942]: and ``live_migration_inbound_addr`` respectively. Dec 6 04:45:41 localhost nova_compute[229942]: ). 
Its value may be silently ignored in the future.#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.988 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] libvirt.live_migration_uri = qemu+ssh://nova@%s/system?keyfile=/var/lib/nova/.ssh/ssh-privatekey log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.988 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] libvirt.live_migration_with_native_tls = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.988 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] libvirt.max_queues = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.989 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.989 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] libvirt.nfs_mount_options = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.989 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] libvirt.nfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.989 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] 
libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.989 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] libvirt.num_iser_scan_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.989 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.989 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.990 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] libvirt.num_pcie_ports = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.990 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] libvirt.num_volume_scan_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.990 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] libvirt.pmem_namespaces = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.990 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] libvirt.quobyte_client_cfg = None 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.990 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.990 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] libvirt.rbd_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.990 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.991 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.991 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] libvirt.rbd_secret_uuid = 1939e851-b10c-5c3b-9bb7-8e7f380233e8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.991 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] libvirt.rbd_user = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.991 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] 
libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.991 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.991 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] libvirt.rescue_image_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.991 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] libvirt.rescue_kernel_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.992 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] libvirt.rescue_ramdisk_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.992 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] libvirt.rng_dev_path = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.992 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] libvirt.rx_queue_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.992 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] libvirt.smbfs_mount_options = 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.992 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.992 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] libvirt.snapshot_compression = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.992 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] libvirt.snapshot_image_format = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.993 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] libvirt.snapshots_directory = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.993 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.993 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] libvirt.swtpm_enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.993 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] libvirt.swtpm_group = tss 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.993 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] libvirt.swtpm_user = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.993 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] libvirt.sysinfo_serial = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.993 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] libvirt.tx_queue_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.994 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] libvirt.uid_maps = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.994 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.994 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] libvirt.virt_type = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.994 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] libvirt.volume_clear = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 
04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.994 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] libvirt.volume_clear_size = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.994 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] libvirt.volume_use_multipath = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.995 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] libvirt.vzstorage_cache_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.995 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] libvirt.vzstorage_log_path = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.995 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] libvirt.vzstorage_mount_group = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.995 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] libvirt.vzstorage_mount_opts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.995 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] libvirt.vzstorage_mount_perms = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 
6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.995 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.995 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] libvirt.vzstorage_mount_user = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.996 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.996 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] neutron.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.996 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] neutron.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.996 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] neutron.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.996 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] neutron.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 
2025-12-06 09:45:41.996 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] neutron.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.996 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] neutron.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.997 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] neutron.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.997 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] neutron.default_floating_pool = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.997 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] neutron.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.997 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.997 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] neutron.http_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.997 229946 DEBUG 
oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] neutron.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.997 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] neutron.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.997 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] neutron.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.998 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.998 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] neutron.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.998 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] neutron.ovs_bridge = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.998 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] neutron.physnets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.998 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] 
neutron.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.998 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.998 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] neutron.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.999 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] neutron.service_type = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.999 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] neutron.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.999 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] neutron.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.999 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.999 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] neutron.timeout = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:41 localhost nova_compute[229942]: 2025-12-06 09:45:41.999 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] neutron.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:41.999 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] neutron.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.000 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.000 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] notifications.default_level = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.000 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.000 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.000 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.000 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] pci.alias = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.000 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] pci.device_spec = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.001 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] pci.report_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.001 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] placement.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.001 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] placement.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.001 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] placement.auth_url = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.001 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] placement.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.001 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] placement.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.002 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] placement.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.002 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] placement.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.002 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] placement.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.002 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] placement.default_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.002 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] placement.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.002 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] placement.domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.002 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] placement.domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.002 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] placement.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.003 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] placement.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.003 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] placement.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.003 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] placement.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.003 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] placement.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.003 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] placement.password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.003 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] placement.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.003 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] placement.project_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.004 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] placement.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.004 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] placement.project_name = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.004 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] placement.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.004 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] placement.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.004 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] placement.service_type = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.004 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] placement.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.004 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] placement.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.005 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.005 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] placement.system_scope = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.005 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] placement.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.005 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] placement.trust_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.005 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] placement.user_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.005 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] placement.user_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.005 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] placement.user_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.005 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] placement.username = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.006 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] placement.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.006 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] placement.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.006 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] quota.cores = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.006 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.006 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] quota.driver = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.006 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.006 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.007 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] quota.injected_files = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.007 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] quota.instances = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.007 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] quota.key_pairs = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.007 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] quota.metadata_items = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.007 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] quota.ram = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.007 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] quota.recheck_quota = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.007 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] quota.server_group_members = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.008 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] quota.server_groups = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.008 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] rdp.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.008 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] rdp.html5_proxy_base_url = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.008 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.008 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.009 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.009 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.009 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] scheduler.max_attempts = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.009 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.009 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.009 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.009 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.010 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.010 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] scheduler.workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.010 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.010 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.010 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.010 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.010 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.011 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.011 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.011 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.011 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.011 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.011 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.011 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.012 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.012 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.012 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.012 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.012 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.012 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.012 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.012 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.013 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.013 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.013 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.013 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.013 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] metrics.required = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.013 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] metrics.weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.013 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] metrics.weight_of_unavailable = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.014 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] metrics.weight_setting = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.014 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] serial_console.base_url = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.014 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] serial_console.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.014 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] serial_console.port_range = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.014 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.014 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.014 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.015 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.015 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] service_user.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.015 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] service_user.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.015 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.015 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.015 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.016 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] service_user.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.016 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.016 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.016 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] service_user.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.016 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] spice.agent_enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.016 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] spice.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.017 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] spice.html5proxy_base_url = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.017 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] spice.html5proxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.017 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] spice.html5proxy_port = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.017 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] spice.image_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.017 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] spice.jpeg_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.018 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] spice.playback_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.018 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] spice.server_listen = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.018 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.018 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] spice.streaming_mode = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.018 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] spice.zlib_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.018 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] upgrade_levels.baseapi = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.019 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] upgrade_levels.cert = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.019 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] upgrade_levels.compute = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.019 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] upgrade_levels.conductor = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.019 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] upgrade_levels.scheduler = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.019 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.019 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.019 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.020 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.020 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.020 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.020 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.020 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.020 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.020 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] vmware.api_retry_count = 10 log_opt_values
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.020 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] vmware.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.021 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] vmware.cache_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.021 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] vmware.cluster_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.021 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] vmware.connection_pool_size = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.021 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] vmware.console_delay_seconds = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.021 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] vmware.datastore_regex = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.021 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] vmware.host_ip = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:42 
localhost nova_compute[229942]: 2025-12-06 09:45:42.021 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] vmware.host_password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.022 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] vmware.host_port = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.022 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] vmware.host_username = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.022 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] vmware.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.022 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] vmware.integration_bridge = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.022 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] vmware.maximum_objects = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.023 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] vmware.pbm_default_policy = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.023 229946 DEBUG 
oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] vmware.pbm_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.023 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] vmware.pbm_wsdl_location = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.023 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] vmware.serial_log_dir = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.023 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] vmware.serial_port_proxy_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.024 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.024 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] vmware.task_poll_interval = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.024 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] vmware.use_linked_clone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.024 229946 DEBUG oslo_service.service [None 
req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] vmware.vnc_keymap = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.024 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] vmware.vnc_port = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.025 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] vmware.vnc_port_total = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.025 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] vnc.auth_schemes = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.025 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] vnc.enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.025 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] vnc.novncproxy_base_url = http://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.026 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] vnc.novncproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.026 229946 DEBUG oslo_service.service [None 
req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] vnc.novncproxy_port = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.026 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] vnc.server_listen = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.026 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] vnc.server_proxyclient_address = 192.168.122.107 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.027 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] vnc.vencrypt_ca_certs = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.027 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] vnc.vencrypt_client_cert = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.027 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] vnc.vencrypt_client_key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.027 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] workarounds.disable_compute_service_check_for_ffu = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.027 229946 DEBUG oslo_service.service [None 
req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.028 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.028 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.028 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.028 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] workarounds.disable_rootwrap = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.028 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.029 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 
09:45:42.029 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.029 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.029 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.029 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.030 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.030 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.030 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.030 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.030 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.031 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.031 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.031 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.031 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.032 229946 DEBUG oslo_service.service [None 
req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] wsgi.api_paste_config = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.032 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] wsgi.client_socket_timeout = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.032 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] wsgi.default_pool_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.032 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] wsgi.keep_alive = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.032 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] wsgi.max_header_line = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.033 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] wsgi.secure_proxy_ssl_header = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.033 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] wsgi.ssl_ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.033 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] 
wsgi.ssl_cert_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.033 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] wsgi.ssl_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.033 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] wsgi.tcp_keepidle = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.034 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] wsgi.wsgi_log_format = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.034 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] zvm.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.034 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] zvm.cloud_connector_url = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.034 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] zvm.image_tmp_path = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.034 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - 
- - - -] zvm.reachable_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.035 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.035 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] oslo_policy.enforce_scope = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.035 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.035 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] oslo_policy.policy_dirs = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.036 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] oslo_policy.policy_file = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.036 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.036 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d 
- - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.036 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.036 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.036 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.037 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.037 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.037 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] remote_debug.host = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.037 229946 DEBUG oslo_service.service [None 
req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] remote_debug.port = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.038 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.038 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.038 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.038 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.038 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.039 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.039 229946 
DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.039 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.039 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.039 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.040 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.040 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.040 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.040 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.040 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.041 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.041 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.041 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.041 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.041 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.041 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.042 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.042 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.042 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.042 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.042 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] oslo_messaging_rabbit.ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.043 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] oslo_messaging_rabbit.ssl_ca_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.043 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] oslo_messaging_rabbit.ssl_cert_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.043 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.043 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] oslo_messaging_rabbit.ssl_key_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.044 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] oslo_messaging_rabbit.ssl_version = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.044 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.044 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.044 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.044 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.045 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] oslo_limit.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.045 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] oslo_limit.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.045 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] oslo_limit.auth_url = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.045 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] oslo_limit.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.045 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] oslo_limit.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.046 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] oslo_limit.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.046 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] oslo_limit.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.046 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.046 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] oslo_limit.default_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.046 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.047 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] oslo_limit.domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.047 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] oslo_limit.domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.047 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] oslo_limit.endpoint_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.047 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] oslo_limit.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.047 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] oslo_limit.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.048 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] oslo_limit.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.048 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] oslo_limit.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.048 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] oslo_limit.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.048 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] oslo_limit.password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.048 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] oslo_limit.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.049 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.049 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] oslo_limit.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.049 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] oslo_limit.project_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.049 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] oslo_limit.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.049 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] oslo_limit.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.049 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] oslo_limit.service_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.050 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] oslo_limit.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.050 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.050 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.050 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] oslo_limit.system_scope = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.050 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] oslo_limit.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.051 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] oslo_limit.trust_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.051 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] oslo_limit.user_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.051 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] oslo_limit.user_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.051 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] oslo_limit.user_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.051 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] oslo_limit.username = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.051 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] oslo_limit.valid_interfaces = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.052 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] oslo_limit.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.052 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.052 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.052 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] oslo_reports.log_dir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.052 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.052 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.053 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.053 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.053 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.053 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.053 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.054 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] vif_plug_ovs_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.054 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.054 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.054 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.054 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] vif_plug_ovs_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.054 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.055 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.055 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] os_vif_linux_bridge.iptables_bottom_regex = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.055 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.055 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] os_vif_linux_bridge.iptables_top_regex = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.055 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.055 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] os_vif_linux_bridge.use_ipv6 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.056 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.056 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] os_vif_ovs.isolate_vif = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.056 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] os_vif_ovs.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.056 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] os_vif_ovs.ovs_vsctl_timeout = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.056 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] os_vif_ovs.ovsdb_connection = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.056 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] os_vif_ovs.ovsdb_interface = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.057 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] os_vif_ovs.per_port_bridge = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.057 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] os_brick.lock_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.057 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.057 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.057 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] privsep_osbrick.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.057 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] privsep_osbrick.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.057 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.058 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] privsep_osbrick.logger_name = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.058 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.058 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] privsep_osbrick.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.058 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] nova_sys_admin.capabilities = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.058 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] nova_sys_admin.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.058 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] nova_sys_admin.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.058 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] nova_sys_admin.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.059 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.059 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] nova_sys_admin.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.059 229946 DEBUG oslo_service.service [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.060 229946 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.072 229946 INFO nova.virt.node [None req-100b2f1f-6f50-4f10-8014-cfdc1355b0b2 - - - - - -] Determined node identity 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad from /var/lib/nova/compute_id
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.072 229946 DEBUG nova.virt.libvirt.host [None req-100b2f1f-6f50-4f10-8014-cfdc1355b0b2 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.073 229946 DEBUG nova.virt.libvirt.host [None req-100b2f1f-6f50-4f10-8014-cfdc1355b0b2 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.073 229946 DEBUG nova.virt.libvirt.host [None req-100b2f1f-6f50-4f10-8014-cfdc1355b0b2 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.073 229946 DEBUG nova.virt.libvirt.host [None req-100b2f1f-6f50-4f10-8014-cfdc1355b0b2 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.085 229946 DEBUG nova.virt.libvirt.host [None req-100b2f1f-6f50-4f10-8014-cfdc1355b0b2 - - - - - -] Registering for lifecycle events _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.088 229946 DEBUG nova.virt.libvirt.host [None req-100b2f1f-6f50-4f10-8014-cfdc1355b0b2 - - - - - -] Registering for connection events: _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.088 229946 INFO nova.virt.libvirt.driver [None req-100b2f1f-6f50-4f10-8014-cfdc1355b0b2 - - - - - -] Connection event '1' reason 'None'
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.097 229946 DEBUG nova.virt.libvirt.volume.mount [None req-100b2f1f-6f50-4f10-8014-cfdc1355b0b2 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.111 229946 INFO nova.virt.libvirt.host [None req-100b2f1f-6f50-4f10-8014-cfdc1355b0b2 - - - - - -] Libvirt host capabilities
Dec 6 04:45:42 localhost nova_compute[229942]: [capabilities XML logged with its markup stripped during extraction; recoverable values: host uuid 0b20d7bd-1341-4912-afa7-eec4e2b0c648; CPU arch x86_64, model EPYC-Rome-v4, vendor AMD; migration transports tcp and rdma; memory/page counters 16116612, 4029153, 0, 0; security models selinux (doi 0, labels system_u:system_r:svirt_t:s0 and system_u:system_r:svirt_tcg_t:s0) and dac (doi 0, labels +107:+107); guest os type hvm, wordsize 32, emulator /usr/libexec/qemu-kvm; machine types pc-i440fx-rhel7.6.0 (canonical pc), pc-q35-rhel9.8.0 (canonical q35), pc-q35-rhel9.6.0, pc-q35-rhel8.6.0, pc-q35-rhel9.4.0, pc-q35-rhel8.5.0, pc-q35-rhel8.3.0, pc-q35-rhel7.6.0, pc-q35-rhel8.4.0, pc-q35-rhel9.2.0, pc-q35-rhel8.2.0, pc-q35-rhel9.0.0, pc-q35-rhel8.0.0, pc-q35-rhel8.1.0; remainder of the dump truncated]
nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: hvm Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: 64 Dec 6 04:45:42 localhost nova_compute[229942]: /usr/libexec/qemu-kvm Dec 6 04:45:42 localhost nova_compute[229942]: pc-i440fx-rhel7.6.0 Dec 6 04:45:42 localhost nova_compute[229942]: pc Dec 6 04:45:42 localhost nova_compute[229942]: pc-q35-rhel9.8.0 Dec 6 04:45:42 localhost nova_compute[229942]: q35 Dec 6 04:45:42 localhost nova_compute[229942]: pc-q35-rhel9.6.0 Dec 6 04:45:42 localhost nova_compute[229942]: pc-q35-rhel8.6.0 Dec 6 04:45:42 localhost nova_compute[229942]: pc-q35-rhel9.4.0 Dec 6 04:45:42 localhost nova_compute[229942]: pc-q35-rhel8.5.0 Dec 6 04:45:42 localhost nova_compute[229942]: pc-q35-rhel8.3.0 Dec 6 04:45:42 localhost nova_compute[229942]: pc-q35-rhel7.6.0 Dec 6 04:45:42 localhost nova_compute[229942]: pc-q35-rhel8.4.0 Dec 6 04:45:42 localhost nova_compute[229942]: pc-q35-rhel9.2.0 Dec 6 04:45:42 localhost nova_compute[229942]: pc-q35-rhel8.2.0 Dec 6 04:45:42 localhost nova_compute[229942]: pc-q35-rhel9.0.0 Dec 6 04:45:42 localhost nova_compute[229942]: pc-q35-rhel8.0.0 Dec 6 04:45:42 localhost nova_compute[229942]: pc-q35-rhel8.1.0 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 
04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: #033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.116 229946 DEBUG nova.virt.libvirt.host [None req-100b2f1f-6f50-4f10-8014-cfdc1355b0b2 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.131 229946 DEBUG nova.virt.libvirt.host [None req-100b2f1f-6f50-4f10-8014-cfdc1355b0b2 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: /usr/libexec/qemu-kvm Dec 6 04:45:42 localhost nova_compute[229942]: kvm Dec 6 04:45:42 localhost nova_compute[229942]: pc-i440fx-rhel7.6.0 Dec 6 04:45:42 localhost nova_compute[229942]: i686 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: /usr/share/OVMF/OVMF_CODE.secboot.fd Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: rom Dec 6 04:45:42 localhost nova_compute[229942]: pflash Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: yes Dec 6 04:45:42 localhost nova_compute[229942]: no Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: no Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 
04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: on Dec 6 04:45:42 localhost nova_compute[229942]: off Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: on Dec 6 04:45:42 localhost nova_compute[229942]: off Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: EPYC-Rome Dec 6 04:45:42 localhost nova_compute[229942]: AMD Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: 486 Dec 6 04:45:42 localhost nova_compute[229942]: 
486-v1 Dec 6 04:45:42 localhost nova_compute[229942]: Broadwell Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Broadwell-IBRS Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Broadwell-noTSX Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Broadwell-noTSX-IBRS Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Broadwell-v1 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Broadwell-v2 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost 
nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Broadwell-v3 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Broadwell-v4 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Cascadelake-Server Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Cascadelake-Server-noTSX Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost 
nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Cascadelake-Server-v1 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Cascadelake-Server-v2 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Cascadelake-Server-v3 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost 
nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Cascadelake-Server-v4 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Cascadelake-Server-v5 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Conroe Dec 6 04:45:42 localhost nova_compute[229942]: Conroe-v1 Dec 6 04:45:42 localhost nova_compute[229942]: Cooperlake Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost 
nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Cooperlake-v1 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Cooperlake-v2 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 
localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Denverton Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Denverton-v1 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Denverton-v2 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Denverton-v3 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dhyana Dec 6 04:45:42 localhost nova_compute[229942]: Dhyana-v1 Dec 6 04:45:42 localhost nova_compute[229942]: Dhyana-v2 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: EPYC Dec 6 04:45:42 localhost nova_compute[229942]: EPYC-Genoa Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost 
nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: EPYC-Genoa-v1 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost 
nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: EPYC-IBPB Dec 6 04:45:42 localhost nova_compute[229942]: EPYC-Milan Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: EPYC-Milan-v1 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: EPYC-Milan-v2 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: 
Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: EPYC-Rome Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: EPYC-Rome-v1 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: EPYC-Rome-v2 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: EPYC-Rome-v3 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: EPYC-Rome-v4 Dec 6 04:45:42 localhost nova_compute[229942]: EPYC-v1 Dec 6 04:45:42 localhost nova_compute[229942]: EPYC-v2 Dec 6 04:45:42 localhost nova_compute[229942]: EPYC-v3 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: EPYC-v4 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: GraniteRapids Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 
04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost 
Dec 6 04:45:42 localhost nova_compute[229942]: GraniteRapids-v1
Dec 6 04:45:42 localhost nova_compute[229942]: GraniteRapids-v2
Dec 6 04:45:42 localhost nova_compute[229942]: Haswell
Dec 6 04:45:42 localhost nova_compute[229942]: Haswell-IBRS
Dec 6 04:45:42 localhost nova_compute[229942]: Haswell-noTSX
Dec 6 04:45:42 localhost nova_compute[229942]: Haswell-noTSX-IBRS
Dec 6 04:45:42 localhost nova_compute[229942]: Haswell-v1
Dec 6 04:45:42 localhost nova_compute[229942]: Haswell-v2
Dec 6 04:45:42 localhost nova_compute[229942]: Haswell-v3
Dec 6 04:45:42 localhost nova_compute[229942]: Haswell-v4
Dec 6 04:45:42 localhost nova_compute[229942]: Icelake-Server
Dec 6 04:45:42 localhost nova_compute[229942]: Icelake-Server-noTSX
Dec 6 04:45:42 localhost nova_compute[229942]: Icelake-Server-v1
Dec 6 04:45:42 localhost nova_compute[229942]: Icelake-Server-v2
Dec 6 04:45:42 localhost nova_compute[229942]: Icelake-Server-v3
Dec 6 04:45:42 localhost nova_compute[229942]: Icelake-Server-v4
Dec 6 04:45:42 localhost nova_compute[229942]: Icelake-Server-v5
Dec 6 04:45:42 localhost nova_compute[229942]: Icelake-Server-v6
Dec 6 04:45:42 localhost nova_compute[229942]: Icelake-Server-v7
Dec 6 04:45:42 localhost nova_compute[229942]: IvyBridge
Dec 6 04:45:42 localhost nova_compute[229942]: IvyBridge-IBRS
Dec 6 04:45:42 localhost nova_compute[229942]: IvyBridge-v1
Dec 6 04:45:42 localhost nova_compute[229942]: IvyBridge-v2
Dec 6 04:45:42 localhost nova_compute[229942]: KnightsMill
Dec 6 04:45:42 localhost nova_compute[229942]: KnightsMill-v1
Dec 6 04:45:42 localhost nova_compute[229942]: Nehalem
Dec 6 04:45:42 localhost nova_compute[229942]: Nehalem-IBRS
Dec 6 04:45:42 localhost nova_compute[229942]: Nehalem-v1
Dec 6 04:45:42 localhost nova_compute[229942]: Nehalem-v2
Dec 6 04:45:42 localhost nova_compute[229942]: Opteron_G1
Dec 6 04:45:42 localhost nova_compute[229942]: Opteron_G1-v1
Dec 6 04:45:42 localhost nova_compute[229942]: Opteron_G2
Dec 6 04:45:42 localhost nova_compute[229942]: Opteron_G2-v1
Dec 6 04:45:42 localhost nova_compute[229942]: Opteron_G3
Dec 6 04:45:42 localhost nova_compute[229942]: Opteron_G3-v1
Dec 6 04:45:42 localhost nova_compute[229942]: Opteron_G4
Dec 6 04:45:42 localhost nova_compute[229942]: Opteron_G4-v1
Dec 6 04:45:42 localhost nova_compute[229942]: Opteron_G5
Dec 6 04:45:42 localhost nova_compute[229942]: Opteron_G5-v1
Dec 6 04:45:42 localhost nova_compute[229942]: Penryn
Dec 6 04:45:42 localhost nova_compute[229942]: Penryn-v1
Dec 6 04:45:42 localhost nova_compute[229942]: SandyBridge
Dec 6 04:45:42 localhost nova_compute[229942]: SandyBridge-IBRS
Dec 6 04:45:42 localhost nova_compute[229942]: SandyBridge-v1
Dec 6 04:45:42 localhost nova_compute[229942]: SandyBridge-v2
Dec 6 04:45:42 localhost nova_compute[229942]: SapphireRapids
Dec 6 04:45:42 localhost nova_compute[229942]: SapphireRapids-v1
Dec 6 04:45:42 localhost nova_compute[229942]: SapphireRapids-v2
Dec 6 04:45:42 localhost nova_compute[229942]: SapphireRapids-v3
Dec 6 04:45:42 localhost nova_compute[229942]: SierraForest
Dec 6 04:45:42 localhost nova_compute[229942]: SierraForest-v1
Dec 6 04:45:42 localhost nova_compute[229942]: Skylake-Client
Dec 6 04:45:42 localhost nova_compute[229942]: Skylake-Client-IBRS
Dec 6 04:45:42 localhost nova_compute[229942]: Skylake-Client-noTSX-IBRS
Dec 6 04:45:42 localhost nova_compute[229942]: Skylake-Client-v1
Dec 6 04:45:42 localhost nova_compute[229942]: Skylake-Client-v2
Dec 6 04:45:42 localhost nova_compute[229942]: Skylake-Client-v3
Dec 6 04:45:42 localhost nova_compute[229942]: Skylake-Client-v4
Dec 6 04:45:42 localhost nova_compute[229942]: Skylake-Server
Dec 6 04:45:42 localhost nova_compute[229942]: Skylake-Server-IBRS
Dec 6 04:45:42 localhost nova_compute[229942]: Skylake-Server-noTSX-IBRS
Dec 6 04:45:42 localhost nova_compute[229942]: Skylake-Server-v1
Dec 6 04:45:42 localhost nova_compute[229942]: Skylake-Server-v2
Dec 6 04:45:42 localhost nova_compute[229942]: Skylake-Server-v3
Dec 6 04:45:42 localhost nova_compute[229942]: Skylake-Server-v4
Dec 6 04:45:42 localhost nova_compute[229942]: Skylake-Server-v5
Dec 6 04:45:42 localhost nova_compute[229942]: Snowridge
Dec 6 04:45:42 localhost nova_compute[229942]: Snowridge-v1
Dec 6 04:45:42 localhost nova_compute[229942]: Snowridge-v2
Dec 6 04:45:42 localhost nova_compute[229942]: Snowridge-v3
Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Snowridge-v4 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Westmere Dec 6 04:45:42 localhost nova_compute[229942]: Westmere-IBRS Dec 6 04:45:42 localhost nova_compute[229942]: Westmere-v1 Dec 6 04:45:42 localhost nova_compute[229942]: Westmere-v2 Dec 6 04:45:42 localhost nova_compute[229942]: athlon Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: athlon-v1 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: core2duo Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: core2duo-v1 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 
localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: coreduo Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: coreduo-v1 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: kvm32 Dec 6 04:45:42 localhost nova_compute[229942]: kvm32-v1 Dec 6 04:45:42 localhost nova_compute[229942]: kvm64 Dec 6 04:45:42 localhost nova_compute[229942]: kvm64-v1 Dec 6 04:45:42 localhost nova_compute[229942]: n270 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: n270-v1 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: pentium Dec 6 04:45:42 localhost nova_compute[229942]: pentium-v1 Dec 6 04:45:42 localhost nova_compute[229942]: pentium2 Dec 6 04:45:42 localhost nova_compute[229942]: pentium2-v1 Dec 6 04:45:42 localhost nova_compute[229942]: pentium3 Dec 6 04:45:42 localhost nova_compute[229942]: pentium3-v1 Dec 6 04:45:42 localhost nova_compute[229942]: phenom Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: phenom-v1 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: qemu32 Dec 6 04:45:42 localhost nova_compute[229942]: qemu32-v1 Dec 6 04:45:42 localhost 
nova_compute[229942]: qemu64 Dec 6 04:45:42 localhost nova_compute[229942]: qemu64-v1 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: file Dec 6 04:45:42 localhost nova_compute[229942]: anonymous Dec 6 04:45:42 localhost nova_compute[229942]: memfd Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: disk Dec 6 04:45:42 localhost nova_compute[229942]: cdrom Dec 6 04:45:42 localhost nova_compute[229942]: floppy Dec 6 04:45:42 localhost nova_compute[229942]: lun Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: ide Dec 6 04:45:42 localhost nova_compute[229942]: fdc Dec 6 04:45:42 localhost nova_compute[229942]: scsi Dec 6 04:45:42 localhost nova_compute[229942]: virtio Dec 6 04:45:42 localhost nova_compute[229942]: usb Dec 6 04:45:42 localhost nova_compute[229942]: sata Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: virtio Dec 6 04:45:42 localhost nova_compute[229942]: virtio-transitional Dec 6 04:45:42 localhost nova_compute[229942]: virtio-non-transitional Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: vnc Dec 6 04:45:42 localhost nova_compute[229942]: egl-headless Dec 6 04:45:42 localhost nova_compute[229942]: dbus Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost 
nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: subsystem Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: default Dec 6 04:45:42 localhost nova_compute[229942]: mandatory Dec 6 04:45:42 localhost nova_compute[229942]: requisite Dec 6 04:45:42 localhost nova_compute[229942]: optional Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: usb Dec 6 04:45:42 localhost nova_compute[229942]: pci Dec 6 04:45:42 localhost nova_compute[229942]: scsi Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: virtio Dec 6 04:45:42 localhost nova_compute[229942]: virtio-transitional Dec 6 04:45:42 localhost nova_compute[229942]: virtio-non-transitional Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: random Dec 6 04:45:42 localhost nova_compute[229942]: egd Dec 6 04:45:42 localhost nova_compute[229942]: builtin Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: path Dec 6 04:45:42 localhost nova_compute[229942]: handle Dec 6 04:45:42 localhost nova_compute[229942]: virtiofs Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost 
nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: tpm-tis Dec 6 04:45:42 localhost nova_compute[229942]: tpm-crb Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: emulator Dec 6 04:45:42 localhost nova_compute[229942]: external Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: 2.0 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: usb Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: pty Dec 6 04:45:42 localhost nova_compute[229942]: unix Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: qemu Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: builtin Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: default Dec 6 04:45:42 localhost nova_compute[229942]: passt Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 
localhost nova_compute[229942]: isa Dec 6 04:45:42 localhost nova_compute[229942]: hyperv Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: null Dec 6 04:45:42 localhost nova_compute[229942]: vc Dec 6 04:45:42 localhost nova_compute[229942]: pty Dec 6 04:45:42 localhost nova_compute[229942]: dev Dec 6 04:45:42 localhost nova_compute[229942]: file Dec 6 04:45:42 localhost nova_compute[229942]: pipe Dec 6 04:45:42 localhost nova_compute[229942]: stdio Dec 6 04:45:42 localhost nova_compute[229942]: udp Dec 6 04:45:42 localhost nova_compute[229942]: tcp Dec 6 04:45:42 localhost nova_compute[229942]: unix Dec 6 04:45:42 localhost nova_compute[229942]: qemu-vdagent Dec 6 04:45:42 localhost nova_compute[229942]: dbus Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: relaxed Dec 6 04:45:42 localhost nova_compute[229942]: vapic Dec 6 04:45:42 localhost nova_compute[229942]: spinlocks Dec 6 04:45:42 localhost nova_compute[229942]: vpindex Dec 6 04:45:42 localhost nova_compute[229942]: runtime Dec 6 04:45:42 localhost nova_compute[229942]: synic Dec 6 04:45:42 localhost nova_compute[229942]: stimer Dec 6 
04:45:42 localhost nova_compute[229942]: reset Dec 6 04:45:42 localhost nova_compute[229942]: vendor_id Dec 6 04:45:42 localhost nova_compute[229942]: frequencies Dec 6 04:45:42 localhost nova_compute[229942]: reenlightenment Dec 6 04:45:42 localhost nova_compute[229942]: tlbflush Dec 6 04:45:42 localhost nova_compute[229942]: ipi Dec 6 04:45:42 localhost nova_compute[229942]: avic Dec 6 04:45:42 localhost nova_compute[229942]: emsr_bitmap Dec 6 04:45:42 localhost nova_compute[229942]: xmm_input Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: 4095 Dec 6 04:45:42 localhost nova_compute[229942]: on Dec 6 04:45:42 localhost nova_compute[229942]: off Dec 6 04:45:42 localhost nova_compute[229942]: off Dec 6 04:45:42 localhost nova_compute[229942]: Linux KVM Hv Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: tdx Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.137 229946 DEBUG nova.virt.libvirt.host [None req-100b2f1f-6f50-4f10-8014-cfdc1355b0b2 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: /usr/libexec/qemu-kvm Dec 6 04:45:42 localhost nova_compute[229942]: kvm Dec 6 04:45:42 localhost nova_compute[229942]: pc-q35-rhel9.8.0 Dec 6 04:45:42 localhost nova_compute[229942]: i686 Dec 6 04:45:42 localhost nova_compute[229942]: 
Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: /usr/share/OVMF/OVMF_CODE.secboot.fd Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: rom Dec 6 04:45:42 localhost nova_compute[229942]: pflash Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: yes Dec 6 04:45:42 localhost nova_compute[229942]: no Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: no Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: on Dec 6 04:45:42 localhost nova_compute[229942]: off Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: on Dec 6 04:45:42 localhost nova_compute[229942]: off Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: EPYC-Rome Dec 6 04:45:42 localhost nova_compute[229942]: AMD Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost 
nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: 486 Dec 6 04:45:42 localhost nova_compute[229942]: 486-v1 Dec 6 04:45:42 localhost nova_compute[229942]: Broadwell Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Broadwell-IBRS Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Broadwell-noTSX Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost 
nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Broadwell-noTSX-IBRS Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Broadwell-v1 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Broadwell-v2 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Broadwell-v3 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Broadwell-v4 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Cascadelake-Server Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 
04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Cascadelake-Server-noTSX Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Cascadelake-Server-v1 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Cascadelake-Server-v2 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 
04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Cascadelake-Server-v3 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Cascadelake-Server-v4 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost 
nova_compute[229942]: Cascadelake-Server-v5 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Conroe Dec 6 04:45:42 localhost nova_compute[229942]: Conroe-v1 Dec 6 04:45:42 localhost nova_compute[229942]: Cooperlake Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Cooperlake-v1 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost 
Dec 6 04:45:42 localhost nova_compute[229942]: Cooperlake-v2 Denverton Denverton-v1 Denverton-v2 Denverton-v3 Dhyana Dhyana-v1 Dhyana-v2 EPYC EPYC-Genoa EPYC-Genoa-v1 EPYC-IBPB EPYC-Milan EPYC-Milan-v1 EPYC-Milan-v2 EPYC-Rome EPYC-Rome-v1 EPYC-Rome-v2 EPYC-Rome-v3 EPYC-Rome-v4 EPYC-v1 EPYC-v2 EPYC-v3 EPYC-v4 GraniteRapids GraniteRapids-v1 GraniteRapids-v2 Haswell Haswell-IBRS Haswell-noTSX Haswell-noTSX-IBRS Haswell-v1 Haswell-v2 Haswell-v3 Haswell-v4 Icelake-Server Icelake-Server-noTSX Icelake-Server-v1 Icelake-Server-v2 Icelake-Server-v3 Icelake-Server-v4 Icelake-Server-v5 Icelake-Server-v6 Icelake-Server-v7 IvyBridge IvyBridge-IBRS IvyBridge-v1 IvyBridge-v2 KnightsMill KnightsMill-v1 Nehalem Nehalem-IBRS Nehalem-v1 Nehalem-v2 Opteron_G1 Opteron_G1-v1 Opteron_G2 Opteron_G2-v1 Opteron_G3 Opteron_G3-v1 Opteron_G4 Opteron_G4-v1 Opteron_G5 Opteron_G5-v1 Penryn Penryn-v1 SandyBridge SandyBridge-IBRS SandyBridge-v1 SandyBridge-v2 SapphireRapids SapphireRapids-v1 SapphireRapids-v2 SapphireRapids-v3 SierraForest
nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: SierraForest-v1 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Skylake-Client Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 
localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Skylake-Client-IBRS Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Skylake-Client-noTSX-IBRS Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Skylake-Client-v1 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Skylake-Client-v2 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Skylake-Client-v3 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Skylake-Client-v4 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 
localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Skylake-Server Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Skylake-Server-IBRS Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Skylake-Server-noTSX-IBRS Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost 
nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Skylake-Server-v1 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Skylake-Server-v2 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Skylake-Server-v3 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost 
nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Skylake-Server-v4 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Skylake-Server-v5 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Snowridge Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Snowridge-v1 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost 
nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Snowridge-v2 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Snowridge-v3 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Snowridge-v4 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Westmere Dec 6 04:45:42 localhost nova_compute[229942]: Westmere-IBRS Dec 6 04:45:42 localhost 
nova_compute[229942]: Westmere-v1 Dec 6 04:45:42 localhost nova_compute[229942]: Westmere-v2 Dec 6 04:45:42 localhost nova_compute[229942]: athlon Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: athlon-v1 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: core2duo Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: core2duo-v1 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: coreduo Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: coreduo-v1 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: kvm32 Dec 6 04:45:42 localhost nova_compute[229942]: kvm32-v1 Dec 6 04:45:42 localhost nova_compute[229942]: kvm64 Dec 6 04:45:42 localhost nova_compute[229942]: kvm64-v1 Dec 6 04:45:42 localhost nova_compute[229942]: n270 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: n270-v1 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 
localhost nova_compute[229942]: pentium Dec 6 04:45:42 localhost nova_compute[229942]: pentium-v1 Dec 6 04:45:42 localhost nova_compute[229942]: pentium2 Dec 6 04:45:42 localhost nova_compute[229942]: pentium2-v1 Dec 6 04:45:42 localhost nova_compute[229942]: pentium3 Dec 6 04:45:42 localhost nova_compute[229942]: pentium3-v1 Dec 6 04:45:42 localhost nova_compute[229942]: phenom Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: phenom-v1 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: qemu32 Dec 6 04:45:42 localhost nova_compute[229942]: qemu32-v1 Dec 6 04:45:42 localhost nova_compute[229942]: qemu64 Dec 6 04:45:42 localhost nova_compute[229942]: qemu64-v1 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: file Dec 6 04:45:42 localhost nova_compute[229942]: anonymous Dec 6 04:45:42 localhost nova_compute[229942]: memfd Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: disk Dec 6 04:45:42 localhost nova_compute[229942]: cdrom Dec 6 04:45:42 localhost nova_compute[229942]: floppy Dec 6 04:45:42 localhost nova_compute[229942]: lun Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: fdc Dec 6 04:45:42 
localhost nova_compute[229942]: scsi Dec 6 04:45:42 localhost nova_compute[229942]: virtio Dec 6 04:45:42 localhost nova_compute[229942]: usb Dec 6 04:45:42 localhost nova_compute[229942]: sata Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: virtio Dec 6 04:45:42 localhost nova_compute[229942]: virtio-transitional Dec 6 04:45:42 localhost nova_compute[229942]: virtio-non-transitional Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: vnc Dec 6 04:45:42 localhost nova_compute[229942]: egl-headless Dec 6 04:45:42 localhost nova_compute[229942]: dbus Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: subsystem Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: default Dec 6 04:45:42 localhost nova_compute[229942]: mandatory Dec 6 04:45:42 localhost nova_compute[229942]: requisite Dec 6 04:45:42 localhost nova_compute[229942]: optional Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: usb Dec 6 04:45:42 localhost nova_compute[229942]: pci Dec 6 04:45:42 localhost nova_compute[229942]: scsi Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost 
nova_compute[229942]: virtio Dec 6 04:45:42 localhost nova_compute[229942]: virtio-transitional Dec 6 04:45:42 localhost nova_compute[229942]: virtio-non-transitional Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: random Dec 6 04:45:42 localhost nova_compute[229942]: egd Dec 6 04:45:42 localhost nova_compute[229942]: builtin Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: path Dec 6 04:45:42 localhost nova_compute[229942]: handle Dec 6 04:45:42 localhost nova_compute[229942]: virtiofs Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: tpm-tis Dec 6 04:45:42 localhost nova_compute[229942]: tpm-crb Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: emulator Dec 6 04:45:42 localhost nova_compute[229942]: external Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: 2.0 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: usb Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: pty Dec 6 04:45:42 localhost nova_compute[229942]: unix Dec 6 04:45:42 localhost nova_compute[229942]: 
Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: qemu Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: builtin Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: default Dec 6 04:45:42 localhost nova_compute[229942]: passt Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: isa Dec 6 04:45:42 localhost nova_compute[229942]: hyperv Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: null Dec 6 04:45:42 localhost nova_compute[229942]: vc Dec 6 04:45:42 localhost nova_compute[229942]: pty Dec 6 04:45:42 localhost nova_compute[229942]: dev Dec 6 04:45:42 localhost nova_compute[229942]: file Dec 6 04:45:42 localhost nova_compute[229942]: pipe Dec 6 04:45:42 localhost nova_compute[229942]: stdio Dec 6 04:45:42 localhost nova_compute[229942]: udp Dec 6 04:45:42 localhost nova_compute[229942]: tcp Dec 6 04:45:42 localhost nova_compute[229942]: unix Dec 6 04:45:42 localhost nova_compute[229942]: qemu-vdagent Dec 6 04:45:42 localhost nova_compute[229942]: dbus Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 
localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: relaxed Dec 6 04:45:42 localhost nova_compute[229942]: vapic Dec 6 04:45:42 localhost nova_compute[229942]: spinlocks Dec 6 04:45:42 localhost nova_compute[229942]: vpindex Dec 6 04:45:42 localhost nova_compute[229942]: runtime Dec 6 04:45:42 localhost nova_compute[229942]: synic Dec 6 04:45:42 localhost nova_compute[229942]: stimer Dec 6 04:45:42 localhost nova_compute[229942]: reset Dec 6 04:45:42 localhost nova_compute[229942]: vendor_id Dec 6 04:45:42 localhost nova_compute[229942]: frequencies Dec 6 04:45:42 localhost nova_compute[229942]: reenlightenment Dec 6 04:45:42 localhost nova_compute[229942]: tlbflush Dec 6 04:45:42 localhost nova_compute[229942]: ipi Dec 6 04:45:42 localhost nova_compute[229942]: avic Dec 6 04:45:42 localhost nova_compute[229942]: emsr_bitmap Dec 6 04:45:42 localhost nova_compute[229942]: xmm_input Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: 4095 Dec 6 04:45:42 localhost nova_compute[229942]: on Dec 6 04:45:42 localhost nova_compute[229942]: off Dec 6 04:45:42 localhost nova_compute[229942]: off Dec 6 04:45:42 localhost nova_compute[229942]: Linux KVM Hv Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: tdx Dec 6 
04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.161 229946 DEBUG nova.virt.libvirt.host [None req-100b2f1f-6f50-4f10-8014-cfdc1355b0b2 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.167 229946 DEBUG nova.virt.libvirt.host [None req-100b2f1f-6f50-4f10-8014-cfdc1355b0b2 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: /usr/libexec/qemu-kvm Dec 6 04:45:42 localhost nova_compute[229942]: kvm Dec 6 04:45:42 localhost nova_compute[229942]: pc-i440fx-rhel7.6.0 Dec 6 04:45:42 localhost nova_compute[229942]: x86_64 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: /usr/share/OVMF/OVMF_CODE.secboot.fd Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: rom Dec 6 04:45:42 localhost nova_compute[229942]: pflash Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: yes Dec 6 04:45:42 localhost nova_compute[229942]: no Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: no Dec 6 
04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: on Dec 6 04:45:42 localhost nova_compute[229942]: off Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: on Dec 6 04:45:42 localhost nova_compute[229942]: off Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: EPYC-Rome Dec 6 04:45:42 localhost nova_compute[229942]: AMD Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 
Dec 6 04:45:42 localhost nova_compute[229942]: [libvirt CPU model capability listing; surrounding XML markup lost in log capture — models reported:] 486 486-v1 Broadwell Broadwell-IBRS Broadwell-noTSX Broadwell-noTSX-IBRS Broadwell-v1 Broadwell-v2 Broadwell-v3 Broadwell-v4 Cascadelake-Server Cascadelake-Server-noTSX Cascadelake-Server-v1 Cascadelake-Server-v2 Cascadelake-Server-v3 Cascadelake-Server-v4 Cascadelake-Server-v5 Conroe Conroe-v1 Cooperlake Cooperlake-v1 Cooperlake-v2 Denverton Denverton-v1 Denverton-v2 Denverton-v3 Dhyana Dhyana-v1 Dhyana-v2 EPYC EPYC-Genoa EPYC-Genoa-v1 EPYC-IBPB EPYC-Milan EPYC-Milan-v1 EPYC-Milan-v2 EPYC-Rome EPYC-Rome-v1 EPYC-Rome-v2 EPYC-Rome-v3 EPYC-Rome-v4 EPYC-v1 EPYC-v2 EPYC-v3 EPYC-v4 GraniteRapids GraniteRapids-v1 GraniteRapids-v2 Haswell Haswell-IBRS Haswell-noTSX Haswell-noTSX-IBRS Haswell-v1 Haswell-v2 Haswell-v3 Haswell-v4 Icelake-Server Icelake-Server-noTSX Icelake-Server-v1 Icelake-Server-v2 Icelake-Server-v3 Icelake-Server-v4 Icelake-Server-v5 Icelake-Server-v6 Icelake-Server-v7 IvyBridge IvyBridge-IBRS IvyBridge-v1 IvyBridge-v2 KnightsMill KnightsMill-v1
Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Nehalem Dec 6 04:45:42 localhost nova_compute[229942]: Nehalem-IBRS Dec 6 04:45:42 localhost nova_compute[229942]: Nehalem-v1 Dec 6 04:45:42 localhost nova_compute[229942]: Nehalem-v2 Dec 6 04:45:42 localhost nova_compute[229942]: Opteron_G1 Dec 6 04:45:42 localhost nova_compute[229942]: Opteron_G1-v1 Dec 6 04:45:42 localhost nova_compute[229942]: Opteron_G2 Dec 6 04:45:42 localhost nova_compute[229942]: Opteron_G2-v1 Dec 6 04:45:42 localhost nova_compute[229942]: Opteron_G3 Dec 6 04:45:42 localhost nova_compute[229942]: Opteron_G3-v1 Dec 6 04:45:42 localhost nova_compute[229942]: Opteron_G4 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Opteron_G4-v1 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Opteron_G5 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Opteron_G5-v1 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Penryn Dec 6 04:45:42 localhost nova_compute[229942]: Penryn-v1 Dec 6 04:45:42 localhost nova_compute[229942]: SandyBridge Dec 6 04:45:42 localhost nova_compute[229942]: SandyBridge-IBRS Dec 6 04:45:42 localhost nova_compute[229942]: 
SandyBridge-v1 Dec 6 04:45:42 localhost nova_compute[229942]: SandyBridge-v2 Dec 6 04:45:42 localhost nova_compute[229942]: SapphireRapids Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost 
nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: SapphireRapids-v1 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost 
nova_compute[229942]: SapphireRapids-v2 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost 
nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: SapphireRapids-v3 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost 
nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: SierraForest Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: SierraForest-v1 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 
localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Skylake-Client Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Skylake-Client-IBRS Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Skylake-Client-noTSX-IBRS Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Skylake-Client-v1 Dec 6 
04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Skylake-Client-v2 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Skylake-Client-v3 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Skylake-Client-v4 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Skylake-Server Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 
04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Skylake-Server-IBRS Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Skylake-Server-noTSX-IBRS Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Skylake-Server-v1 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 
localhost nova_compute[229942]: Skylake-Server-v2 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Skylake-Server-v3 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Skylake-Server-v4 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Skylake-Server-v5 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 
localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Snowridge Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Snowridge-v1 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Snowridge-v2 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 
6 04:45:42 localhost nova_compute[229942]: Snowridge-v3 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Snowridge-v4 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Westmere Dec 6 04:45:42 localhost nova_compute[229942]: Westmere-IBRS Dec 6 04:45:42 localhost nova_compute[229942]: Westmere-v1 Dec 6 04:45:42 localhost nova_compute[229942]: Westmere-v2 Dec 6 04:45:42 localhost nova_compute[229942]: athlon Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: athlon-v1 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: core2duo Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: core2duo-v1 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 
04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: coreduo Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: coreduo-v1 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: kvm32 Dec 6 04:45:42 localhost nova_compute[229942]: kvm32-v1 Dec 6 04:45:42 localhost nova_compute[229942]: kvm64 Dec 6 04:45:42 localhost nova_compute[229942]: kvm64-v1 Dec 6 04:45:42 localhost nova_compute[229942]: n270 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: n270-v1 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: pentium Dec 6 04:45:42 localhost nova_compute[229942]: pentium-v1 Dec 6 04:45:42 localhost nova_compute[229942]: pentium2 Dec 6 04:45:42 localhost nova_compute[229942]: pentium2-v1 Dec 6 04:45:42 localhost nova_compute[229942]: pentium3 Dec 6 04:45:42 localhost nova_compute[229942]: pentium3-v1 Dec 6 04:45:42 localhost nova_compute[229942]: phenom Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: phenom-v1 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: qemu32 Dec 6 04:45:42 localhost 
Dec 6 04:45:42 localhost nova_compute[229942]: [libvirt domainCapabilities output, continued; XML element tags lost in capture — surviving enum values grouped below]
Dec 6 04:45:42 localhost nova_compute[229942]: CPU models (continued): qemu32-v1 qemu64 qemu64-v1
Dec 6 04:45:42 localhost nova_compute[229942]: memory backing source types: file anonymous memfd
Dec 6 04:45:42 localhost nova_compute[229942]: disk device types: disk cdrom floppy lun; bus types: ide fdc scsi virtio usb sata; models: virtio virtio-transitional virtio-non-transitional
Dec 6 04:45:42 localhost nova_compute[229942]: graphics types: vnc egl-headless dbus
Dec 6 04:45:42 localhost nova_compute[229942]: hostdev modes: subsystem; startupPolicy: default mandatory requisite optional; subsystem types: usb pci scsi
Dec 6 04:45:42 localhost nova_compute[229942]: rng models: virtio virtio-transitional virtio-non-transitional; backend models: random egd builtin
Dec 6 04:45:42 localhost nova_compute[229942]: filesystem driver types: path handle virtiofs
Dec 6 04:45:42 localhost nova_compute[229942]: tpm models: tpm-tis tpm-crb; backend models: emulator external; backend version: 2.0
Dec 6 04:45:42 localhost nova_compute[229942]: redirdev bus: usb; channel types: pty unix
Dec 6 04:45:42 localhost nova_compute[229942]: crypto model: qemu; backend model: builtin; interface backends: default passt
Dec 6 04:45:42 localhost nova_compute[229942]: panic models: isa hyperv
Dec 6 04:45:42 localhost nova_compute[229942]: console/serial types: null vc pty dev file pipe stdio udp tcp unix qemu-vdagent dbus
Dec 6 04:45:42 localhost nova_compute[229942]: hyperv features: relaxed vapic spinlocks vpindex runtime synic stimer reset vendor_id frequencies reenlightenment tlbflush ipi avic emsr_bitmap xmm_input
Dec 6 04:45:42 localhost nova_compute[229942]: additional values: 4095 on off off "Linux KVM Hv" tdx
Dec 6 04:45:42 localhost nova_compute[229942]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.219 229946 DEBUG nova.virt.libvirt.host [None req-100b2f1f-6f50-4f10-8014-cfdc1355b0b2 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Dec 6 04:45:42 localhost nova_compute[229942]: emulator path: /usr/libexec/qemu-kvm; domain type: kvm; machine: pc-q35-rhel9.8.0; arch: x86_64
Dec 6 04:45:42 localhost nova_compute[229942]: firmware: efi; loaders: /usr/share/edk2/ovmf/OVMF_CODE.secboot.fd /usr/share/edk2/ovmf/OVMF_CODE.fd /usr/share/edk2/ovmf/OVMF.amdsev.fd /usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd; loader types: rom pflash; readonly: yes no; secure: yes no
Dec 6 04:45:42 localhost nova_compute[229942]: cpu mode enums: on off / on off
Dec 6 04:45:42 localhost nova_compute[229942]: host CPU model: EPYC-Rome; vendor: AMD
Dec 6 04:45:42 localhost nova_compute[229942]: supported CPU models: 486 486-v1 Broadwell Broadwell-IBRS Broadwell-noTSX Broadwell-noTSX-IBRS Broadwell-v1 Broadwell-v2 Broadwell-v3 Broadwell-v4 Cascadelake-Server Cascadelake-Server-noTSX Cascadelake-Server-v1 Cascadelake-Server-v2 Cascadelake-Server-v3 Cascadelake-Server-v4 Cascadelake-Server-v5 Conroe Conroe-v1 Cooperlake Cooperlake-v1 Cooperlake-v2 Denverton Denverton-v1 Denverton-v2 Denverton-v3 Dhyana Dhyana-v1 Dhyana-v2 EPYC EPYC-Genoa EPYC-Genoa-v1 EPYC-IBPB EPYC-Milan EPYC-Milan-v1 EPYC-Milan-v2 EPYC-Rome EPYC-Rome-v1 EPYC-Rome-v2 EPYC-Rome-v3 EPYC-Rome-v4 EPYC-v1 EPYC-v2 EPYC-v3 EPYC-v4 GraniteRapids GraniteRapids-v1 GraniteRapids-v2 Haswell Haswell-IBRS Haswell-noTSX Haswell-noTSX-IBRS Haswell-v1 Haswell-v2 Haswell-v3 Haswell-v4 Icelake-Server
Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Icelake-Server-noTSX Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Icelake-Server-v1 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost 
nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Icelake-Server-v2 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Icelake-Server-v3 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 
localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Icelake-Server-v4 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Icelake-Server-v5 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 
6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Icelake-Server-v6 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 
04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Icelake-Server-v7 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: IvyBridge Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: IvyBridge-IBRS Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: IvyBridge-v1 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 
localhost nova_compute[229942]: IvyBridge-v2 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: KnightsMill Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: KnightsMill-v1 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Nehalem Dec 6 04:45:42 localhost nova_compute[229942]: Nehalem-IBRS Dec 6 04:45:42 localhost nova_compute[229942]: Nehalem-v1 Dec 6 04:45:42 localhost nova_compute[229942]: Nehalem-v2 Dec 6 04:45:42 localhost nova_compute[229942]: Opteron_G1 Dec 6 04:45:42 localhost nova_compute[229942]: Opteron_G1-v1 Dec 6 04:45:42 localhost nova_compute[229942]: Opteron_G2 Dec 6 04:45:42 localhost nova_compute[229942]: Opteron_G2-v1 Dec 6 04:45:42 localhost nova_compute[229942]: Opteron_G3 Dec 6 04:45:42 localhost nova_compute[229942]: Opteron_G3-v1 Dec 6 04:45:42 localhost nova_compute[229942]: Opteron_G4 Dec 6 
04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Opteron_G4-v1 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Opteron_G5 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Opteron_G5-v1 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Penryn Dec 6 04:45:42 localhost nova_compute[229942]: Penryn-v1 Dec 6 04:45:42 localhost nova_compute[229942]: SandyBridge Dec 6 04:45:42 localhost nova_compute[229942]: SandyBridge-IBRS Dec 6 04:45:42 localhost nova_compute[229942]: SandyBridge-v1 Dec 6 04:45:42 localhost nova_compute[229942]: SandyBridge-v2 Dec 6 04:45:42 localhost nova_compute[229942]: SapphireRapids Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost 
nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: SapphireRapids-v1 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost 
nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: SapphireRapids-v2 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost 
nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: SapphireRapids-v3 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost 
nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: SierraForest Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost 
nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: SierraForest-v1 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost 
nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Skylake-Client Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Skylake-Client-IBRS Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Skylake-Client-noTSX-IBRS Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Skylake-Client-v1 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Skylake-Client-v2 Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: Dec 6 04:45:42 localhost nova_compute[229942]: 
Dec 6 04:45:42 localhost nova_compute[229942]: [libvirt domain capabilities reply; XML markup lost in log capture, recoverable values listed in order of appearance] CPU models: Skylake-Client-v3, Skylake-Client-v4, Skylake-Server, Skylake-Server-IBRS, Skylake-Server-noTSX-IBRS, Skylake-Server-v1, Skylake-Server-v2, Skylake-Server-v3, Skylake-Server-v4, Skylake-Server-v5, Snowridge, Snowridge-v1, Snowridge-v2, Snowridge-v3, Snowridge-v4, Westmere, Westmere-IBRS, Westmere-v1, Westmere-v2, athlon, athlon-v1, core2duo, core2duo-v1, coreduo, coreduo-v1, kvm32, kvm32-v1, kvm64, kvm64-v1, n270, n270-v1, pentium, pentium-v1, pentium2, pentium2-v1, pentium3, pentium3-v1, phenom, phenom-v1, qemu32, qemu32-v1, qemu64, qemu64-v1; memory backing: file, anonymous, memfd; disk devices: disk, cdrom, floppy, lun; disk buses: fdc, scsi, virtio, usb, sata; disk models: virtio, virtio-transitional, virtio-non-transitional; graphics: vnc, egl-headless, dbus; hostdev mode: subsystem; startup policies: default, mandatory, requisite, optional; hostdev subsystem types: usb, pci, scsi; interface models: virtio, virtio-transitional, virtio-non-transitional; rng backends: random, egd, builtin; filesystem drivers: path, handle, virtiofs; tpm models: tpm-tis, tpm-crb; tpm backends: emulator, external; tpm version: 2.0; redirdev bus: usb; channel types: pty, unix; crypto: qemu, builtin; interface backend types: default, passt; panic models: isa, hyperv; serial/console types: null, vc, pty, dev, file, pipe, stdio, udp, tcp, unix, qemu-vdagent, dbus; Hyper-V features: relaxed, vapic, spinlocks, vpindex, runtime, synic, stimer, reset, vendor_id, frequencies, reenlightenment, tlbflush, ipi, avic, emsr_bitmap, xmm_input; further values: 4095, on, off, off, Linux KVM Hv; launch security: tdx _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.272 229946 DEBUG nova.virt.libvirt.host [None req-100b2f1f-6f50-4f10-8014-cfdc1355b0b2 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.273 229946 INFO nova.virt.libvirt.host [None req-100b2f1f-6f50-4f10-8014-cfdc1355b0b2 - - - - - -] Secure Boot support detected#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.275 229946 INFO
nova.virt.libvirt.driver [None req-100b2f1f-6f50-4f10-8014-cfdc1355b0b2 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.289 229946 DEBUG nova.virt.libvirt.driver [None req-100b2f1f-6f50-4f10-8014-cfdc1355b0b2 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.368 229946 INFO nova.virt.node [None req-100b2f1f-6f50-4f10-8014-cfdc1355b0b2 - - - - - -] Determined node identity 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad from /var/lib/nova/compute_id#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.393 229946 DEBUG nova.compute.manager [None req-100b2f1f-6f50-4f10-8014-cfdc1355b0b2 - - - - - -] Verified node 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad matches my host np0005548789.localdomain _check_for_host_rename /usr/lib/python3.9/site-packages/nova/compute/manager.py:1568#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.436 229946 DEBUG nova.compute.manager [None req-100b2f1f-6f50-4f10-8014-cfdc1355b0b2 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.440 229946 DEBUG nova.virt.libvirt.vif [None req-100b2f1f-6f50-4f10-8014-cfdc1355b0b2 - - - - - -] vif_type=ovs
instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T08:44:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=,disable_terminate=False,display_description='test',display_name='test',ec2_ids=,ephemeral_gb=1,ephemeral_key_uuid=None,fault=,flavor=,hidden=False,host='np0005548789.localdomain',hostname='test',id=2,image_ref='e0d06706-da90-478a-9829-34b75a3ce049',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=,launch_index=0,launched_at=2025-12-06T08:44:43Z,launched_on='np0005548789.localdomain',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=,new_flavor=,node='np0005548789.localdomain',numa_topology=None,old_flavor=,os_type=None,pci_devices=,pci_requests=,power_state=1,progress=0,project_id='3d603431c0bb4967bafc7a0aa6108bfe',ramdisk_id='',reservation_id='r-02dpupig',resources=,root_device_name='/dev/vda',root_gb=1,security_groups=,services=,shutdown_terminate=False,system_metadata=,tags=,task_state=None,terminated_at=None,trusted_certs=,updated_at=2025-12-06T08:44:43Z,user_data=None,user_id='ff0049f3313348bdb67886d170c1c765',uuid=b7ed0a2e-9350-4933-9334-4e5e08d3e6aa,vcpu_model=,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "address": "fa:16:3e:64:77:f3", "network": {"id": "652b6bdc-40ce-45b7-8aa5-3bca79987993", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.162", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"dhcp_server": "192.168.0.1"}}], "meta": {"injected": false, "tenant_id": 
"3d603431c0bb4967bafc7a0aa6108bfe", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system"}, "devname": "tap86fc0b7a-fb", "ovs_interfaceid": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.440 229946 DEBUG nova.network.os_vif_util [None req-100b2f1f-6f50-4f10-8014-cfdc1355b0b2 - - - - - -] Converting VIF {"id": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "address": "fa:16:3e:64:77:f3", "network": {"id": "652b6bdc-40ce-45b7-8aa5-3bca79987993", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.162", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"dhcp_server": "192.168.0.1"}}], "meta": {"injected": false, "tenant_id": "3d603431c0bb4967bafc7a0aa6108bfe", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system"}, "devname": "tap86fc0b7a-fb", "ovs_interfaceid": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.441 229946 DEBUG nova.network.os_vif_util [None req-100b2f1f-6f50-4f10-8014-cfdc1355b0b2 - - - - - -] 
Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:64:77:f3,bridge_name='br-int',has_traffic_filtering=True,id=86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b,network=Network(652b6bdc-40ce-45b7-8aa5-3bca79987993),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86fc0b7a-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.442 229946 DEBUG os_vif [None req-100b2f1f-6f50-4f10-8014-cfdc1355b0b2 - - - - - -] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:64:77:f3,bridge_name='br-int',has_traffic_filtering=True,id=86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b,network=Network(652b6bdc-40ce-45b7-8aa5-3bca79987993),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86fc0b7a-fb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.489 229946 DEBUG ovsdbapp.backend.ovs_idl [None req-100b2f1f-6f50-4f10-8014-cfdc1355b0b2 - - - - - -] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.490 229946 DEBUG ovsdbapp.backend.ovs_idl [None req-100b2f1f-6f50-4f10-8014-cfdc1355b0b2 - - - - - -] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.490 229946 DEBUG ovsdbapp.backend.ovs_idl [None req-100b2f1f-6f50-4f10-8014-cfdc1355b0b2 - - - - - -] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.491 229946 DEBUG ovsdbapp.backend.ovs_idl.vlog [None 
req-100b2f1f-6f50-4f10-8014-cfdc1355b0b2 - - - - - -] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.491 229946 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-100b2f1f-6f50-4f10-8014-cfdc1355b0b2 - - - - - -] [POLLOUT] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.492 229946 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-100b2f1f-6f50-4f10-8014-cfdc1355b0b2 - - - - - -] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.492 229946 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-100b2f1f-6f50-4f10-8014-cfdc1355b0b2 - - - - - -] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.494 229946 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-100b2f1f-6f50-4f10-8014-cfdc1355b0b2 - - - - - -] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.496 229946 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-100b2f1f-6f50-4f10-8014-cfdc1355b0b2 - - - - - -] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.516 229946 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 18 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.516 229946 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.516 229946 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Dec 6 04:45:42 localhost nova_compute[229942]: 2025-12-06 09:45:42.517 229946 INFO oslo.privsep.daemon [None req-100b2f1f-6f50-4f10-8014-cfdc1355b0b2 - - - - - -] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmpujlz3pxe/privsep.sock']#033[00m Dec 6 04:45:43 localhost nova_compute[229942]: 2025-12-06 09:45:43.069 229946 INFO oslo.privsep.daemon [None req-100b2f1f-6f50-4f10-8014-cfdc1355b0b2 - - - - - -] Spawned new privsep daemon via rootwrap#033[00m Dec 6 04:45:43 localhost nova_compute[229942]: 2025-12-06 09:45:42.961 230338 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m Dec 6 04:45:43 localhost nova_compute[229942]: 2025-12-06 09:45:42.966 230338 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m Dec 6 04:45:43 localhost nova_compute[229942]: 2025-12-06 09:45:42.970 230338 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none#033[00m Dec 6 04:45:43 localhost nova_compute[229942]: 2025-12-06 09:45:42.970 230338 INFO oslo.privsep.daemon [-] privsep daemon running as pid 230338#033[00m Dec 6 04:45:43 localhost nova_compute[229942]: 2025-12-06 09:45:43.327 229946 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 18 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:45:43 localhost nova_compute[229942]: 
2025-12-06 09:45:43.328 229946 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap86fc0b7a-fb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 6 04:45:43 localhost nova_compute[229942]: 2025-12-06 09:45:43.329 229946 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap86fc0b7a-fb, col_values=(('external_ids', {'iface-id': '86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:64:77:f3', 'vm-uuid': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 6 04:45:43 localhost nova_compute[229942]: 2025-12-06 09:45:43.329 229946 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Dec 6 04:45:43 localhost nova_compute[229942]: 2025-12-06 09:45:43.330 229946 INFO os_vif [None req-100b2f1f-6f50-4f10-8014-cfdc1355b0b2 - - - - - -] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:64:77:f3,bridge_name='br-int',has_traffic_filtering=True,id=86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b,network=Network(652b6bdc-40ce-45b7-8aa5-3bca79987993),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86fc0b7a-fb')#033[00m Dec 6 04:45:43 localhost nova_compute[229942]: 2025-12-06 09:45:43.330 229946 DEBUG nova.compute.manager [None req-100b2f1f-6f50-4f10-8014-cfdc1355b0b2 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 6 04:45:43 localhost nova_compute[229942]: 2025-12-06 09:45:43.334 229946 DEBUG nova.compute.manager [None 
req-100b2f1f-6f50-4f10-8014-cfdc1355b0b2 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Current state is 1, state in DB is 1. _init_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:1304#033[00m Dec 6 04:45:43 localhost nova_compute[229942]: 2025-12-06 09:45:43.334 229946 INFO nova.compute.manager [None req-100b2f1f-6f50-4f10-8014-cfdc1355b0b2 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m Dec 6 04:45:43 localhost nova_compute[229942]: 2025-12-06 09:45:43.831 229946 INFO nova.service [None req-100b2f1f-6f50-4f10-8014-cfdc1355b0b2 - - - - - -] Updating service version for nova-compute on np0005548789.localdomain from 57 to 66#033[00m Dec 6 04:45:43 localhost nova_compute[229942]: 2025-12-06 09:45:43.872 229946 DEBUG oslo_concurrency.lockutils [None req-100b2f1f-6f50-4f10-8014-cfdc1355b0b2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 04:45:43 localhost nova_compute[229942]: 2025-12-06 09:45:43.872 229946 DEBUG oslo_concurrency.lockutils [None req-100b2f1f-6f50-4f10-8014-cfdc1355b0b2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 04:45:43 localhost nova_compute[229942]: 2025-12-06 09:45:43.873 229946 DEBUG oslo_concurrency.lockutils [None req-100b2f1f-6f50-4f10-8014-cfdc1355b0b2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 04:45:43 localhost nova_compute[229942]: 2025-12-06 09:45:43.873 229946 DEBUG nova.compute.resource_tracker [None 
req-100b2f1f-6f50-4f10-8014-cfdc1355b0b2 - - - - - -] Auditing locally available compute resources for np0005548789.localdomain (node: np0005548789.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 6 04:45:43 localhost nova_compute[229942]: 2025-12-06 09:45:43.874 229946 DEBUG oslo_concurrency.processutils [None req-100b2f1f-6f50-4f10-8014-cfdc1355b0b2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 6 04:45:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=757 DF PROTO=TCP SPT=49776 DPT=9101 SEQ=4200883413 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52DA0C200000000001030307) Dec 6 04:45:44 localhost nova_compute[229942]: 2025-12-06 09:45:44.322 229946 DEBUG oslo_concurrency.processutils [None req-100b2f1f-6f50-4f10-8014-cfdc1355b0b2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 6 04:45:44 localhost nova_compute[229942]: 2025-12-06 09:45:44.401 229946 DEBUG nova.virt.libvirt.driver [None req-100b2f1f-6f50-4f10-8014-cfdc1355b0b2 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 6 04:45:44 localhost nova_compute[229942]: 2025-12-06 09:45:44.401 229946 DEBUG nova.virt.libvirt.driver [None req-100b2f1f-6f50-4f10-8014-cfdc1355b0b2 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 6 
04:45:44 localhost systemd[1]: Started libvirt nodedev daemon. Dec 6 04:45:44 localhost python3.9[230308]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 6 04:45:44 localhost nova_compute[229942]: 2025-12-06 09:45:44.634 229946 WARNING nova.virt.libvirt.driver [None req-100b2f1f-6f50-4f10-8014-cfdc1355b0b2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 6 04:45:44 localhost nova_compute[229942]: 2025-12-06 09:45:44.635 229946 DEBUG nova.compute.resource_tracker [None req-100b2f1f-6f50-4f10-8014-cfdc1355b0b2 - - - - - -] Hypervisor/Node resource view: name=np0005548789.localdomain free_ram=12922MB free_disk=41.83721923828125GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": 
"0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 6 04:45:44 localhost nova_compute[229942]: 2025-12-06 09:45:44.635 229946 DEBUG oslo_concurrency.lockutils [None req-100b2f1f-6f50-4f10-8014-cfdc1355b0b2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 04:45:44 localhost nova_compute[229942]: 2025-12-06 09:45:44.635 229946 DEBUG oslo_concurrency.lockutils [None req-100b2f1f-6f50-4f10-8014-cfdc1355b0b2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 04:45:44 localhost nova_compute[229942]: 2025-12-06 09:45:44.841 229946 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:45:44 localhost nova_compute[229942]: 2025-12-06 09:45:44.857 229946 DEBUG 
nova.compute.resource_tracker [None req-100b2f1f-6f50-4f10-8014-cfdc1355b0b2 - - - - - -] Instance b7ed0a2e-9350-4933-9334-4e5e08d3e6aa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 6 04:45:44 localhost nova_compute[229942]: 2025-12-06 09:45:44.857 229946 DEBUG nova.compute.resource_tracker [None req-100b2f1f-6f50-4f10-8014-cfdc1355b0b2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 6 04:45:44 localhost nova_compute[229942]: 2025-12-06 09:45:44.857 229946 DEBUG nova.compute.resource_tracker [None req-100b2f1f-6f50-4f10-8014-cfdc1355b0b2 - - - - - -] Final resource view: name=np0005548789.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 6 04:45:44 localhost nova_compute[229942]: 2025-12-06 09:45:44.910 229946 DEBUG nova.scheduler.client.report [None req-100b2f1f-6f50-4f10-8014-cfdc1355b0b2 - - - - - -] Refreshing inventories for resource provider 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Dec 6 04:45:44 localhost nova_compute[229942]: 2025-12-06 09:45:44.970 229946 DEBUG nova.scheduler.client.report [None req-100b2f1f-6f50-4f10-8014-cfdc1355b0b2 - - - - - -] Updating ProviderTree inventory for provider 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 
'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Dec 6 04:45:44 localhost nova_compute[229942]: 2025-12-06 09:45:44.971 229946 DEBUG nova.compute.provider_tree [None req-100b2f1f-6f50-4f10-8014-cfdc1355b0b2 - - - - - -] Updating inventory in ProviderTree for provider 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Dec 6 04:45:44 localhost nova_compute[229942]: 2025-12-06 09:45:44.996 229946 DEBUG nova.scheduler.client.report [None req-100b2f1f-6f50-4f10-8014-cfdc1355b0b2 - - - - - -] Refreshing aggregate associations for resource provider 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Dec 6 04:45:45 localhost nova_compute[229942]: 2025-12-06 09:45:45.022 229946 DEBUG nova.scheduler.client.report [None req-100b2f1f-6f50-4f10-8014-cfdc1355b0b2 - - - - - -] Refreshing trait associations for resource provider 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad, traits: 
HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_BMI2,HW_CPU_X86_BMI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SVM,COMPUTE_TRUSTED_CERTS,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_MMX,HW_CPU_X86_SSE4A,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE42,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NODE,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_LAN9118,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SHA,COMPUTE_ACCELERATORS,HW_CPU_X86_ABM,HW_CPU_X86_AVX2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE41,HW_CPU_X86_SSSE3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Dec 6 04:45:45 localhost nova_compute[229942]: 2025-12-06 09:45:45.070 229946 DEBUG oslo_concurrency.processutils [None req-100b2f1f-6f50-4f10-8014-cfdc1355b0b2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 6 04:45:45 localhost nova_compute[229942]: 2025-12-06 09:45:45.487 229946 DEBUG oslo_concurrency.processutils [None req-100b2f1f-6f50-4f10-8014-cfdc1355b0b2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 
0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 6 04:45:45 localhost nova_compute[229942]: 2025-12-06 09:45:45.492 229946 DEBUG nova.virt.libvirt.host [None req-100b2f1f-6f50-4f10-8014-cfdc1355b0b2 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N Dec 6 04:45:45 localhost nova_compute[229942]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803#033[00m Dec 6 04:45:45 localhost nova_compute[229942]: 2025-12-06 09:45:45.492 229946 INFO nova.virt.libvirt.host [None req-100b2f1f-6f50-4f10-8014-cfdc1355b0b2 - - - - - -] kernel doesn't support AMD SEV#033[00m Dec 6 04:45:45 localhost nova_compute[229942]: 2025-12-06 09:45:45.493 229946 DEBUG nova.compute.provider_tree [None req-100b2f1f-6f50-4f10-8014-cfdc1355b0b2 - - - - - -] Updating inventory in ProviderTree for provider 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad with inventory: {'MEMORY_MB': {'total': 15738, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0, 'reserved': 0}, 'DISK_GB': {'total': 41, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Dec 6 04:45:45 localhost nova_compute[229942]: 2025-12-06 09:45:45.494 229946 DEBUG nova.virt.libvirt.driver [None req-100b2f1f-6f50-4f10-8014-cfdc1355b0b2 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m Dec 6 04:45:45 localhost nova_compute[229942]: 2025-12-06 09:45:45.544 229946 DEBUG nova.scheduler.client.report [None req-100b2f1f-6f50-4f10-8014-cfdc1355b0b2 - - - - - -] Updated inventory for provider 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad with generation 3 in Placement from 
set_inventory_for_provider using data: {'MEMORY_MB': {'total': 15738, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0, 'reserved': 0}, 'DISK_GB': {'total': 41, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m Dec 6 04:45:45 localhost nova_compute[229942]: 2025-12-06 09:45:45.545 229946 DEBUG nova.compute.provider_tree [None req-100b2f1f-6f50-4f10-8014-cfdc1355b0b2 - - - - - -] Updating resource provider 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad generation from 3 to 4 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m Dec 6 04:45:45 localhost nova_compute[229942]: 2025-12-06 09:45:45.545 229946 DEBUG nova.compute.provider_tree [None req-100b2f1f-6f50-4f10-8014-cfdc1355b0b2 - - - - - -] Updating inventory in ProviderTree for provider 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad with inventory: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Dec 6 04:45:45 localhost python3.9[230554]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 6 04:45:45 localhost nova_compute[229942]: 2025-12-06 09:45:45.640 229946 DEBUG nova.compute.provider_tree [None 
req-100b2f1f-6f50-4f10-8014-cfdc1355b0b2 - - - - - -] Updating resource provider 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad generation from 4 to 5 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m Dec 6 04:45:45 localhost nova_compute[229942]: 2025-12-06 09:45:45.684 229946 DEBUG nova.compute.resource_tracker [None req-100b2f1f-6f50-4f10-8014-cfdc1355b0b2 - - - - - -] Compute_service record updated for np0005548789.localdomain:np0005548789.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 6 04:45:45 localhost nova_compute[229942]: 2025-12-06 09:45:45.685 229946 DEBUG oslo_concurrency.lockutils [None req-100b2f1f-6f50-4f10-8014-cfdc1355b0b2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.050s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 04:45:45 localhost nova_compute[229942]: 2025-12-06 09:45:45.685 229946 DEBUG nova.service [None req-100b2f1f-6f50-4f10-8014-cfdc1355b0b2 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182#033[00m Dec 6 04:45:45 localhost nova_compute[229942]: 2025-12-06 09:45:45.764 229946 DEBUG nova.service [None req-100b2f1f-6f50-4f10-8014-cfdc1355b0b2 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199#033[00m Dec 6 04:45:45 localhost nova_compute[229942]: 2025-12-06 09:45:45.765 229946 DEBUG nova.servicegroup.drivers.db [None req-100b2f1f-6f50-4f10-8014-cfdc1355b0b2 - - - - - -] DB_Driver: join new ServiceGroup member np0005548789.localdomain to the compute group, service = join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44#033[00m Dec 6 04:45:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6. Dec 6 04:45:45 localhost podman[230574]: 2025-12-06 09:45:45.913906868 +0000 UTC m=+0.078555285 container health_status 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Dec 6 04:45:45 localhost podman[230574]: 2025-12-06 09:45:45.948030225 +0000 UTC 
m=+0.112678602 container exec_died 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, config_id=multipathd, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Dec 6 04:45:45 localhost systemd[1]: 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6.service: Deactivated successfully. 
Dec 6 04:45:46 localhost python3.9[230686]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None 
preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None Dec 6 04:45:46 localhost systemd-journald[47810]: Field hash table of /run/log/journal/4b30904fc4748c16d0c72dbebcabab49/system.journal has a fill level at 121.3 (404 of 333 items), suggesting rotation. Dec 6 04:45:46 localhost systemd-journald[47810]: /run/log/journal/4b30904fc4748c16d0c72dbebcabab49/system.journal: Journal header limits reached or header out-of-date, rotating. Dec 6 04:45:46 localhost rsyslogd[760]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Dec 6 04:45:46 localhost rsyslogd[760]: imjournal: journal files changed, reloading... 
[v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Dec 6 04:45:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:45:47.275 160509 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 04:45:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:45:47.276 160509 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 04:45:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:45:47.277 160509 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 04:45:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=759 DF PROTO=TCP SPT=49776 DPT=9101 SEQ=4200883413 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52DA182F0000000001030307) Dec 6 04:45:47 localhost nova_compute[229942]: 2025-12-06 09:45:47.497 229946 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:45:48 localhost python3.9[230820]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Dec 6 04:45:48 localhost systemd[1]: Stopping nova_compute container... Dec 6 04:45:48 localhost systemd[1]: tmp-crun.JEj9ic.mount: Deactivated successfully. 
Dec 6 04:45:48 localhost nova_compute[229942]: 2025-12-06 09:45:48.510 229946 DEBUG oslo_privsep.comm [-] EOF on privsep read channel _reader_main /usr/lib/python3.9/site-packages/oslo_privsep/comm.py:170#033[00m Dec 6 04:45:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38133 DF PROTO=TCP SPT=58278 DPT=9105 SEQ=1660505716 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52DA21300000000001030307) Dec 6 04:45:49 localhost nova_compute[229942]: 2025-12-06 09:45:49.878 229946 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:45:49 localhost nova_compute[229942]: 2025-12-06 09:45:49.882 229946 WARNING amqp [-] Received method (60, 30) during closing channel 1. This method will be ignored#033[00m Dec 6 04:45:49 localhost nova_compute[229942]: 2025-12-06 09:45:49.884 229946 DEBUG oslo_concurrency.lockutils [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 6 04:45:49 localhost nova_compute[229942]: 2025-12-06 09:45:49.885 229946 DEBUG oslo_concurrency.lockutils [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 6 04:45:49 localhost nova_compute[229942]: 2025-12-06 09:45:49.885 229946 DEBUG oslo_concurrency.lockutils [None req-8b348d35-d06d-4f73-9d7d-edbcdbc5087d - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 6 04:45:50 localhost journal[203911]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, ) Dec 6 04:45:50 localhost journal[203911]: hostname: np0005548789.localdomain Dec 6 
04:45:50 localhost journal[203911]: End of file while reading data: Input/output error Dec 6 04:45:50 localhost systemd[1]: libpod-6674d58fdb9d90e78bfb85f434c919baa1836ad3e98a097c0a64c1152f7163c8.scope: Deactivated successfully. Dec 6 04:45:50 localhost systemd[1]: libpod-6674d58fdb9d90e78bfb85f434c919baa1836ad3e98a097c0a64c1152f7163c8.scope: Consumed 4.712s CPU time. Dec 6 04:45:50 localhost podman[230824]: 2025-12-06 09:45:50.267923939 +0000 UTC m=+1.820139827 container died 6674d58fdb9d90e78bfb85f434c919baa1836ad3e98a097c0a64c1152f7163c8 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=edpm, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0) Dec 6 
04:45:50 localhost systemd[1]: tmp-crun.eHqPm3.mount: Deactivated successfully. Dec 6 04:45:50 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6674d58fdb9d90e78bfb85f434c919baa1836ad3e98a097c0a64c1152f7163c8-userdata-shm.mount: Deactivated successfully. Dec 6 04:45:50 localhost podman[230824]: 2025-12-06 09:45:50.326175172 +0000 UTC m=+1.878390990 container cleanup 6674d58fdb9d90e78bfb85f434c919baa1836ad3e98a097c0a64c1152f7163c8 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, container_name=nova_compute, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm) Dec 6 04:45:50 localhost podman[230824]: nova_compute Dec 6 04:45:50 localhost podman[230868]: error 
opening file `/run/crun/6674d58fdb9d90e78bfb85f434c919baa1836ad3e98a097c0a64c1152f7163c8/status`: No such file or directory Dec 6 04:45:50 localhost podman[230855]: 2025-12-06 09:45:50.422550789 +0000 UTC m=+0.068912096 container cleanup 6674d58fdb9d90e78bfb85f434c919baa1836ad3e98a097c0a64c1152f7163c8 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, container_name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 04:45:50 localhost podman[230855]: nova_compute Dec 6 04:45:50 localhost systemd[1]: edpm_nova_compute.service: Deactivated successfully. Dec 6 04:45:50 localhost systemd[1]: Stopped nova_compute container. 
Dec 6 04:45:50 localhost systemd[1]: Starting nova_compute container... Dec 6 04:45:50 localhost systemd[1]: Started libcrun container. Dec 6 04:45:50 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81aea125146c60e1b0ee38b0c4ee8c70ba3c42600b7bfc70695e2bff0e11c0ad/merged/etc/nvme supports timestamps until 2038 (0x7fffffff) Dec 6 04:45:50 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81aea125146c60e1b0ee38b0c4ee8c70ba3c42600b7bfc70695e2bff0e11c0ad/merged/etc/multipath supports timestamps until 2038 (0x7fffffff) Dec 6 04:45:50 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81aea125146c60e1b0ee38b0c4ee8c70ba3c42600b7bfc70695e2bff0e11c0ad/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Dec 6 04:45:50 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81aea125146c60e1b0ee38b0c4ee8c70ba3c42600b7bfc70695e2bff0e11c0ad/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff) Dec 6 04:45:50 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81aea125146c60e1b0ee38b0c4ee8c70ba3c42600b7bfc70695e2bff0e11c0ad/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Dec 6 04:45:50 localhost podman[230870]: 2025-12-06 09:45:50.549928703 +0000 UTC m=+0.091140021 container init 6674d58fdb9d90e78bfb85f434c919baa1836ad3e98a097c0a64c1152f7163c8 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 04:45:50 localhost podman[230870]: 2025-12-06 09:45:50.560523373 +0000 UTC m=+0.101734691 container start 6674d58fdb9d90e78bfb85f434c919baa1836ad3e98a097c0a64c1152f7163c8 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, container_name=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', 
'/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}) Dec 6 04:45:50 localhost podman[230870]: nova_compute Dec 6 04:45:50 localhost nova_compute[230884]: + sudo -E kolla_set_configs Dec 6 04:45:50 localhost systemd[1]: Started nova_compute container. Dec 6 04:45:50 localhost nova_compute[230884]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Dec 6 04:45:50 localhost nova_compute[230884]: INFO:__main__:Validating config file Dec 6 04:45:50 localhost nova_compute[230884]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Dec 6 04:45:50 localhost nova_compute[230884]: INFO:__main__:Copying service configuration files Dec 6 04:45:50 localhost nova_compute[230884]: INFO:__main__:Deleting /etc/nova/nova.conf Dec 6 04:45:50 localhost nova_compute[230884]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf Dec 6 04:45:50 localhost nova_compute[230884]: INFO:__main__:Setting permission for /etc/nova/nova.conf Dec 6 04:45:50 localhost nova_compute[230884]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf Dec 6 04:45:50 localhost nova_compute[230884]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf Dec 6 04:45:50 localhost nova_compute[230884]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf Dec 6 04:45:50 localhost nova_compute[230884]: INFO:__main__:Deleting 
/etc/nova/nova.conf.d/03-ceph-nova.conf Dec 6 04:45:50 localhost nova_compute[230884]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf Dec 6 04:45:50 localhost nova_compute[230884]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf Dec 6 04:45:50 localhost nova_compute[230884]: INFO:__main__:Deleting /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf Dec 6 04:45:50 localhost nova_compute[230884]: INFO:__main__:Copying /var/lib/kolla/config_files/99-nova-compute-cells-workarounds.conf to /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf Dec 6 04:45:50 localhost nova_compute[230884]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf Dec 6 04:45:50 localhost nova_compute[230884]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf Dec 6 04:45:50 localhost nova_compute[230884]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf Dec 6 04:45:50 localhost nova_compute[230884]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf Dec 6 04:45:50 localhost nova_compute[230884]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf Dec 6 04:45:50 localhost nova_compute[230884]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf Dec 6 04:45:50 localhost nova_compute[230884]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf Dec 6 04:45:50 localhost nova_compute[230884]: INFO:__main__:Deleting /etc/ceph Dec 6 04:45:50 localhost nova_compute[230884]: INFO:__main__:Creating directory /etc/ceph Dec 6 04:45:50 localhost nova_compute[230884]: INFO:__main__:Setting permission for /etc/ceph Dec 6 04:45:50 localhost nova_compute[230884]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to 
/etc/ceph/ceph.conf Dec 6 04:45:50 localhost nova_compute[230884]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf Dec 6 04:45:50 localhost nova_compute[230884]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring Dec 6 04:45:50 localhost nova_compute[230884]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring Dec 6 04:45:50 localhost nova_compute[230884]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey Dec 6 04:45:50 localhost nova_compute[230884]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey Dec 6 04:45:50 localhost nova_compute[230884]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey Dec 6 04:45:50 localhost nova_compute[230884]: INFO:__main__:Deleting /var/lib/nova/.ssh/config Dec 6 04:45:50 localhost nova_compute[230884]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config Dec 6 04:45:50 localhost nova_compute[230884]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config Dec 6 04:45:50 localhost nova_compute[230884]: INFO:__main__:Deleting /usr/sbin/iscsiadm Dec 6 04:45:50 localhost nova_compute[230884]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm Dec 6 04:45:50 localhost nova_compute[230884]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm Dec 6 04:45:50 localhost nova_compute[230884]: INFO:__main__:Writing out command to execute Dec 6 04:45:50 localhost nova_compute[230884]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf Dec 6 04:45:50 localhost nova_compute[230884]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring Dec 6 04:45:50 localhost nova_compute[230884]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ Dec 6 04:45:50 localhost nova_compute[230884]: INFO:__main__:Setting permission for 
/var/lib/nova/.ssh/ssh-privatekey Dec 6 04:45:50 localhost nova_compute[230884]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config Dec 6 04:45:50 localhost nova_compute[230884]: ++ cat /run_command Dec 6 04:45:50 localhost nova_compute[230884]: + CMD=nova-compute Dec 6 04:45:50 localhost nova_compute[230884]: + ARGS= Dec 6 04:45:50 localhost nova_compute[230884]: + sudo kolla_copy_cacerts Dec 6 04:45:50 localhost nova_compute[230884]: + [[ ! -n '' ]] Dec 6 04:45:50 localhost nova_compute[230884]: + . kolla_extend_start Dec 6 04:45:50 localhost nova_compute[230884]: Running command: 'nova-compute' Dec 6 04:45:50 localhost nova_compute[230884]: + echo 'Running command: '\''nova-compute'\''' Dec 6 04:45:50 localhost nova_compute[230884]: + umask 0022 Dec 6 04:45:50 localhost nova_compute[230884]: + exec nova-compute Dec 6 04:45:52 localhost nova_compute[230884]: 2025-12-06 09:45:52.354 230888 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m Dec 6 04:45:52 localhost nova_compute[230884]: 2025-12-06 09:45:52.355 230888 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m Dec 6 04:45:52 localhost nova_compute[230884]: 2025-12-06 09:45:52.355 230888 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m Dec 6 04:45:52 localhost nova_compute[230884]: 2025-12-06 09:45:52.355 230888 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m Dec 6 04:45:52 localhost nova_compute[230884]: 2025-12-06 09:45:52.472 230888 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 6 04:45:52 localhost nova_compute[230884]: 2025-12-06 09:45:52.492 230888 DEBUG 
oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.020s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 6 04:45:52 localhost nova_compute[230884]: 2025-12-06 09:45:52.492 230888 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m Dec 6 04:45:52 localhost nova_compute[230884]: 2025-12-06 09:45:52.891 230888 INFO nova.virt.driver [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.010 230888 INFO nova.compute.provider_config [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.018 230888 WARNING nova.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] Current Nova version does not support computes older than Yoga but the minimum compute service level in your cell is 57 and the oldest supported service level is 61.: nova.exception.TooOldComputeService: Current Nova version does not support computes older than Yoga but the minimum compute service level in your cell is 57 and the oldest supported service level is 61.#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.018 230888 DEBUG oslo_concurrency.lockutils [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.018 230888 DEBUG oslo_concurrency.lockutils [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] Acquired lock "singleton_lock" lock 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.018 230888 DEBUG oslo_concurrency.lockutils [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.019 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.019 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.019 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.019 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.019 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.020 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] 
================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.020 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] allow_resize_to_same_host = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.020 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] arq_binding_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.020 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] backdoor_port = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.020 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] backdoor_socket = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.020 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] block_device_allocate_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.021 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.021 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] cert = self.pem 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.021 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] compute_driver = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.021 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] compute_monitors = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.021 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] config_dir = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.022 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] config_drive_format = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.022 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] config_file = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.022 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] config_source = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.022 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] console_host = np0005548789.localdomain log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.022 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] control_exchange = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.022 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] cpu_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.022 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] daemon = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.023 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] debug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.023 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.023 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] default_availability_zone = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.023 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] default_ephemeral_format = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost 
nova_compute[230884]: 2025-12-06 09:45:53.023 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.024 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] default_schedule_zone = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.024 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] disk_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.024 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] enable_new_services = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.024 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] enabled_apis = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.024 230888 DEBUG 
oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] enabled_ssl_apis = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.024 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] flat_injected = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.025 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] force_config_drive = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.025 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] force_raw_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.025 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] graceful_shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.025 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.025 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] host = np0005548789.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.026 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] 
initial_cpu_allocation_ratio = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.026 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] initial_disk_allocation_ratio = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.026 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] initial_ram_allocation_ratio = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.026 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] injected_network_template = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.026 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] instance_build_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.026 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] instance_delete_interval = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.027 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] instance_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.027 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - 
- - - -] instance_name_template = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.027 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] instance_usage_audit = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.027 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] instance_usage_audit_period = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.027 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] instance_uuid_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.027 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] instances_path = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.028 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.028 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.028 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] live_migration_retry_count = 
30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.028 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] log_config_append = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.028 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.029 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] log_dir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.029 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] log_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.029 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] log_options = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.029 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] log_rotate_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.029 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] log_rotate_interval_type = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost 
nova_compute[230884]: 2025-12-06 09:45:53.029 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] log_rotation_type = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.030 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.030 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.030 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.030 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.030 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.030 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] long_rpc_timeout = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.031 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] max_concurrent_builds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.031 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.031 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] max_concurrent_snapshots = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.031 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] max_local_block_devices = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.031 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] max_logfile_count = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.032 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] max_logfile_size_mb = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost 
nova_compute[230884]: 2025-12-06 09:45:53.032 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.032 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] metadata_listen = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.032 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] metadata_listen_port = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.032 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] metadata_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.032 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] migrate_max_retries = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.033 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] mkisofs_cmd = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.033 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] my_block_storage_ip = 192.168.122.107 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.033 230888 DEBUG 
oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] my_ip = 192.168.122.107 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.033 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] network_allocate_retries = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.033 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.034 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] osapi_compute_listen = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.034 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] osapi_compute_listen_port = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.034 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] osapi_compute_unique_server_name_scope = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.034 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] osapi_compute_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.034 230888 DEBUG oslo_service.service [None 
req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] password_length = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.035 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] periodic_enable = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.035 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] periodic_fuzzy_delay = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.035 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] pointer_model = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.035 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] preallocate_images = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.035 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] publish_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.035 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] pybasedir = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.036 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] ram_allocation_ratio = None 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.036 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] rate_limit_burst = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.036 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] rate_limit_except_level = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.036 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] rate_limit_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.036 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] reboot_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.036 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] reclaim_instance_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.037 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] record = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.037 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] reimage_timeout_per_gb = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost 
nova_compute[230884]: 2025-12-06 09:45:53.037 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] report_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.037 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] rescue_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.037 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] reserved_host_cpus = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.038 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] reserved_host_disk_mb = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.038 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] reserved_host_memory_mb = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.038 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] reserved_huge_pages = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.038 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] resize_confirm_window = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.038 230888 DEBUG oslo_service.service [None 
req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] resize_fs_using_block_device = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.038 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.039 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] rootwrap_config = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.039 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] rpc_response_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.039 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] run_external_periodic_tasks = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.039 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.039 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.039 230888 DEBUG oslo_service.service [None 
req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.040 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.040 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] service_down_time = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.040 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] servicegroup_driver = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.040 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] shelved_offload_time = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.040 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] shelved_poll_interval = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.041 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.041 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] source_is_ipv6 = False 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.041 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] ssl_only = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.041 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] state_path = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.041 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] sync_power_state_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.041 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] sync_power_state_pool_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.042 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] syslog_log_facility = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.042 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] tempdir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.042 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] timeout_nbd = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost 
nova_compute[230884]: 2025-12-06 09:45:53.042 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.042 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] update_resources_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.043 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] use_cow_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.043 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] use_eventlog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.043 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] use_journal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.043 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] use_json = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.043 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] use_rootwrap_daemon = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.043 230888 DEBUG oslo_service.service [None 
req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] use_stderr = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.044 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] use_syslog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.044 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] vcpu_pin_set = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.044 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] vif_plugging_is_fatal = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.044 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] vif_plugging_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.044 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] virt_mkfs = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.044 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] volume_usage_poll_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.045 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] watch_log_file = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.045 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] web = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.045 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.045 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] oslo_concurrency.lock_path = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.045 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.046 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.046 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] oslo_messaging_metrics.metrics_process_name = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.046 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] 
oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.046 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.046 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] api.auth_strategy = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.046 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] api.compute_link_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.047 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.047 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] api.dhcp_domain = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.047 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] api.enable_instance_password = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.047 230888 DEBUG 
oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] api.glance_link_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.047 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.048 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.048 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.048 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.048 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] api.local_metadata_per_cell = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.048 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] api.max_limit = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.048 230888 DEBUG 
oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] api.metadata_cache_expiration = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.049 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] api.neutron_default_tenant_id = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.049 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] api.use_forwarded_for = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.049 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] api.use_neutron_default_nets = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.049 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.049 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.050 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.050 230888 DEBUG 
oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] api.vendordata_dynamic_ssl_certfile = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.050 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.050 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] api.vendordata_jsonfile_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.050 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] api.vendordata_providers = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.050 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] cache.backend = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.051 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] cache.backend_argument = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.051 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] cache.config_prefix = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.051 230888 DEBUG oslo_service.service [None 
req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] cache.dead_timeout = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.051 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] cache.debug_cache_backend = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.051 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] cache.enable_retry_client = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.052 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] cache.enable_socket_keepalive = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.052 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] cache.enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.052 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] cache.expiration_time = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.052 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.052 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] 
cache.hashclient_retry_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.052 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] cache.memcache_dead_retry = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.053 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] cache.memcache_password = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.053 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.053 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.053 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] cache.memcache_pool_maxsize = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.053 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.053 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] 
cache.memcache_sasl_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.054 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] cache.memcache_servers = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.054 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] cache.memcache_socket_timeout = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.054 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] cache.memcache_username = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.054 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] cache.proxies = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.054 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] cache.retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.055 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] cache.retry_delay = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.055 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] cache.socket_keepalive_count = 1 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.055 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] cache.socket_keepalive_idle = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.055 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.055 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] cache.tls_allowed_ciphers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.055 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] cache.tls_cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.056 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] cache.tls_certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.056 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] cache.tls_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.056 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] cache.tls_keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 
localhost nova_compute[230884]: 2025-12-06 09:45:53.056 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] cinder.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.056 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] cinder.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.056 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] cinder.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.057 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] cinder.catalog_info = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.057 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] cinder.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.057 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] cinder.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.057 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] cinder.cross_az_attach = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.057 230888 DEBUG 
oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] cinder.debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.058 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] cinder.endpoint_template = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.058 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] cinder.http_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.058 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] cinder.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.058 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] cinder.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.058 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] cinder.os_region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.058 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] cinder.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.059 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] cinder.timeout = None 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.059 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.059 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] compute.cpu_dedicated_set = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.059 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] compute.cpu_shared_set = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.059 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.060 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.060 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.060 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] compute.max_disk_devices_to_attach = -1 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.060 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.060 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.061 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.061 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.061 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] compute.vmdk_allowed_types = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.061 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] conductor.workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.061 230888 DEBUG oslo_service.service [None 
req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] console.allowed_origins = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.061 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] console.ssl_ciphers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.062 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] console.ssl_minimum_version = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.062 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] consoleauth.token_ttl = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.062 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] cyborg.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.062 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] cyborg.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.062 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] cyborg.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.063 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] cyborg.connect_retries = None 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.063 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] cyborg.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.063 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] cyborg.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.063 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] cyborg.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.063 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] cyborg.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.063 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] cyborg.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.064 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] cyborg.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.064 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] cyborg.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 
localhost nova_compute[230884]: 2025-12-06 09:45:53.064 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] cyborg.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.064 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] cyborg.service_type = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.064 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] cyborg.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.065 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] cyborg.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.065 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.065 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] cyborg.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.065 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] cyborg.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 
09:45:53.065 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] cyborg.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.065 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] database.backend = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.066 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] database.connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.066 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] database.connection_debug = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.066 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] database.connection_parameters = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.066 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.066 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] database.connection_trace = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.067 230888 DEBUG oslo_service.service [None 
req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.067 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] database.db_max_retries = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.067 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.067 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.067 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] database.max_overflow = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.067 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] database.max_pool_size = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.068 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] database.max_retries = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.068 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] 
database.mysql_enable_ndb = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.068 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] database.mysql_sql_mode = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.068 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.068 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] database.pool_timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.069 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] database.retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.069 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] database.slave_connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.069 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.069 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] api_database.backend = sqlalchemy log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.069 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] api_database.connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.069 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] api_database.connection_debug = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.070 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] api_database.connection_parameters = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.070 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.070 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] api_database.connection_trace = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.070 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.070 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] api_database.db_max_retries = 20 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.071 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.071 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.071 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] api_database.max_overflow = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.071 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] api_database.max_pool_size = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.071 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] api_database.max_retries = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.071 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] api_database.mysql_enable_ndb = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.072 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] api_database.mysql_sql_mode = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.072 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.072 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] api_database.pool_timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.072 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] api_database.retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.072 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] api_database.slave_connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.073 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.073 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] devices.enabled_mdev_types = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.073 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.073 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.073 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.073 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] glance.api_servers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.074 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] glance.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.074 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] glance.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.074 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] glance.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.074 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] glance.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.074 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] glance.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.075 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] glance.debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.075 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.075 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.075 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] glance.enable_rbd_download = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.075 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] glance.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.075 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] glance.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.076 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] glance.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.076 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] glance.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.076 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] glance.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.076 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] glance.num_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.076 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] glance.rbd_ceph_conf = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.077 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] glance.rbd_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.077 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] glance.rbd_pool = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.077 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] glance.rbd_user = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.077 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] glance.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.077 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] glance.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.077 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] glance.service_type = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.078 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] glance.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.078 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] glance.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.078 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.078 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] glance.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.078 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] glance.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.079 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.079 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] glance.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.079 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] guestfs.debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.079 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] hyperv.config_drive_cdrom = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.079 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.079 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] hyperv.dynamic_memory_ratio = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.080 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.080 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] hyperv.enable_remotefx = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.080 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] hyperv.instances_path_share = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.080 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] hyperv.iscsi_initiator_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.080 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] hyperv.limit_cpu_features = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.081 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.081 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.081 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.081 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.081 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] hyperv.qemu_img_cmd = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.082 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] hyperv.use_multipath_io = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.082 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.082 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.082 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] hyperv.vswitch_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.082 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.082 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] mks.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.083 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] mks.mksproxy_base_url = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.083 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] image_cache.manager_interval = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.083 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.083 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.084 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.084 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.084 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] image_cache.subdirectory_name = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.084 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] ironic.api_max_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.084 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] ironic.api_retry_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.084 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] ironic.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.085 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] ironic.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.085 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] ironic.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.085 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] ironic.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.085 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] ironic.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.085 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] ironic.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.086 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] ironic.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.086 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] ironic.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.086 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] ironic.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.086 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] ironic.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.086 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] ironic.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.086 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] ironic.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.087 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] ironic.partition_key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.087 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] ironic.peer_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.087 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] ironic.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.087 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.087 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] ironic.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.087 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] ironic.service_type = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.088 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] ironic.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.088 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] ironic.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.088 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.088 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] ironic.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.088 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] ironic.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.089 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] ironic.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.089 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] key_manager.backend = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.089 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] key_manager.fixed_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.089 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] barbican.auth_endpoint = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.089 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] barbican.barbican_api_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.089 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] barbican.barbican_endpoint = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.090 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.090 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] barbican.barbican_region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.090 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] barbican.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.090 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] barbican.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.090 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] barbican.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.091 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] barbican.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.091 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] barbican.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.091 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] barbican.number_of_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.091 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] barbican.retry_delay = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.091 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.091 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] barbican.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.092 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] barbican.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.092 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] barbican.verify_ssl = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.092 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] barbican.verify_ssl_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.092 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.092 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.093 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] barbican_service_user.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.093 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.093 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.093 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.093 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] barbican_service_user.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.094 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.094 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] barbican_service_user.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.094 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] vault.approle_role_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.094 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] vault.approle_secret_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.094 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] vault.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.094 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] vault.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.095 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] vault.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.095 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] vault.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.095 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] vault.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.095 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] vault.kv_mountpoint = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.095 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] vault.kv_version = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.095 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] vault.namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.096 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] vault.root_token_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.096 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] vault.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.096 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] vault.ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.096 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] vault.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.096 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] vault.use_ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.097 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] vault.vault_url = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.097 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] keystone.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.097 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] keystone.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.097 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] keystone.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.097 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] keystone.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.097 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] keystone.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.098 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] keystone.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.098 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -]
keystone.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.098 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] keystone.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.098 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] keystone.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.098 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] keystone.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.099 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] keystone.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.099 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] keystone.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.099 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] keystone.service_type = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.099 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] keystone.split_loggers = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.099 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] keystone.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.099 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.100 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] keystone.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.100 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] keystone.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.100 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] keystone.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.100 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] libvirt.connection_uri = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.100 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] libvirt.cpu_mode = host-model log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.101 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] libvirt.cpu_model_extra_flags = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.101 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] libvirt.cpu_models = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.101 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.101 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.101 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] libvirt.cpu_power_management = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.101 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.102 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.102 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] libvirt.device_detach_timeout = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.102 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] libvirt.disk_cachemodes = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.102 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] libvirt.disk_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.102 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] libvirt.enabled_perf_events = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.103 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] libvirt.file_backed_memory = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.103 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] libvirt.gid_maps = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.103 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] libvirt.hw_disk_discard = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 
04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.103 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] libvirt.hw_machine_type = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.103 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] libvirt.images_rbd_ceph_conf = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.103 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.104 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.104 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.104 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] libvirt.images_rbd_pool = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.104 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] libvirt.images_type = rbd log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.104 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] libvirt.images_volume_group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.105 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] libvirt.inject_key = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.105 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] libvirt.inject_partition = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.105 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] libvirt.inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.105 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] libvirt.iscsi_iface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.105 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] libvirt.iser_use_multipath = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.105 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.106 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.106 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.106 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.106 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.106 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.107 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.107 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] 
libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.107 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] libvirt.live_migration_scheme = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.107 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.107 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.107 230888 WARNING oslo_config.cfg [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal ( Dec 6 04:45:53 localhost nova_compute[230884]: live_migration_uri is deprecated for removal in favor of two other options that Dec 6 04:45:53 localhost nova_compute[230884]: allow to change live migration scheme and target URI: ``live_migration_scheme`` Dec 6 04:45:53 localhost nova_compute[230884]: and ``live_migration_inbound_addr`` respectively. Dec 6 04:45:53 localhost nova_compute[230884]: ). 
Its value may be silently ignored in the future.#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.108 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] libvirt.live_migration_uri = qemu+ssh://nova@%s/system?keyfile=/var/lib/nova/.ssh/ssh-privatekey log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.108 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] libvirt.live_migration_with_native_tls = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.108 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] libvirt.max_queues = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.108 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.109 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] libvirt.nfs_mount_options = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.109 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] libvirt.nfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.109 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] 
libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.109 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] libvirt.num_iser_scan_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.110 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.110 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.110 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] libvirt.num_pcie_ports = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.110 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] libvirt.num_volume_scan_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.110 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] libvirt.pmem_namespaces = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.111 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] libvirt.quobyte_client_cfg = None 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.111 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.111 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] libvirt.rbd_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.111 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.111 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.111 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] libvirt.rbd_secret_uuid = 1939e851-b10c-5c3b-9bb7-8e7f380233e8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.112 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] libvirt.rbd_user = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.112 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] 
libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.112 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.112 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] libvirt.rescue_image_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.112 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] libvirt.rescue_kernel_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.113 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] libvirt.rescue_ramdisk_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.113 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] libvirt.rng_dev_path = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.113 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] libvirt.rx_queue_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.113 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] libvirt.smbfs_mount_options = 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.113 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.113 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] libvirt.snapshot_compression = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.114 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] libvirt.snapshot_image_format = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.114 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] libvirt.snapshots_directory = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.114 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.114 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] libvirt.swtpm_enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.114 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] libvirt.swtpm_group = tss 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.115 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] libvirt.swtpm_user = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.115 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] libvirt.sysinfo_serial = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.115 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] libvirt.tx_queue_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.115 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] libvirt.uid_maps = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.116 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.116 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] libvirt.virt_type = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.116 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] libvirt.volume_clear = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 
04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.116 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] libvirt.volume_clear_size = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.116 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] libvirt.volume_use_multipath = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.116 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] libvirt.vzstorage_cache_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.117 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] libvirt.vzstorage_log_path = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.117 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] libvirt.vzstorage_mount_group = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.117 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] libvirt.vzstorage_mount_opts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.117 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] libvirt.vzstorage_mount_perms = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 
6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.117 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.118 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] libvirt.vzstorage_mount_user = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.118 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.118 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] neutron.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.118 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] neutron.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.118 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] neutron.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.118 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] neutron.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 
2025-12-06 09:45:53.119 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] neutron.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.119 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] neutron.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.119 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] neutron.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.119 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] neutron.default_floating_pool = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.119 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] neutron.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.119 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.120 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] neutron.http_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.120 230888 DEBUG 
oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] neutron.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.120 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] neutron.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.120 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] neutron.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.120 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.121 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] neutron.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.121 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] neutron.ovs_bridge = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.121 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] neutron.physnets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.121 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] 
neutron.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.121 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.122 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] neutron.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.122 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] neutron.service_type = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.122 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] neutron.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.122 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] neutron.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.122 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.122 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] neutron.timeout = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.123 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] neutron.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.123 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] neutron.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.123 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.123 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] notifications.default_level = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.123 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.123 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.124 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] notifications.versioned_notifications_topics = 
['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.124 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] pci.alias = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.124 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] pci.device_spec = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.124 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] pci.report_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.124 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] placement.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.125 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] placement.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.125 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] placement.auth_url = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.125 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] placement.cafile = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.125 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] placement.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.125 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] placement.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.126 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] placement.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.126 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] placement.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.126 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] placement.default_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.126 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] placement.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.126 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] placement.domain_id = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.126 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] placement.domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.127 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] placement.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.127 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] placement.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.127 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] placement.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.127 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] placement.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.127 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] placement.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.127 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] placement.password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 
localhost nova_compute[230884]: 2025-12-06 09:45:53.128 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] placement.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.128 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] placement.project_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.128 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] placement.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.128 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] placement.project_name = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.128 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] placement.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.129 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] placement.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.129 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] placement.service_type = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 
09:45:53.129 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] placement.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.129 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] placement.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.129 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.130 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] placement.system_scope = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.130 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] placement.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.130 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] placement.trust_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.130 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] placement.user_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.130 230888 DEBUG oslo_service.service [None 
req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] placement.user_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.130 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] placement.user_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.131 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] placement.username = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.131 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] placement.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.131 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] placement.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.131 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] quota.cores = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.131 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.132 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] quota.driver = 
nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.132 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.132 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.132 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] quota.injected_files = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.132 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] quota.instances = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.132 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] quota.key_pairs = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.133 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] quota.metadata_items = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.133 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] quota.ram = 51200 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.133 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] quota.recheck_quota = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.133 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] quota.server_group_members = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.133 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] quota.server_groups = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.134 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] rdp.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.134 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] rdp.html5_proxy_base_url = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.134 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.134 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.134 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.135 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.135 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] scheduler.max_attempts = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.135 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.135 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.135 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.135 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] 
scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.136 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.136 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] scheduler.workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.136 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.136 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.136 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.137 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.137 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.137 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.137 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.137 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.138 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.138 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.138 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.138 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.138 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.138 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.139 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] 
filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.139 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.139 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.139 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.139 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.139 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.140 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.140 230888 DEBUG 
oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.140 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.140 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.140 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] metrics.required = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.141 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] metrics.weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.141 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] metrics.weight_of_unavailable = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.141 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] metrics.weight_setting = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 
09:45:53.141 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] serial_console.base_url = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.141 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] serial_console.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.142 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] serial_console.port_range = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.142 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.142 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.142 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.142 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 
09:45:53.143 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] service_user.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.143 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] service_user.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.143 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.143 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.143 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.143 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] service_user.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.144 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.144 230888 DEBUG oslo_service.service [None 
req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.144 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] service_user.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.144 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] spice.agent_enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.144 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] spice.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.145 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] spice.html5proxy_base_url = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.145 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] spice.html5proxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.145 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] spice.html5proxy_port = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.145 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - 
- - - -] spice.image_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.145 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] spice.jpeg_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.145 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] spice.playback_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.146 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] spice.server_listen = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.146 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.146 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] spice.streaming_mode = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.146 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] spice.zlib_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.146 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] upgrade_levels.baseapi = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.147 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] upgrade_levels.cert = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.147 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] upgrade_levels.compute = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.147 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] upgrade_levels.conductor = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.147 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] upgrade_levels.scheduler = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.147 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.148 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.148 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.148 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.148 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.148 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.148 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.149 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.149 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.149 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] vmware.api_retry_count = 10 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.149 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] vmware.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.149 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] vmware.cache_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.150 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] vmware.cluster_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.150 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] vmware.connection_pool_size = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.150 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] vmware.console_delay_seconds = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.150 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] vmware.datastore_regex = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.150 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] vmware.host_ip = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 
localhost nova_compute[230884]: 2025-12-06 09:45:53.150 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] vmware.host_password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.151 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] vmware.host_port = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.151 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] vmware.host_username = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.151 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] vmware.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.151 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] vmware.integration_bridge = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.151 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] vmware.maximum_objects = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.151 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] vmware.pbm_default_policy = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.152 230888 DEBUG 
oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] vmware.pbm_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.152 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] vmware.pbm_wsdl_location = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.152 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] vmware.serial_log_dir = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.152 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] vmware.serial_port_proxy_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.152 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.153 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] vmware.task_poll_interval = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.153 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] vmware.use_linked_clone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.153 230888 DEBUG oslo_service.service [None 
req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] vmware.vnc_keymap = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.153 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] vmware.vnc_port = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.153 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] vmware.vnc_port_total = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.153 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] vnc.auth_schemes = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.154 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] vnc.enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.154 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] vnc.novncproxy_base_url = http://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.154 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] vnc.novncproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.154 230888 DEBUG oslo_service.service [None 
req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] vnc.novncproxy_port = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.154 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] vnc.server_listen = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.155 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] vnc.server_proxyclient_address = 192.168.122.107 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.155 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] vnc.vencrypt_ca_certs = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.155 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] vnc.vencrypt_client_cert = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.155 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] vnc.vencrypt_client_key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.155 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] workarounds.disable_compute_service_check_for_ffu = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.156 230888 DEBUG oslo_service.service [None 
req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.156 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.156 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.156 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.156 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] workarounds.disable_rootwrap = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.156 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.157 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 
09:45:53.157 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.157 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.157 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.157 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.158 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.158 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.158 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.158 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.158 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.159 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.159 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.159 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.159 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.159 230888 DEBUG oslo_service.service [None 
req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] wsgi.api_paste_config = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.159 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] wsgi.client_socket_timeout = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.160 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] wsgi.default_pool_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.160 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] wsgi.keep_alive = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.160 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] wsgi.max_header_line = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.160 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] wsgi.secure_proxy_ssl_header = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.160 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] wsgi.ssl_ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.161 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] 
wsgi.ssl_cert_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.161 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] wsgi.ssl_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.161 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] wsgi.tcp_keepidle = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.161 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] wsgi.wsgi_log_format = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.161 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] zvm.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.162 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] zvm.cloud_connector_url = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.162 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] zvm.image_tmp_path = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.162 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - 
- - - -] zvm.reachable_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.162 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.162 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] oslo_policy.enforce_scope = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.162 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.163 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] oslo_policy.policy_dirs = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.163 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] oslo_policy.policy_file = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.163 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.163 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 
- - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.163 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.164 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.164 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.164 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.164 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.164 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] remote_debug.host = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.165 230888 DEBUG oslo_service.service [None 
req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] remote_debug.port = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.165 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.165 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.165 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.165 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.165 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.166 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.166 230888 
DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.166 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.166 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.166 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.167 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.167 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.167 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.167 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.167 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.167 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.168 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.168 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.168 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.168 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - 
- - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.168 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.168 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.169 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.169 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.169 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.169 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] oslo_messaging_rabbit.ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.169 230888 DEBUG 
oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] oslo_messaging_rabbit.ssl_ca_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.170 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] oslo_messaging_rabbit.ssl_cert_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.170 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.170 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] oslo_messaging_rabbit.ssl_key_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.170 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] oslo_messaging_rabbit.ssl_version = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.170 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.171 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.171 230888 
DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.171 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.171 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] oslo_limit.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.171 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] oslo_limit.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.171 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] oslo_limit.auth_url = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.172 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] oslo_limit.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.172 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] oslo_limit.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.172 
230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] oslo_limit.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.172 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] oslo_limit.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.172 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.172 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] oslo_limit.default_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.173 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.173 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] oslo_limit.domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.173 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] oslo_limit.domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.173 230888 DEBUG oslo_service.service [None 
req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] oslo_limit.endpoint_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.173 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] oslo_limit.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.174 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] oslo_limit.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.174 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] oslo_limit.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.174 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] oslo_limit.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.174 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] oslo_limit.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.174 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] oslo_limit.password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.174 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] 
oslo_limit.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.175 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.175 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] oslo_limit.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.175 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] oslo_limit.project_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.175 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] oslo_limit.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.175 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] oslo_limit.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.175 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] oslo_limit.service_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.176 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] oslo_limit.split_loggers = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.176 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.176 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.176 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] oslo_limit.system_scope = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.176 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] oslo_limit.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.176 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] oslo_limit.trust_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.177 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] oslo_limit.user_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.177 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] oslo_limit.user_domain_name = Default log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.177 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] oslo_limit.user_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.177 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] oslo_limit.username = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.177 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] oslo_limit.valid_interfaces = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.178 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] oslo_limit.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.178 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.178 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.178 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] oslo_reports.log_dir = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.178 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.179 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.179 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.179 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.179 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.179 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.179 230888 DEBUG oslo_service.service [None 
req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.180 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] vif_plug_ovs_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.180 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.180 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.180 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.180 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] vif_plug_ovs_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.181 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.181 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.181 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.181 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.181 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] os_vif_linux_bridge.iptables_top_regex = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.182 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.182 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] os_vif_linux_bridge.use_ipv6 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.182 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.182 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] os_vif_ovs.isolate_vif = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.182 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] os_vif_ovs.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.183 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] os_vif_ovs.ovs_vsctl_timeout = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.183 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] os_vif_ovs.ovsdb_connection = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.183 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] os_vif_ovs.ovsdb_interface = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.183 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] os_vif_ovs.per_port_bridge = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.183 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] os_brick.lock_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.183 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.184 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.184 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] privsep_osbrick.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.184 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] privsep_osbrick.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.184 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.185 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] privsep_osbrick.logger_name = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.185 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.185 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] privsep_osbrick.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.185 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] nova_sys_admin.capabilities = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.185 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] nova_sys_admin.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.185 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] nova_sys_admin.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.186 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] nova_sys_admin.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.186 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.186 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] nova_sys_admin.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.186 230888 DEBUG oslo_service.service [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.188 230888 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.199 230888 INFO nova.virt.node [None req-e1f88f64-31d2-42bc-8944-ab08e0aca269 - - - - - -] Determined node identity 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad from /var/lib/nova/compute_id
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.200 230888 DEBUG nova.virt.libvirt.host [None req-e1f88f64-31d2-42bc-8944-ab08e0aca269 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.201 230888 DEBUG nova.virt.libvirt.host [None req-e1f88f64-31d2-42bc-8944-ab08e0aca269 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.201 230888 DEBUG nova.virt.libvirt.host [None req-e1f88f64-31d2-42bc-8944-ab08e0aca269 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.201 230888 DEBUG nova.virt.libvirt.host [None req-e1f88f64-31d2-42bc-8944-ab08e0aca269 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.212 230888 DEBUG nova.virt.libvirt.host [None req-e1f88f64-31d2-42bc-8944-ab08e0aca269 - - - - - -] Registering for lifecycle events
_get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.214 230888 DEBUG nova.virt.libvirt.host [None req-e1f88f64-31d2-42bc-8944-ab08e0aca269 - - - - - -] Registering for connection events: _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.215 230888 INFO nova.virt.libvirt.driver [None req-e1f88f64-31d2-42bc-8944-ab08e0aca269 - - - - - -] Connection event '1' reason 'None'
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.221 230888 INFO nova.virt.libvirt.host [None req-e1f88f64-31d2-42bc-8944-ab08e0aca269 - - - - - -] Libvirt host capabilities
Dec 6 04:45:53 localhost nova_compute[230884]: [multi-line libvirt capabilities XML followed here; the element tags were lost in capture, leaving only text values. Recoverable values: host UUID 0b20d7bd-1341-4912-afa7-eec4e2b0c648; arch x86_64; CPU model EPYC-Rome-v4, vendor AMD; migration transports tcp and rdma; the figures 16116612, 4029153, 0, 0 (most likely memory/topology cell values); security models selinux (labels system_u:system_r:svirt_t:s0 and system_u:system_r:svirt_tcg_t:s0) and dac (+107:+107); hvm guest support at wordsize 32 and 64 via emulator /usr/libexec/qemu-kvm with machine types pc-i440fx-rhel7.6.0 (pc) and pc-q35-rhel7.6.0 through pc-q35-rhel9.8.0 (q35).]
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.225 230888 DEBUG nova.virt.libvirt.volume.mount [None req-e1f88f64-31d2-42bc-8944-ab08e0aca269 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.227 230888 DEBUG nova.virt.libvirt.host [None req-e1f88f64-31d2-42bc-8944-ab08e0aca269 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.231 230888 DEBUG nova.virt.libvirt.host [None req-e1f88f64-31d2-42bc-8944-ab08e0aca269 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Dec 6 04:45:53 localhost nova_compute[230884]: [multi-line libvirt domain-capabilities XML followed here; the element tags were lost in capture, leaving only text values. Recoverable values: emulator path /usr/libexec/qemu-kvm, domain kvm, machine pc-i440fx-rhel7.6.0, arch i686; loader /usr/share/OVMF/OVMF_CODE.secboot.fd with types rom and pflash plus yes/no and on/off feature flags; host-model CPU EPYC-Rome, vendor AMD; supported CPU model list including 486, 486-v1, Broadwell (and -IBRS, -noTSX, -noTSX-IBRS, -v1 through -v4), Cascadelake-Server (and -noTSX, -v1 through -v5), Conroe, Conroe-v1, Cooperlake (-v1, -v2), Denverton (-v1 through -v3), Dhyana (-v1, -v2), EPYC, EPYC-Genoa, EPYC-Genoa-v1, EPYC-IBPB, EPYC-Milan (-v1, -v2); the dump continues past the end of this capture.]
Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: EPYC-Rome Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: EPYC-Rome-v1 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: EPYC-Rome-v2 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: EPYC-Rome-v3 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: EPYC-Rome-v4 Dec 6 04:45:53 localhost nova_compute[230884]: EPYC-v1 Dec 6 04:45:53 localhost nova_compute[230884]: EPYC-v2 Dec 6 04:45:53 localhost nova_compute[230884]: EPYC-v3 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: EPYC-v4 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: GraniteRapids Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 
04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost 
nova_compute[230884]: GraniteRapids-v1 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost 
nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: GraniteRapids-v2 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost 
nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Haswell Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Haswell-IBRS Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Haswell-noTSX Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 
localhost nova_compute[230884]: Haswell-noTSX-IBRS Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Haswell-v1 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Haswell-v2 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Haswell-v3 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Haswell-v4 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Icelake-Server Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 
04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Icelake-Server-noTSX Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Icelake-Server-v1 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost 
nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Icelake-Server-v2 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Icelake-Server-v3 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 
localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Icelake-Server-v4 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 
localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Icelake-Server-v5 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Icelake-Server-v6 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 
6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Icelake-Server-v7 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 
04:45:53 localhost nova_compute[230884]: IvyBridge Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: IvyBridge-IBRS Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: IvyBridge-v1 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: IvyBridge-v2 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: KnightsMill Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: KnightsMill-v1 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: 
Nehalem Dec 6 04:45:53 localhost nova_compute[230884]: Nehalem-IBRS Dec 6 04:45:53 localhost nova_compute[230884]: Nehalem-v1 Dec 6 04:45:53 localhost nova_compute[230884]: Nehalem-v2 Dec 6 04:45:53 localhost nova_compute[230884]: Opteron_G1 Dec 6 04:45:53 localhost nova_compute[230884]: Opteron_G1-v1 Dec 6 04:45:53 localhost nova_compute[230884]: Opteron_G2 Dec 6 04:45:53 localhost nova_compute[230884]: Opteron_G2-v1 Dec 6 04:45:53 localhost nova_compute[230884]: Opteron_G3 Dec 6 04:45:53 localhost nova_compute[230884]: Opteron_G3-v1 Dec 6 04:45:53 localhost nova_compute[230884]: Opteron_G4 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Opteron_G4-v1 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Opteron_G5 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Opteron_G5-v1 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Penryn Dec 6 04:45:53 localhost nova_compute[230884]: Penryn-v1 Dec 6 04:45:53 localhost nova_compute[230884]: SandyBridge Dec 6 04:45:53 localhost nova_compute[230884]: SandyBridge-IBRS Dec 6 04:45:53 localhost nova_compute[230884]: SandyBridge-v1 Dec 6 04:45:53 localhost nova_compute[230884]: SandyBridge-v2 Dec 6 04:45:53 
Dec 6 04:45:53 localhost nova_compute[230884]: SapphireRapids
Dec 6 04:45:53 localhost nova_compute[230884]: SapphireRapids-v1
Dec 6 04:45:53 localhost nova_compute[230884]: SapphireRapids-v2
Dec 6 04:45:53 localhost nova_compute[230884]: SapphireRapids-v3
Dec 6 04:45:53 localhost nova_compute[230884]: SierraForest
Dec 6 04:45:53 localhost nova_compute[230884]: SierraForest-v1
Dec 6 04:45:53 localhost nova_compute[230884]: Skylake-Client
Dec 6 04:45:53 localhost nova_compute[230884]: Skylake-Client-IBRS
Dec 6 04:45:53 localhost nova_compute[230884]: Skylake-Client-noTSX-IBRS
Dec 6 04:45:53 localhost nova_compute[230884]: Skylake-Client-v1
Dec 6 04:45:53 localhost nova_compute[230884]: Skylake-Client-v2
Dec 6 04:45:53 localhost nova_compute[230884]: Skylake-Client-v3
Dec 6 04:45:53 localhost nova_compute[230884]: Skylake-Client-v4
Dec 6 04:45:53 localhost nova_compute[230884]: Skylake-Server
Dec 6 04:45:53 localhost nova_compute[230884]: Skylake-Server-IBRS
Dec 6 04:45:53 localhost nova_compute[230884]: Skylake-Server-noTSX-IBRS
Dec 6 04:45:53 localhost nova_compute[230884]: Skylake-Server-v1
Dec 6 04:45:53 localhost nova_compute[230884]: Skylake-Server-v2
Dec 6 04:45:53 localhost nova_compute[230884]: Skylake-Server-v3
Dec 6 04:45:53 localhost nova_compute[230884]: Skylake-Server-v4
Dec 6 04:45:53 localhost nova_compute[230884]: Skylake-Server-v5
Dec 6 04:45:53 localhost nova_compute[230884]: Snowridge
Dec 6 04:45:53 localhost nova_compute[230884]: Snowridge-v1
Dec 6 04:45:53 localhost nova_compute[230884]: Snowridge-v2
Dec 6 04:45:53 localhost nova_compute[230884]: Snowridge-v3
Dec 6 04:45:53 localhost nova_compute[230884]: Snowridge-v4
Dec 6 04:45:53 localhost nova_compute[230884]: Westmere
Dec 6 04:45:53 localhost nova_compute[230884]: Westmere-IBRS
Dec 6 04:45:53 localhost nova_compute[230884]: Westmere-v1
Dec 6 04:45:53 localhost nova_compute[230884]: Westmere-v2
Dec 6 04:45:53 localhost nova_compute[230884]: athlon
Dec 6 04:45:53 localhost nova_compute[230884]: athlon-v1
Dec 6 04:45:53 localhost nova_compute[230884]: core2duo
Dec 6 04:45:53 localhost nova_compute[230884]: core2duo-v1
Dec 6 04:45:53 localhost nova_compute[230884]: coreduo
Dec 6 04:45:53 localhost nova_compute[230884]: coreduo-v1
Dec 6 04:45:53 localhost nova_compute[230884]: kvm32
Dec 6 04:45:53 localhost nova_compute[230884]: kvm32-v1
Dec 6 04:45:53 localhost nova_compute[230884]: kvm64
Dec 6 04:45:53 localhost nova_compute[230884]: kvm64-v1
Dec 6 04:45:53 localhost nova_compute[230884]: n270
Dec 6 04:45:53 localhost nova_compute[230884]: n270-v1
Dec 6 04:45:53 localhost nova_compute[230884]: pentium
Dec 6 04:45:53 localhost nova_compute[230884]: pentium-v1
Dec 6 04:45:53 localhost nova_compute[230884]: pentium2
Dec 6 04:45:53 localhost nova_compute[230884]: pentium2-v1
Dec 6 04:45:53 localhost nova_compute[230884]: pentium3
Dec 6 04:45:53 localhost nova_compute[230884]: pentium3-v1
Dec 6 04:45:53 localhost nova_compute[230884]: phenom
Dec 6 04:45:53 localhost nova_compute[230884]: phenom-v1
Dec 6 04:45:53 localhost nova_compute[230884]: qemu32
Dec 6 04:45:53 localhost nova_compute[230884]: qemu32-v1
Dec 6 04:45:53 localhost nova_compute[230884]: qemu64
Dec 6 04:45:53 localhost nova_compute[230884]: qemu64-v1
Dec 6 04:45:53 localhost nova_compute[230884]: file
Dec 6 04:45:53 localhost nova_compute[230884]: anonymous
Dec 6 04:45:53 localhost nova_compute[230884]: memfd
Dec 6 04:45:53 localhost nova_compute[230884]: disk
Dec 6 04:45:53 localhost nova_compute[230884]: cdrom
Dec 6 04:45:53 localhost nova_compute[230884]: floppy
Dec 6 04:45:53 localhost nova_compute[230884]: lun
Dec 6 04:45:53 localhost nova_compute[230884]: ide
Dec 6 04:45:53 localhost nova_compute[230884]: fdc
Dec 6 04:45:53 localhost nova_compute[230884]: scsi
Dec 6 04:45:53 localhost nova_compute[230884]: virtio
Dec 6 04:45:53 localhost nova_compute[230884]: usb
Dec 6 04:45:53 localhost nova_compute[230884]: sata
Dec 6 04:45:53 localhost nova_compute[230884]: virtio
Dec 6 04:45:53 localhost nova_compute[230884]: virtio-transitional
Dec 6 04:45:53 localhost nova_compute[230884]: virtio-non-transitional
Dec 6 04:45:53 localhost nova_compute[230884]: vnc
Dec 6 04:45:53 localhost nova_compute[230884]: egl-headless
Dec 6 04:45:53 localhost nova_compute[230884]: dbus
Dec 6 04:45:53 localhost nova_compute[230884]: subsystem
Dec 6 04:45:53 localhost nova_compute[230884]: default
Dec 6 04:45:53 localhost nova_compute[230884]: mandatory
Dec 6 04:45:53 localhost nova_compute[230884]: requisite
Dec 6 04:45:53 localhost nova_compute[230884]: optional
Dec 6 04:45:53 localhost nova_compute[230884]: usb
Dec 6 04:45:53 localhost nova_compute[230884]: pci
Dec 6 04:45:53 localhost nova_compute[230884]: scsi
Dec 6 04:45:53 localhost nova_compute[230884]: virtio
Dec 6 04:45:53 localhost nova_compute[230884]: virtio-transitional
Dec 6 04:45:53 localhost nova_compute[230884]: virtio-non-transitional
Dec 6 04:45:53 localhost nova_compute[230884]: random
Dec 6 04:45:53 localhost nova_compute[230884]: egd
Dec 6 04:45:53 localhost nova_compute[230884]: builtin
Dec 6 04:45:53 localhost nova_compute[230884]: path
Dec 6 04:45:53 localhost nova_compute[230884]: handle
Dec 6 04:45:53 localhost nova_compute[230884]: virtiofs
Dec 6 04:45:53 localhost nova_compute[230884]: tpm-tis
Dec 6 04:45:53 localhost nova_compute[230884]: tpm-crb
Dec 6 04:45:53 localhost nova_compute[230884]: emulator
Dec 6 04:45:53 localhost nova_compute[230884]: external
Dec 6 04:45:53 localhost nova_compute[230884]: 2.0
Dec 6 04:45:53 localhost nova_compute[230884]: usb
Dec 6 04:45:53 localhost nova_compute[230884]: pty
Dec 6 04:45:53 localhost nova_compute[230884]: unix
Dec 6 04:45:53 localhost nova_compute[230884]: qemu
Dec 6 04:45:53 localhost nova_compute[230884]: builtin
Dec 6 04:45:53 localhost nova_compute[230884]: default
Dec 6 04:45:53 localhost nova_compute[230884]: passt
Dec 6 04:45:53 localhost nova_compute[230884]: isa
Dec 6 04:45:53 localhost nova_compute[230884]: hyperv
Dec 6 04:45:53 localhost nova_compute[230884]: null
Dec 6 04:45:53 localhost nova_compute[230884]: vc
Dec 6 04:45:53 localhost nova_compute[230884]: pty
Dec 6 04:45:53 localhost nova_compute[230884]: dev
Dec 6 04:45:53 localhost nova_compute[230884]: file
Dec 6 04:45:53 localhost nova_compute[230884]: pipe
Dec 6 04:45:53 localhost nova_compute[230884]: stdio
Dec 6 04:45:53 localhost nova_compute[230884]: udp
Dec 6 04:45:53 localhost nova_compute[230884]: tcp
Dec 6 04:45:53 localhost nova_compute[230884]: unix
Dec 6 04:45:53 localhost nova_compute[230884]: qemu-vdagent
Dec 6 04:45:53 localhost nova_compute[230884]: dbus
Dec 6 04:45:53 localhost nova_compute[230884]: relaxed
Dec 6 04:45:53 localhost nova_compute[230884]: vapic
Dec 6 04:45:53 localhost nova_compute[230884]: spinlocks
Dec 6 04:45:53 localhost nova_compute[230884]: vpindex
Dec 6 04:45:53 localhost nova_compute[230884]: runtime
Dec 6 04:45:53 localhost nova_compute[230884]: synic
Dec 6 04:45:53 localhost nova_compute[230884]: stimer
Dec 6 04:45:53 localhost nova_compute[230884]: reset
Dec 6 04:45:53 localhost nova_compute[230884]: vendor_id
Dec 6 04:45:53 localhost nova_compute[230884]: frequencies
Dec 6 04:45:53 localhost nova_compute[230884]: reenlightenment
Dec 6 04:45:53 localhost nova_compute[230884]: tlbflush
Dec 6 04:45:53 localhost nova_compute[230884]: ipi
Dec 6 04:45:53 localhost nova_compute[230884]: avic
Dec 6 04:45:53 localhost nova_compute[230884]: emsr_bitmap
Dec 6 04:45:53 localhost nova_compute[230884]: xmm_input
Dec 6 04:45:53 localhost nova_compute[230884]: 4095
Dec 6 04:45:53 localhost nova_compute[230884]: on
Dec 6 04:45:53 localhost nova_compute[230884]: off
Dec 6 04:45:53 localhost nova_compute[230884]: off
Dec 6 04:45:53 localhost nova_compute[230884]: Linux KVM Hv
Dec 6 04:45:53 localhost nova_compute[230884]: tdx
Dec 6 04:45:53 localhost nova_compute[230884]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.240 230888 DEBUG nova.virt.libvirt.host [None req-e1f88f64-31d2-42bc-8944-ab08e0aca269 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Dec 6 04:45:53 localhost nova_compute[230884]: /usr/libexec/qemu-kvm
Dec 6 04:45:53 localhost nova_compute[230884]: kvm
Dec 6 04:45:53 localhost nova_compute[230884]: pc-q35-rhel9.8.0
Dec 6 04:45:53 localhost nova_compute[230884]: i686
Dec 6 04:45:53 localhost nova_compute[230884]: /usr/share/OVMF/OVMF_CODE.secboot.fd
Dec 6 04:45:53 localhost nova_compute[230884]: rom
Dec 6 04:45:53 localhost nova_compute[230884]: pflash
Dec 6 04:45:53 localhost nova_compute[230884]: yes
Dec 6 04:45:53 localhost nova_compute[230884]: no
Dec 6 04:45:53 localhost nova_compute[230884]: no
Dec 6 04:45:53 localhost nova_compute[230884]: on
Dec 6 04:45:53 localhost nova_compute[230884]: off
Dec 6 04:45:53 localhost nova_compute[230884]: on
Dec 6 04:45:53 localhost nova_compute[230884]: off
Dec 6 04:45:53 localhost nova_compute[230884]: EPYC-Rome
Dec 6 04:45:53 localhost nova_compute[230884]: AMD
Dec 6 04:45:53 localhost nova_compute[230884]: 486
Dec 6 04:45:53 localhost nova_compute[230884]: 486-v1
Dec 6 04:45:53 localhost nova_compute[230884]: Broadwell
Dec 6 04:45:53 localhost nova_compute[230884]: Broadwell-IBRS
Dec 6 04:45:53 localhost nova_compute[230884]: Broadwell-noTSX
Dec 6 04:45:53 localhost nova_compute[230884]: Broadwell-noTSX-IBRS
Dec 6 04:45:53 localhost nova_compute[230884]: Broadwell-v1
Dec 6 04:45:53 localhost nova_compute[230884]: Broadwell-v2
Dec 6 04:45:53 localhost nova_compute[230884]: Broadwell-v3
Dec 6 04:45:53 localhost nova_compute[230884]: Broadwell-v4
Dec 6 04:45:53 localhost nova_compute[230884]: Cascadelake-Server
04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Cascadelake-Server-noTSX Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Cascadelake-Server-v1 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Cascadelake-Server-v2 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 
04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Cascadelake-Server-v3 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Cascadelake-Server-v4 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost 
nova_compute[230884]: Cascadelake-Server-v5 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Conroe Dec 6 04:45:53 localhost nova_compute[230884]: Conroe-v1 Dec 6 04:45:53 localhost nova_compute[230884]: Cooperlake Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Cooperlake-v1 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost 
nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Cooperlake-v2 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Denverton Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Denverton-v1 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Denverton-v2 
Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Denverton-v3 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dhyana Dec 6 04:45:53 localhost nova_compute[230884]: Dhyana-v1 Dec 6 04:45:53 localhost nova_compute[230884]: Dhyana-v2 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: EPYC Dec 6 04:45:53 localhost nova_compute[230884]: EPYC-Genoa Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 
localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: EPYC-Genoa-v1 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: EPYC-IBPB Dec 6 04:45:53 localhost nova_compute[230884]: EPYC-Milan Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 
04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: EPYC-Milan-v1 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: EPYC-Milan-v2 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: EPYC-Rome Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: EPYC-Rome-v1 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: EPYC-Rome-v2 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: EPYC-Rome-v3 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: 
Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: EPYC-Rome-v4 Dec 6 04:45:53 localhost nova_compute[230884]: EPYC-v1 Dec 6 04:45:53 localhost nova_compute[230884]: EPYC-v2 Dec 6 04:45:53 localhost nova_compute[230884]: EPYC-v3 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: EPYC-v4 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: GraniteRapids Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 
04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: GraniteRapids-v1 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 
localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: GraniteRapids-v2 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 
localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Haswell Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost 
nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Haswell-IBRS Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Haswell-noTSX Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Haswell-noTSX-IBRS Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Haswell-v1 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Haswell-v2 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Haswell-v3 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 
localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Haswell-v4 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Icelake-Server Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Icelake-Server-noTSX Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost 
nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Icelake-Server-v1 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Icelake-Server-v2 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 
Dec 6 04:45:53 localhost nova_compute[230884]: [libvirt domain capabilities XML dump; element markup was lost in log capture, leaving only element values. Recoverable values, grouped by apparent capability section:]
Dec 6 04:45:53 localhost nova_compute[230884]: CPU models: Icelake-Server-v3 Icelake-Server-v4 Icelake-Server-v5 Icelake-Server-v6 Icelake-Server-v7 IvyBridge IvyBridge-IBRS IvyBridge-v1 IvyBridge-v2 KnightsMill KnightsMill-v1 Nehalem Nehalem-IBRS Nehalem-v1 Nehalem-v2 Opteron_G1 Opteron_G1-v1 Opteron_G2 Opteron_G2-v1 Opteron_G3 Opteron_G3-v1 Opteron_G4 Opteron_G4-v1 Opteron_G5 Opteron_G5-v1 Penryn Penryn-v1 SandyBridge SandyBridge-IBRS SandyBridge-v1 SandyBridge-v2 SapphireRapids SapphireRapids-v1 SapphireRapids-v2 SapphireRapids-v3 SierraForest SierraForest-v1 Skylake-Client Skylake-Client-IBRS Skylake-Client-noTSX-IBRS Skylake-Client-v1 Skylake-Client-v2 Skylake-Client-v3 Skylake-Client-v4 Skylake-Server Skylake-Server-IBRS Skylake-Server-noTSX-IBRS Skylake-Server-v1 Skylake-Server-v2 Skylake-Server-v3 Skylake-Server-v4 Skylake-Server-v5 Snowridge Snowridge-v1 Snowridge-v2 Snowridge-v3 Snowridge-v4 Westmere Westmere-IBRS Westmere-v1 Westmere-v2 athlon athlon-v1 core2duo core2duo-v1 coreduo coreduo-v1 kvm32 kvm32-v1 kvm64 kvm64-v1 n270 n270-v1 pentium pentium-v1 pentium2 pentium2-v1 pentium3 pentium3-v1 phenom phenom-v1 qemu32 qemu32-v1 qemu64 qemu64-v1
Dec 6 04:45:53 localhost nova_compute[230884]: memory backing source types: file anonymous memfd
Dec 6 04:45:53 localhost nova_compute[230884]: disk device types: disk cdrom floppy lun; buses: fdc scsi virtio usb sata; models: virtio virtio-transitional virtio-non-transitional
Dec 6 04:45:53 localhost nova_compute[230884]: graphics types: vnc egl-headless dbus
Dec 6 04:45:53 localhost nova_compute[230884]: hostdev mode: subsystem; startup policies: default mandatory requisite optional; subsystem types: usb pci scsi
Dec 6 04:45:53 localhost nova_compute[230884]: rng models: virtio virtio-transitional virtio-non-transitional; backends: random egd builtin
Dec 6 04:45:53 localhost nova_compute[230884]: filesystem driver types: path handle virtiofs
Dec 6 04:45:53 localhost nova_compute[230884]: tpm models: tpm-tis tpm-crb; backends: emulator external; backend version: 2.0
Dec 6 04:45:53 localhost nova_compute[230884]: redirdev bus: usb; chardev types: pty unix
Dec 6 04:45:53 localhost nova_compute[230884]: crypto backends: qemu builtin; interface backends: default passt; panic models: isa hyperv
Dec 6 04:45:53 localhost nova_compute[230884]: console/channel types: null vc pty dev file pipe stdio udp tcp unix qemu-vdagent dbus
localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: relaxed Dec 6 04:45:53 localhost nova_compute[230884]: vapic Dec 6 04:45:53 localhost nova_compute[230884]: spinlocks Dec 6 04:45:53 localhost nova_compute[230884]: vpindex Dec 6 04:45:53 localhost nova_compute[230884]: runtime Dec 6 04:45:53 localhost nova_compute[230884]: synic Dec 6 04:45:53 localhost nova_compute[230884]: stimer Dec 6 04:45:53 localhost nova_compute[230884]: reset Dec 6 04:45:53 localhost nova_compute[230884]: vendor_id Dec 6 04:45:53 localhost nova_compute[230884]: frequencies Dec 6 04:45:53 localhost nova_compute[230884]: reenlightenment Dec 6 04:45:53 localhost nova_compute[230884]: tlbflush Dec 6 04:45:53 localhost nova_compute[230884]: ipi Dec 6 04:45:53 localhost nova_compute[230884]: avic Dec 6 04:45:53 localhost nova_compute[230884]: emsr_bitmap Dec 6 04:45:53 localhost nova_compute[230884]: xmm_input Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: 4095 Dec 6 04:45:53 localhost nova_compute[230884]: on Dec 6 04:45:53 localhost nova_compute[230884]: off Dec 6 04:45:53 localhost nova_compute[230884]: off Dec 6 04:45:53 localhost nova_compute[230884]: Linux KVM Hv Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: tdx Dec 6 
04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.261 230888 DEBUG nova.virt.libvirt.host [None req-e1f88f64-31d2-42bc-8944-ab08e0aca269 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.265 230888 DEBUG nova.virt.libvirt.host [None req-e1f88f64-31d2-42bc-8944-ab08e0aca269 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: /usr/libexec/qemu-kvm Dec 6 04:45:53 localhost nova_compute[230884]: kvm Dec 6 04:45:53 localhost nova_compute[230884]: pc-i440fx-rhel7.6.0 Dec 6 04:45:53 localhost nova_compute[230884]: x86_64 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: /usr/share/OVMF/OVMF_CODE.secboot.fd Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: rom Dec 6 04:45:53 localhost nova_compute[230884]: pflash Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: yes Dec 6 04:45:53 localhost nova_compute[230884]: no Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: no Dec 6 
04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: on Dec 6 04:45:53 localhost nova_compute[230884]: off Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: on Dec 6 04:45:53 localhost nova_compute[230884]: off Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: EPYC-Rome Dec 6 04:45:53 localhost nova_compute[230884]: AMD Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 
6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: 486 Dec 6 04:45:53 localhost nova_compute[230884]: 486-v1 Dec 6 04:45:53 localhost nova_compute[230884]: Broadwell Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Broadwell-IBRS Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Broadwell-noTSX Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Broadwell-noTSX-IBRS Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Broadwell-v1 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Broadwell-v2 Dec 6 04:45:53 localhost 
nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Broadwell-v3 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Broadwell-v4 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Cascadelake-Server Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Cascadelake-Server-noTSX Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost 
nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Cascadelake-Server-v1 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Cascadelake-Server-v2 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Cascadelake-Server-v3 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost 
nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Cascadelake-Server-v4 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Cascadelake-Server-v5 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Conroe Dec 6 04:45:53 localhost nova_compute[230884]: 
Conroe-v1 Dec 6 04:45:53 localhost nova_compute[230884]: Cooperlake Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Cooperlake-v1 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Cooperlake-v2 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 
localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Denverton Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Denverton-v1 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Denverton-v2 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Denverton-v3 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dhyana Dec 6 04:45:53 localhost nova_compute[230884]: Dhyana-v1 Dec 6 04:45:53 localhost nova_compute[230884]: Dhyana-v2 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: EPYC Dec 6 04:45:53 localhost nova_compute[230884]: 
EPYC-Genoa Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: EPYC-Genoa-v1 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: 
Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: EPYC-IBPB Dec 6 04:45:53 localhost nova_compute[230884]: EPYC-Milan Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: EPYC-Milan-v1 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: EPYC-Milan-v2 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 
localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: EPYC-Rome Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: EPYC-Rome-v1 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: EPYC-Rome-v2 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: EPYC-Rome-v3 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: EPYC-Rome-v4 Dec 6 04:45:53 localhost nova_compute[230884]: EPYC-v1 Dec 6 04:45:53 localhost nova_compute[230884]: EPYC-v2 Dec 6 04:45:53 localhost nova_compute[230884]: EPYC-v3 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: EPYC-v4 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: GraniteRapids Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost 
nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 
6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: GraniteRapids-v1 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 
Dec 6 04:45:53 localhost nova_compute[230884]: GraniteRapids-v2
Dec 6 04:45:53 localhost nova_compute[230884]: Haswell
Dec 6 04:45:53 localhost nova_compute[230884]: Haswell-IBRS
Dec 6 04:45:53 localhost nova_compute[230884]: Haswell-noTSX
Dec 6 04:45:53 localhost nova_compute[230884]: Haswell-noTSX-IBRS
Dec 6 04:45:53 localhost nova_compute[230884]: Haswell-v1
Dec 6 04:45:53 localhost nova_compute[230884]: Haswell-v2
Dec 6 04:45:53 localhost nova_compute[230884]: Haswell-v3
Dec 6 04:45:53 localhost nova_compute[230884]: Haswell-v4
Dec 6 04:45:53 localhost nova_compute[230884]: Icelake-Server
Dec 6 04:45:53 localhost nova_compute[230884]: Icelake-Server-noTSX
Dec 6 04:45:53 localhost nova_compute[230884]: Icelake-Server-v1
Dec 6 04:45:53 localhost nova_compute[230884]: Icelake-Server-v2
Dec 6 04:45:53 localhost nova_compute[230884]: Icelake-Server-v3
Dec 6 04:45:53 localhost nova_compute[230884]: Icelake-Server-v4
Dec 6 04:45:53 localhost nova_compute[230884]: Icelake-Server-v5
Dec 6 04:45:53 localhost nova_compute[230884]: Icelake-Server-v6
Dec 6 04:45:53 localhost nova_compute[230884]: Icelake-Server-v7
Dec 6 04:45:53 localhost nova_compute[230884]: IvyBridge
Dec 6 04:45:53 localhost nova_compute[230884]: IvyBridge-IBRS
Dec 6 04:45:53 localhost nova_compute[230884]: IvyBridge-v1
Dec 6 04:45:53 localhost nova_compute[230884]: IvyBridge-v2
Dec 6 04:45:53 localhost nova_compute[230884]: KnightsMill
Dec 6 04:45:53 localhost nova_compute[230884]: KnightsMill-v1
Dec 6 04:45:53 localhost nova_compute[230884]: Nehalem
Dec 6 04:45:53 localhost nova_compute[230884]: Nehalem-IBRS
Dec 6 04:45:53 localhost nova_compute[230884]: Nehalem-v1
Dec 6 04:45:53 localhost nova_compute[230884]: Nehalem-v2
Dec 6 04:45:53 localhost nova_compute[230884]: Opteron_G1
Dec 6 04:45:53 localhost nova_compute[230884]: Opteron_G1-v1
Dec 6 04:45:53 localhost nova_compute[230884]: Opteron_G2
Dec 6 04:45:53 localhost nova_compute[230884]: Opteron_G2-v1
Dec 6 04:45:53 localhost nova_compute[230884]: Opteron_G3
Dec 6 04:45:53 localhost nova_compute[230884]: Opteron_G3-v1
Dec 6 04:45:53 localhost nova_compute[230884]: Opteron_G4
Dec 6 04:45:53 localhost nova_compute[230884]: Opteron_G4-v1
Dec 6 04:45:53 localhost nova_compute[230884]: Opteron_G5
Dec 6 04:45:53 localhost nova_compute[230884]: Opteron_G5-v1
Dec 6 04:45:53 localhost nova_compute[230884]: Penryn
Dec 6 04:45:53 localhost nova_compute[230884]: Penryn-v1
Dec 6 04:45:53 localhost nova_compute[230884]: SandyBridge
Dec 6 04:45:53 localhost nova_compute[230884]: SandyBridge-IBRS
Dec 6 04:45:53 localhost nova_compute[230884]: SandyBridge-v1
Dec 6 04:45:53 localhost nova_compute[230884]: SandyBridge-v2
Dec 6 04:45:53 localhost nova_compute[230884]: SapphireRapids
Dec 6 04:45:53 localhost nova_compute[230884]: SapphireRapids-v1
Dec 6 04:45:53 localhost nova_compute[230884]: SapphireRapids-v2
Dec 6 04:45:53 localhost nova_compute[230884]: SapphireRapids-v3
Dec 6 04:45:53 localhost nova_compute[230884]: SierraForest
Dec 6 04:45:53 localhost nova_compute[230884]: SierraForest-v1
Dec 6 04:45:53 localhost nova_compute[230884]: Skylake-Client
Dec 6 04:45:53 localhost nova_compute[230884]: Skylake-Client-IBRS
Dec 6 04:45:53 localhost nova_compute[230884]: Skylake-Client-noTSX-IBRS
Dec 6 04:45:53 localhost nova_compute[230884]: Skylake-Client-v1
Dec 6 04:45:53 localhost nova_compute[230884]: Skylake-Client-v2
Dec 6 04:45:53 localhost nova_compute[230884]: Skylake-Client-v3
Dec 6 04:45:53 localhost nova_compute[230884]: Skylake-Client-v4
Dec 6 04:45:53 localhost nova_compute[230884]: Skylake-Server
Dec 6 04:45:53 localhost nova_compute[230884]: Skylake-Server-IBRS
Dec 6 04:45:53 localhost nova_compute[230884]: Skylake-Server-noTSX-IBRS
Dec 6 04:45:53 localhost nova_compute[230884]: Skylake-Server-v1
Dec 6 04:45:53 localhost nova_compute[230884]: Skylake-Server-v2
Dec 6 04:45:53 localhost nova_compute[230884]: Skylake-Server-v3
Dec 6 04:45:53 localhost nova_compute[230884]: Skylake-Server-v4
Dec 6 04:45:53 localhost nova_compute[230884]: Skylake-Server-v5
localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Snowridge Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Snowridge-v1 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Snowridge-v2 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 
6 04:45:53 localhost nova_compute[230884]: Snowridge-v3 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Snowridge-v4 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Westmere Dec 6 04:45:53 localhost nova_compute[230884]: Westmere-IBRS Dec 6 04:45:53 localhost nova_compute[230884]: Westmere-v1 Dec 6 04:45:53 localhost nova_compute[230884]: Westmere-v2 Dec 6 04:45:53 localhost nova_compute[230884]: athlon Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: athlon-v1 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: core2duo Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: core2duo-v1 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 
04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: coreduo Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: coreduo-v1 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: kvm32 Dec 6 04:45:53 localhost nova_compute[230884]: kvm32-v1 Dec 6 04:45:53 localhost nova_compute[230884]: kvm64 Dec 6 04:45:53 localhost nova_compute[230884]: kvm64-v1 Dec 6 04:45:53 localhost nova_compute[230884]: n270 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: n270-v1 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: pentium Dec 6 04:45:53 localhost nova_compute[230884]: pentium-v1 Dec 6 04:45:53 localhost nova_compute[230884]: pentium2 Dec 6 04:45:53 localhost nova_compute[230884]: pentium2-v1 Dec 6 04:45:53 localhost nova_compute[230884]: pentium3 Dec 6 04:45:53 localhost nova_compute[230884]: pentium3-v1 Dec 6 04:45:53 localhost nova_compute[230884]: phenom Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: phenom-v1 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: qemu32 Dec 6 04:45:53 localhost 
nova_compute[230884]: qemu32-v1 Dec 6 04:45:53 localhost nova_compute[230884]: qemu64 Dec 6 04:45:53 localhost nova_compute[230884]: qemu64-v1 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: file Dec 6 04:45:53 localhost nova_compute[230884]: anonymous Dec 6 04:45:53 localhost nova_compute[230884]: memfd Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: disk Dec 6 04:45:53 localhost nova_compute[230884]: cdrom Dec 6 04:45:53 localhost nova_compute[230884]: floppy Dec 6 04:45:53 localhost nova_compute[230884]: lun Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: ide Dec 6 04:45:53 localhost nova_compute[230884]: fdc Dec 6 04:45:53 localhost nova_compute[230884]: scsi Dec 6 04:45:53 localhost nova_compute[230884]: virtio Dec 6 04:45:53 localhost nova_compute[230884]: usb Dec 6 04:45:53 localhost nova_compute[230884]: sata Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: virtio Dec 6 04:45:53 localhost nova_compute[230884]: virtio-transitional Dec 6 04:45:53 localhost nova_compute[230884]: virtio-non-transitional Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: vnc Dec 6 04:45:53 localhost nova_compute[230884]: egl-headless Dec 6 04:45:53 localhost nova_compute[230884]: dbus Dec 6 04:45:53 
localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: subsystem Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: default Dec 6 04:45:53 localhost nova_compute[230884]: mandatory Dec 6 04:45:53 localhost nova_compute[230884]: requisite Dec 6 04:45:53 localhost nova_compute[230884]: optional Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: usb Dec 6 04:45:53 localhost nova_compute[230884]: pci Dec 6 04:45:53 localhost nova_compute[230884]: scsi Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: virtio Dec 6 04:45:53 localhost nova_compute[230884]: virtio-transitional Dec 6 04:45:53 localhost nova_compute[230884]: virtio-non-transitional Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: random Dec 6 04:45:53 localhost nova_compute[230884]: egd Dec 6 04:45:53 localhost nova_compute[230884]: builtin Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: path Dec 6 04:45:53 localhost nova_compute[230884]: handle Dec 6 04:45:53 localhost nova_compute[230884]: virtiofs Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost 
nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: tpm-tis Dec 6 04:45:53 localhost nova_compute[230884]: tpm-crb Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: emulator Dec 6 04:45:53 localhost nova_compute[230884]: external Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: 2.0 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: usb Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: pty Dec 6 04:45:53 localhost nova_compute[230884]: unix Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: qemu Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: builtin Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: default Dec 6 04:45:53 localhost nova_compute[230884]: passt Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 
localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: isa Dec 6 04:45:53 localhost nova_compute[230884]: hyperv Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: null Dec 6 04:45:53 localhost nova_compute[230884]: vc Dec 6 04:45:53 localhost nova_compute[230884]: pty Dec 6 04:45:53 localhost nova_compute[230884]: dev Dec 6 04:45:53 localhost nova_compute[230884]: file Dec 6 04:45:53 localhost nova_compute[230884]: pipe Dec 6 04:45:53 localhost nova_compute[230884]: stdio Dec 6 04:45:53 localhost nova_compute[230884]: udp Dec 6 04:45:53 localhost nova_compute[230884]: tcp Dec 6 04:45:53 localhost nova_compute[230884]: unix Dec 6 04:45:53 localhost nova_compute[230884]: qemu-vdagent Dec 6 04:45:53 localhost nova_compute[230884]: dbus Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: relaxed Dec 6 04:45:53 localhost nova_compute[230884]: vapic Dec 6 04:45:53 localhost nova_compute[230884]: spinlocks Dec 6 04:45:53 localhost nova_compute[230884]: vpindex Dec 6 04:45:53 localhost nova_compute[230884]: runtime Dec 6 04:45:53 localhost nova_compute[230884]: synic Dec 6 04:45:53 
localhost nova_compute[230884]: stimer Dec 6 04:45:53 localhost nova_compute[230884]: reset Dec 6 04:45:53 localhost nova_compute[230884]: vendor_id Dec 6 04:45:53 localhost nova_compute[230884]: frequencies Dec 6 04:45:53 localhost nova_compute[230884]: reenlightenment Dec 6 04:45:53 localhost nova_compute[230884]: tlbflush Dec 6 04:45:53 localhost nova_compute[230884]: ipi Dec 6 04:45:53 localhost nova_compute[230884]: avic Dec 6 04:45:53 localhost nova_compute[230884]: emsr_bitmap Dec 6 04:45:53 localhost nova_compute[230884]: xmm_input Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: 4095 Dec 6 04:45:53 localhost nova_compute[230884]: on Dec 6 04:45:53 localhost nova_compute[230884]: off Dec 6 04:45:53 localhost nova_compute[230884]: off Dec 6 04:45:53 localhost nova_compute[230884]: Linux KVM Hv Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: tdx Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.317 230888 DEBUG nova.virt.libvirt.host [None req-e1f88f64-31d2-42bc-8944-ab08e0aca269 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: /usr/libexec/qemu-kvm Dec 6 04:45:53 localhost nova_compute[230884]: kvm Dec 6 04:45:53 localhost nova_compute[230884]: pc-q35-rhel9.8.0 Dec 6 04:45:53 localhost nova_compute[230884]: 
x86_64 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: efi Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: /usr/share/edk2/ovmf/OVMF_CODE.secboot.fd Dec 6 04:45:53 localhost nova_compute[230884]: /usr/share/edk2/ovmf/OVMF_CODE.fd Dec 6 04:45:53 localhost nova_compute[230884]: /usr/share/edk2/ovmf/OVMF.amdsev.fd Dec 6 04:45:53 localhost nova_compute[230884]: /usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: rom Dec 6 04:45:53 localhost nova_compute[230884]: pflash Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: yes Dec 6 04:45:53 localhost nova_compute[230884]: no Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: yes Dec 6 04:45:53 localhost nova_compute[230884]: no Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: on Dec 6 04:45:53 localhost nova_compute[230884]: off Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: on Dec 6 04:45:53 localhost nova_compute[230884]: off Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 
04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: EPYC-Rome Dec 6 04:45:53 localhost nova_compute[230884]: AMD Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: 486 Dec 6 04:45:53 localhost nova_compute[230884]: 486-v1 Dec 6 04:45:53 localhost nova_compute[230884]: Broadwell Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Broadwell-IBRS Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost 
nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Broadwell-noTSX Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Broadwell-noTSX-IBRS Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Broadwell-v1 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Broadwell-v2 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Broadwell-v3 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Broadwell-v4 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 
04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Cascadelake-Server Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Cascadelake-Server-noTSX Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Cascadelake-Server-v1 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 
04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Cascadelake-Server-v2 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Cascadelake-Server-v3 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Cascadelake-Server-v4 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 
04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Cascadelake-Server-v5 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Conroe Dec 6 04:45:53 localhost nova_compute[230884]: Conroe-v1 Dec 6 04:45:53 localhost nova_compute[230884]: Cooperlake Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost 
Dec 6 04:45:53 localhost nova_compute[230884]: Cooperlake-v1
Dec 6 04:45:53 localhost nova_compute[230884]: Cooperlake-v2
Dec 6 04:45:53 localhost nova_compute[230884]: Denverton
Dec 6 04:45:53 localhost nova_compute[230884]: Denverton-v1
Dec 6 04:45:53 localhost nova_compute[230884]: Denverton-v2
Dec 6 04:45:53 localhost nova_compute[230884]: Denverton-v3
Dec 6 04:45:53 localhost nova_compute[230884]: Dhyana
Dec 6 04:45:53 localhost nova_compute[230884]: Dhyana-v1
Dec 6 04:45:53 localhost nova_compute[230884]: Dhyana-v2
Dec 6 04:45:53 localhost nova_compute[230884]: EPYC
Dec 6 04:45:53 localhost nova_compute[230884]: EPYC-Genoa
Dec 6 04:45:53 localhost nova_compute[230884]: EPYC-Genoa-v1
Dec 6 04:45:53 localhost nova_compute[230884]: EPYC-IBPB
Dec 6 04:45:53 localhost nova_compute[230884]: EPYC-Milan
Dec 6 04:45:53 localhost nova_compute[230884]: EPYC-Milan-v1
Dec 6 04:45:53 localhost nova_compute[230884]: EPYC-Milan-v2
Dec 6 04:45:53 localhost nova_compute[230884]: EPYC-Rome
Dec 6 04:45:53 localhost nova_compute[230884]: EPYC-Rome-v1
Dec 6 04:45:53 localhost nova_compute[230884]: EPYC-Rome-v2
Dec 6 04:45:53 localhost nova_compute[230884]: EPYC-Rome-v3
Dec 6 04:45:53 localhost nova_compute[230884]: EPYC-Rome-v4
Dec 6 04:45:53 localhost nova_compute[230884]: EPYC-v1
Dec 6 04:45:53 localhost nova_compute[230884]: EPYC-v2
Dec 6 04:45:53 localhost nova_compute[230884]: EPYC-v3
Dec 6 04:45:53 localhost nova_compute[230884]: EPYC-v4
Dec 6 04:45:53 localhost nova_compute[230884]: GraniteRapids
Dec 6 04:45:53 localhost nova_compute[230884]: GraniteRapids-v1
Dec 6 04:45:53 localhost nova_compute[230884]: GraniteRapids-v2
Dec 6 04:45:53 localhost nova_compute[230884]: Haswell
Dec 6 04:45:53 localhost nova_compute[230884]: Haswell-IBRS
Dec 6 04:45:53 localhost nova_compute[230884]: Haswell-noTSX
Dec 6 04:45:53 localhost nova_compute[230884]: Haswell-noTSX-IBRS
Dec 6 04:45:53 localhost nova_compute[230884]: Haswell-v1
Dec 6 04:45:53 localhost nova_compute[230884]: Haswell-v2
Dec 6 04:45:53 localhost nova_compute[230884]: Haswell-v3
Dec 6 04:45:53 localhost nova_compute[230884]: Haswell-v4
Dec 6 04:45:53 localhost nova_compute[230884]: Icelake-Server
Dec 6 04:45:53 localhost nova_compute[230884]: Icelake-Server-noTSX
Dec 6 04:45:53 localhost nova_compute[230884]: Icelake-Server-v1
Dec 6 04:45:53 localhost nova_compute[230884]: Icelake-Server-v2
Dec 6 04:45:53 localhost nova_compute[230884]: Icelake-Server-v3
Dec 6 04:45:53 localhost nova_compute[230884]: Icelake-Server-v4
Dec 6 04:45:53 localhost nova_compute[230884]: Icelake-Server-v5
Dec 6 04:45:53 localhost nova_compute[230884]: Icelake-Server-v6
Dec 6 04:45:53 localhost nova_compute[230884]: Icelake-Server-v7
Dec 6 04:45:53 localhost nova_compute[230884]: IvyBridge
Dec 6 04:45:53 localhost nova_compute[230884]: IvyBridge-IBRS
Dec 6 04:45:53 localhost nova_compute[230884]: IvyBridge-v1
Dec 6 04:45:53 localhost nova_compute[230884]: IvyBridge-v2
Dec 6 04:45:53 localhost nova_compute[230884]: KnightsMill
Dec 6 04:45:53 localhost nova_compute[230884]: KnightsMill-v1
Dec 6 04:45:53 localhost nova_compute[230884]: Nehalem
Dec 6 04:45:53 localhost nova_compute[230884]: Nehalem-IBRS
Dec 6 04:45:53 localhost nova_compute[230884]: Nehalem-v1
Dec 6 04:45:53 localhost nova_compute[230884]: Nehalem-v2
Dec 6 04:45:53 localhost nova_compute[230884]: Opteron_G1
Dec 6 04:45:53 localhost nova_compute[230884]: Opteron_G1-v1
Dec 6 04:45:53 localhost nova_compute[230884]: Opteron_G2
Dec 6 04:45:53 localhost nova_compute[230884]: Opteron_G2-v1
Dec 6 04:45:53 localhost nova_compute[230884]: Opteron_G3
Dec 6 04:45:53 localhost nova_compute[230884]: Opteron_G3-v1
Dec 6 04:45:53 localhost nova_compute[230884]: Opteron_G4
Dec 6 04:45:53 localhost nova_compute[230884]: Opteron_G4-v1
Dec 6 04:45:53 localhost nova_compute[230884]: Opteron_G5
Dec 6 04:45:53 localhost nova_compute[230884]: Opteron_G5-v1
Dec 6 04:45:53 localhost nova_compute[230884]: Penryn
Dec 6 04:45:53 localhost nova_compute[230884]: Penryn-v1
Dec 6 04:45:53 localhost nova_compute[230884]: SandyBridge
Dec 6 04:45:53 localhost nova_compute[230884]: SandyBridge-IBRS
Dec 6 04:45:53 localhost nova_compute[230884]: SandyBridge-v1
Dec 6 04:45:53 localhost nova_compute[230884]: SandyBridge-v2
Dec 6 04:45:53 localhost nova_compute[230884]: SapphireRapids
Dec 6 04:45:53 localhost nova_compute[230884]: SapphireRapids-v1
Dec 6 04:45:53 localhost nova_compute[230884]: SapphireRapids-v2
Dec 6 04:45:53 localhost nova_compute[230884]: SapphireRapids-v3
Dec 6 04:45:53 localhost nova_compute[230884]: SierraForest
nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: SierraForest-v1 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost 
nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Skylake-Client Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Skylake-Client-IBRS Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Skylake-Client-noTSX-IBRS Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Skylake-Client-v1 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Skylake-Client-v2 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: 
Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Skylake-Client-v3 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Skylake-Client-v4 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Skylake-Server Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Skylake-Server-IBRS Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: 
Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Skylake-Server-noTSX-IBRS Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Skylake-Server-v1 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Skylake-Server-v2 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 
04:45:53 localhost nova_compute[230884]: Skylake-Server-v3 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Skylake-Server-v4 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Skylake-Server-v5 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Snowridge Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 
localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Snowridge-v1 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Snowridge-v2 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Snowridge-v3 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Snowridge-v4 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost 
nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Westmere Dec 6 04:45:53 localhost nova_compute[230884]: Westmere-IBRS Dec 6 04:45:53 localhost nova_compute[230884]: Westmere-v1 Dec 6 04:45:53 localhost nova_compute[230884]: Westmere-v2 Dec 6 04:45:53 localhost nova_compute[230884]: athlon Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: athlon-v1 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: core2duo Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: core2duo-v1 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: coreduo Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: coreduo-v1 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: kvm32 Dec 6 04:45:53 localhost nova_compute[230884]: kvm32-v1 Dec 6 04:45:53 localhost nova_compute[230884]: kvm64 Dec 6 04:45:53 
localhost nova_compute[230884]: kvm64-v1 Dec 6 04:45:53 localhost nova_compute[230884]: n270 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: n270-v1 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: pentium Dec 6 04:45:53 localhost nova_compute[230884]: pentium-v1 Dec 6 04:45:53 localhost nova_compute[230884]: pentium2 Dec 6 04:45:53 localhost nova_compute[230884]: pentium2-v1 Dec 6 04:45:53 localhost nova_compute[230884]: pentium3 Dec 6 04:45:53 localhost nova_compute[230884]: pentium3-v1 Dec 6 04:45:53 localhost nova_compute[230884]: phenom Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: phenom-v1 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: qemu32 Dec 6 04:45:53 localhost nova_compute[230884]: qemu32-v1 Dec 6 04:45:53 localhost nova_compute[230884]: qemu64 Dec 6 04:45:53 localhost nova_compute[230884]: qemu64-v1 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: file Dec 6 04:45:53 localhost nova_compute[230884]: anonymous Dec 6 04:45:53 localhost nova_compute[230884]: memfd Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost 
nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: disk Dec 6 04:45:53 localhost nova_compute[230884]: cdrom Dec 6 04:45:53 localhost nova_compute[230884]: floppy Dec 6 04:45:53 localhost nova_compute[230884]: lun Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: fdc Dec 6 04:45:53 localhost nova_compute[230884]: scsi Dec 6 04:45:53 localhost nova_compute[230884]: virtio Dec 6 04:45:53 localhost nova_compute[230884]: usb Dec 6 04:45:53 localhost nova_compute[230884]: sata Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: virtio Dec 6 04:45:53 localhost nova_compute[230884]: virtio-transitional Dec 6 04:45:53 localhost nova_compute[230884]: virtio-non-transitional Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: vnc Dec 6 04:45:53 localhost nova_compute[230884]: egl-headless Dec 6 04:45:53 localhost nova_compute[230884]: dbus Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: subsystem Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: default Dec 6 04:45:53 localhost nova_compute[230884]: mandatory Dec 6 04:45:53 localhost nova_compute[230884]: requisite Dec 6 04:45:53 localhost nova_compute[230884]: optional Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost 
nova_compute[230884]: usb Dec 6 04:45:53 localhost nova_compute[230884]: pci Dec 6 04:45:53 localhost nova_compute[230884]: scsi Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: virtio Dec 6 04:45:53 localhost nova_compute[230884]: virtio-transitional Dec 6 04:45:53 localhost nova_compute[230884]: virtio-non-transitional Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: random Dec 6 04:45:53 localhost nova_compute[230884]: egd Dec 6 04:45:53 localhost nova_compute[230884]: builtin Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: path Dec 6 04:45:53 localhost nova_compute[230884]: handle Dec 6 04:45:53 localhost nova_compute[230884]: virtiofs Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: tpm-tis Dec 6 04:45:53 localhost nova_compute[230884]: tpm-crb Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: emulator Dec 6 04:45:53 localhost nova_compute[230884]: external Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: 2.0 Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: 
Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: usb Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: pty Dec 6 04:45:53 localhost nova_compute[230884]: unix Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: qemu Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: builtin Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: default Dec 6 04:45:53 localhost nova_compute[230884]: passt Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: isa Dec 6 04:45:53 localhost nova_compute[230884]: hyperv Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: null Dec 6 04:45:53 localhost nova_compute[230884]: vc Dec 6 04:45:53 localhost nova_compute[230884]: pty Dec 6 04:45:53 localhost nova_compute[230884]: dev Dec 6 04:45:53 localhost nova_compute[230884]: file Dec 6 04:45:53 localhost nova_compute[230884]: pipe Dec 6 04:45:53 localhost nova_compute[230884]: stdio Dec 6 04:45:53 localhost 
nova_compute[230884]: udp Dec 6 04:45:53 localhost nova_compute[230884]: tcp Dec 6 04:45:53 localhost nova_compute[230884]: unix Dec 6 04:45:53 localhost nova_compute[230884]: qemu-vdagent Dec 6 04:45:53 localhost nova_compute[230884]: dbus Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: relaxed Dec 6 04:45:53 localhost nova_compute[230884]: vapic Dec 6 04:45:53 localhost nova_compute[230884]: spinlocks Dec 6 04:45:53 localhost nova_compute[230884]: vpindex Dec 6 04:45:53 localhost nova_compute[230884]: runtime Dec 6 04:45:53 localhost nova_compute[230884]: synic Dec 6 04:45:53 localhost nova_compute[230884]: stimer Dec 6 04:45:53 localhost nova_compute[230884]: reset Dec 6 04:45:53 localhost nova_compute[230884]: vendor_id Dec 6 04:45:53 localhost nova_compute[230884]: frequencies Dec 6 04:45:53 localhost nova_compute[230884]: reenlightenment Dec 6 04:45:53 localhost nova_compute[230884]: tlbflush Dec 6 04:45:53 localhost nova_compute[230884]: ipi Dec 6 04:45:53 localhost nova_compute[230884]: avic Dec 6 04:45:53 localhost nova_compute[230884]: emsr_bitmap Dec 6 04:45:53 localhost nova_compute[230884]: xmm_input Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: 4095 Dec 6 04:45:53 
localhost nova_compute[230884]: on Dec 6 04:45:53 localhost nova_compute[230884]: off Dec 6 04:45:53 localhost nova_compute[230884]: off Dec 6 04:45:53 localhost nova_compute[230884]: Linux KVM Hv Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: tdx Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: Dec 6 04:45:53 localhost nova_compute[230884]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.366 230888 DEBUG nova.virt.libvirt.host [None req-e1f88f64-31d2-42bc-8944-ab08e0aca269 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.366 230888 DEBUG nova.virt.libvirt.host [None req-e1f88f64-31d2-42bc-8944-ab08e0aca269 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.366 230888 DEBUG nova.virt.libvirt.host [None req-e1f88f64-31d2-42bc-8944-ab08e0aca269 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.367 230888 INFO nova.virt.libvirt.host [None req-e1f88f64-31d2-42bc-8944-ab08e0aca269 - - - - - -] Secure Boot support detected#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.369 230888 INFO 
nova.virt.libvirt.driver [None req-e1f88f64-31d2-42bc-8944-ab08e0aca269 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.382 230888 DEBUG nova.virt.libvirt.driver [None req-e1f88f64-31d2-42bc-8944-ab08e0aca269 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.422 230888 INFO nova.virt.node [None req-e1f88f64-31d2-42bc-8944-ab08e0aca269 - - - - - -] Determined node identity 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad from /var/lib/nova/compute_id#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.444 230888 DEBUG nova.compute.manager [None req-e1f88f64-31d2-42bc-8944-ab08e0aca269 - - - - - -] Verified node 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad matches my host np0005548789.localdomain _check_for_host_rename /usr/lib/python3.9/site-packages/nova/compute/manager.py:1568#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.485 230888 DEBUG nova.compute.manager [None req-e1f88f64-31d2-42bc-8944-ab08e0aca269 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.488 230888 DEBUG nova.virt.libvirt.vif [None req-e1f88f64-31d2-42bc-8944-ab08e0aca269 - - - - - -] vif_type=ovs
instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T08:44:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=,disable_terminate=False,display_description='test',display_name='test',ec2_ids=,ephemeral_gb=1,ephemeral_key_uuid=None,fault=,flavor=,hidden=False,host='np0005548789.localdomain',hostname='test',id=2,image_ref='e0d06706-da90-478a-9829-34b75a3ce049',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=,launch_index=0,launched_at=2025-12-06T08:44:43Z,launched_on='np0005548789.localdomain',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=,new_flavor=,node='np0005548789.localdomain',numa_topology=None,old_flavor=,os_type=None,pci_devices=,pci_requests=,power_state=1,progress=0,project_id='3d603431c0bb4967bafc7a0aa6108bfe',ramdisk_id='',reservation_id='r-02dpupig',resources=,root_device_name='/dev/vda',root_gb=1,security_groups=,services=,shutdown_terminate=False,system_metadata=,tags=,task_state=None,terminated_at=None,trusted_certs=,updated_at=2025-12-06T08:44:43Z,user_data=None,user_id='ff0049f3313348bdb67886d170c1c765',uuid=b7ed0a2e-9350-4933-9334-4e5e08d3e6aa,vcpu_model=,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "address": "fa:16:3e:64:77:f3", "network": {"id": "652b6bdc-40ce-45b7-8aa5-3bca79987993", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.162", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"dhcp_server": "192.168.0.1"}}], "meta": {"injected": false, "tenant_id": 
"3d603431c0bb4967bafc7a0aa6108bfe", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system"}, "devname": "tap86fc0b7a-fb", "ovs_interfaceid": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.488 230888 DEBUG nova.network.os_vif_util [None req-e1f88f64-31d2-42bc-8944-ab08e0aca269 - - - - - -] Converting VIF {"id": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "address": "fa:16:3e:64:77:f3", "network": {"id": "652b6bdc-40ce-45b7-8aa5-3bca79987993", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.162", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"dhcp_server": "192.168.0.1"}}], "meta": {"injected": false, "tenant_id": "3d603431c0bb4967bafc7a0aa6108bfe", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system"}, "devname": "tap86fc0b7a-fb", "ovs_interfaceid": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.489 230888 DEBUG nova.network.os_vif_util [None req-e1f88f64-31d2-42bc-8944-ab08e0aca269 - - - - - -] 
Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:64:77:f3,bridge_name='br-int',has_traffic_filtering=True,id=86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b,network=Network(652b6bdc-40ce-45b7-8aa5-3bca79987993),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86fc0b7a-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.489 230888 DEBUG os_vif [None req-e1f88f64-31d2-42bc-8944-ab08e0aca269 - - - - - -] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:64:77:f3,bridge_name='br-int',has_traffic_filtering=True,id=86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b,network=Network(652b6bdc-40ce-45b7-8aa5-3bca79987993),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86fc0b7a-fb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.556 230888 DEBUG ovsdbapp.backend.ovs_idl [None req-e1f88f64-31d2-42bc-8944-ab08e0aca269 - - - - - -] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.557 230888 DEBUG ovsdbapp.backend.ovs_idl [None req-e1f88f64-31d2-42bc-8944-ab08e0aca269 - - - - - -] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.557 230888 DEBUG ovsdbapp.backend.ovs_idl [None req-e1f88f64-31d2-42bc-8944-ab08e0aca269 - - - - - -] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.557 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [None 
req-e1f88f64-31d2-42bc-8944-ab08e0aca269 - - - - - -] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.557 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-e1f88f64-31d2-42bc-8944-ab08e0aca269 - - - - - -] [POLLOUT] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.557 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-e1f88f64-31d2-42bc-8944-ab08e0aca269 - - - - - -] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.558 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-e1f88f64-31d2-42bc-8944-ab08e0aca269 - - - - - -] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.559 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-e1f88f64-31d2-42bc-8944-ab08e0aca269 - - - - - -] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.561 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-e1f88f64-31d2-42bc-8944-ab08e0aca269 - - - - - -] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.571 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 18 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.571 230888 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.572 230888 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Dec 6 04:45:53 localhost nova_compute[230884]: 2025-12-06 09:45:53.572 230888 INFO oslo.privsep.daemon [None req-e1f88f64-31d2-42bc-8944-ab08e0aca269 - - - - - -] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmpu7bcbu8p/privsep.sock']#033[00m Dec 6 04:45:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38134 DF PROTO=TCP SPT=58278 DPT=9105 SEQ=1660505716 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52DA30EF0000000001030307) Dec 6 04:45:54 localhost nova_compute[230884]: 2025-12-06 09:45:54.189 230888 INFO oslo.privsep.daemon [None req-e1f88f64-31d2-42bc-8944-ab08e0aca269 - - - - - -] Spawned new privsep daemon via rootwrap#033[00m Dec 6 04:45:54 localhost nova_compute[230884]: 2025-12-06 09:45:54.079 230943 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m Dec 6 04:45:54 localhost nova_compute[230884]: 2025-12-06 09:45:54.083 230943 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m Dec 6 04:45:54 localhost nova_compute[230884]: 2025-12-06 09:45:54.087 230943 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none#033[00m Dec 6 04:45:54 localhost nova_compute[230884]: 2025-12-06 09:45:54.087 230943 
INFO oslo.privsep.daemon [-] privsep daemon running as pid 230943#033[00m Dec 6 04:45:54 localhost nova_compute[230884]: 2025-12-06 09:45:54.467 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 18 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:45:54 localhost nova_compute[230884]: 2025-12-06 09:45:54.468 230888 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap86fc0b7a-fb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 6 04:45:54 localhost nova_compute[230884]: 2025-12-06 09:45:54.468 230888 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap86fc0b7a-fb, col_values=(('external_ids', {'iface-id': '86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:64:77:f3', 'vm-uuid': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 6 04:45:54 localhost nova_compute[230884]: 2025-12-06 09:45:54.469 230888 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Dec 6 04:45:54 localhost nova_compute[230884]: 2025-12-06 09:45:54.470 230888 INFO os_vif [None req-e1f88f64-31d2-42bc-8944-ab08e0aca269 - - - - - -] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:64:77:f3,bridge_name='br-int',has_traffic_filtering=True,id=86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b,network=Network(652b6bdc-40ce-45b7-8aa5-3bca79987993),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86fc0b7a-fb')#033[00m Dec 6 04:45:54 localhost nova_compute[230884]: 2025-12-06 09:45:54.470 230888 DEBUG nova.compute.manager [None 
req-e1f88f64-31d2-42bc-8944-ab08e0aca269 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 6 04:45:54 localhost nova_compute[230884]: 2025-12-06 09:45:54.474 230888 DEBUG nova.compute.manager [None req-e1f88f64-31d2-42bc-8944-ab08e0aca269 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Current state is 1, state in DB is 1. _init_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:1304#033[00m Dec 6 04:45:54 localhost nova_compute[230884]: 2025-12-06 09:45:54.474 230888 INFO nova.compute.manager [None req-e1f88f64-31d2-42bc-8944-ab08e0aca269 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m Dec 6 04:45:54 localhost nova_compute[230884]: 2025-12-06 09:45:54.609 230888 DEBUG oslo_concurrency.lockutils [None req-e1f88f64-31d2-42bc-8944-ab08e0aca269 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 04:45:54 localhost nova_compute[230884]: 2025-12-06 09:45:54.610 230888 DEBUG oslo_concurrency.lockutils [None req-e1f88f64-31d2-42bc-8944-ab08e0aca269 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 04:45:54 localhost nova_compute[230884]: 2025-12-06 09:45:54.610 230888 DEBUG oslo_concurrency.lockutils [None req-e1f88f64-31d2-42bc-8944-ab08e0aca269 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 04:45:54 localhost nova_compute[230884]: 2025-12-06 09:45:54.611 
230888 DEBUG nova.compute.resource_tracker [None req-e1f88f64-31d2-42bc-8944-ab08e0aca269 - - - - - -] Auditing locally available compute resources for np0005548789.localdomain (node: np0005548789.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 6 04:45:54 localhost nova_compute[230884]: 2025-12-06 09:45:54.611 230888 DEBUG oslo_concurrency.processutils [None req-e1f88f64-31d2-42bc-8944-ab08e0aca269 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 6 04:45:54 localhost nova_compute[230884]: 2025-12-06 09:45:54.927 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:45:55 localhost nova_compute[230884]: 2025-12-06 09:45:55.102 230888 DEBUG oslo_concurrency.processutils [None req-e1f88f64-31d2-42bc-8944-ab08e0aca269 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 6 04:45:55 localhost nova_compute[230884]: 2025-12-06 09:45:55.199 230888 DEBUG nova.virt.libvirt.driver [None req-e1f88f64-31d2-42bc-8944-ab08e0aca269 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 6 04:45:55 localhost nova_compute[230884]: 2025-12-06 09:45:55.199 230888 DEBUG nova.virt.libvirt.driver [None req-e1f88f64-31d2-42bc-8944-ab08e0aca269 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 6 04:45:55 localhost nova_compute[230884]: 2025-12-06 09:45:55.400 230888 WARNING 
nova.virt.libvirt.driver [None req-e1f88f64-31d2-42bc-8944-ab08e0aca269 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 6 04:45:55 localhost nova_compute[230884]: 2025-12-06 09:45:55.401 230888 DEBUG nova.compute.resource_tracker [None req-e1f88f64-31d2-42bc-8944-ab08e0aca269 - - - - - -] Hypervisor/Node resource view: name=np0005548789.localdomain free_ram=12920MB free_disk=41.83721923828125GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", 
"numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 6 04:45:55 localhost nova_compute[230884]: 2025-12-06 09:45:55.401 230888 DEBUG oslo_concurrency.lockutils [None req-e1f88f64-31d2-42bc-8944-ab08e0aca269 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 04:45:55 localhost nova_compute[230884]: 2025-12-06 09:45:55.401 230888 DEBUG oslo_concurrency.lockutils [None req-e1f88f64-31d2-42bc-8944-ab08e0aca269 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 04:45:55 localhost nova_compute[230884]: 2025-12-06 09:45:55.556 230888 DEBUG nova.compute.resource_tracker [None req-e1f88f64-31d2-42bc-8944-ab08e0aca269 - - - - - -] Instance b7ed0a2e-9350-4933-9334-4e5e08d3e6aa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 6 04:45:55 localhost nova_compute[230884]: 2025-12-06 09:45:55.556 230888 DEBUG nova.compute.resource_tracker [None req-e1f88f64-31d2-42bc-8944-ab08e0aca269 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 6 04:45:55 localhost nova_compute[230884]: 2025-12-06 09:45:55.557 230888 DEBUG nova.compute.resource_tracker [None req-e1f88f64-31d2-42bc-8944-ab08e0aca269 - - - - - -] Final resource view: name=np0005548789.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 6 04:45:55 localhost nova_compute[230884]: 2025-12-06 09:45:55.619 230888 DEBUG nova.scheduler.client.report [None req-e1f88f64-31d2-42bc-8944-ab08e0aca269 - - - - - -] Refreshing inventories for resource provider 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Dec 6 04:45:55 localhost nova_compute[230884]: 2025-12-06 09:45:55.644 230888 DEBUG nova.scheduler.client.report [None req-e1f88f64-31d2-42bc-8944-ab08e0aca269 - - - - - -] Updating ProviderTree inventory for provider 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Dec 6 04:45:55 
localhost nova_compute[230884]: 2025-12-06 09:45:55.645 230888 DEBUG nova.compute.provider_tree [None req-e1f88f64-31d2-42bc-8944-ab08e0aca269 - - - - - -] Updating inventory in ProviderTree for provider 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Dec 6 04:45:55 localhost nova_compute[230884]: 2025-12-06 09:45:55.672 230888 DEBUG nova.scheduler.client.report [None req-e1f88f64-31d2-42bc-8944-ab08e0aca269 - - - - - -] Refreshing aggregate associations for resource provider 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Dec 6 04:45:55 localhost nova_compute[230884]: 2025-12-06 09:45:55.758 230888 DEBUG nova.scheduler.client.report [None req-e1f88f64-31d2-42bc-8944-ab08e0aca269 - - - - - -] Refreshing trait associations for resource provider 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad, traits: 
COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE42,HW_CPU_X86_F16C,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE4A,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_AMD_SVM,HW_CPU_X86_CLMUL,HW_CPU_X86_AVX2,HW_CPU_X86_SSE2,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_AESNI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE41,HW_CPU_X86_BMI,HW_CPU_X86_FMA3,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SHA,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_ACCELERATORS,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_ABM,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_AVX,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NODE,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_CIRRUS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Dec 6 04:45:55 localhost nova_compute[230884]: 2025-12-06 09:45:55.801 230888 DEBUG oslo_concurrency.processutils [None req-e1f88f64-31d2-42bc-8944-ab08e0aca269 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 6 04:45:55 localhost python3.9[231061]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init 
state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None 
quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None Dec 6 04:45:56 localhost systemd[1]: Started libpod-conmon-a90088f7335f0424abe9208b181fba6d3fc6d1408325e575f0ba866a5d87ad9b.scope. Dec 6 04:45:56 localhost systemd[1]: Started libcrun container. Dec 6 04:45:56 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ceb10c7340fd1e23819ce0ca67ef154f878e8bf4b0dc6dca13387cdceeb0c7ee/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff) Dec 6 04:45:56 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ceb10c7340fd1e23819ce0ca67ef154f878e8bf4b0dc6dca13387cdceeb0c7ee/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Dec 6 04:45:56 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ceb10c7340fd1e23819ce0ca67ef154f878e8bf4b0dc6dca13387cdceeb0c7ee/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff) Dec 6 04:45:56 localhost podman[231101]: 2025-12-06 09:45:56.08157794 +0000 UTC m=+0.132867917 container init a90088f7335f0424abe9208b181fba6d3fc6d1408325e575f0ba866a5d87ad9b (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, container_name=nova_compute_init, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, 
org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3) Dec 6 04:45:56 localhost podman[231101]: 2025-12-06 09:45:56.091928583 +0000 UTC m=+0.143218600 container start a90088f7335f0424abe9208b181fba6d3fc6d1408325e575f0ba866a5d87ad9b (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.vendor=CentOS, 
maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=edpm)
Dec 6 04:45:56 localhost python3.9[231061]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Dec 6 04:45:56 localhost nova_compute_init[231127]: INFO:nova_statedir:Applying nova statedir ownership
Dec 6 04:45:56 localhost nova_compute_init[231127]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Dec 6 04:45:56 localhost nova_compute_init[231127]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Dec 6 04:45:56 localhost nova_compute_init[231127]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Dec 6 04:45:56 localhost nova_compute_init[231127]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Dec 6 04:45:56 localhost nova_compute_init[231127]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Dec 6 04:45:56 localhost nova_compute_init[231127]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Dec 6 04:45:56 localhost nova_compute_init[231127]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Dec 6 04:45:56 localhost nova_compute_init[231127]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/instances/b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/
Dec 6 04:45:56 localhost nova_compute_init[231127]: INFO:nova_statedir:Ownership of /var/lib/nova/instances/b7ed0a2e-9350-4933-9334-4e5e08d3e6aa already 42436:42436
Dec 6 04:45:56 localhost nova_compute_init[231127]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances/b7ed0a2e-9350-4933-9334-4e5e08d3e6aa to system_u:object_r:container_file_t:s0
Dec 6 04:45:56 localhost nova_compute_init[231127]: INFO:nova_statedir:Checking uid: 0 gid: 0 path: /var/lib/nova/instances/b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/console.log
Dec 6 04:45:56 localhost nova_compute_init[231127]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/instances/_base/
Dec 6 04:45:56 localhost nova_compute_init[231127]: INFO:nova_statedir:Ownership of /var/lib/nova/instances/_base already 42436:42436
Dec 6 04:45:56 localhost nova_compute_init[231127]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances/_base to system_u:object_r:container_file_t:s0
Dec 6 04:45:56 localhost nova_compute_init[231127]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/instances/_base/55d01870b6a0ce0995b6b5844cf47638cdf46fbf
Dec 6 04:45:56 localhost nova_compute_init[231127]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/instances/_base/ephemeral_1_0706d66
Dec 6 04:45:56 localhost nova_compute_init[231127]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/instances/locks/
Dec 6 04:45:56 localhost nova_compute_init[231127]: INFO:nova_statedir:Ownership of /var/lib/nova/instances/locks already 42436:42436
Dec 6 04:45:56 localhost nova_compute_init[231127]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances/locks to system_u:object_r:container_file_t:s0
Dec 6 04:45:56 localhost nova_compute_init[231127]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/instances/locks/nova-55d01870b6a0ce0995b6b5844cf47638cdf46fbf
Dec 6 04:45:56 localhost nova_compute_init[231127]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/instances/locks/nova-ephemeral_1_0706d66
Dec 6 04:45:56 localhost nova_compute_init[231127]: INFO:nova_statedir:Checking uid: 0 gid: 0 path: /var/lib/nova/delay-nova-compute
Dec 6 04:45:56 localhost nova_compute_init[231127]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Dec 6 04:45:56 localhost nova_compute_init[231127]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Dec 6 04:45:56 localhost nova_compute_init[231127]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Dec 6 04:45:56 localhost nova_compute_init[231127]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Dec 6 04:45:56 localhost nova_compute_init[231127]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Dec 6 04:45:56 localhost nova_compute_init[231127]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/
Dec 6 04:45:56 localhost nova_compute_init[231127]: INFO:nova_statedir:Ownership of /var/lib/nova/.cache already 42436:42436
Dec 6 04:45:56 localhost nova_compute_init[231127]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.cache to system_u:object_r:container_file_t:s0
Dec 6 04:45:56 localhost nova_compute_init[231127]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/
Dec 6 04:45:56 localhost nova_compute_init[231127]: INFO:nova_statedir:Ownership of /var/lib/nova/.cache/python-entrypoints already 42436:42436
Dec 6 04:45:56 localhost nova_compute_init[231127]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.cache/python-entrypoints to system_u:object_r:container_file_t:s0
Dec 6 04:45:56 localhost nova_compute_init[231127]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/b234715fc878456b41e32c4fbc669b417044dbe6c6684bbc9059e5c93396ffea
Dec 6 04:45:56 localhost nova_compute_init[231127]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/20273498b7380904530133bcb3f720bd45f4f00b810dc4597d81d23acd8f9673
Dec 6 04:45:56 localhost nova_compute_init[231127]: INFO:nova_statedir:Nova statedir ownership complete
Dec 6 04:45:56 localhost systemd[1]: libpod-a90088f7335f0424abe9208b181fba6d3fc6d1408325e575f0ba866a5d87ad9b.scope: Deactivated successfully.
Dec 6 04:45:56 localhost podman[231128]: 2025-12-06 09:45:56.164856004 +0000 UTC m=+0.053538058 container died a90088f7335f0424abe9208b181fba6d3fc6d1408325e575f0ba866a5d87ad9b (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, io.buildah.version=1.41.3, config_id=edpm, container_name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team)
Dec 6 04:45:56 localhost nova_compute[230884]: 2025-12-06 09:45:56.275 230888 DEBUG oslo_concurrency.processutils [None req-e1f88f64-31d2-42bc-8944-ab08e0aca269 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 6 04:45:56 localhost nova_compute[230884]: 2025-12-06 09:45:56.281 230888 DEBUG nova.virt.libvirt.host [None req-e1f88f64-31d2-42bc-8944-ab08e0aca269 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Dec 6 04:45:56 localhost nova_compute[230884]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803#033[00m
Dec 6 04:45:56 localhost nova_compute[230884]: 2025-12-06 09:45:56.281 230888 INFO nova.virt.libvirt.host [None req-e1f88f64-31d2-42bc-8944-ab08e0aca269 - - - - - -] kernel doesn't support AMD SEV#033[00m
Dec 6 04:45:56 localhost nova_compute[230884]: 2025-12-06 09:45:56.283 230888 DEBUG nova.compute.provider_tree [None req-e1f88f64-31d2-42bc-8944-ab08e0aca269 - - - - - -] Inventory has not changed in ProviderTree for provider: 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 6 04:45:56 localhost nova_compute[230884]: 2025-12-06 09:45:56.283 230888 DEBUG nova.virt.libvirt.driver [None req-e1f88f64-31d2-42bc-8944-ab08e0aca269 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec 6 04:45:56 localhost podman[231141]: 2025-12-06 09:45:56.284125558 +0000 UTC m=+0.117681996 container cleanup a90088f7335f0424abe9208b181fba6d3fc6d1408325e575f0ba866a5d87ad9b (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=nova_compute_init, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 6 04:45:56 localhost systemd[1]: libpod-conmon-a90088f7335f0424abe9208b181fba6d3fc6d1408325e575f0ba866a5d87ad9b.scope: Deactivated successfully.
Dec 6 04:45:56 localhost nova_compute[230884]: 2025-12-06 09:45:56.302 230888 DEBUG nova.scheduler.client.report [None req-e1f88f64-31d2-42bc-8944-ab08e0aca269 - - - - - -] Inventory has not changed for provider 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 6 04:45:56 localhost nova_compute[230884]: 2025-12-06 09:45:56.342 230888 DEBUG nova.compute.resource_tracker [None req-e1f88f64-31d2-42bc-8944-ab08e0aca269 - - - - - -] Compute_service record updated for np0005548789.localdomain:np0005548789.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 6 04:45:56 localhost nova_compute[230884]: 2025-12-06 09:45:56.343 230888 DEBUG oslo_concurrency.lockutils [None req-e1f88f64-31d2-42bc-8944-ab08e0aca269 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.942s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 6 04:45:56 localhost nova_compute[230884]: 2025-12-06 09:45:56.343 230888 DEBUG nova.service [None req-e1f88f64-31d2-42bc-8944-ab08e0aca269 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182#033[00m
Dec 6 04:45:56 localhost nova_compute[230884]: 2025-12-06 09:45:56.375 230888 DEBUG nova.service [None req-e1f88f64-31d2-42bc-8944-ab08e0aca269 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199#033[00m
Dec 6 04:45:56 localhost nova_compute[230884]: 2025-12-06 09:45:56.376 230888 DEBUG nova.servicegroup.drivers.db [None req-e1f88f64-31d2-42bc-8944-ab08e0aca269 - - - - - -] DB_Driver: join new ServiceGroup member np0005548789.localdomain to the compute group, service = join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44#033[00m
Dec 6 04:45:56 localhost systemd[1]: session-54.scope: Deactivated successfully.
Dec 6 04:45:56 localhost systemd[1]: session-54.scope: Consumed 2min 9.443s CPU time.
Dec 6 04:45:56 localhost systemd-logind[766]: Session 54 logged out. Waiting for processes to exit.
Dec 6 04:45:56 localhost systemd-logind[766]: Removed session 54.
Dec 6 04:45:56 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63508 DF PROTO=TCP SPT=37542 DPT=9882 SEQ=2811801932 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52DA3DAF0000000001030307)
Dec 6 04:45:56 localhost systemd[1]: var-lib-containers-storage-overlay-ceb10c7340fd1e23819ce0ca67ef154f878e8bf4b0dc6dca13387cdceeb0c7ee-merged.mount: Deactivated successfully.
Dec 6 04:45:56 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a90088f7335f0424abe9208b181fba6d3fc6d1408325e575f0ba866a5d87ad9b-userdata-shm.mount: Deactivated successfully.
Dec 6 04:45:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.
Dec 6 04:45:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.
Dec 6 04:45:57 localhost systemd[1]: tmp-crun.9W9dZd.mount: Deactivated successfully.
Dec 6 04:45:57 localhost podman[231187]: 2025-12-06 09:45:57.91218789 +0000 UTC m=+0.071397364 container health_status 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Dec 6 04:45:57 localhost podman[231186]: 2025-12-06 09:45:57.966853212 +0000 UTC m=+0.125067405 container health_status 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 6 04:45:57 localhost podman[231187]: 2025-12-06 09:45:57.99731021 +0000 UTC m=+0.156519684 container exec_died 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 6 04:45:58 localhost systemd[1]: 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.service: Deactivated successfully.
Dec 6 04:45:58 localhost podman[231186]: 2025-12-06 09:45:58.054644086 +0000 UTC m=+0.212858359 container exec_died 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 6 04:45:58 localhost systemd[1]: 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.service: Deactivated successfully.
Dec 6 04:45:58 localhost nova_compute[230884]: 2025-12-06 09:45:58.608 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 6 04:45:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=761 DF PROTO=TCP SPT=49776 DPT=9101 SEQ=4200883413 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52DA47EF0000000001030307)
Dec 6 04:45:59 localhost nova_compute[230884]: 2025-12-06 09:45:59.961 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 6 04:46:01 localhost sshd[231230]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:46:02 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38135 DF PROTO=TCP SPT=58278 DPT=9105 SEQ=1660505716 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52DA51EF0000000001030307)
Dec 6 04:46:02 localhost sshd[231232]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:46:02 localhost systemd-logind[766]: New session 56 of user zuul.
Dec 6 04:46:03 localhost systemd[1]: Started Session 56 of User zuul.
Dec 6 04:46:03 localhost nova_compute[230884]: 2025-12-06 09:46:03.668 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 6 04:46:04 localhost python3.9[231343]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 6 04:46:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58226 DF PROTO=TCP SPT=46606 DPT=9102 SEQ=1202293018 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52DA5BEF0000000001030307)
Dec 6 04:46:05 localhost nova_compute[230884]: 2025-12-06 09:46:05.000 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 6 04:46:05 localhost python3.9[231491]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 6 04:46:05 localhost systemd[1]: Reloading.
Dec 6 04:46:05 localhost systemd-rc-local-generator[231535]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 6 04:46:05 localhost systemd-sysv-generator[231539]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 6 04:46:05 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:46:05 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 6 04:46:05 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:46:05 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:46:06 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 6 04:46:06 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 6 04:46:06 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:46:06 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:46:06 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:46:07 localhost python3.9[231687]: ansible-ansible.builtin.service_facts Invoked
Dec 6 04:46:07 localhost network[231704]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 6 04:46:07 localhost network[231705]: 'network-scripts' will be removed from distribution in near future.
Dec 6 04:46:07 localhost network[231706]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 6 04:46:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25003 DF PROTO=TCP SPT=58212 DPT=9102 SEQ=2104232652 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52DA67EF0000000001030307)
Dec 6 04:46:08 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 6 04:46:08 localhost nova_compute[230884]: 2025-12-06 09:46:08.712 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 6 04:46:10 localhost nova_compute[230884]: 2025-12-06 09:46:10.051 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 6 04:46:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58228 DF PROTO=TCP SPT=46606 DPT=9102 SEQ=1202293018 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52DA73AF0000000001030307)
Dec 6 04:46:11 localhost sshd[231849]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:46:13 localhost python3.9[231943]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 6 04:46:13 localhost nova_compute[230884]: 2025-12-06 09:46:13.752 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 6 04:46:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61365 DF PROTO=TCP SPT=36184 DPT=9101 SEQ=1986252779 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52DA81500000000001030307)
Dec 6 04:46:14 localhost python3.9[232054]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 04:46:14 localhost systemd-journald[47810]: Field hash table of /run/log/journal/4b30904fc4748c16d0c72dbebcabab49/system.journal has a fill level at 76.3 (254 of 333 items), suggesting rotation.
Dec 6 04:46:14 localhost systemd-journald[47810]: /run/log/journal/4b30904fc4748c16d0c72dbebcabab49/system.journal: Journal header limits reached or header out-of-date, rotating.
Dec 6 04:46:14 localhost rsyslogd[760]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Dec 6 04:46:14 localhost rsyslogd[760]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Dec 6 04:46:15 localhost nova_compute[230884]: 2025-12-06 09:46:15.105 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 6 04:46:15 localhost python3.9[232165]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 6 04:46:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6.
Dec 6 04:46:16 localhost podman[232275]: 2025-12-06 09:46:16.200012877 +0000 UTC m=+0.086305308 container health_status 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125)
Dec 6 04:46:16 localhost podman[232275]: 2025-12-06 09:46:16.216011765 +0000 UTC m=+0.102304176 container exec_died 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0)
Dec 6 04:46:16 localhost systemd[1]: 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6.service: Deactivated successfully.
Dec 6 04:46:16 localhost python3.9[232276]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012 systemctl disable --now certmonger.service#012 test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 6 04:46:17 localhost python3.9[232405]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 6 04:46:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61367 DF PROTO=TCP SPT=36184 DPT=9101 SEQ=1986252779 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52DA8D6F0000000001030307)
Dec 6 04:46:18 localhost python3.9[232515]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 6 04:46:18 localhost systemd[1]: Reloading.
Dec 6 04:46:18 localhost systemd-rc-local-generator[232538]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 6 04:46:18 localhost systemd-sysv-generator[232541]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 6 04:46:18 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:46:18 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 6 04:46:18 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:46:18 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:46:18 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 6 04:46:18 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 6 04:46:18 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:46:18 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:46:18 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 6 04:46:18 localhost nova_compute[230884]: 2025-12-06 09:46:18.793 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 6 04:46:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19008 DF PROTO=TCP SPT=39750 DPT=9105 SEQ=3707890099 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52DA96700000000001030307)
Dec 6 04:46:19 localhost python3.9[232661]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 6 04:46:20 localhost nova_compute[230884]: 2025-12-06 09:46:20.130 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 6 04:46:20 localhost python3.9[232772]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/config/telemetry recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 6 04:46:21 localhost python3.9[232880]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 6 04:46:22 localhost nova_compute[230884]: 2025-12-06 09:46:22.378 230888 DEBUG oslo_service.periodic_task [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 6 04:46:22 localhost nova_compute[230884]: 2025-12-06 09:46:22.413 230888 DEBUG nova.compute.manager [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Triggering sync for uuid b7ed0a2e-9350-4933-9334-4e5e08d3e6aa _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Dec 6 04:46:22 localhost nova_compute[230884]: 2025-12-06 09:46:22.413 230888 DEBUG oslo_concurrency.lockutils [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Acquiring lock "b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" by "nova.compute.manager.ComputeManager._sync_power_states.._sync..query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 6 04:46:22 localhost nova_compute[230884]: 2025-12-06 09:46:22.413 230888 DEBUG oslo_concurrency.lockutils [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Lock "b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" acquired by "nova.compute.manager.ComputeManager._sync_power_states.._sync..query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 6 04:46:22 localhost nova_compute[230884]: 2025-12-06 09:46:22.414 230888 DEBUG oslo_service.periodic_task [None
req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 04:46:22 localhost nova_compute[230884]: 2025-12-06 09:46:22.509 230888 DEBUG oslo_concurrency.lockutils [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Lock "b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" "released" by "nova.compute.manager.ComputeManager._sync_power_states.._sync..query_driver_power_state_and_sync" :: held 0.095s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 04:46:22 localhost python3.9[232990]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:46:23 localhost python3.9[233076]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765014382.2838378-360-164839538130597/.source.conf follow=False _original_basename=ceilometer-host-specific.conf.j2 checksum=ff4e72663552f54a1c747481e1f73412f2607746 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 6 04:46:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19009 DF PROTO=TCP SPT=39750 DPT=9105 SEQ=3707890099 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52DAA62F0000000001030307) Dec 6 04:46:23 localhost nova_compute[230884]: 2025-12-06 09:46:23.797 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 
[POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:46:24 localhost python3.9[233186]: ansible-ansible.builtin.group Invoked with name=libvirt state=present force=False system=False local=False non_unique=False gid=None gid_min=None gid_max=None Dec 6 04:46:25 localhost nova_compute[230884]: 2025-12-06 09:46:25.137 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:46:25 localhost python3.9[233296]: ansible-ansible.builtin.getent Invoked with database=passwd key=ceilometer fail_key=True service=None split=None Dec 6 04:46:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63511 DF PROTO=TCP SPT=37542 DPT=9882 SEQ=2811801932 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52DAADEF0000000001030307) Dec 6 04:46:26 localhost python3.9[233407]: ansible-ansible.builtin.group Invoked with gid=42405 name=ceilometer state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None Dec 6 04:46:27 localhost python3.9[233523]: ansible-ansible.builtin.user Invoked with comment=ceilometer user group=ceilometer groups=['libvirt'] name=ceilometer shell=/sbin/nologin state=present uid=42405 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005548789.localdomain update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None 
password_expire_account_disable=None uid_min=None uid_max=None Dec 6 04:46:28 localhost nova_compute[230884]: 2025-12-06 09:46:28.800 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:46:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5. Dec 6 04:46:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999. Dec 6 04:46:28 localhost python3.9[233639]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:46:28 localhost systemd[1]: tmp-crun.FvAixU.mount: Deactivated successfully. Dec 6 04:46:28 localhost podman[233640]: 2025-12-06 09:46:28.909322252 +0000 UTC m=+0.071232660 container health_status 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', 
'/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Dec 6 04:46:28 localhost podman[233641]: 2025-12-06 09:46:28.92178984 +0000 UTC m=+0.081937932 container health_status 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Dec 6 04:46:28 localhost podman[233640]: 2025-12-06 09:46:28.934365662 +0000 UTC m=+0.096276070 container exec_died 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible) Dec 6 04:46:28 localhost systemd[1]: 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.service: Deactivated successfully. 
Dec 6 04:46:28 localhost podman[233641]: 2025-12-06 09:46:28.949986738 +0000 UTC m=+0.110134790 container exec_died 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible) Dec 6 04:46:28 localhost systemd[1]: 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.service: 
Deactivated successfully. Dec 6 04:46:29 localhost python3.9[233768]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1765014388.4323149-564-97107424243592/.source.conf _original_basename=ceilometer.conf follow=False checksum=e90760659247c177dccfbe1ef7de974794985ce9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:46:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61369 DF PROTO=TCP SPT=36184 DPT=9101 SEQ=1986252779 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52DABDF00000000001030307) Dec 6 04:46:29 localhost python3.9[233876]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/polling.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:46:30 localhost nova_compute[230884]: 2025-12-06 09:46:30.139 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:46:31 localhost python3.9[233962]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/polling.yaml mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1765014389.5647614-564-214907326056160/.source.yaml _original_basename=polling.yaml follow=False checksum=6c8680a286285f2e0ef9fa528ca754765e5ed0e5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:46:31 
localhost python3.9[234070]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/custom.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:46:31 localhost auditd[725]: Audit daemon rotating log files Dec 6 04:46:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19010 DF PROTO=TCP SPT=39750 DPT=9105 SEQ=3707890099 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52DAC5EF0000000001030307) Dec 6 04:46:32 localhost python3.9[234156]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/custom.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1765014391.2911236-564-100204522657405/.source.conf _original_basename=custom.conf follow=False checksum=838b8b0a7d7f72e55ab67d39f32e3cb3eca2139b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:46:33 localhost python3.9[234264]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 6 04:46:33 localhost nova_compute[230884]: 2025-12-06 09:46:33.836 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:46:34 localhost python3.9[234372]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 6 04:46:34 localhost kernel: DROPPING: IN=br-ex 
OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19318 DF PROTO=TCP SPT=48568 DPT=9102 SEQ=2118299854 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52DAD12F0000000001030307) Dec 6 04:46:35 localhost nova_compute[230884]: 2025-12-06 09:46:35.142 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:46:35 localhost python3.9[234480]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:46:35 localhost sshd[234530]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:46:35 localhost python3.9[234568]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765014394.6638057-741-25553970523476/.source.json follow=False _original_basename=ceilometer-agent-compute.json.j2 checksum=264d11e8d3809e7ef745878dce7edd46098e25b2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:46:36 localhost python3.9[234676]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:46:36 localhost python3.9[234731]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf _original_basename=ceilometer-host-specific.conf.j2 recurse=False state=file 
path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:46:37 localhost python3.9[234839]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:46:37 localhost python3.9[234925]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765014396.8055358-741-273471521040312/.source.json follow=False _original_basename=ceilometer_agent_compute.json.j2 checksum=d15068604cf730dd6e7b88a19d62f57d3a39f94f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:46:38 localhost python3.9[235033]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:46:38 localhost python3.9[235119]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765014397.9168751-741-120671930251414/.source.yaml follow=False _original_basename=ceilometer_prom_exporter.yaml.j2 checksum=10157c879411ee6023e506dc85a343cedc52700f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None 
directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:46:38 localhost nova_compute[230884]: 2025-12-06 09:46:38.886 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:46:39 localhost python3.9[235227]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:46:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41228 DF PROTO=TCP SPT=33374 DPT=9882 SEQ=2224775730 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52DAE3F00000000001030307) Dec 6 04:46:39 localhost python3.9[235313]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/firewall.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765014399.0052042-741-248476297696252/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:46:40 localhost nova_compute[230884]: 2025-12-06 09:46:40.144 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:46:40 localhost python3.9[235421]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 
6 04:46:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19320 DF PROTO=TCP SPT=48568 DPT=9102 SEQ=2118299854 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52DAE8EF0000000001030307) Dec 6 04:46:41 localhost python3.9[235507]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/node_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765014400.1171174-741-54439433761440/.source.json follow=False _original_basename=node_exporter.json.j2 checksum=7e5ab36b7368c1d4a00810e02af11a7f7d7c84e8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:46:41 localhost python3.9[235615]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:46:42 localhost python3.9[235701]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/node_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765014401.276177-741-6489650128920/.source.yaml follow=False _original_basename=node_exporter.yaml.j2 checksum=81d906d3e1e8c4f8367276f5d3a67b80ca7e989e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:46:42 localhost python3.9[235809]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/openstack_network_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 
get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:46:43 localhost python3.9[235895]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/openstack_network_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765014402.3464491-741-12700034770971/.source.json follow=False _original_basename=openstack_network_exporter.json.j2 checksum=0e4ea521b0035bea70b7a804346a5c89364dcbc3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:46:43 localhost nova_compute[230884]: 2025-12-06 09:46:43.922 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:46:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63662 DF PROTO=TCP SPT=54692 DPT=9101 SEQ=1152780584 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52DAF6830000000001030307) Dec 6 04:46:44 localhost python3.9[236003]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:46:45 localhost nova_compute[230884]: 2025-12-06 09:46:45.146 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:46:45 localhost python3.9[236089]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765014404.3147995-741-250966041018160/.source.yaml follow=False 
_original_basename=openstack_network_exporter.yaml.j2 checksum=b056dcaaba7624b93826bb95ee9e82f81bde6c72 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:46:45 localhost python3.9[236197]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:46:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6. Dec 6 04:46:46 localhost systemd[1]: tmp-crun.Lysyqd.mount: Deactivated successfully. Dec 6 04:46:46 localhost podman[236247]: 2025-12-06 09:46:46.90239277 +0000 UTC m=+0.069527335 container health_status 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125) Dec 6 04:46:46 localhost podman[236247]: 2025-12-06 09:46:46.918205633 +0000 UTC m=+0.085340178 container exec_died 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', 
'/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true) Dec 6 04:46:46 localhost systemd[1]: 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6.service: Deactivated successfully. Dec 6 04:46:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:46:47.276 160509 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 04:46:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:46:47.277 160509 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 04:46:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:46:47.278 160509 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 04:46:47 localhost python3.9[236302]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/podman_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765014405.503078-741-140054624375160/.source.json follow=False _original_basename=podman_exporter.json.j2 
checksum=885ccc6f5edd8803cb385bdda5648d0b3017b4e4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:46:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63664 DF PROTO=TCP SPT=54692 DPT=9101 SEQ=1152780584 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52DB026F0000000001030307) Dec 6 04:46:47 localhost python3.9[236410]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:46:48 localhost python3.9[236496]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/podman_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765014407.4625413-741-17130614229957/.source.yaml follow=False _original_basename=podman_exporter.yaml.j2 checksum=7ccb5eca2ff1dc337c3f3ecbbff5245af7149c47 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:46:48 localhost nova_compute[230884]: 2025-12-06 09:46:48.965 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:46:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54002 DF PROTO=TCP SPT=33278 DPT=9105 SEQ=1469673539 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT 
(020405500402080A52DB0BB00000000001030307) Dec 6 04:46:50 localhost nova_compute[230884]: 2025-12-06 09:46:50.200 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:46:50 localhost python3.9[236606]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 6 04:46:51 localhost python3.9[236716]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=podman.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 6 04:46:51 localhost systemd[1]: Reloading. Dec 6 04:46:51 localhost systemd-rc-local-generator[236746]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 6 04:46:51 localhost systemd-sysv-generator[236749]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Dec 6 04:46:51 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:46:51 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 6 04:46:51 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:46:51 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:46:51 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 6 04:46:51 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 6 04:46:51 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:46:51 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:46:51 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:46:51 localhost systemd[1]: Listening on Podman API Socket. 
Dec 6 04:46:52 localhost python3.9[236866]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:46:52 localhost nova_compute[230884]: 2025-12-06 09:46:52.596 230888 DEBUG oslo_service.periodic_task [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 04:46:52 localhost nova_compute[230884]: 2025-12-06 09:46:52.597 230888 DEBUG oslo_service.periodic_task [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 04:46:52 localhost nova_compute[230884]: 2025-12-06 09:46:52.598 230888 DEBUG nova.compute.manager [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 6 04:46:52 localhost nova_compute[230884]: 2025-12-06 09:46:52.598 230888 DEBUG nova.compute.manager [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 6 04:46:52 localhost python3.9[236954]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765014411.9035506-1257-149379748432715/.source _original_basename=healthcheck follow=False checksum=ebb343c21fce35a02591a9351660cb7035a47d42 backup=False force=True remote_src=False unsafe_writes=False 
content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None Dec 6 04:46:53 localhost python3.9[237009]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck.future follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:46:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54003 DF PROTO=TCP SPT=33278 DPT=9105 SEQ=1469673539 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52DB1B6F0000000001030307) Dec 6 04:46:54 localhost nova_compute[230884]: 2025-12-06 09:46:54.007 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:46:54 localhost python3.9[237097]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765014411.9035506-1257-149379748432715/.source.future _original_basename=healthcheck.future follow=False checksum=d500a98192f4ddd70b4dfdc059e2d81aed36a294 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None Dec 6 04:46:54 localhost nova_compute[230884]: 2025-12-06 09:46:54.129 230888 DEBUG oslo_concurrency.lockutils [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Acquiring lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 6 04:46:54 localhost nova_compute[230884]: 2025-12-06 09:46:54.130 230888 DEBUG 
oslo_concurrency.lockutils [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Acquired lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 6 04:46:54 localhost nova_compute[230884]: 2025-12-06 09:46:54.130 230888 DEBUG nova.network.neutron [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 6 04:46:54 localhost nova_compute[230884]: 2025-12-06 09:46:54.131 230888 DEBUG nova.objects.instance [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b7ed0a2e-9350-4933-9334-4e5e08d3e6aa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 6 04:46:55 localhost nova_compute[230884]: 2025-12-06 09:46:55.202 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:46:55 localhost python3.9[237207]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=ceilometer_agent_compute.json debug=False Dec 6 04:46:55 localhost nova_compute[230884]: 2025-12-06 09:46:55.266 230888 DEBUG nova.network.neutron [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Updating instance_info_cache with network_info: [{"id": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "address": "fa:16:3e:64:77:f3", "network": {"id": "652b6bdc-40ce-45b7-8aa5-3bca79987993", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.162", "type": "fixed", "version": 4, "meta": 
{}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d603431c0bb4967bafc7a0aa6108bfe", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86fc0b7a-fb", "ovs_interfaceid": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 6 04:46:55 localhost nova_compute[230884]: 2025-12-06 09:46:55.291 230888 DEBUG oslo_concurrency.lockutils [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Releasing lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 6 04:46:55 localhost nova_compute[230884]: 2025-12-06 09:46:55.291 230888 DEBUG nova.compute.manager [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 6 04:46:55 localhost nova_compute[230884]: 2025-12-06 09:46:55.292 230888 DEBUG oslo_service.periodic_task [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 04:46:55 localhost nova_compute[230884]: 2025-12-06 09:46:55.292 230888 DEBUG oslo_service.periodic_task [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Running periodic task 
ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 04:46:55 localhost nova_compute[230884]: 2025-12-06 09:46:55.292 230888 DEBUG oslo_service.periodic_task [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 04:46:55 localhost nova_compute[230884]: 2025-12-06 09:46:55.292 230888 DEBUG oslo_service.periodic_task [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 04:46:55 localhost nova_compute[230884]: 2025-12-06 09:46:55.292 230888 DEBUG oslo_service.periodic_task [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 04:46:55 localhost nova_compute[230884]: 2025-12-06 09:46:55.293 230888 DEBUG oslo_service.periodic_task [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 04:46:55 localhost nova_compute[230884]: 2025-12-06 09:46:55.293 230888 DEBUG nova.compute.manager [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 6 04:46:55 localhost nova_compute[230884]: 2025-12-06 09:46:55.293 230888 DEBUG oslo_service.periodic_task [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 04:46:55 localhost nova_compute[230884]: 2025-12-06 09:46:55.308 230888 DEBUG oslo_concurrency.lockutils [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 04:46:55 localhost nova_compute[230884]: 2025-12-06 09:46:55.309 230888 DEBUG oslo_concurrency.lockutils [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 04:46:55 localhost nova_compute[230884]: 2025-12-06 09:46:55.309 230888 DEBUG oslo_concurrency.lockutils [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 04:46:55 localhost nova_compute[230884]: 2025-12-06 09:46:55.309 230888 DEBUG nova.compute.resource_tracker [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Auditing locally available compute resources for np0005548789.localdomain (node: np0005548789.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 6 04:46:55 localhost nova_compute[230884]: 2025-12-06 09:46:55.309 
230888 DEBUG oslo_concurrency.processutils [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 6 04:46:55 localhost nova_compute[230884]: 2025-12-06 09:46:55.696 230888 DEBUG oslo_concurrency.processutils [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.386s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 6 04:46:55 localhost nova_compute[230884]: 2025-12-06 09:46:55.772 230888 DEBUG nova.virt.libvirt.driver [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 6 04:46:55 localhost nova_compute[230884]: 2025-12-06 09:46:55.773 230888 DEBUG nova.virt.libvirt.driver [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 6 04:46:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41229 DF PROTO=TCP SPT=33374 DPT=9882 SEQ=2224775730 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52DB23EF0000000001030307) Dec 6 04:46:55 localhost nova_compute[230884]: 2025-12-06 09:46:55.936 230888 WARNING nova.virt.libvirt.driver [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 6 04:46:55 localhost nova_compute[230884]: 2025-12-06 09:46:55.938 230888 DEBUG nova.compute.resource_tracker [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Hypervisor/Node resource view: name=np0005548789.localdomain free_ram=12917MB free_disk=41.83721923828125GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": 
"1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 6 04:46:55 localhost nova_compute[230884]: 2025-12-06 09:46:55.938 230888 DEBUG oslo_concurrency.lockutils [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 04:46:55 localhost nova_compute[230884]: 2025-12-06 09:46:55.938 230888 DEBUG oslo_concurrency.lockutils [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 04:46:56 localhost nova_compute[230884]: 2025-12-06 09:46:56.006 230888 DEBUG nova.compute.resource_tracker [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Instance b7ed0a2e-9350-4933-9334-4e5e08d3e6aa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 6 04:46:56 localhost nova_compute[230884]: 2025-12-06 09:46:56.007 230888 DEBUG nova.compute.resource_tracker [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 6 04:46:56 localhost nova_compute[230884]: 2025-12-06 09:46:56.007 230888 DEBUG nova.compute.resource_tracker [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Final resource view: name=np0005548789.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 6 04:46:56 localhost nova_compute[230884]: 2025-12-06 09:46:56.060 230888 DEBUG oslo_concurrency.processutils [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 6 04:46:56 localhost python3.9[237339]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Dec 6 04:46:56 localhost nova_compute[230884]: 2025-12-06 09:46:56.524 230888 DEBUG oslo_concurrency.processutils [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 6 04:46:56 localhost nova_compute[230884]: 2025-12-06 09:46:56.532 230888 DEBUG nova.compute.provider_tree [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Inventory has not changed in ProviderTree for provider: 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad 
update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 6 04:46:56 localhost nova_compute[230884]: 2025-12-06 09:46:56.558 230888 DEBUG nova.scheduler.client.report [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Inventory has not changed for provider 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 6 04:46:56 localhost nova_compute[230884]: 2025-12-06 09:46:56.561 230888 DEBUG nova.compute.resource_tracker [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Compute_service record updated for np0005548789.localdomain:np0005548789.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 6 04:46:56 localhost nova_compute[230884]: 2025-12-06 09:46:56.561 230888 DEBUG oslo_concurrency.lockutils [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.623s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 04:46:58 localhost python3[237471]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=ceilometer_agent_compute.json log_base_path=/var/log/containers/stdouts debug=False Dec 6 04:46:58 localhost python3[237471]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012 {#012 "Id": 
"343ba269c9fe0a56d7572c8ca328dbce002017c4dd4986f43667971dd03085c2",#012 "Digest": "sha256:667029e1ec7e63fffa1a096f432f6160b441ba36df1bddc9066cbd1129b82009",#012 "RepoTags": [#012 "quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified"#012 ],#012 "RepoDigests": [#012 "quay.io/podified-antelope-centos9/openstack-ceilometer-compute@sha256:667029e1ec7e63fffa1a096f432f6160b441ba36df1bddc9066cbd1129b82009"#012 ],#012 "Parent": "",#012 "Comment": "",#012 "Created": "2025-12-01T06:21:53.58682213Z",#012 "Config": {#012 "User": "root",#012 "Env": [#012 "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012 "LANG=en_US.UTF-8",#012 "TZ=UTC",#012 "container=oci"#012 ],#012 "Entrypoint": [#012 "dumb-init",#012 "--single-child",#012 "--"#012 ],#012 "Cmd": [#012 "kolla_start"#012 ],#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251125",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",#012 "tcib_managed": "true"#012 },#012 "StopSignal": "SIGTERM"#012 },#012 "Version": "",#012 "Author": "",#012 "Architecture": "amd64",#012 "Os": "linux",#012 "Size": 505175293,#012 "VirtualSize": 505175293,#012 "GraphDriver": {#012 "Name": "overlay",#012 "Data": {#012 "LowerDir": "/var/lib/containers/storage/overlay/4b9c41fe9442d39f0f731cbd431e2ad53f3df5a873cab9bbccc810ab289d4d69/diff:/var/lib/containers/storage/overlay/11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60/diff:/var/lib/containers/storage/overlay/ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9/diff:/var/lib/containers/storage/overlay/cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa/diff",#012 "UpperDir": 
"/var/lib/containers/storage/overlay/ea63802099ebb85258cb7d2a1bbd57ddeec51406b466437719c2fc7b376d5b79/diff",#012 "WorkDir": "/var/lib/containers/storage/overlay/ea63802099ebb85258cb7d2a1bbd57ddeec51406b466437719c2fc7b376d5b79/work"#012 }#012 },#012 "RootFS": {#012 "Type": "layers",#012 "Layers": [#012 "sha256:cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa",#012 "sha256:d26dbee55abfd9d572bfbbd4b765c5624affd9ef117ad108fb34be41e199a619",#012 "sha256:86c2cd3987225f8a9bf38cc88e9c24b56bdf4a194f2301186519b4a7571b0c92",#012 "sha256:a47016624274f5ebad76019f5a2e465c1737f96caa539b36f90ab8e33592f415",#012 "sha256:38a03f5e96658211fb28e2f87c11ffad531281d1797368f48e6cd4af7ac97c0e"#012 ]#012 },#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251125",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",#012 "tcib_managed": "true"#012 },#012 "Annotations": {},#012 "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012 "User": "root",#012 "History": [#012 {#012 "created": "2025-11-25T04:02:36.223494528Z",#012 "created_by": "/bin/sh -c #(nop) ADD file:cacf1a97b4abfca5db2db22f7ddbca8fd7daa5076a559639c109f09aaf55871d in / ",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-25T04:02:36.223562059Z",#012 "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\" org.label-schema.name=\"CentOS Stream 9 Base Image\" org.label-schema.vendor=\"CentOS\" org.label-schema.license=\"GPLv2\" org.label-schema.build-date=\"20251125\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-25T04:02:39.054452717Z",#012 "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"#012 },#012 {#012 "created": "2025-12-01T06:09:28.025707917Z",#012 
"created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",#012 "comment": "FROM quay.io/centos/centos:stream9",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025744608Z",#012 "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025767729Z",#012 "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025791379Z",#012 "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.02581523Z",#012 "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025867611Z",#012 "created_by": "/bin/sh -c #(nop) USER root",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.469442331Z",#012 "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:10:02.029095017Z",#012 "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",#012 
"empty_layer": true#012 },#012 Dec 6 04:46:58 localhost podman[237520]: 2025-12-06 09:46:58.389985843 +0000 UTC m=+0.096222257 container remove a37b9c3e0a94913f96d0acd0e46ec4af5b191c79194cc94d7212024b813b32c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '728090aef247cfdd273031dadf6d1125'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, version=17.1.12, distribution-scope=public, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, name=rhosp17/openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, io.openshift.expose-services=, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible) Dec 6 04:46:58 localhost python3[237471]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman rm --force ceilometer_agent_compute Dec 6 04:46:58 localhost podman[237533]: Dec 6 04:46:58 localhost podman[237533]: 2025-12-06 09:46:58.498724199 +0000 UTC m=+0.089803737 container create bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': 
['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute) Dec 6 04:46:58 localhost podman[237533]: 2025-12-06 09:46:58.455265965 +0000 UTC m=+0.046345543 image pull quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified Dec 6 04:46:58 localhost python3[237471]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ceilometer_agent_compute --conmon-pidfile /run/ceilometer_agent_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck compute --label config_id=edpm --label container_name=ceilometer_agent_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', 
'/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']} --log-driver journald --log-level info --network host --security-opt label:type:ceilometer_polling_t --user ceilometer --volume /var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z --volume /var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z --volume /run/libvirt:/run/libvirt:shared,ro --volume /etc/hosts:/etc/hosts:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /dev/log:/dev/log --volume /var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified kolla_start Dec 6 04:46:59 localhost nova_compute[230884]: 2025-12-06 09:46:59.078 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:46:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63666 DF PROTO=TCP SPT=54692 DPT=9101 SEQ=1152780584 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 
OPT (020405500402080A52DB31EF0000000001030307) Dec 6 04:46:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5. Dec 6 04:46:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999. Dec 6 04:46:59 localhost podman[237645]: 2025-12-06 09:46:59.939746186 +0000 UTC m=+0.091655014 container health_status 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Dec 6 04:46:59 localhost systemd[1]: tmp-crun.5JkNAt.mount: Deactivated successfully. 
Dec 6 04:46:59 localhost podman[237645]: 2025-12-06 09:46:59.993466 +0000 UTC m=+0.145374788 container exec_died 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller) Dec 6 04:47:00 localhost systemd[1]: 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.service: Deactivated successfully. 
Dec 6 04:47:00 localhost podman[237646]: 2025-12-06 09:46:59.996338809 +0000 UTC m=+0.148739833 container health_status 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent) Dec 6 04:47:00 localhost podman[237646]: 2025-12-06 09:47:00.076469023 +0000 UTC 
m=+0.228869947 container exec_died 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Dec 6 04:47:00 localhost systemd[1]: 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.service: Deactivated successfully. 
Dec 6 04:47:00 localhost nova_compute[230884]: 2025-12-06 09:47:00.205 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:47:00 localhost python3.9[237726]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 6 04:47:01 localhost python3.9[237838]: ansible-file Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:47:02 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54004 DF PROTO=TCP SPT=33278 DPT=9105 SEQ=1469673539 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52DB3BEF0000000001030307) Dec 6 04:47:02 localhost python3.9[237947]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765014421.528171-1449-63151730621133/source dest=/etc/systemd/system/edpm_ceilometer_agent_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:47:03 localhost python3.9[238002]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Dec 6 04:47:03 localhost systemd[1]: Reloading. 
Dec 6 04:47:03 localhost systemd-rc-local-generator[238023]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 6 04:47:03 localhost systemd-sysv-generator[238029]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 6 04:47:03 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:47:03 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 6 04:47:03 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:47:03 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:47:03 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Dec 6 04:47:03 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 6 04:47:03 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:47:03 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:47:03 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:47:03 localhost python3.9[238092]: ansible-systemd Invoked with state=restarted name=edpm_ceilometer_agent_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 6 04:47:03 localhost systemd[1]: Reloading. Dec 6 04:47:04 localhost nova_compute[230884]: 2025-12-06 09:47:04.080 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:47:04 localhost systemd-sysv-generator[238123]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 6 04:47:04 localhost systemd-rc-local-generator[238120]: /etc/rc.d/rc.local is not marked executable, skipping. 
Dec 6 04:47:04 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:47:04 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 6 04:47:04 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:47:04 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:47:04 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 6 04:47:04 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 6 04:47:04 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:47:04 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:47:04 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:47:04 localhost systemd[1]: Starting ceilometer_agent_compute container... Dec 6 04:47:04 localhost systemd[1]: Started libcrun container. 
Dec 6 04:47:04 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6a3f0d73814bb22777000701e0d23d3cb3ada68f96a18b4fb977a15c11f67d2/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff) Dec 6 04:47:04 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6a3f0d73814bb22777000701e0d23d3cb3ada68f96a18b4fb977a15c11f67d2/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff) Dec 6 04:47:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094. Dec 6 04:47:04 localhost podman[238133]: 2025-12-06 09:47:04.498897845 +0000 UTC m=+0.153745467 container init bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS) Dec 6 04:47:04 localhost systemd[1]: tmp-crun.hlwHui.mount: Deactivated successfully. Dec 6 04:47:04 localhost ceilometer_agent_compute[238148]: + sudo -E kolla_set_configs Dec 6 04:47:04 localhost ceilometer_agent_compute[238148]: sudo: unable to send audit message: Operation not permitted Dec 6 04:47:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094. Dec 6 04:47:04 localhost podman[238133]: 2025-12-06 09:47:04.542744841 +0000 UTC m=+0.197592423 container start bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Dec 6 04:47:04 localhost podman[238133]: ceilometer_agent_compute Dec 6 04:47:04 localhost systemd[1]: Started ceilometer_agent_compute container. Dec 6 04:47:04 localhost ceilometer_agent_compute[238148]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Dec 6 04:47:04 localhost ceilometer_agent_compute[238148]: INFO:__main__:Validating config file Dec 6 04:47:04 localhost ceilometer_agent_compute[238148]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Dec 6 04:47:04 localhost ceilometer_agent_compute[238148]: INFO:__main__:Copying service configuration files Dec 6 04:47:04 localhost ceilometer_agent_compute[238148]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf Dec 6 04:47:04 localhost ceilometer_agent_compute[238148]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf Dec 6 04:47:04 localhost ceilometer_agent_compute[238148]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf Dec 6 04:47:04 localhost ceilometer_agent_compute[238148]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml Dec 6 04:47:04 localhost ceilometer_agent_compute[238148]: INFO:__main__:Copying 
/var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml Dec 6 04:47:04 localhost ceilometer_agent_compute[238148]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml Dec 6 04:47:04 localhost ceilometer_agent_compute[238148]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf Dec 6 04:47:04 localhost ceilometer_agent_compute[238148]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf Dec 6 04:47:04 localhost ceilometer_agent_compute[238148]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf Dec 6 04:47:04 localhost ceilometer_agent_compute[238148]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf Dec 6 04:47:04 localhost ceilometer_agent_compute[238148]: INFO:__main__:Writing out command to execute Dec 6 04:47:04 localhost ceilometer_agent_compute[238148]: ++ cat /run_command Dec 6 04:47:04 localhost ceilometer_agent_compute[238148]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout' Dec 6 04:47:04 localhost ceilometer_agent_compute[238148]: + ARGS= Dec 6 04:47:04 localhost ceilometer_agent_compute[238148]: + sudo kolla_copy_cacerts Dec 6 04:47:04 localhost ceilometer_agent_compute[238148]: sudo: unable to send audit message: Operation not permitted Dec 6 04:47:04 localhost ceilometer_agent_compute[238148]: + [[ ! -n '' ]] Dec 6 04:47:04 localhost ceilometer_agent_compute[238148]: + . 
kolla_extend_start Dec 6 04:47:04 localhost ceilometer_agent_compute[238148]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'\''' Dec 6 04:47:04 localhost ceilometer_agent_compute[238148]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout' Dec 6 04:47:04 localhost ceilometer_agent_compute[238148]: + umask 0022 Dec 6 04:47:04 localhost ceilometer_agent_compute[238148]: + exec /usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout Dec 6 04:47:04 localhost podman[238157]: 2025-12-06 09:47:04.655089959 +0000 UTC m=+0.098893081 container health_status bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=starting, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3) Dec 6 04:47:04 localhost podman[238157]: 2025-12-06 09:47:04.689139793 +0000 UTC m=+0.132942905 container exec_died bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Dec 6 04:47:04 localhost podman[238157]: unhealthy Dec 6 04:47:04 localhost systemd[1]: bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094.service: Main process exited, code=exited, status=1/FAILURE Dec 6 04:47:04 localhost systemd[1]: bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094.service: Failed with result 'exit-code'. Dec 6 04:47:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8779 DF PROTO=TCP SPT=56060 DPT=9102 SEQ=3650887006 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52DB466F0000000001030307) Dec 6 04:47:05 localhost nova_compute[230884]: 2025-12-06 09:47:05.239 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.326 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.326 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.326 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 
2025-12-06 09:47:05.326 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.326 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.326 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.326 2 DEBUG cotyledon.oslo_config_glue [-] batch_size = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.327 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.327 2 DEBUG cotyledon.oslo_config_glue [-] config_dir = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.327 2 DEBUG cotyledon.oslo_config_glue [-] config_file = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.327 2 DEBUG cotyledon.oslo_config_glue [-] config_source = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.327 2 DEBUG cotyledon.oslo_config_glue [-] debug = 
True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.327 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.327 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.327 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.328 2 DEBUG cotyledon.oslo_config_glue [-] host = np0005548789.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.328 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.328 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 
localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.328 2 DEBUG cotyledon.oslo_config_glue [-] instance_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.328 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.328 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.328 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.328 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.328 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.329 2 DEBUG cotyledon.oslo_config_glue [-] log_dir = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.329 2 DEBUG cotyledon.oslo_config_glue [-] log_file = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.329 2 DEBUG cotyledon.oslo_config_glue [-] log_options = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 
localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.329 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.329 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.329 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.329 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.329 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.330 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.330 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.330 2 DEBUG cotyledon.oslo_config_glue [-] 
logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.330 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.330 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.330 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.330 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.330 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.330 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces = ['compute'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.331 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.331 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 
Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.331 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.331 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.331 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.331 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.331 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.331 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.331 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.332 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.332 2 DEBUG cotyledon.oslo_config_glue [-] sample_source = openstack log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.332 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.332 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.332 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.332 2 DEBUG cotyledon.oslo_config_glue [-] use_journal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.332 2 DEBUG cotyledon.oslo_config_glue [-] use_json = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.332 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.333 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.333 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.333 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.333 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.333 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.333 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.333 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.333 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.333 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.334 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.334 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.334 2 DEBUG 
cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.334 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.334 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.334 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.334 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.334 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.335 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.335 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.335 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.335 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.335 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.335 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.335 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.335 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.335 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.336 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.336 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.336 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.336 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.336 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.336 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.336 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.336 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.336 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.337 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.337 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.337 2 DEBUG 
cotyledon.oslo_config_glue [-] monasca.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.337 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.337 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.337 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.337 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.337 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.338 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.338 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.338 2 DEBUG 
cotyledon.oslo_config_glue [-] notification.workers = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.338 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.338 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.338 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.338 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.338 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.338 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.339 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.339 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.339 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.339 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.339 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.339 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.339 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.339 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.340 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.340 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.340 2 DEBUG 
cotyledon.oslo_config_glue [-] service_types.radosgw = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.340 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.340 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.340 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.340 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.340 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.340 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.341 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.341 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.341 2 DEBUG 
cotyledon.oslo_config_glue [-] vmware.task_poll_interval = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.341 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.341 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.341 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.341 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.341 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.342 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.342 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.342 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 
04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.342 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.342 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.342 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.342 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.342 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.342 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.343 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.343 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.343 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.343 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.343 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.343 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.343 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.343 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.343 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.344 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.344 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.344 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.344 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.344 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.344 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.344 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.344 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.344 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.345 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.345 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.345 2 DEBUG cotyledon.oslo_config_glue [-] 
******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.363 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']]. Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.364 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d]. Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.365 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']]. Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.463 12 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.9/site-packages/ceilometer/compute/virt/libvirt/utils.py:93 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.525 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.526 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.526 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.526 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.526 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.526 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.526 12 DEBUG cotyledon.oslo_config_glue [-] batch_size = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.526 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.526 12 DEBUG cotyledon.oslo_config_glue [-] config_dir = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.526 12 DEBUG cotyledon.oslo_config_glue [-] config_file = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.526 12 DEBUG cotyledon.oslo_config_glue [-] config_source = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.526 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 
2025-12-06 09:47:05.527 12 DEBUG cotyledon.oslo_config_glue [-] debug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.527 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.527 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.527 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.527 12 DEBUG cotyledon.oslo_config_glue [-] host = np0005548789.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.527 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.527 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector = libvirt 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.527 12 DEBUG cotyledon.oslo_config_glue [-] instance_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.527 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.527 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.528 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.528 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.528 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.528 12 DEBUG cotyledon.oslo_config_glue [-] log_dir = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.528 12 DEBUG cotyledon.oslo_config_glue [-] log_file = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.528 12 DEBUG cotyledon.oslo_config_glue [-] log_options = 
True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.528 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.528 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.528 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.528 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.528 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.528 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.528 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost 
ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.529 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.529 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.529 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.529 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.529 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.529 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.529 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces = ['compute'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.529 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.529 12 DEBUG 
cotyledon.oslo_config_glue [-] publish_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.529 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.529 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.529 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.529 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.530 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.530 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.530 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.530 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 
2025-12-06 09:47:05.530 12 DEBUG cotyledon.oslo_config_glue [-] sample_source = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.530 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.530 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.530 12 DEBUG cotyledon.oslo_config_glue [-] transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.530 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.530 12 DEBUG cotyledon.oslo_config_glue [-] use_journal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.530 12 DEBUG cotyledon.oslo_config_glue [-] use_json = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.531 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.531 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.531 12 DEBUG 
cotyledon.oslo_config_glue [-] watch_log_file = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.531 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.531 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.531 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.531 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.531 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.531 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.531 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.531 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 
04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.531 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.532 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.532 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.532 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.532 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.532 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.532 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.532 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.532 12 DEBUG 
cotyledon.oslo_config_glue [-] monasca.batch_mode = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.532 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.532 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.532 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.532 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.533 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.533 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.533 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.533 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 
09:47:05.533 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.533 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.533 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.533 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.533 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.533 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.533 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.533 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.534 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 
localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.534 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.534 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.534 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.534 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.534 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.534 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.534 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.534 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost 
ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.534 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.534 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.535 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.535 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.535 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.535 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.535 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.535 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.535 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = 
event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.535 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.535 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.535 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.535 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.536 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.536 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.536 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.536 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost 
ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.536 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.536 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.536 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.536 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.536 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.536 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.536 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.536 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.537 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost 
ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.537 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.537 12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.537 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.537 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.537 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.537 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_url = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.537 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.537 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.537 12 DEBUG cotyledon.oslo_config_glue [-] 
service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.537 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.537 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.537 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.538 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.538 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.538 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.538 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.538 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 
6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.538 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.538 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.538 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.538 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_name = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.538 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.538 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.538 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.system_scope = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.539 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.539 12 DEBUG 
cotyledon.oslo_config_glue [-] service_credentials.trust_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.539 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.539 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.539 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.539 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.username = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.539 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.539 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.539 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.539 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 
localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.539 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.540 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.540 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.540 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.540 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.540 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.540 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.540 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.540 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 
04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.540 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.540 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.540 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.540 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.541 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.541 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.541 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.541 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.541 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost 
ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.541 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.541 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.541 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.541 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.541 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.541 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.541 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.541 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.542 12 
DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.542 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.542 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.542 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.542 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.542 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.542 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.542 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.542 12 DEBUG 
cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.542 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.542 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.542 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.543 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.543 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.543 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.543 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.543 12 DEBUG 
cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.543 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.543 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.543 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.543 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.543 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.543 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.543 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_cert_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.544 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.544 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.544 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.544 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.544 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever /usr/lib/python3.9/site-packages/cotyledon/_service.py:241 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.547 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['power.state', 'cpu', 'memory.usage', 'disk.*', 'network.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.555 12 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.9/site-packages/ceilometer/compute/virt/libvirt/utils.py:93 Dec 6 04:47:05 localhost python3.9[238288]: ansible-ansible.builtin.systemd Invoked with name=edpm_ceilometer_agent_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Dec 6 04:47:05 localhost systemd[1]: Stopping ceilometer_agent_compute container... 
Dec 6 04:47:05 localhost systemd[1]: tmp-crun.K7e0ij.mount: Deactivated successfully. Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.831 2 INFO cotyledon._service_manager [-] Caught SIGTERM signal, graceful exiting of master process Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.933 2 DEBUG cotyledon._service_manager [-] Killing services with signal SIGTERM _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:304 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.933 2 DEBUG cotyledon._service_manager [-] Waiting services to terminate _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:308 Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.933 12 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentManager(0) [12] Dec 6 04:47:05 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:05.956 12 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET http://nova-internal.openstack.svc:8774/v2.1/flavors?is_public=None -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}ad4f29dde4290bcb083efcb5841fb43151872879a45528968948af6eeaac0c77" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.9/site-packages/keystoneauth1/session.py:519 Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.115 12 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 327 Content-Type: application/json Date: Sat, 06 Dec 2025 09:47:05 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-87b31b8a-9cf1-41ac-800e-43788298cefa x-openstack-request-id: req-87b31b8a-9cf1-41ac-800e-43788298cefa _http_log_response 
/usr/lib/python3.9/site-packages/keystoneauth1/session.py:550 Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.116 12 DEBUG novaclient.v2.client [-] RESP BODY: {"flavors": [{"id": "3b9dcd46-fa1b-4714-ba2b-665da2f67af6", "name": "m1.small", "links": [{"rel": "self", "href": "http://nova-internal.openstack.svc:8774/v2.1/flavors/3b9dcd46-fa1b-4714-ba2b-665da2f67af6"}, {"rel": "bookmark", "href": "http://nova-internal.openstack.svc:8774/flavors/3b9dcd46-fa1b-4714-ba2b-665da2f67af6"}]}]} _http_log_response /usr/lib/python3.9/site-packages/keystoneauth1/session.py:582 Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.116 12 DEBUG novaclient.v2.client [-] GET call to compute for http://nova-internal.openstack.svc:8774/v2.1/flavors?is_public=None used request id req-87b31b8a-9cf1-41ac-800e-43788298cefa request /usr/lib/python3.9/site-packages/keystoneauth1/session.py:954 Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.118 12 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET http://nova-internal.openstack.svc:8774/v2.1/flavors/3b9dcd46-fa1b-4714-ba2b-665da2f67af6 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}ad4f29dde4290bcb083efcb5841fb43151872879a45528968948af6eeaac0c77" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.9/site-packages/keystoneauth1/session.py:519 Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.177 12 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 494 Content-Type: application/json Date: Sat, 06 Dec 2025 09:47:06 GMT Keep-Alive: timeout=5, max=99 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-d31e78df-e387-4ee7-940a-4db0ce94f506 x-openstack-request-id: req-d31e78df-e387-4ee7-940a-4db0ce94f506 _http_log_response 
/usr/lib/python3.9/site-packages/keystoneauth1/session.py:550 Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.178 12 DEBUG novaclient.v2.client [-] RESP BODY: {"flavor": {"id": "3b9dcd46-fa1b-4714-ba2b-665da2f67af6", "name": "m1.small", "ram": 512, "disk": 1, "swap": "", "OS-FLV-EXT-DATA:ephemeral": 1, "OS-FLV-DISABLED:disabled": false, "vcpus": 1, "os-flavor-access:is_public": true, "rxtx_factor": 1.0, "links": [{"rel": "self", "href": "http://nova-internal.openstack.svc:8774/v2.1/flavors/3b9dcd46-fa1b-4714-ba2b-665da2f67af6"}, {"rel": "bookmark", "href": "http://nova-internal.openstack.svc:8774/flavors/3b9dcd46-fa1b-4714-ba2b-665da2f67af6"}]}} _http_log_response /usr/lib/python3.9/site-packages/keystoneauth1/session.py:582 Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.178 12 DEBUG novaclient.v2.client [-] GET call to compute for http://nova-internal.openstack.svc:8774/v2.1/flavors/3b9dcd46-fa1b-4714-ba2b-665da2f67af6 used request id req-d31e78df-e387-4ee7-940a-4db0ce94f506 request /usr/lib/python3.9/site-packages/keystoneauth1/session.py:954 Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.179 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'name': 'test', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'np0005548789.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'hostId': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'status': 'active', 'metadata': {}} discover_libvirt_polling 
/usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.180 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.206 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/cpu volume: 49840000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.214 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '43574c69-5275-4834-a13b-d54e2a36a442', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 49840000000, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'timestamp': '2025-12-06T09:47:06.180341', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': '867ad82e-d288-11f0-9b72-fa163e118844', 'monotonic_time': 10844.454932007, 'message_signature': 'd9d981c30097d3a296aafbc37ca5f13d258d0769e7c742d30aaf3112282760d7'}]}, 'timestamp': '2025-12-06 09:47:06.207307', '_unique_id': '87d2e190b0584fb6aec0c587b7bce02a'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.214 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.214 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.214 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.214 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.214 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.214 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.214 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.214 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.214 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.214 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.214 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.214 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.214 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.214 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.214 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.214 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.214 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.214 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.214 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.214 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.214 12 ERROR oslo_messaging.notify.messaging
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.214 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.214 12 ERROR oslo_messaging.notify.messaging
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.214 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.214 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.214 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.214 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.214 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.214 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.214 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.214 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.214 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.214 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.214 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.214 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.214 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.214 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.214 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.214 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.214 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.214 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.214 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.214 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.214 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.214 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.214 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.214 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.214 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.214 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.214 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.214 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.214 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.214 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.214 12 ERROR oslo_messaging.notify.messaging
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.219 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.222 12 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for b7ed0a2e-9350-4933-9334-4e5e08d3e6aa / tap86fc0b7a-fb inspect_vnics /usr/lib/python3.9/site-packages/ceilometer/compute/virt/libvirt/inspector.py:136
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.223 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.225 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'a23cb439-5802-42d6-b5b7-084ba2a1bbb3', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T09:47:06.219607', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '867d6896-d288-11f0-9b72-fa163e118844', 'monotonic_time': 10844.469142909, 'message_signature': '94654625957fe1fcd98c6e6aa08f8e57d1c7b15ff95d6f0e14000aaff15d4517'}]}, 'timestamp': '2025-12-06 09:47:06.224080', '_unique_id': '597eb9dd3d7a481e8d58890ba6a382df'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.225 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.225 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.225 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.225 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.225 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.225 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.225 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.225 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.225 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.225 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.225 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.225 12 ERROR oslo_messaging.notify.messaging
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.225 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.225 12 ERROR oslo_messaging.notify.messaging
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.225 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.225 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.225 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.225 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.225 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.225 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.225 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.225 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.225 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.225 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.225 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.225 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.225 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.225 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.225 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.225 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.225 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.225 12 ERROR oslo_messaging.notify.messaging
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.226 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.226 12 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:163
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.227 12 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: []
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.227 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.227 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.229 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '4faa2e96-ea9c-429a-b657-870df8b2f0ed', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T09:47:06.227875', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '867e1c0a-d288-11f0-9b72-fa163e118844', 'monotonic_time': 10844.469142909, 'message_signature': '6e392f3921340eede9f263145819c6f8eb2a6f19ea37015ca57659ed515e01e7'}]}, 'timestamp': '2025-12-06 09:47:06.228488', '_unique_id': '87770a81ec4f465ba200faa9003b0d24'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.229 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.229 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.229 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.229 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.229 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.229 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.229 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.229 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.229 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.229 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.229 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.229 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.229 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.229 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.229 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.229 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.229 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.229 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.229 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.229 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.229 12 ERROR oslo_messaging.notify.messaging
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.229 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.229 12 ERROR oslo_messaging.notify.messaging
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.229 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.229 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.229 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.229 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.229 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.229 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.229 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.229 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.229 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.229 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.229 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.229 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.229 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.229 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.229 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.229 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.229 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.229 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.229 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.229 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.229 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.229 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.229 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.229 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.229 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.229 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.229 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.229 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.229 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.229 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.229 12 ERROR oslo_messaging.notify.messaging
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]:
2025-12-06 09:47:06.230 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.231 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/memory.usage volume: 52.37890625 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.232 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '01036f4d-4491-4314-870a-d9bbbd35ab46', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 52.37890625, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'timestamp': '2025-12-06T09:47:06.231113', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': '867e9810-d288-11f0-9b72-fa163e118844', 'monotonic_time': 10844.454932007, 'message_signature': 
'1ca92c650d008040d5c3a704694bda2fde08d36b5aede03b0f5ae15f12938e9c'}]}, 'timestamp': '2025-12-06 09:47:06.231682', '_unique_id': 'cb4a712f36764e7f89d6d29b9b65d25a'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.232 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.232 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.232 12 ERROR oslo_messaging.notify.messaging yield Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.232 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.232 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.232 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.232 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.232 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.232 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.232 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.232 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.232 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.232 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.232 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.232 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.232 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.232 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.232 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.232 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.232 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.232 12 ERROR 
oslo_messaging.notify.messaging Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.232 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.232 12 ERROR oslo_messaging.notify.messaging Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.232 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.232 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.232 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.232 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.232 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.232 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.232 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.232 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 
2025-12-06 09:47:06.232 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.232 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.232 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.232 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.232 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.232 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.232 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.232 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.232 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.232 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 04:47:06 localhost 
ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.232 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.232 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.232 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.232 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.232 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.232 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.232 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.232 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.232 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.232 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.232 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) 
from exc Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.232 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.232 12 ERROR oslo_messaging.notify.messaging Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.234 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.275 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.bytes volume: 29130240 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.276 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.bytes volume: 4300800 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.278 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'ca17492f-4922-4e05-afe8-0f928d214863', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 29130240, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T09:47:06.234496', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '86856da2-d288-11f0-9b72-fa163e118844', 'monotonic_time': 10844.484004242, 'message_signature': 'fb6dd60d85eafabccbab3a66f3283928d6e2fd83a47cc82d195a6e772a743b15'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 4300800, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T09:47:06.234496', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '8685876a-d288-11f0-9b72-fa163e118844', 'monotonic_time': 10844.484004242, 'message_signature': '5a9fff3b3cd3305826402cdabed95c2161bbe227dcef7e249a128e0d681ac659'}]}, 'timestamp': '2025-12-06 09:47:06.277183', '_unique_id': 'c7a4ea250d454aa3bfcc142e9b121718'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.278 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.278 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.278 12 ERROR oslo_messaging.notify.messaging yield Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.278 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.278 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.278 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.278 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.278 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.278 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.278 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.278 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.278 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.278 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.278 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.278 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.278 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 04:47:06 localhost 
ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.278 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.278 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.278 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.278 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.278 12 ERROR oslo_messaging.notify.messaging Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.278 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.278 12 ERROR oslo_messaging.notify.messaging Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.278 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.278 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.278 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.278 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.278 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.278 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.278 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.278 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.278 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.278 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.278 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.278 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.278 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.278 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 
2025-12-06 09:47:06.278 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.278 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.278 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.278 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.278 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.278 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.278 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.278 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.278 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.278 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.278 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.278 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.278 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.278 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.278 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.278 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.278 12 ERROR oslo_messaging.notify.messaging Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.280 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.280 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.282 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '42317a8e-ce9d-4c81-8a73-8215765f7a43', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T09:47:06.280475', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '86861f9a-d288-11f0-9b72-fa163e118844', 'monotonic_time': 10844.469142909, 'message_signature': '9eee8776d8f46eeb944ffe159b15c1079527b8285ee34cd103e3f7af18921b53'}]}, 'timestamp': '2025-12-06 09:47:06.281062', '_unique_id': 'c1fc5c29f4c341afa0e5d6a0c624f119'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.282 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 04:47:06 localhost 
ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.282 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.282 12 ERROR oslo_messaging.notify.messaging yield Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.282 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.282 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.282 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.282 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.282 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.282 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.282 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.282 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.282 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.282 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.282 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.282 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.282 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.282 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.282 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.282 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.282 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.282 12 ERROR oslo_messaging.notify.messaging Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.282 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.282 12 ERROR oslo_messaging.notify.messaging Dec 6 04:47:06 localhost 
ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.282 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.282 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.282 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.282 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.282 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.282 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.282 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.282 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.282 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.282 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 
04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.282 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.282 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.282 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.282 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.282 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.282 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.282 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.282 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.282 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.282 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 04:47:06 localhost 
ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.282 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.282 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.282 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.282 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.282 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.282 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.282 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.282 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.282 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.282 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.282 12 ERROR oslo_messaging.notify.messaging Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 
2025-12-06 09:47:06.283 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.283 12 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:163 Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.283 12 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [] Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.284 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.284 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.bytes volume: 73908224 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.285 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.286 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'fd644106-91d5-43d6-a21f-0087d09d9088', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 73908224, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T09:47:06.284520', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '8686c0da-d288-11f0-9b72-fa163e118844', 'monotonic_time': 10844.484004242, 'message_signature': '94a8812307e190162568288f41a43ea5bb83b38e8040e311b42613450ce51bb5'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T09:47:06.284520', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '8686d3b8-d288-11f0-9b72-fa163e118844', 'monotonic_time': 10844.484004242, 'message_signature': '1b57b05e75eeea79e59279f28dbfef2dadd01499c5abea79394e9f8616752059'}]}, 'timestamp': '2025-12-06 09:47:06.285651', '_unique_id': '1db1ee2412214f91b08d4c2b1ecfc5f6'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.286 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.286 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.286 12 ERROR oslo_messaging.notify.messaging yield Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.286 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.286 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.286 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.286 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.286 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.286 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.286 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.286 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.286 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.286 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.286 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.286 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.286 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 04:47:06 localhost 
ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.286 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.286 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.286 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.286 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.286 12 ERROR oslo_messaging.notify.messaging Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.286 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.286 12 ERROR oslo_messaging.notify.messaging Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.286 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.286 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.286 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.286 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.286 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.286 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.286 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.286 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.286 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.286 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.286 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.286 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.286 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.286 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 
2025-12-06 09:47:06.286 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.286 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.286 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.286 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.286 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.286 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.286 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.286 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.286 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.286 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.286 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.286 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.286 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.286 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.286 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.286 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.286 12 ERROR oslo_messaging.notify.messaging Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.288 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.302 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.302 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.304 12 ERROR oslo_messaging.notify.messaging [-] Could not 
send notification to notifications. Payload={'message_id': '0bb1aa47-2716-4408-b5b4-0605aca2ca9c', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T09:47:06.288792', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '868973d4-d288-11f0-9b72-fa163e118844', 'monotonic_time': 10844.538337013, 'message_signature': 'b22e2ded19e5ba3c14c2c1c9801b3e2f074b63ed4539d4f3655ff3e330c650f0'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T09:47:06.288792', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '86898aae-d288-11f0-9b72-fa163e118844', 'monotonic_time': 10844.538337013, 'message_signature': 'a50bc2378cf77b1d5a858772eaa319b77c467c6355b1b57db1f571e7020284f5'}]}, 'timestamp': '2025-12-06 09:47:06.303390', '_unique_id': '1a184208b3ff41deb36ea0eeb73be0f5'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.304 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.304 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.304 12 ERROR oslo_messaging.notify.messaging yield Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.304 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.304 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.304 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.304 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.304 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.304 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.304 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.304 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.304 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.304 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.304 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.304 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.304 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 04:47:06 localhost 
ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.304 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.304 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.304 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.304 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.304 12 ERROR oslo_messaging.notify.messaging Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.304 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.304 12 ERROR oslo_messaging.notify.messaging Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.304 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.304 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.304 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.304 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.304 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.304 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.304 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.304 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.304 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.304 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.304 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.304 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.304 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.304 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 
2025-12-06 09:47:06.304 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.304 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.304 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.304 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.304 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.304 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.304 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.304 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.304 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.304 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.304 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.304 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.304 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.304 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.304 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.304 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.304 12 ERROR oslo_messaging.notify.messaging Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.305 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.306 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.latency volume: 1043514478 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.306 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.latency volume: 200503964 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.308 12 ERROR oslo_messaging.notify.messaging [-] 
Could not send notification to notifications. Payload={'message_id': '70490da6-62dc-4c90-ba1f-e394f95114b4', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1043514478, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T09:47:06.306063', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '868a070e-d288-11f0-9b72-fa163e118844', 'monotonic_time': 10844.484004242, 'message_signature': '26160b809231fae25f766a4535df79ce6db2ac6434fff7bb57f9b012b97c1aa5'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 200503964, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T09:47:06.306063', 'resource_metadata': {'display_name': 'test', 'name': 
'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '868a1aaa-d288-11f0-9b72-fa163e118844', 'monotonic_time': 10844.484004242, 'message_signature': '3d7aa37ac7098c5d216b9f49ecfb90619d5778016be2d91df1701566ee8ec54e'}]}, 'timestamp': '2025-12-06 09:47:06.307072', '_unique_id': 'e55fb5b7ffb042e895ff8f0fbb89a513'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.308 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.308 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.308 12 ERROR oslo_messaging.notify.messaging yield Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.308 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.308 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.308 12 
ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.308 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.308 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.308 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.308 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.308 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.308 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.308 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.308 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.308 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.308 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 
04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.308 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.308 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.308 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.308 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.308 12 ERROR oslo_messaging.notify.messaging Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.308 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.308 12 ERROR oslo_messaging.notify.messaging Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.308 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.308 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.308 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.308 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 
2025-12-06 09:47:06.308 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.308 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.308 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.308 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.308 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.308 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.308 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.308 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.308 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.308 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 04:47:06 localhost 
ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.308 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.308 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.308 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.308 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.308 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.308 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.308 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.308 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.308 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.308 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.308 12 
ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.308 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.308 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.308 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.308 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.308 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.308 12 ERROR oslo_messaging.notify.messaging Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.309 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.309 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.requests volume: 524 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.310 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.311 12 ERROR 
oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '7ad7342b-8fcc-4a1f-8dee-a594c88169cf', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 524, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T09:47:06.309662', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '868a94c6-d288-11f0-9b72-fa163e118844', 'monotonic_time': 10844.484004242, 'message_signature': '01f353cbc80918f180f636f67b5f23eda1aebc70474d897a72817f798fe3094b'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T09:47:06.309662', 'resource_metadata': 
{'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '868aa77c-d288-11f0-9b72-fa163e118844', 'monotonic_time': 10844.484004242, 'message_signature': '525661498fe6a439eb97103efbf9700f078b28b7971bc6880267902adf2a994d'}]}, 'timestamp': '2025-12-06 09:47:06.310668', '_unique_id': '3bab024789734f80acec495d2608953b'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.311 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.311 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.311 12 ERROR oslo_messaging.notify.messaging yield Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.311 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.311 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 04:47:06 localhost 
ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.311 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.311 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.311 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.311 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.311 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.311 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.311 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.311 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.311 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.311 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.311 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.311 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.311 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.311 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.311 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.311 12 ERROR oslo_messaging.notify.messaging Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.311 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.311 12 ERROR oslo_messaging.notify.messaging Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.311 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.311 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.311 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.311 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in 
_send_notification Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.311 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.311 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.311 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.311 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.311 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.311 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.311 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.311 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.311 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.311 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.311 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.311 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.311 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.311 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.311 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.311 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.311 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.311 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.311 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.311 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in 
_ensure_connection Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.311 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.311 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.311 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.311 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.311 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.311 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.311 12 ERROR oslo_messaging.notify.messaging Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.313 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.313 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.outgoing.bytes volume: 11272 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.314 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'd1315b5f-41ba-4435-9d38-11636c0d8a9b', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 11272, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T09:47:06.313172', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '868b1cac-d288-11f0-9b72-fa163e118844', 'monotonic_time': 10844.469142909, 'message_signature': 'b0994697493d077cac2a6ae12618ab9719098c5e76184dc67abf12344a88cbb4'}]}, 'timestamp': '2025-12-06 09:47:06.313731', '_unique_id': 'b18f60de58d8436baaf93a579bd18c97'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.314 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 
2025-12-06 09:47:06.314 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.314 12 ERROR oslo_messaging.notify.messaging yield Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.314 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.314 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.314 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.314 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.314 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.314 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.314 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.314 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.314 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.314 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.314 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.314 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.314 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.314 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.314 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.314 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.314 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.314 12 ERROR oslo_messaging.notify.messaging Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.314 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.314 12 ERROR oslo_messaging.notify.messaging Dec 6 04:47:06 localhost 
ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.314 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.314 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.314 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.314 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.314 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.314 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.314 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.314 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.314 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.314 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 
04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.314 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.314 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.314 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.314 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.314 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.314 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.314 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.314 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.314 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.314 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 04:47:06 localhost 
ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.314 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.314 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.314 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.314 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.314 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.314 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.314 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.314 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.314 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.314 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.314 12 ERROR oslo_messaging.notify.messaging Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 
2025-12-06 09:47:06.316 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.316 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.outgoing.packets volume: 129 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.317 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '6e5a2b17-a87d-45a2-90b7-d65b09b8ac91', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 129, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T09:47:06.316169', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 
'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '868b9196-d288-11f0-9b72-fa163e118844', 'monotonic_time': 10844.469142909, 'message_signature': '8217b55648c0850ebe34be4cdeb3ec2e4e72eebec652c1fadb861fb6a6d2ceb8'}]}, 'timestamp': '2025-12-06 09:47:06.316692', '_unique_id': 'f2fcccdf00704d968d9849c6324fb2a9'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.317 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.317 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.317 12 ERROR oslo_messaging.notify.messaging yield Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.317 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.317 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.317 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.317 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.317 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.317 12 ERROR oslo_messaging.notify.messaging self._connection 
= self._establish_connection() Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.317 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.317 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.317 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.317 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.317 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.317 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.317 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.317 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.317 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.317 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.317 12 ERROR oslo_messaging.notify.messaging 
ConnectionRefusedError: [Errno 111] Connection refused Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.317 12 ERROR oslo_messaging.notify.messaging Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.317 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.317 12 ERROR oslo_messaging.notify.messaging Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.317 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.317 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.317 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.317 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.317 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.317 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.317 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.317 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.317 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.317 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.317 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.317 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.317 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.317 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.317 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.317 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.317 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.317 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.317 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.317 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.317 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.317 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.317 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.317 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.317 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.317 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.317 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.317 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 04:47:06 localhost 
ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.317 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.317 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.317 12 ERROR oslo_messaging.notify.messaging Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.319 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.319 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.320 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '62ab4f5d-a291-4851-9550-120e4013a0bc', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T09:47:06.319165', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '868c05ea-d288-11f0-9b72-fa163e118844', 'monotonic_time': 10844.469142909, 'message_signature': '065d0a908392f84c2b77b5b818e4886d1ab69d1f432f96fe750d20248cfad22a'}]}, 'timestamp': '2025-12-06 09:47:06.319653', '_unique_id': '4ace43eee3e6418bae23c4ee58952043'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.320 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 
2025-12-06 09:47:06.320 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.320 12 ERROR oslo_messaging.notify.messaging yield Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.320 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.320 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.320 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.320 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.320 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.320 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.320 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.320 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.320 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.320 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.320 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.320 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.320 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.320 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.320 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.320 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.320 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.320 12 ERROR oslo_messaging.notify.messaging
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.320 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.320 12 ERROR oslo_messaging.notify.messaging
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.320 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.320 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.320 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.320 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.320 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.320 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.320 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.320 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.320 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.320 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.320 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.320 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.320 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.320 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.320 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.320 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.320 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.320 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.320 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.320 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.320 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.320 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.320 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.320 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.320 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.320 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.320 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.320 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.320 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.320 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.320 12 ERROR oslo_messaging.notify.messaging
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.321 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.322 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.latency volume: 223261606 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.322 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.latency volume: 30984668 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.324 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '5b8c5327-dffa-4968-9467-de33ab605afc', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 223261606, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T09:47:06.321998', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '868c73fe-d288-11f0-9b72-fa163e118844', 'monotonic_time': 10844.484004242, 'message_signature': '42723c99cffe9fbd5f204a95eafee9f14db9fefd1a3352568be180ce09655019'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 30984668, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T09:47:06.321998', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '868c8ad8-d288-11f0-9b72-fa163e118844', 'monotonic_time': 10844.484004242, 'message_signature': '9e6c45cc6a5c9349877974cb13febdac501abbac3fe8a9de875ff47fce566ef8'}]}, 'timestamp': '2025-12-06 09:47:06.323036', '_unique_id': 'fe29a9faa6004ab7bd284a390c3fea83'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.324 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.324 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.324 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.324 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.324 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.324 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.324 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.324 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.324 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.324 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.324 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.324 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.324 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.324 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.324 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.324 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.324 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.324 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.324 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.324 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.324 12 ERROR oslo_messaging.notify.messaging
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.324 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.324 12 ERROR oslo_messaging.notify.messaging
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.324 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.324 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.324 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.324 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.324 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.324 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.324 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.324 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.324 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.324 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.324 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.324 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.324 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.324 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.324 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.324 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.324 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.324 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.324 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.324 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.324 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.324 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.324 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.324 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.324 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.324 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.324 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.324 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.324 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.324 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.324 12 ERROR oslo_messaging.notify.messaging
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.325 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.325 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.incoming.packets volume: 88 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.326 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '0c79d3a4-8f7f-45c3-8c28-6b01301d92ae', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 88, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T09:47:06.325423', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '868cfbf8-d288-11f0-9b72-fa163e118844', 'monotonic_time': 10844.469142909, 'message_signature': '8cc8eadf83ae49f8fcdd991119e3fdc5abee450eaecd45d2207d552718c17072'}]}, 'timestamp': '2025-12-06 09:47:06.325988', '_unique_id': 'ad2c7b755727463fbb5985b0c8c50ff3'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.326 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.326 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.326 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.326 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.326 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.326 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.326 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.326 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.326 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.326 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.326 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.326 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.326 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.326 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.326 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.326 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.326 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.326 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.326 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.326 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.326 12 ERROR oslo_messaging.notify.messaging
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.326 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.326 12 ERROR oslo_messaging.notify.messaging
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.326 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.326 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.326 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.326 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.326 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.326 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.326 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.326 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.326 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.326 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.326 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.326 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.326 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.326 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.326 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.326 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.326 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.326 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.326 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.326 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.326 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.326 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.326 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.326 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.326 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.326 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.326 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.326 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.326 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.326 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.326 12 ERROR oslo_messaging.notify.messaging
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.328 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.328 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.329 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '0b6f6247-2b01-439b-adc2-46e365c0bf2c', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T09:47:06.328277', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '868d6a16-d288-11f0-9b72-fa163e118844', 'monotonic_time': 10844.469142909, 'message_signature': 'c95bcf864ceb531f80fe370dd8d12633346c2973f2695f8f1809bfe8d46b9ce4'}]}, 'timestamp': '2025-12-06 09:47:06.328814', '_unique_id': '9d362290e72b4e4991355934fe4e7ee9'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.329 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.329 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.329 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.329 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.329 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.329 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.329 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.329 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.329 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.329 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.329 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.329 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.329 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.329 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.329 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.329 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.329 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.329 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.329 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.329 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.329 12 ERROR oslo_messaging.notify.messaging
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.329 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.329 12 ERROR oslo_messaging.notify.messaging
Dec 6 04:47:06 localhost
ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.329 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.329 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.329 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.329 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.329 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.329 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.329 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.329 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.329 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.329 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 
04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.329 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.329 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.329 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.329 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.329 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.329 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.329 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.329 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.329 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.329 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 04:47:06 localhost 
ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.329 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.329 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.329 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.329 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.329 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.329 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.329 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.329 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.329 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.329 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.329 12 ERROR oslo_messaging.notify.messaging Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 
2025-12-06 09:47:06.330 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.iops in the context of pollsters Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.331 12 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for PerDeviceDiskIOPSPollster get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:163 Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.331 12 ERROR ceilometer.polling.manager [-] Prevent pollster disk.device.iops from polling [] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [] Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.331 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.331 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.332 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.allocation volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.333 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '4674df87-cc83-4ad7-a774-f5838974aef2', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T09:47:06.331634', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '868dede2-d288-11f0-9b72-fa163e118844', 'monotonic_time': 10844.538337013, 'message_signature': 'e1600d33a79214c9c69f5fe5f8fd32cff7dcdcc98c60c450c8fac956bf7e1627'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T09:47:06.331634', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 
'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '868e007a-d288-11f0-9b72-fa163e118844', 'monotonic_time': 10844.538337013, 'message_signature': '9125e3213bdba1a57e4a81da0c2f32421c7dd7fdb75238b6ece4d77ecb33a4c0'}]}, 'timestamp': '2025-12-06 09:47:06.332590', '_unique_id': 'a63e710eaa3f4febafe1acecffc6781f'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.333 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.333 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.333 12 ERROR oslo_messaging.notify.messaging yield Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.333 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.333 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.333 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.333 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.333 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.333 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.333 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.333 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.333 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.333 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.333 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.333 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.333 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 04:47:06 localhost 
ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.333 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.333 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.333 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.333 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.333 12 ERROR oslo_messaging.notify.messaging Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.333 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.333 12 ERROR oslo_messaging.notify.messaging Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.333 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.333 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.333 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.333 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.333 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.333 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.333 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.333 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.333 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.333 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.333 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.333 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.333 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.333 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 
2025-12-06 09:47:06.333 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.333 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.333 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.333 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.333 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.333 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.333 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.333 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.333 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.333 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.333 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.333 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.333 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.333 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.333 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.333 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.333 12 ERROR oslo_messaging.notify.messaging Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.334 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.334 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.335 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.336 12 ERROR oslo_messaging.notify.messaging [-] Could not send 
notification to notifications. Payload={'message_id': 'e9b432f8-0af9-426c-9cbd-07177162b7b1', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T09:47:06.334874', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '868e6b0a-d288-11f0-9b72-fa163e118844', 'monotonic_time': 10844.538337013, 'message_signature': '15a035843f7d818baf9bed7b0a1deca5b4d50d4a3ec82653d5eb9ffa71a27b81'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T09:47:06.334874', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '868e7da2-d288-11f0-9b72-fa163e118844', 'monotonic_time': 10844.538337013, 'message_signature': '1885bb9644f4421914489c662d32e2c4256853fc2442f95b7d56f0aa85cc8771'}]}, 'timestamp': '2025-12-06 09:47:06.335832', '_unique_id': '3b07c52e5f1f42a299c9f7e921f6900a'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.336 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.336 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.336 12 ERROR oslo_messaging.notify.messaging yield Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.336 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.336 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.336 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.336 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.336 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.336 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.336 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.336 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.336 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.336 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.336 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.336 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.336 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.336 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.336 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.336 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.336 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.336 12 ERROR oslo_messaging.notify.messaging
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.336 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.336 12 ERROR oslo_messaging.notify.messaging
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.336 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.336 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.336 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.336 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.336 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.336 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.336 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.336 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.336 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.336 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.336 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.336 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.336 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.336 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.336 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.336 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.336 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.336 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.336 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.336 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.336 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.336 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.336 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.336 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.336 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.336 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.336 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.336 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.336 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.336 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.336 12 ERROR oslo_messaging.notify.messaging
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.338 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.338 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.339 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '929ab98c-9424-4e7c-99c9-b94faba653fe', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T09:47:06.338146', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '868eed00-d288-11f0-9b72-fa163e118844', 'monotonic_time': 10844.469142909, 'message_signature': '7f3af896d316f2a4f88b986f0a5297b14449b1626ba4ce222af200558806395d'}]}, 'timestamp': '2025-12-06 09:47:06.338692', '_unique_id': '129f1bac19854712b0b80f5d2c4fbd36'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.339 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.339 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.339 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.339 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.339 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.339 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.339 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.339 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.339 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.339 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.339 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.339 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.339 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.339 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.339 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.339 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.339 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.339 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.339 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.339 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.339 12 ERROR oslo_messaging.notify.messaging
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.339 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.339 12 ERROR oslo_messaging.notify.messaging
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.339 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.339 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.339 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.339 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.339 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.339 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.339 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.339 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.339 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.339 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.339 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.339 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.339 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.339 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.339 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.339 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.339 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.339 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.339 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.339 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.339 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.339 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.339 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.339 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.339 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.339 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.339 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.339 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.339 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.339 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.339 12 ERROR oslo_messaging.notify.messaging
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.340 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.341 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.incoming.bytes volume: 9015 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.342 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '7417c945-1e2d-4d31-83be-f4d99fc99aef', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 9015, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T09:47:06.340981', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '868f59b6-d288-11f0-9b72-fa163e118844', 'monotonic_time': 10844.469142909, 'message_signature': '6db924158fa550c3cbd26dbf1fc0304aff301e780d9765d3be66ab7e824a8c70'}]}, 'timestamp': '2025-12-06 09:47:06.341455', '_unique_id': '05555a75d21d4e9b96c1f8ba54d19cfc'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.342 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.342 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.342 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.342 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.342 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.342 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.342 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.342 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.342 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.342 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.342 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.342 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.342 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.342 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.342 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.342 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.342 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.342 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.342 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.342 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.342 12 ERROR oslo_messaging.notify.messaging
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.342 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.342 12 ERROR oslo_messaging.notify.messaging
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.342 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.342 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.342 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.342 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.342 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.342 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.342 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.342 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.342 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.342 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.342 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.342 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.342 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.342 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.342 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.342 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.342 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.342 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.342 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.342 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.342 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.342 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.342 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.342 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.342 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.342 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.342 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.342 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.342 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.342 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.342 12 ERROR oslo_messaging.notify.messaging
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.343 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.latency in the context of pollsters
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.343 12 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for PerDeviceDiskLatencyPollster get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:163
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.343 12 ERROR ceilometer.polling.manager [-] Prevent pollster disk.device.latency from polling [] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: []
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.344 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.344 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.requests volume: 1064 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.344 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.requests volume: 222 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.345 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'be8fa5ef-171c-45fc-ba11-302cdf301ef7', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1064, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T09:47:06.344427', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '868fdf3a-d288-11f0-9b72-fa163e118844', 'monotonic_time': 10844.484004242, 'message_signature': '96cc563d113516c948999c1ba965702309788787fe661d6c4f421c41ed8d3a82'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 222, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T09:47:06.344427', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '868fea98-d288-11f0-9b72-fa163e118844', 'monotonic_time': 10844.484004242, 'message_signature': '5022b0bf31103c112fcb22cd96c6a827d578163251fa29699d214dbcbfa2ab44'}]}, 'timestamp': '2025-12-06 09:47:06.345055', '_unique_id': '2bb165262c5c46fcb08e6dda14372c4a'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.345 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.345 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.345 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.345 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.345 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.345 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.345 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.345 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.345 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.345 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.345 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.345 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.345 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.345 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.345 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.345 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.345 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.345 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.345 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 
09:47:06.345 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.345 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.345 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.345 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.345 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.345 12 ERROR oslo_messaging.notify.messaging Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.345 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.345 12 ERROR oslo_messaging.notify.messaging Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.345 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.345 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.345 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.345 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.345 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.345 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.345 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.345 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.345 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.345 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.345 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.345 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.345 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.345 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.345 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.345 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.345 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.345 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.345 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.345 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.345 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.345 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.345 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.345 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.345 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.345 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.345 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.345 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.345 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.345 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.345 12 ERROR oslo_messaging.notify.messaging Dec 6 04:47:06 localhost journal[203911]: End of file while reading data: Input/output error Dec 6 04:47:06 localhost journal[203911]: End of file while reading data: Input/output error Dec 6 04:47:06 localhost ceilometer_agent_compute[238148]: 2025-12-06 09:47:06.354 2 DEBUG cotyledon._service_manager [-] Shutdown finish _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:320 Dec 6 04:47:06 localhost systemd[1]: libpod-bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094.scope: Deactivated successfully. 
Dec 6 04:47:06 localhost podman[238298]: 2025-12-06 09:47:06.49220996 +0000 UTC m=+0.736935938 container died bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Dec 6 04:47:06 localhost systemd[1]: libpod-bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094.scope: Consumed 1.301s CPU time. 
Dec 6 04:47:06 localhost systemd[1]: bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094.timer: Deactivated successfully. Dec 6 04:47:06 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094. Dec 6 04:47:06 localhost systemd[1]: tmp-crun.wRwsI2.mount: Deactivated successfully. Dec 6 04:47:06 localhost systemd[1]: var-lib-containers-storage-overlay-c6a3f0d73814bb22777000701e0d23d3cb3ada68f96a18b4fb977a15c11f67d2-merged.mount: Deactivated successfully. Dec 6 04:47:06 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094-userdata-shm.mount: Deactivated successfully. Dec 6 04:47:06 localhost podman[238298]: 2025-12-06 09:47:06.555989321 +0000 UTC m=+0.800715259 container cleanup bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 04:47:06 localhost podman[238298]: ceilometer_agent_compute Dec 6 04:47:06 localhost podman[238325]: 2025-12-06 09:47:06.652630129 +0000 UTC m=+0.064423653 container cleanup bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Dec 6 04:47:06 localhost podman[238325]: ceilometer_agent_compute Dec 6 04:47:06 localhost systemd[1]: edpm_ceilometer_agent_compute.service: Deactivated successfully. Dec 6 04:47:06 localhost systemd[1]: Stopped ceilometer_agent_compute container. Dec 6 04:47:06 localhost systemd[1]: Starting ceilometer_agent_compute container... Dec 6 04:47:06 localhost systemd[1]: Started libcrun container. Dec 6 04:47:06 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6a3f0d73814bb22777000701e0d23d3cb3ada68f96a18b4fb977a15c11f67d2/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff) Dec 6 04:47:06 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6a3f0d73814bb22777000701e0d23d3cb3ada68f96a18b4fb977a15c11f67d2/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff) Dec 6 04:47:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094. 
Dec 6 04:47:06 localhost podman[238337]: 2025-12-06 09:47:06.830543344 +0000 UTC m=+0.144356237 container init bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=edpm) Dec 6 04:47:06 localhost ceilometer_agent_compute[238351]: + sudo -E kolla_set_configs Dec 6 04:47:06 localhost ceilometer_agent_compute[238351]: sudo: unable to send audit 
message: Operation not permitted Dec 6 04:47:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094. Dec 6 04:47:06 localhost podman[238337]: 2025-12-06 09:47:06.869250007 +0000 UTC m=+0.183062910 container start bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Dec 6 04:47:06 localhost podman[238337]: ceilometer_agent_compute Dec 6 04:47:06 localhost systemd[1]: Started ceilometer_agent_compute container. Dec 6 04:47:06 localhost ceilometer_agent_compute[238351]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Dec 6 04:47:06 localhost ceilometer_agent_compute[238351]: INFO:__main__:Validating config file Dec 6 04:47:06 localhost ceilometer_agent_compute[238351]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Dec 6 04:47:06 localhost ceilometer_agent_compute[238351]: INFO:__main__:Copying service configuration files Dec 6 04:47:06 localhost ceilometer_agent_compute[238351]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf Dec 6 04:47:06 localhost ceilometer_agent_compute[238351]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf Dec 6 04:47:06 localhost ceilometer_agent_compute[238351]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf Dec 6 04:47:06 localhost ceilometer_agent_compute[238351]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml Dec 6 04:47:06 localhost ceilometer_agent_compute[238351]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml Dec 6 04:47:06 localhost ceilometer_agent_compute[238351]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml Dec 6 04:47:06 localhost ceilometer_agent_compute[238351]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf Dec 6 04:47:06 localhost ceilometer_agent_compute[238351]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf Dec 6 04:47:06 localhost ceilometer_agent_compute[238351]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf Dec 6 04:47:06 localhost 
ceilometer_agent_compute[238351]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf Dec 6 04:47:06 localhost ceilometer_agent_compute[238351]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf Dec 6 04:47:06 localhost ceilometer_agent_compute[238351]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf Dec 6 04:47:06 localhost ceilometer_agent_compute[238351]: INFO:__main__:Writing out command to execute Dec 6 04:47:06 localhost ceilometer_agent_compute[238351]: ++ cat /run_command Dec 6 04:47:06 localhost ceilometer_agent_compute[238351]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout' Dec 6 04:47:06 localhost ceilometer_agent_compute[238351]: + ARGS= Dec 6 04:47:06 localhost ceilometer_agent_compute[238351]: + sudo kolla_copy_cacerts Dec 6 04:47:06 localhost ceilometer_agent_compute[238351]: sudo: unable to send audit message: Operation not permitted Dec 6 04:47:06 localhost ceilometer_agent_compute[238351]: + [[ ! -n '' ]] Dec 6 04:47:06 localhost ceilometer_agent_compute[238351]: + . 
kolla_extend_start Dec 6 04:47:06 localhost ceilometer_agent_compute[238351]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout' Dec 6 04:47:06 localhost ceilometer_agent_compute[238351]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'\''' Dec 6 04:47:06 localhost ceilometer_agent_compute[238351]: + umask 0022 Dec 6 04:47:06 localhost ceilometer_agent_compute[238351]: + exec /usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout Dec 6 04:47:06 localhost podman[238360]: 2025-12-06 09:47:06.9761374 +0000 UTC m=+0.098118865 container health_status bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=starting, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible) Dec 6 04:47:07 localhost podman[238360]: 2025-12-06 09:47:07.005354441 +0000 UTC m=+0.127335866 container exec_died bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, 
io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute) Dec 6 04:47:07 localhost podman[238360]: unhealthy Dec 6 04:47:07 localhost systemd[1]: bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094.service: Main process exited, code=exited, status=1/FAILURE Dec 6 04:47:07 localhost systemd[1]: bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094.service: Failed with result 'exit-code'. Dec 6 04:47:07 localhost python3.9[238538]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/node_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:47:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58231 DF PROTO=TCP SPT=46606 DPT=9102 SEQ=1202293018 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52DB51EF0000000001030307) Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.728 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.729 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.729 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.729 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.729 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.729 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.729 2 DEBUG cotyledon.oslo_config_glue [-] batch_size = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.729 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.729 2 DEBUG cotyledon.oslo_config_glue [-] config_dir = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.729 2 DEBUG cotyledon.oslo_config_glue [-] config_file = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.729 2 DEBUG cotyledon.oslo_config_glue [-] config_source = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:07 
localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.729 2 DEBUG cotyledon.oslo_config_glue [-] debug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.730 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.730 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.730 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.730 2 DEBUG cotyledon.oslo_config_glue [-] host = np0005548789.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.730 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.730 2 DEBUG cotyledon.oslo_config_glue [-] 
hypervisor_inspector = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.730 2 DEBUG cotyledon.oslo_config_glue [-] instance_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.730 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.730 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.730 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.730 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.730 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.731 2 DEBUG cotyledon.oslo_config_glue [-] log_dir = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.731 2 DEBUG cotyledon.oslo_config_glue [-] log_file = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.731 2 DEBUG 
cotyledon.oslo_config_glue [-] log_options = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.731 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.731 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.731 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.731 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.731 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.731 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.731 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.731 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.731 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.731 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.732 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.732 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.732 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.732 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces = ['compute'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.732 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.732 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.732 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.732 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.732 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.732 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.732 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.732 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.733 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.733 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.733 2 DEBUG cotyledon.oslo_config_glue [-] sample_source = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.733 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.733 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.733 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.733 2 DEBUG cotyledon.oslo_config_glue [-] use_journal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.733 2 DEBUG cotyledon.oslo_config_glue [-] use_json = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.733 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.733 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.733 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Dec 6 04:47:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.733 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.733 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.734 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.734 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.734 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.734 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.734 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.734 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.734 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry = 3 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.734 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.734 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.734 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.734 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.734 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.735 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.735 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.735 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.735 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.735 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.735 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.735 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.735 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.735 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.735 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.735 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.735 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 
localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.735 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.736 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.736 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.736 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.736 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.736 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.736 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.736 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.736 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.736 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.736 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.736 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.736 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.737 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.737 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.737 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.737 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines = ['meter', 'event'] log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.737 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.737 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.737 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.737 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.737 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.737 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.737 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.737 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.738 2 
DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.738 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.738 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.738 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.738 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.738 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.738 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.738 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.738 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 
04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.738 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.738 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.738 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.739 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.739 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.739 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.739 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.739 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.739 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 
localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.739 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.739 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.739 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.739 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.739 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.739 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.739 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.740 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.740 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.740 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.740 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.740 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.740 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.740 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.740 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.740 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.740 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.740 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.740 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.741 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.741 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.741 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.741 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.741 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.741 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.741 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.741 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.741 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.741 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.741 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.741 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.741 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.741 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.742 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.742 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.742 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.758 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.759 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.759 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.767 12 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.9/site-packages/ceilometer/compute/virt/libvirt/utils.py:93
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.865 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.865 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.865 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.865 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.865 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.865 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.866 12 DEBUG cotyledon.oslo_config_glue [-] batch_size = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.866 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.866 12 DEBUG cotyledon.oslo_config_glue [-] config_dir = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.866 12 DEBUG cotyledon.oslo_config_glue [-] config_file = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.866 12 DEBUG cotyledon.oslo_config_glue [-] config_source = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.866 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.866 12 DEBUG cotyledon.oslo_config_glue [-] debug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.867 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.867 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.867 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.867 12 DEBUG cotyledon.oslo_config_glue [-] host = np0005548789.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.867 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.867 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.867 12 DEBUG cotyledon.oslo_config_glue [-] instance_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.867 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.868 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.868 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.868 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.868 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.868 12 DEBUG cotyledon.oslo_config_glue [-] log_dir = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.868 12 DEBUG cotyledon.oslo_config_glue [-] log_file = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.868 12 DEBUG cotyledon.oslo_config_glue [-] log_options = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.868 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.869 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.869 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.869 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.869 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.869 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.869 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.869 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.869 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.870 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.870 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.870 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.870 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.870 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces = ['compute'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.870 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.870 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.870 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.871 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.871 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.871 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.871 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.871 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.871 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.871 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.871 12 DEBUG cotyledon.oslo_config_glue [-] sample_source = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.872 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.872 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.872 12 DEBUG cotyledon.oslo_config_glue [-] transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.872 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.872 12 DEBUG cotyledon.oslo_config_glue [-] use_journal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.872 12 DEBUG cotyledon.oslo_config_glue [-] use_json = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.872 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.872 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.873 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.873 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.873 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.873 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.873 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.873 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.873 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.874 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.874 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.874 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.874 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.874 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.874 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.874 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.874 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.875 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.875 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.875 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.875 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.875 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.875 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.875 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.875 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.876 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.876 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.876 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.876 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.876 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.876 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.876 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.876 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.877 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.877 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.877 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.877 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.877 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.877 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.877 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.877 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.878 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.878 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.878 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.878 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.878 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.878 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.878 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.878 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.879 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.879 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.879 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.879 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.879 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.879 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.879 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.879 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.880 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.880 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.880 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.880 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.880 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.881 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.881 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.881 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.881 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.881 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.881 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.882 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.882 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.882 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 6 04:47:07 localhost
ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.882 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.882 12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.882 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.882 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.882 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.883 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_url = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.883 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.883 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.883 12 DEBUG cotyledon.oslo_config_glue [-] 
service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.883 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.883 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.883 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.883 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.884 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.884 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.884 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.884 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 
6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.884 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.884 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.884 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.884 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_name = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.885 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.885 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.885 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.system_scope = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.885 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.885 12 DEBUG 
cotyledon.oslo_config_glue [-] service_credentials.trust_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.885 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.885 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.885 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.885 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.username = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.886 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.886 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.886 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.886 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 
localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.886 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.886 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.886 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.886 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.887 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.887 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.887 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.887 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.887 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 
04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.887 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.887 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.887 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.888 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.888 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.888 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.888 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.888 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.888 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.888 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.888 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.888 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.889 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.889 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.889 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.889 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.889 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.889 12 
DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.889 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.889 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.890 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.890 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.890 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.890 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.890 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.890 12 DEBUG 
cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.890 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.890 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.891 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.891 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.891 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.891 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.891 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.891 12 DEBUG 
cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.891 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.892 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.892 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.892 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.892 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.892 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.892 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_cert_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.892 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.892 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.893 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.893 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.893 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever /usr/lib/python3.9/site-packages/cotyledon/_service.py:241 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.897 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['power.state', 'cpu', 'memory.usage', 'disk.*', 'network.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64 Dec 6 04:47:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:07.905 12 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.9/site-packages/ceilometer/compute/virt/libvirt/utils.py:93 Dec 6 04:47:08 localhost python3.9[238651]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/node_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765014427.103299-1545-178586005545684/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 
backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.224 12 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET http://nova-internal.openstack.svc:8774/v2.1/flavors?is_public=None -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}172727ab3d6cbaaa6568bf97b3413b148d1c73ec77830424529759a582bd30ea" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.9/site-packages/keystoneauth1/session.py:519 Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.285 12 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 327 Content-Type: application/json Date: Sat, 06 Dec 2025 09:47:08 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-87316794-dae6-4ab1-b170-88590f7f7e0e x-openstack-request-id: req-87316794-dae6-4ab1-b170-88590f7f7e0e _http_log_response /usr/lib/python3.9/site-packages/keystoneauth1/session.py:550 Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.286 12 DEBUG novaclient.v2.client [-] RESP BODY: {"flavors": [{"id": "3b9dcd46-fa1b-4714-ba2b-665da2f67af6", "name": "m1.small", "links": [{"rel": "self", "href": "http://nova-internal.openstack.svc:8774/v2.1/flavors/3b9dcd46-fa1b-4714-ba2b-665da2f67af6"}, {"rel": "bookmark", "href": "http://nova-internal.openstack.svc:8774/flavors/3b9dcd46-fa1b-4714-ba2b-665da2f67af6"}]}]} _http_log_response /usr/lib/python3.9/site-packages/keystoneauth1/session.py:582 Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.286 12 DEBUG novaclient.v2.client [-] GET call to compute for 
http://nova-internal.openstack.svc:8774/v2.1/flavors?is_public=None used request id req-87316794-dae6-4ab1-b170-88590f7f7e0e request /usr/lib/python3.9/site-packages/keystoneauth1/session.py:954 Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.288 12 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET http://nova-internal.openstack.svc:8774/v2.1/flavors/3b9dcd46-fa1b-4714-ba2b-665da2f67af6 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}172727ab3d6cbaaa6568bf97b3413b148d1c73ec77830424529759a582bd30ea" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.9/site-packages/keystoneauth1/session.py:519 Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.312 12 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 494 Content-Type: application/json Date: Sat, 06 Dec 2025 09:47:08 GMT Keep-Alive: timeout=5, max=99 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-82509171-38d3-47a9-be25-867586b759f2 x-openstack-request-id: req-82509171-38d3-47a9-be25-867586b759f2 _http_log_response /usr/lib/python3.9/site-packages/keystoneauth1/session.py:550 Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.312 12 DEBUG novaclient.v2.client [-] RESP BODY: {"flavor": {"id": "3b9dcd46-fa1b-4714-ba2b-665da2f67af6", "name": "m1.small", "ram": 512, "disk": 1, "swap": "", "OS-FLV-EXT-DATA:ephemeral": 1, "OS-FLV-DISABLED:disabled": false, "vcpus": 1, "os-flavor-access:is_public": true, "rxtx_factor": 1.0, "links": [{"rel": "self", "href": "http://nova-internal.openstack.svc:8774/v2.1/flavors/3b9dcd46-fa1b-4714-ba2b-665da2f67af6"}, {"rel": "bookmark", "href": "http://nova-internal.openstack.svc:8774/flavors/3b9dcd46-fa1b-4714-ba2b-665da2f67af6"}]}} _http_log_response 
/usr/lib/python3.9/site-packages/keystoneauth1/session.py:582 Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.312 12 DEBUG novaclient.v2.client [-] GET call to compute for http://nova-internal.openstack.svc:8774/v2.1/flavors/3b9dcd46-fa1b-4714-ba2b-665da2f67af6 used request id req-82509171-38d3-47a9-be25-867586b759f2 request /usr/lib/python3.9/site-packages/keystoneauth1/session.py:954 Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.314 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'name': 'test', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'np0005548789.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'hostId': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228 Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.315 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.338 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/cpu volume: 49850000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.347 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '7ee337f8-05b5-403d-87a7-4d467303c798', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 49850000000, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'timestamp': '2025-12-06T09:47:08.315494', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': '87c02c2a-d288-11f0-aaf2-fa163e118844', 'monotonic_time': 10846.587113354, 'message_signature': '141083752a6e378674e978b44e8a6db2cd113b9e946aa6f7aff6e3df5b0b52f9'}]}, 'timestamp': '2025-12-06 09:47:08.339291', '_unique_id': '043ec992201248f7af02ff2b1b68a493'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.347 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.347 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.347 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.347 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.347 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.347 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.347 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.347 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.347 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.347 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.347 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.347 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.347 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.347 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.347 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.347 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.347 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.347 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.347 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.347 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.347 12 ERROR oslo_messaging.notify.messaging
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.347 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.347 12 ERROR oslo_messaging.notify.messaging
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.347 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.347 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.347 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.347 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.347 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.347 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.347 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.347 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.347 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.347 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.347 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.347 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.347 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.347 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.347 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.347 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.347 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.347 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.347 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.347 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.347 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.347 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.347 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.347 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.347 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.347 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.347 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.347 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.347 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.347 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.347 12 ERROR oslo_messaging.notify.messaging
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.350 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.354 12 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for b7ed0a2e-9350-4933-9334-4e5e08d3e6aa / tap86fc0b7a-fb inspect_vnics /usr/lib/python3.9/site-packages/ceilometer/compute/virt/libvirt/inspector.py:136
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.354 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.356 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'd6184e40-16ad-45dc-a899-1dac164651b3', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T09:47:08.350900', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '87c298de-d288-11f0-aaf2-fa163e118844', 'monotonic_time': 10846.600412848, 'message_signature': '9724f294b3e72005f3fb28933cae8a0e6465eda485670d43920e77fcd5154705'}]}, 'timestamp': '2025-12-06 09:47:08.355097', '_unique_id': 'a690185a67aa41ef9f8230197a675c28'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.356 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.356 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.356 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.356 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.356 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.356 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.356 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.356 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.356 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.356 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.356 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.356 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.356 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.356 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.356 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.356 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.356 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.356 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.356 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.356 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.356 12 ERROR oslo_messaging.notify.messaging
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.356 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.356 12 ERROR oslo_messaging.notify.messaging
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.356 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.356 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.356 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.356 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.356 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.356 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.356 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.356 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.356 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.356 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.356 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.356 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.356 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.356 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.356 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.356 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.356 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.356 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.356 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.356 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.356 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.356 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.356 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.356 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.356 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.356 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.356 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.356 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.356 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.356 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.356 12 ERROR oslo_messaging.notify.messaging
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.357 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.357 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.359 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'fd840138-50ab-471a-8b69-065e3b8d4863', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T09:47:08.357739', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '87c316d8-d288-11f0-aaf2-fa163e118844', 'monotonic_time': 10846.600412848, 'message_signature': '6cae56bf076e2972b3435578c92164e131e6269695daeb69fa58fa69c4de3899'}]}, 'timestamp': '2025-12-06 09:47:08.358259', '_unique_id': '2e547c75670f42498072442690810c68'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.359 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.359 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.359 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.359 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.359 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.359 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.359 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.359 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.359 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.359 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.359 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.359 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.359 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.359 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.359 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.359 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.359 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.359 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.359 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.359 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.359 12 ERROR oslo_messaging.notify.messaging
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.359 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.359 12 ERROR oslo_messaging.notify.messaging
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.359 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.359 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.359 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.359 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.359 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.359 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.359 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.359 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.359 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.359 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.359 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.359 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.359 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.359 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.359 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.359 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.359 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.359 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.359 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.359 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.359 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.359 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.359 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.359 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.359 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.359 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.359 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.359 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.359 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.359 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.359 12 ERROR oslo_messaging.notify.messaging
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.360 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.360 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.362 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '729db4e5-66cb-4301-855c-a095ddce4baf', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T09:47:08.360588', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '87c38690-d288-11f0-aaf2-fa163e118844', 'monotonic_time': 10846.600412848, 'message_signature': '2272b30d772f4d0d04446c172ac027f2d0f482fb0e3b18cbbb8870bc45c2bf0b'}]}, 'timestamp': '2025-12-06 09:47:08.361142', '_unique_id': '5e0c99b8d4a440fa84e66daf0139b44b'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.362 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.362 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.362 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.362 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.362 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.362 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.362 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.362 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.362 12 ERROR oslo_messaging.notify.messaging self._connection
= self._establish_connection() Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.362 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.362 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.362 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.362 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.362 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.362 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.362 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.362 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.362 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.362 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.362 12 ERROR oslo_messaging.notify.messaging 
ConnectionRefusedError: [Errno 111] Connection refused Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.362 12 ERROR oslo_messaging.notify.messaging Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.362 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.362 12 ERROR oslo_messaging.notify.messaging Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.362 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.362 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.362 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.362 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.362 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.362 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.362 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.362 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.362 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.362 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.362 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.362 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.362 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.362 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.362 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.362 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.362 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.362 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.362 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.362 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.362 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.362 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.362 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.362 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.362 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.362 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.362 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.362 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 04:47:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.362 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.362 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.362 12 ERROR oslo_messaging.notify.messaging Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.363 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.363 12 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:163 Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.363 12 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [] Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.364 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.401 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.bytes volume: 29130240 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.402 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.bytes volume: 4300800 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 
04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.404 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '254a3614-647c-4912-8913-76e94bd518ed', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 29130240, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T09:47:08.364318', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '87c9cd02-d288-11f0-aaf2-fa163e118844', 'monotonic_time': 10846.613815705, 'message_signature': 'e1d1311fef3ccce7a2ba31972cab699af31dbfd6a8e5e71d5541f082de981e4d'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 4300800, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 
'timestamp': '2025-12-06T09:47:08.364318', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '87c9df7c-d288-11f0-aaf2-fa163e118844', 'monotonic_time': 10846.613815705, 'message_signature': '0b78166d14b7caa44b0ab039cadd1a9dffe944990061def47852c22ca3b9642c'}]}, 'timestamp': '2025-12-06 09:47:08.402693', '_unique_id': '00eeceb18d8b4529b777b4ddfd08ac93'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.404 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.404 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.404 12 ERROR oslo_messaging.notify.messaging yield Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.404 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.404 12 ERROR oslo_messaging.notify.messaging return 
retry_over_time( Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.404 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.404 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.404 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.404 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.404 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.404 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.404 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.404 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.404 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.404 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.404 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.404 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.404 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.404 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.404 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.404 12 ERROR oslo_messaging.notify.messaging Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.404 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.404 12 ERROR oslo_messaging.notify.messaging Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.404 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.404 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.404 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.404 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.404 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.404 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.404 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.404 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.404 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.404 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.404 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.404 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.404 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.404 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.404 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.404 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.404 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.404 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.404 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.404 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.404 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.404 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.404 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.404 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.404 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.404 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.404 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.404 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.404 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.404 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.404 12 ERROR oslo_messaging.notify.messaging Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.405 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.405 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.outgoing.bytes volume: 11272 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.407 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '06759330-a8c3-40b3-acf7-4623dce8931c', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 11272, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T09:47:08.405665', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '87ca66c2-d288-11f0-aaf2-fa163e118844', 'monotonic_time': 10846.600412848, 'message_signature': 'ea29b49058a6bd1787702e5d19896a5b26a49da5e3055f8bbb35cb868bb08a08'}]}, 'timestamp': '2025-12-06 09:47:08.406182', '_unique_id': '5ab7cfedbe9d4cffa6c1819860dd0f1e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.407 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 09:47:08.407 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.407 12 ERROR oslo_messaging.notify.messaging yield Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.407 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.407 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.407 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.407 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.407 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.407 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.407 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.407 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.407 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.407 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.407 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.407 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.407 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.407 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.407 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.407 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.407 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.407 12 ERROR oslo_messaging.notify.messaging Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.407 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.407 12 ERROR oslo_messaging.notify.messaging Dec 6 04:47:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.407 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.407 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.407 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.407 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.407 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.407 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.407 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.407 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.407 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.407 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 
04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.407 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.407 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.407 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.407 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.407 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.407 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.407 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.407 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.407 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.407 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 04:47:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.407 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.407 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.407 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.407 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.407 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.407 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.407 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.407 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.407 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.407 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.407 12 ERROR oslo_messaging.notify.messaging Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 09:47:08.408 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.424 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.425 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.427 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'cffeb91b-cfed-4064-88b2-1db573ec9e77', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T09:47:08.408472', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 
'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '87cd45ea-d288-11f0-aaf2-fa163e118844', 'monotonic_time': 10846.658099055, 'message_signature': '3c70c524c9d202c3c28e03d9b322221dfd045275a70161a0564d93bccb038b27'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T09:47:08.408472', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '87cd614c-d288-11f0-aaf2-fa163e118844', 'monotonic_time': 10846.658099055, 'message_signature': '5c7c838ab57eba479a671623e32a962902627cfdab2a8eb375c977c189a05987'}]}, 'timestamp': '2025-12-06 09:47:08.425828', '_unique_id': '7e1b3b0291154693a7e36076eabc2bb5'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.427 12 ERROR oslo_messaging.notify.messaging Traceback (most recent 
call last): Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.427 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.427 12 ERROR oslo_messaging.notify.messaging yield Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.427 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.427 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.427 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.427 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.427 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.427 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.427 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.427 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.427 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.427 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.427 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.427 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.427 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.427 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.427 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.427 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.427 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.427 12 ERROR oslo_messaging.notify.messaging Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.427 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.427 12 ERROR oslo_messaging.notify.messaging Dec 6 04:47:08 
localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.427 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.427 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.427 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.427 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.427 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.427 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.427 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.427 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.427 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.427 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection 
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.427 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.427 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.427 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.427 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.427 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.427 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.427 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.427 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.427 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.427 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 04:47:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.427 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.427 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.427 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.427 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.427 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.427 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.427 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.427 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.427 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.427 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.427 12 ERROR oslo_messaging.notify.messaging Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 09:47:08.429 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.iops in the context of pollsters Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.429 12 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for PerDeviceDiskIOPSPollster get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:163 Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.429 12 ERROR ceilometer.polling.manager [-] Prevent pollster disk.device.iops from polling [] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [] Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.430 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.430 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.432 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'f1e8ac93-5aea-4d95-8dfb-5fca48559a3a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T09:47:08.430216', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '87ce28c0-d288-11f0-aaf2-fa163e118844', 'monotonic_time': 10846.600412848, 'message_signature': '1c404a1481d74595c317e90ffda926e16b7ce5240ed56a70cf78a6c84d42b491'}]}, 'timestamp': '2025-12-06 09:47:08.430963', '_unique_id': '4a6201e1523146989debaed950930891'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.432 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 04:47:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.432 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.432 12 ERROR oslo_messaging.notify.messaging yield Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.432 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.432 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.432 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.432 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.432 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.432 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.432 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.432 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.432 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.432 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.432 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.432 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.432 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.432 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.432 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.432 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.432 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.432 12 ERROR oslo_messaging.notify.messaging Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.432 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.432 12 ERROR oslo_messaging.notify.messaging Dec 6 04:47:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.432 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.432 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.432 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.432 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.432 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.432 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.432 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.432 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.432 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.432 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 
04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.432 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.432 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.432 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.432 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.432 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.432 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.432 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.432 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.432 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.432 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.432 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.432 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.432 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.432 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.432 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.432 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.432 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.432 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.432 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.432 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.432 12 ERROR oslo_messaging.notify.messaging
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.433 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.434 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.incoming.bytes volume: 9015 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.436 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '38913422-47e2-4a2c-a9bf-f9d29bb66abb', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 9015, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T09:47:08.434187', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '87cec406-d288-11f0-aaf2-fa163e118844', 'monotonic_time': 10846.600412848, 'message_signature': '9f27ed3882202641510e8cba8cbe290e8fc2996935b995f8408db66d502d1248'}]}, 'timestamp': '2025-12-06 09:47:08.434936', '_unique_id': '4055b036c2014b0fb5b4d8b569b02569'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.436 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.436 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.436 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.436 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.436 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.436 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.436 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.436 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.436 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.436 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.436 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.436 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.436 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.436 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.436 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.436 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.436 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.436 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.436 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.436 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.436 12 ERROR oslo_messaging.notify.messaging
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.436 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.436 12 ERROR oslo_messaging.notify.messaging
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.436 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.436 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.436 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.436 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.436 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.436 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.436 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.436 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.436 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.436 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.436 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.436 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.436 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.436 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.436 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.436 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.436 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.436 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.436 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.436 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.436 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.436 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.436 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.436 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.436 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.436 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.436 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.436 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.436 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.436 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.436 12 ERROR oslo_messaging.notify.messaging
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.437 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.438 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.outgoing.packets volume: 129 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.440 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '0d764d1d-33d0-4218-9206-1eca67dd8549', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 129, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T09:47:08.438150', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '87cf5ea2-d288-11f0-aaf2-fa163e118844', 'monotonic_time': 10846.600412848, 'message_signature': '79e12d15840017e4edb84c9d72765c1fdc1a64659286074a9e925fc0dd4c1596'}]}, 'timestamp': '2025-12-06 09:47:08.438893', '_unique_id': 'bd907a99120e4cba827c9f17db4a5b7f'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.440 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.440 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.440 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.440 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.440 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.440 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.440 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.440 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.440 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.440 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.440 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.440 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.440 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.440 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.440 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.440 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.440 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.440 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.440 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.440 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.440 12 ERROR oslo_messaging.notify.messaging
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.440 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.440 12 ERROR oslo_messaging.notify.messaging
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.440 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.440 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.440 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.440 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.440 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.440 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.440 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.440 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.440 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.440 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.440 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.440 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.440 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.440 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.440 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.440 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.440 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.440 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.440 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.440 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.440 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.440 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.440 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.440 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.440 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.440 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.440 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.440 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.440 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.440 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.440 12 ERROR oslo_messaging.notify.messaging
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.442 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.442 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.incoming.packets volume: 88 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.444 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'e6f2f3ab-10c1-4877-b9ec-51594d7ca77f', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 88, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T09:47:08.442582', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '87d01068-d288-11f0-aaf2-fa163e118844', 'monotonic_time': 10846.600412848, 'message_signature': '3baa851caed6f1a7247b65bd2b0c3a6ac486d1ae6bf3731127ce213407860660'}]}, 'timestamp': '2025-12-06 09:47:08.443444', '_unique_id': 'c69e1af819a042ff8b9c1bb75368294c'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.444 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.444 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.444 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.444 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.444 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.444 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.444 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.444 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.444 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.444 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.444 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.444 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.444 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.444 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.444 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.444 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.444 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.444 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.444 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.444 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.444 12 ERROR oslo_messaging.notify.messaging
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.444 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.444 12 ERROR oslo_messaging.notify.messaging
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.444 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.444 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.444 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.444 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.444 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.444 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.444 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.444 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.444 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.444 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.444 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.444 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.444 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.444 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.444 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.444 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.444 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.444 12 ERROR oslo_messaging.notify.messaging
File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.444 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.444 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.444 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.444 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.444 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.444 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.444 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.444 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.444 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.444 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 04:47:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.444 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.444 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.444 12 ERROR oslo_messaging.notify.messaging Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.445 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.445 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.latency volume: 223261606 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.445 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.latency volume: 30984668 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.447 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '0b0df14a-c349-4dcc-ba41-cd7f6cb7e74b', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 223261606, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T09:47:08.445536', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '87d07a26-d288-11f0-aaf2-fa163e118844', 'monotonic_time': 10846.613815705, 'message_signature': '4871a9dd07ffc001fc9035d9564101dc09d88727efa47468d9bca8e39e1adbd3'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 30984668, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T09:47:08.445536', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '87d08aac-d288-11f0-aaf2-fa163e118844', 'monotonic_time': 10846.613815705, 'message_signature': '656b88e09e263ae42b7964b32d9e86f6d7e3abcd7afb16dabe4cc096067d2b64'}]}, 'timestamp': '2025-12-06 09:47:08.446371', '_unique_id': '8361f74a8c89462489af002afaf49d93'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.447 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.447 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.447 12 ERROR oslo_messaging.notify.messaging yield Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.447 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.447 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.447 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.447 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.447 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.447 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.447 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.447 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.447 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.447 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.447 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.447 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.447 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 04:47:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.447 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.447 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.447 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.447 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.447 12 ERROR oslo_messaging.notify.messaging Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.447 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.447 12 ERROR oslo_messaging.notify.messaging Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.447 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.447 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.447 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.447 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.447 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.447 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.447 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.447 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.447 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.447 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.447 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.447 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.447 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.447 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 09:47:08.447 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.447 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.447 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.447 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.447 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.447 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.447 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.447 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.447 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.447 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.447 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.447 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.447 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.447 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.447 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.447 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.447 12 ERROR oslo_messaging.notify.messaging Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.448 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.448 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/memory.usage volume: 52.37890625 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.449 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'edd27b9b-7080-4d53-9e89-4863f26aa27a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 52.37890625, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'timestamp': '2025-12-06T09:47:08.448412', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': '87d0ea2e-d288-11f0-aaf2-fa163e118844', 'monotonic_time': 10846.587113354, 'message_signature': 'fa4a152b247ae6d4967f2b9701b8416a8bc8d9e15ec60b9a84db020bc9a15700'}]}, 'timestamp': '2025-12-06 09:47:08.448843', '_unique_id': '1dad26a12b3d4822bbfbb5c482402937'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.449 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.449 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in 
_reraise_as_library_errors Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.449 12 ERROR oslo_messaging.notify.messaging yield Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.449 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.449 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.449 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.449 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.449 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.449 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.449 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.449 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.449 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.449 12 ERROR 
oslo_messaging.notify.messaging conn.connect() Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.449 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.449 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.449 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.449 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.449 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.449 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.449 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.449 12 ERROR oslo_messaging.notify.messaging Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.449 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.449 12 ERROR oslo_messaging.notify.messaging Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.449 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.449 
12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.449 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.449 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.449 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.449 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.449 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.449 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.449 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.449 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.449 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 04:47:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.449 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.449 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.449 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.449 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.449 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.449 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.449 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.449 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.449 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.449 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 
09:47:08.449 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.449 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.449 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.449 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.449 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.449 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.449 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.449 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.449 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.449 12 ERROR oslo_messaging.notify.messaging Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.450 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 
09:47:08.450 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.451 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.allocation volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.452 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'f325470c-e279-4663-b32d-5bceea011757', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T09:47:08.450900', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 
'vda'}, 'message_id': '87d14c08-d288-11f0-aaf2-fa163e118844', 'monotonic_time': 10846.658099055, 'message_signature': '23617741c5d38c3f00a2556fe8477a59d52d4625c0f0d099d84601a08f644ee4'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T09:47:08.450900', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '87d15c20-d288-11f0-aaf2-fa163e118844', 'monotonic_time': 10846.658099055, 'message_signature': '2d8806066c7bca53bb0e0cf0c4cc4e555f3c2daf25f427eb01a12dfe50e96a2a'}]}, 'timestamp': '2025-12-06 09:47:08.451690', '_unique_id': '60493257326741da844de1837d9900d9'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.452 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.452 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 
446, in _reraise_as_library_errors Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.452 12 ERROR oslo_messaging.notify.messaging yield Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.452 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.452 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.452 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.452 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.452 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.452 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.452 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.452 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.452 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.452 12 ERROR 
oslo_messaging.notify.messaging conn.connect() Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.452 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.452 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.452 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.452 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.452 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.452 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.452 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.452 12 ERROR oslo_messaging.notify.messaging Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.452 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.452 12 ERROR oslo_messaging.notify.messaging Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.452 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.452 
12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.452 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.452 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.452 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.452 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.452 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.452 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.452 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.452 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.452 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 04:47:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.452 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.452 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.452 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.452 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.452 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.452 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.452 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.452 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.452 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.452 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 
09:47:08.452 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.452 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.452 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.452 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.452 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.452 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.452 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.452 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.452 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.452 12 ERROR oslo_messaging.notify.messaging Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.453 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.latency in the context of pollsters Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 
09:47:08.453 12 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for PerDeviceDiskLatencyPollster get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:163 Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.453 12 ERROR ceilometer.polling.manager [-] Prevent pollster disk.device.latency from polling [] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [] Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.453 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.453 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.latency volume: 1043514478 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.453 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.latency volume: 200503964 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.454 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'e50cea33-9d6a-4437-9ff0-118fcd140b15', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1043514478, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T09:47:08.453436', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '87d1ac02-d288-11f0-aaf2-fa163e118844', 'monotonic_time': 10846.613815705, 'message_signature': '96906f12f50b7fff9d1723bcba0a624e1ff69b9cf472736de27ad39dd412a53d'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 200503964, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T09:47:08.453436', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '87d1b706-d288-11f0-aaf2-fa163e118844', 'monotonic_time': 10846.613815705, 'message_signature': '7a847846b9fa09acd6b674d6c3ad56d15381563ade9dddaf338d53a382353f9a'}]}, 'timestamp': '2025-12-06 09:47:08.453991', '_unique_id': '6c4368177e2246449e4f6aa9f13a9643'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.454 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.454 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.454 12 ERROR oslo_messaging.notify.messaging yield Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.454 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.454 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.454 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.454 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.454 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.454 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.454 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.454 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.454 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.454 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.454 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.454 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.454 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 04:47:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.454 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.454 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.454 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.454 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.454 12 ERROR oslo_messaging.notify.messaging Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.454 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.454 12 ERROR oslo_messaging.notify.messaging Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.454 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.454 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.454 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.454 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.454 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.454 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.454 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.454 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.454 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.454 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.454 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.454 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.454 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.454 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 09:47:08.454 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.454 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.454 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.454 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.454 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.454 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.454 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.454 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.454 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.454 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.454 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.454 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.454 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.454 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.454 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.454 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.454 12 ERROR oslo_messaging.notify.messaging Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.455 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.455 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.456 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '0b52f1a4-e59d-42a4-9e4f-4d4303c9d88c', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T09:47:08.455331', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '87d1f6d0-d288-11f0-aaf2-fa163e118844', 'monotonic_time': 10846.600412848, 'message_signature': 'afc05951899ed3aa62e463000169ecf8e169cd05c2e1693dd1b8223794d5eee2'}]}, 'timestamp': '2025-12-06 09:47:08.455642', '_unique_id': 'c5f16cd6faff4003974eae47d8cdf074'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.456 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 04:47:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.456 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.456 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.456 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.456 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.456 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.456 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.456 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.456 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.456 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.456 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.456 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.456 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.456 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.456 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.456 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.456 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.456 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.456 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.456 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.456 12 ERROR oslo_messaging.notify.messaging
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.456 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.456 12 ERROR oslo_messaging.notify.messaging
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.456 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.456 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.456 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.456 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.456 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.456 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.456 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.456 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.456 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.456 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.456 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.456 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.456 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.456 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.456 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.456 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.456 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.456 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.456 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.456 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.456 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.456 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.456 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.456 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.456 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.456 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.456 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.456 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.456 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.456 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.456 12 ERROR oslo_messaging.notify.messaging
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.456 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.457 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.requests volume: 1064 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.457 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.requests volume: 222 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.458 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '9df690ff-4ab8-4973-bbe3-fe5e22483b3b', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1064, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T09:47:08.457004', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '87d23712-d288-11f0-aaf2-fa163e118844', 'monotonic_time': 10846.613815705, 'message_signature': '450f9701a8b1fc7f96558206a00588d141229dee53ead8d3e944cde1acafd95c'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 222, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T09:47:08.457004', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '87d240e0-d288-11f0-aaf2-fa163e118844', 'monotonic_time': 10846.613815705, 'message_signature': '8d5c15e804d98f1281c6f79ef01d48d33a9a2a1aeab04c57769a7e12ace72247'}]}, 'timestamp': '2025-12-06 09:47:08.457520', '_unique_id': '783e7ae0f8094428beec6e772df680be'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.458 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.458 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.458 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.458 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.458 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.458 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.458 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.458 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.458 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.458 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.458 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.458 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.458 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.458 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.458 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.458 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.458 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.458 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.458 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.458 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.458 12 ERROR oslo_messaging.notify.messaging
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.458 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.458 12 ERROR oslo_messaging.notify.messaging
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.458 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.458 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.458 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.458 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.458 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.458 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.458 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.458 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.458 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.458 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.458 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.458 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.458 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.458 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.458 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.458 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.458 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.458 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.458 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.458 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.458 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.458 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.458 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.458 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.458 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.458 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.458 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.458 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.458 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.458 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.458 12 ERROR oslo_messaging.notify.messaging
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.459 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.459 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.460 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '3f55e46d-722d-4f38-a0df-386c794f591d', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T09:47:08.459140', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '87d28abe-d288-11f0-aaf2-fa163e118844', 'monotonic_time': 10846.600412848, 'message_signature': '7875dfc9bd14af5dc33ad2ec7d9897560e7557c967633f035a032d7352414dcb'}]}, 'timestamp': '2025-12-06 09:47:08.459485', '_unique_id': '6b828fa2b006425ab7e64972b3560875'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.460 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.460 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.460 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.460 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.460 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.460 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.460 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.460 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.460 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.460 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.460 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.460 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.460 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.460 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.460 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.460 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.460 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.460 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.460 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.460 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.460 12 ERROR oslo_messaging.notify.messaging
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.460 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.460 12 ERROR oslo_messaging.notify.messaging
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.460 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.460 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.460 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.460 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.460 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.460 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.460 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.460 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.460 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.460 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.460 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.460 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.460 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.460 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.460 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.460 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.460 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.460 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.460 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.460 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.460 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.460 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.460 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.460 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.460 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.460 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.460 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.460 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.460 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.460 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.460 12 ERROR oslo_messaging.notify.messaging
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.460 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.460 12 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:163
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.460 12 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: []
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.461 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.461 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.requests volume: 524 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.461 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.462 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '58af9a58-7b6f-4c57-9596-145823c9aee5', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 524, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T09:47:08.461204', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '87d2db36-d288-11f0-aaf2-fa163e118844', 'monotonic_time': 10846.613815705, 'message_signature': 'b71161818da20df873dbd485bf049b0c261dd18fd6096f7bea9f646c626e9c58'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T09:47:08.461204', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '87d2e536-d288-11f0-aaf2-fa163e118844', 'monotonic_time': 10846.613815705, 'message_signature': 'b708a621971869605a253ea72424fd6f6b32e5f3f7cc3c83d0d8daed8274f24e'}]}, 'timestamp': '2025-12-06 09:47:08.461725', '_unique_id': '2ab02862a7d14f968cdb13009e1ceada'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.462 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.462 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.462 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.462 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 04:47:08
localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.462 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.462 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.462 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.462 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.462 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.462 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.462 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.462 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.462 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.462 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.462 12 ERROR oslo_messaging.notify.messaging 
self.transport.connect() Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.462 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.462 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.462 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.462 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.462 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.462 12 ERROR oslo_messaging.notify.messaging Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.462 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.462 12 ERROR oslo_messaging.notify.messaging Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.462 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.462 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.462 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 09:47:08.462 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.462 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.462 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.462 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.462 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.462 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.462 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.462 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.462 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.462 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 04:47:08 
localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.462 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.462 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.462 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.462 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.462 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.462 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.462 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.462 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.462 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.462 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.462 
12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.462 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.462 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.462 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.462 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.462 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.462 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.462 12 ERROR oslo_messaging.notify.messaging Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.463 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.463 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.463 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.usage 
volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.464 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '706eb7a4-138f-4aa5-b295-73c688676eb0', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T09:47:08.463327', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '87d32f28-d288-11f0-aaf2-fa163e118844', 'monotonic_time': 10846.658099055, 'message_signature': '27bb5d4cd6b5d8eaa3810f21a5f2d0ede3d948d6d955d6c3b91b71726647bae8'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': 
'3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T09:47:08.463327', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '87d33950-d288-11f0-aaf2-fa163e118844', 'monotonic_time': 10846.658099055, 'message_signature': 'b6015ea64bcdda8bbc1a55d27a743e1eb721ea45f7842c84016901a4fe2b8826'}]}, 'timestamp': '2025-12-06 09:47:08.463896', '_unique_id': '178e33c67a49493ea3a7dbd4c363431c'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.464 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.464 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.464 12 ERROR oslo_messaging.notify.messaging yield Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.464 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 04:47:08 
localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.464 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.464 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.464 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.464 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.464 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.464 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.464 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.464 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.464 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.464 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.464 12 ERROR oslo_messaging.notify.messaging 
self.transport.connect() Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.464 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.464 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.464 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.464 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.464 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.464 12 ERROR oslo_messaging.notify.messaging Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.464 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.464 12 ERROR oslo_messaging.notify.messaging Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.464 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.464 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.464 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 09:47:08.464 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.464 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.464 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.464 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.464 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.464 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.464 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.464 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.464 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.464 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 04:47:08 
localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.464 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.464 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.464 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.464 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.464 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.464 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.464 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.464 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.464 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.464 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.464 
12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.464 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.464 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.464 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.464 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.464 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.464 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.464 12 ERROR oslo_messaging.notify.messaging Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.465 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.465 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.bytes volume: 73908224 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.465 12 DEBUG ceilometer.compute.pollsters [-] 
b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.466 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'd666734d-860b-42cd-9f7d-21f35447f0b1', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 73908224, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T09:47:08.465320', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '87d37be0-d288-11f0-aaf2-fa163e118844', 'monotonic_time': 10846.613815705, 'message_signature': 'f745f8184ee388da9096fb44fbaed756b25dc0939c668af9afcaf3f0ea6b13f3'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': 
'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T09:47:08.465320', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '87d385ea-d288-11f0-aaf2-fa163e118844', 'monotonic_time': 10846.613815705, 'message_signature': 'e3f4e83116a999c16d1507c59e35812faf4289a22e11dffb8f7df6966d43946d'}]}, 'timestamp': '2025-12-06 09:47:08.465868', '_unique_id': '793b3e197c4542f19725c50e9789ddec'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.466 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.466 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.466 12 ERROR oslo_messaging.notify.messaging yield Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.466 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.466 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.466 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.466 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.466 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.466 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.466 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.466 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.466 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.466 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.466 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 04:47:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.466 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.466 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.466 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.466 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.466 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.466 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.466 12 ERROR oslo_messaging.notify.messaging Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.466 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.466 12 ERROR oslo_messaging.notify.messaging Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.466 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.466 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.466 12 ERROR oslo_messaging.notify.messaging 
self.transport._send_notification(target, ctxt, message, Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.466 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.466 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.466 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.466 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.466 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.466 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.466 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.466 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.466 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.466 
12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.466 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.466 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.466 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.466 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.466 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.466 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.466 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.466 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.466 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.466 12 ERROR oslo_messaging.notify.messaging 
self._ensure_connection(*args, **kwargs) Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.466 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.466 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.466 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.466 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.466 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.466 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.466 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 04:47:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:47:08.466 12 ERROR oslo_messaging.notify.messaging Dec 6 04:47:09 localhost python3.9[238779]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=node_exporter.json debug=False Dec 6 04:47:09 localhost nova_compute[230884]: 2025-12-06 09:47:09.123 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:47:09 localhost sshd[238780]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:47:10 localhost nova_compute[230884]: 
2025-12-06 09:47:10.272 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:47:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8781 DF PROTO=TCP SPT=56060 DPT=9102 SEQ=3650887006 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52DB5E2F0000000001030307) Dec 6 04:47:11 localhost python3.9[238891]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Dec 6 04:47:12 localhost python3[239001]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=node_exporter.json log_base_path=/var/log/containers/stdouts debug=False Dec 6 04:47:12 localhost podman[239037]: Dec 6 04:47:12 localhost podman[239037]: 2025-12-06 09:47:12.774855856 +0000 UTC m=+0.076347962 container create d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', 
'--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, maintainer=The Prometheus Authors , config_id=edpm, container_name=node_exporter, managed_by=edpm_ansible) Dec 6 04:47:12 localhost podman[239037]: 2025-12-06 09:47:12.741949118 +0000 UTC m=+0.043441214 image pull quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c Dec 6 04:47:12 localhost python3[239001]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name node_exporter --conmon-pidfile /run/node_exporter.pid --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck node_exporter --label config_id=edpm --label container_name=node_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': 
['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9100:9100 --user root --volume /var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw --volume /var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c --web.disable-exporter-metrics --collector.systemd --collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service --no-collector.dmi --no-collector.entropy --no-collector.thermal_zone --no-collector.time --no-collector.timex --no-collector.uname --no-collector.stat --no-collector.hwmon --no-collector.os --no-collector.selinux --no-collector.textfile --no-collector.powersupplyclass --no-collector.pressure --no-collector.rapl Dec 6 04:47:13 localhost python3.9[239185]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 6 04:47:14 localhost nova_compute[230884]: 2025-12-06 09:47:14.147 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:47:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38770 DF PROTO=TCP SPT=42688 DPT=9101 SEQ=1025857698 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52DB6BB00000000001030307) Dec 6 04:47:14 localhost python3.9[239297]: ansible-file Invoked with path=/etc/systemd/system/edpm_node_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False 
_original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:47:15 localhost nova_compute[230884]: 2025-12-06 09:47:15.308 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:47:15 localhost python3.9[239406]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765014434.794463-1704-153090461926392/source dest=/etc/systemd/system/edpm_node_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:47:16 localhost python3.9[239461]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Dec 6 04:47:16 localhost systemd[1]: Reloading. Dec 6 04:47:16 localhost systemd-sysv-generator[239488]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 6 04:47:16 localhost systemd-rc-local-generator[239483]: /etc/rc.d/rc.local is not marked executable, skipping. 
Dec 6 04:47:16 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:47:16 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 6 04:47:16 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:47:16 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:47:16 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 6 04:47:16 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 6 04:47:16 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:47:16 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:47:16 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:47:17 localhost python3.9[239552]: ansible-systemd Invoked with state=restarted name=edpm_node_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 6 04:47:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6. Dec 6 04:47:17 localhost systemd[1]: Reloading. 
Dec 6 04:47:17 localhost podman[239554]: 2025-12-06 09:47:17.184492217 +0000 UTC m=+0.105858043 container health_status 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, tcib_managed=true, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS) Dec 6 04:47:17 localhost podman[239554]: 2025-12-06 09:47:17.205058031 +0000 UTC m=+0.126423867 container exec_died 
44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, tcib_managed=true, org.label-schema.license=GPLv2, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Dec 6 04:47:17 localhost systemd-rc-local-generator[239596]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 6 04:47:17 localhost systemd-sysv-generator[239600]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. 
Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 6 04:47:17 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:47:17 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 6 04:47:17 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:47:17 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:47:17 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 6 04:47:17 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 6 04:47:17 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:47:17 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:47:17 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:47:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38772 DF PROTO=TCP SPT=42688 DPT=9101 SEQ=1025857698 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52DB77AF0000000001030307) Dec 6 04:47:17 localhost systemd[1]: 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6.service: Deactivated successfully. Dec 6 04:47:17 localhost systemd[1]: Starting node_exporter container... 
Dec 6 04:47:17 localhost systemd[1]: Started libcrun container. Dec 6 04:47:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538. Dec 6 04:47:17 localhost podman[239611]: 2025-12-06 09:47:17.608397975 +0000 UTC m=+0.143112118 container init d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 6 04:47:17 localhost node_exporter[239626]: ts=2025-12-06T09:47:17.625Z caller=node_exporter.go:180 level=info msg="Starting node_exporter" version="(version=1.5.0, branch=HEAD, revision=1b48970ffcf5630534fb00bb0687d73c66d1c959)" Dec 6 04:47:17 localhost node_exporter[239626]: 
ts=2025-12-06T09:47:17.625Z caller=node_exporter.go:181 level=info msg="Build context" build_context="(go=go1.19.3, user=root@6e7732a7b81b, date=20221129-18:59:09)" Dec 6 04:47:17 localhost node_exporter[239626]: ts=2025-12-06T09:47:17.625Z caller=node_exporter.go:183 level=warn msg="Node Exporter is running as root user. This exporter is designed to run as unprivileged user, root is not required." Dec 6 04:47:17 localhost node_exporter[239626]: ts=2025-12-06T09:47:17.625Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$ Dec 6 04:47:17 localhost node_exporter[239626]: ts=2025-12-06T09:47:17.626Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data Dec 6 04:47:17 localhost node_exporter[239626]: ts=2025-12-06T09:47:17.626Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/) Dec 6 04:47:17 localhost node_exporter[239626]: ts=2025-12-06T09:47:17.626Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$ Dec 6 04:47:17 localhost node_exporter[239626]: ts=2025-12-06T09:47:17.626Z caller=systemd_linux.go:152 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-include" flag=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service Dec 6 04:47:17 localhost node_exporter[239626]: ts=2025-12-06T09:47:17.627Z caller=systemd_linux.go:154 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-exclude" 
flag=.+\.(automount|device|mount|scope|slice) Dec 6 04:47:17 localhost node_exporter[239626]: ts=2025-12-06T09:47:17.627Z caller=node_exporter.go:110 level=info msg="Enabled collectors" Dec 6 04:47:17 localhost node_exporter[239626]: ts=2025-12-06T09:47:17.627Z caller=node_exporter.go:117 level=info collector=arp Dec 6 04:47:17 localhost node_exporter[239626]: ts=2025-12-06T09:47:17.627Z caller=node_exporter.go:117 level=info collector=bcache Dec 6 04:47:17 localhost node_exporter[239626]: ts=2025-12-06T09:47:17.627Z caller=node_exporter.go:117 level=info collector=bonding Dec 6 04:47:17 localhost node_exporter[239626]: ts=2025-12-06T09:47:17.627Z caller=node_exporter.go:117 level=info collector=btrfs Dec 6 04:47:17 localhost node_exporter[239626]: ts=2025-12-06T09:47:17.627Z caller=node_exporter.go:117 level=info collector=conntrack Dec 6 04:47:17 localhost node_exporter[239626]: ts=2025-12-06T09:47:17.627Z caller=node_exporter.go:117 level=info collector=cpu Dec 6 04:47:17 localhost node_exporter[239626]: ts=2025-12-06T09:47:17.627Z caller=node_exporter.go:117 level=info collector=cpufreq Dec 6 04:47:17 localhost node_exporter[239626]: ts=2025-12-06T09:47:17.627Z caller=node_exporter.go:117 level=info collector=diskstats Dec 6 04:47:17 localhost node_exporter[239626]: ts=2025-12-06T09:47:17.627Z caller=node_exporter.go:117 level=info collector=edac Dec 6 04:47:17 localhost node_exporter[239626]: ts=2025-12-06T09:47:17.627Z caller=node_exporter.go:117 level=info collector=fibrechannel Dec 6 04:47:17 localhost node_exporter[239626]: ts=2025-12-06T09:47:17.627Z caller=node_exporter.go:117 level=info collector=filefd Dec 6 04:47:17 localhost node_exporter[239626]: ts=2025-12-06T09:47:17.627Z caller=node_exporter.go:117 level=info collector=filesystem Dec 6 04:47:17 localhost node_exporter[239626]: ts=2025-12-06T09:47:17.627Z caller=node_exporter.go:117 level=info collector=infiniband Dec 6 04:47:17 localhost node_exporter[239626]: ts=2025-12-06T09:47:17.627Z 
caller=node_exporter.go:117 level=info collector=ipvs Dec 6 04:47:17 localhost node_exporter[239626]: ts=2025-12-06T09:47:17.627Z caller=node_exporter.go:117 level=info collector=loadavg Dec 6 04:47:17 localhost node_exporter[239626]: ts=2025-12-06T09:47:17.627Z caller=node_exporter.go:117 level=info collector=mdadm Dec 6 04:47:17 localhost node_exporter[239626]: ts=2025-12-06T09:47:17.627Z caller=node_exporter.go:117 level=info collector=meminfo Dec 6 04:47:17 localhost node_exporter[239626]: ts=2025-12-06T09:47:17.627Z caller=node_exporter.go:117 level=info collector=netclass Dec 6 04:47:17 localhost node_exporter[239626]: ts=2025-12-06T09:47:17.627Z caller=node_exporter.go:117 level=info collector=netdev Dec 6 04:47:17 localhost node_exporter[239626]: ts=2025-12-06T09:47:17.627Z caller=node_exporter.go:117 level=info collector=netstat Dec 6 04:47:17 localhost node_exporter[239626]: ts=2025-12-06T09:47:17.627Z caller=node_exporter.go:117 level=info collector=nfs Dec 6 04:47:17 localhost node_exporter[239626]: ts=2025-12-06T09:47:17.627Z caller=node_exporter.go:117 level=info collector=nfsd Dec 6 04:47:17 localhost node_exporter[239626]: ts=2025-12-06T09:47:17.627Z caller=node_exporter.go:117 level=info collector=nvme Dec 6 04:47:17 localhost node_exporter[239626]: ts=2025-12-06T09:47:17.627Z caller=node_exporter.go:117 level=info collector=schedstat Dec 6 04:47:17 localhost node_exporter[239626]: ts=2025-12-06T09:47:17.627Z caller=node_exporter.go:117 level=info collector=sockstat Dec 6 04:47:17 localhost node_exporter[239626]: ts=2025-12-06T09:47:17.627Z caller=node_exporter.go:117 level=info collector=softnet Dec 6 04:47:17 localhost node_exporter[239626]: ts=2025-12-06T09:47:17.627Z caller=node_exporter.go:117 level=info collector=systemd Dec 6 04:47:17 localhost node_exporter[239626]: ts=2025-12-06T09:47:17.627Z caller=node_exporter.go:117 level=info collector=tapestats Dec 6 04:47:17 localhost node_exporter[239626]: ts=2025-12-06T09:47:17.627Z 
caller=node_exporter.go:117 level=info collector=udp_queues Dec 6 04:47:17 localhost node_exporter[239626]: ts=2025-12-06T09:47:17.627Z caller=node_exporter.go:117 level=info collector=vmstat Dec 6 04:47:17 localhost node_exporter[239626]: ts=2025-12-06T09:47:17.627Z caller=node_exporter.go:117 level=info collector=xfs Dec 6 04:47:17 localhost node_exporter[239626]: ts=2025-12-06T09:47:17.627Z caller=node_exporter.go:117 level=info collector=zfs Dec 6 04:47:17 localhost node_exporter[239626]: ts=2025-12-06T09:47:17.628Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9100 Dec 6 04:47:17 localhost node_exporter[239626]: ts=2025-12-06T09:47:17.628Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=[::]:9100 Dec 6 04:47:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538. Dec 6 04:47:17 localhost podman[239611]: 2025-12-06 09:47:17.643352879 +0000 UTC m=+0.178066982 container start d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 
'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Dec 6 04:47:17 localhost podman[239611]: node_exporter Dec 6 04:47:17 localhost systemd[1]: Started node_exporter container. Dec 6 04:47:17 localhost podman[239635]: 2025-12-06 09:47:17.772744899 +0000 UTC m=+0.122630156 container health_status d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=starting, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', 
'/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Dec 6 04:47:17 localhost podman[239635]: 2025-12-06 09:47:17.784174783 +0000 UTC m=+0.134060030 container exec_died d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Dec 6 04:47:17 localhost systemd[1]: d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538.service: Deactivated successfully. 
Dec 6 04:47:18 localhost sshd[239676]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:47:19 localhost nova_compute[230884]: 2025-12-06 09:47:19.190 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:47:19 localhost python3.9[239770]: ansible-ansible.builtin.systemd Invoked with name=edpm_node_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Dec 6 04:47:19 localhost systemd[1]: Stopping node_exporter container... Dec 6 04:47:19 localhost systemd[1]: libpod-d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538.scope: Deactivated successfully. Dec 6 04:47:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51078 DF PROTO=TCP SPT=36754 DPT=9105 SEQ=1920759448 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52DB80EF0000000001030307) Dec 6 04:47:19 localhost podman[239774]: 2025-12-06 09:47:19.733442006 +0000 UTC m=+0.075408373 container died d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', 
'--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Dec 6 04:47:19 localhost systemd[1]: d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538.timer: Deactivated successfully. Dec 6 04:47:19 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538. Dec 6 04:47:19 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538-userdata-shm.mount: Deactivated successfully. Dec 6 04:47:19 localhost systemd[1]: var-lib-containers-storage-overlay-c91146eb3363a08d75f235168b658b230f8ccbe671d00a8efc84b46337c2ff5e-merged.mount: Deactivated successfully. 
Dec 6 04:47:19 localhost podman[239774]: 2025-12-06 09:47:19.781215857 +0000 UTC m=+0.123182224 container cleanup d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Dec 6 04:47:19 localhost podman[239774]: node_exporter Dec 6 04:47:19 localhost systemd[1]: edpm_node_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT Dec 6 04:47:19 localhost podman[239800]: 2025-12-06 09:47:19.878099062 +0000 UTC m=+0.069036819 container cleanup d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, 
config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Dec 6 04:47:19 localhost podman[239800]: node_exporter Dec 6 04:47:19 localhost systemd[1]: edpm_node_exporter.service: Failed with result 'exit-code'. Dec 6 04:47:19 localhost systemd[1]: Stopped node_exporter container. Dec 6 04:47:19 localhost systemd[1]: Starting node_exporter container... Dec 6 04:47:20 localhost systemd[1]: Started libcrun container. Dec 6 04:47:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538. 
Dec 6 04:47:20 localhost podman[239813]: 2025-12-06 09:47:20.045787842 +0000 UTC m=+0.136529718 container init d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Dec 6 04:47:20 localhost node_exporter[239828]: ts=2025-12-06T09:47:20.059Z caller=node_exporter.go:180 level=info msg="Starting node_exporter" version="(version=1.5.0, branch=HEAD, revision=1b48970ffcf5630534fb00bb0687d73c66d1c959)" Dec 6 04:47:20 localhost node_exporter[239828]: ts=2025-12-06T09:47:20.059Z caller=node_exporter.go:181 level=info msg="Build context" build_context="(go=go1.19.3, user=root@6e7732a7b81b, date=20221129-18:59:09)" Dec 6 04:47:20 localhost node_exporter[239828]: 
ts=2025-12-06T09:47:20.059Z caller=node_exporter.go:183 level=warn msg="Node Exporter is running as root user. This exporter is designed to run as unprivileged user, root is not required." Dec 6 04:47:20 localhost node_exporter[239828]: ts=2025-12-06T09:47:20.060Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/) Dec 6 04:47:20 localhost node_exporter[239828]: ts=2025-12-06T09:47:20.060Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$ Dec 6 04:47:20 localhost node_exporter[239828]: ts=2025-12-06T09:47:20.060Z caller=systemd_linux.go:152 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-include" flag=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service Dec 6 04:47:20 localhost node_exporter[239828]: ts=2025-12-06T09:47:20.060Z caller=systemd_linux.go:154 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-exclude" flag=.+\.(automount|device|mount|scope|slice) Dec 6 04:47:20 localhost node_exporter[239828]: ts=2025-12-06T09:47:20.060Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$ Dec 6 04:47:20 localhost node_exporter[239828]: ts=2025-12-06T09:47:20.061Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data Dec 6 04:47:20 localhost node_exporter[239828]: ts=2025-12-06T09:47:20.061Z caller=node_exporter.go:110 level=info msg="Enabled collectors" Dec 6 04:47:20 localhost 
node_exporter[239828]: ts=2025-12-06T09:47:20.061Z caller=node_exporter.go:117 level=info collector=arp Dec 6 04:47:20 localhost node_exporter[239828]: ts=2025-12-06T09:47:20.061Z caller=node_exporter.go:117 level=info collector=bcache Dec 6 04:47:20 localhost node_exporter[239828]: ts=2025-12-06T09:47:20.061Z caller=node_exporter.go:117 level=info collector=bonding Dec 6 04:47:20 localhost node_exporter[239828]: ts=2025-12-06T09:47:20.061Z caller=node_exporter.go:117 level=info collector=btrfs Dec 6 04:47:20 localhost node_exporter[239828]: ts=2025-12-06T09:47:20.061Z caller=node_exporter.go:117 level=info collector=conntrack Dec 6 04:47:20 localhost node_exporter[239828]: ts=2025-12-06T09:47:20.061Z caller=node_exporter.go:117 level=info collector=cpu Dec 6 04:47:20 localhost node_exporter[239828]: ts=2025-12-06T09:47:20.061Z caller=node_exporter.go:117 level=info collector=cpufreq Dec 6 04:47:20 localhost node_exporter[239828]: ts=2025-12-06T09:47:20.061Z caller=node_exporter.go:117 level=info collector=diskstats Dec 6 04:47:20 localhost node_exporter[239828]: ts=2025-12-06T09:47:20.061Z caller=node_exporter.go:117 level=info collector=edac Dec 6 04:47:20 localhost node_exporter[239828]: ts=2025-12-06T09:47:20.061Z caller=node_exporter.go:117 level=info collector=fibrechannel Dec 6 04:47:20 localhost node_exporter[239828]: ts=2025-12-06T09:47:20.061Z caller=node_exporter.go:117 level=info collector=filefd Dec 6 04:47:20 localhost node_exporter[239828]: ts=2025-12-06T09:47:20.061Z caller=node_exporter.go:117 level=info collector=filesystem Dec 6 04:47:20 localhost node_exporter[239828]: ts=2025-12-06T09:47:20.061Z caller=node_exporter.go:117 level=info collector=infiniband Dec 6 04:47:20 localhost node_exporter[239828]: ts=2025-12-06T09:47:20.061Z caller=node_exporter.go:117 level=info collector=ipvs Dec 6 04:47:20 localhost node_exporter[239828]: ts=2025-12-06T09:47:20.061Z caller=node_exporter.go:117 level=info collector=loadavg Dec 6 04:47:20 localhost 
node_exporter[239828]: ts=2025-12-06T09:47:20.061Z caller=node_exporter.go:117 level=info collector=mdadm Dec 6 04:47:20 localhost node_exporter[239828]: ts=2025-12-06T09:47:20.061Z caller=node_exporter.go:117 level=info collector=meminfo Dec 6 04:47:20 localhost node_exporter[239828]: ts=2025-12-06T09:47:20.061Z caller=node_exporter.go:117 level=info collector=netclass Dec 6 04:47:20 localhost node_exporter[239828]: ts=2025-12-06T09:47:20.061Z caller=node_exporter.go:117 level=info collector=netdev Dec 6 04:47:20 localhost node_exporter[239828]: ts=2025-12-06T09:47:20.061Z caller=node_exporter.go:117 level=info collector=netstat Dec 6 04:47:20 localhost node_exporter[239828]: ts=2025-12-06T09:47:20.061Z caller=node_exporter.go:117 level=info collector=nfs Dec 6 04:47:20 localhost node_exporter[239828]: ts=2025-12-06T09:47:20.061Z caller=node_exporter.go:117 level=info collector=nfsd Dec 6 04:47:20 localhost node_exporter[239828]: ts=2025-12-06T09:47:20.061Z caller=node_exporter.go:117 level=info collector=nvme Dec 6 04:47:20 localhost node_exporter[239828]: ts=2025-12-06T09:47:20.061Z caller=node_exporter.go:117 level=info collector=schedstat Dec 6 04:47:20 localhost node_exporter[239828]: ts=2025-12-06T09:47:20.061Z caller=node_exporter.go:117 level=info collector=sockstat Dec 6 04:47:20 localhost node_exporter[239828]: ts=2025-12-06T09:47:20.061Z caller=node_exporter.go:117 level=info collector=softnet Dec 6 04:47:20 localhost node_exporter[239828]: ts=2025-12-06T09:47:20.061Z caller=node_exporter.go:117 level=info collector=systemd Dec 6 04:47:20 localhost node_exporter[239828]: ts=2025-12-06T09:47:20.061Z caller=node_exporter.go:117 level=info collector=tapestats Dec 6 04:47:20 localhost node_exporter[239828]: ts=2025-12-06T09:47:20.062Z caller=node_exporter.go:117 level=info collector=udp_queues Dec 6 04:47:20 localhost node_exporter[239828]: ts=2025-12-06T09:47:20.062Z caller=node_exporter.go:117 level=info collector=vmstat Dec 6 04:47:20 localhost 
node_exporter[239828]: ts=2025-12-06T09:47:20.062Z caller=node_exporter.go:117 level=info collector=xfs Dec 6 04:47:20 localhost node_exporter[239828]: ts=2025-12-06T09:47:20.062Z caller=node_exporter.go:117 level=info collector=zfs Dec 6 04:47:20 localhost node_exporter[239828]: ts=2025-12-06T09:47:20.062Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9100 Dec 6 04:47:20 localhost node_exporter[239828]: ts=2025-12-06T09:47:20.062Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=[::]:9100 Dec 6 04:47:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538. Dec 6 04:47:20 localhost podman[239813]: 2025-12-06 09:47:20.08024757 +0000 UTC m=+0.170989406 container start d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': 
['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 6 04:47:20 localhost podman[239813]: node_exporter Dec 6 04:47:20 localhost systemd[1]: Started node_exporter container. Dec 6 04:47:20 localhost podman[239837]: 2025-12-06 09:47:20.169840232 +0000 UTC m=+0.084675147 container health_status d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=starting, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Dec 6 04:47:20 localhost podman[239837]: 2025-12-06 09:47:20.182681451 +0000 UTC m=+0.097516356 
container exec_died d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 6 04:47:20 localhost systemd[1]: d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538.service: Deactivated successfully. 
Dec 6 04:47:20 localhost nova_compute[230884]: 2025-12-06 09:47:20.349 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:47:20 localhost python3.9[239969]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/podman_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:47:22 localhost python3.9[240057]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/podman_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765014440.381052-1800-196194191017888/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None Dec 6 04:47:23 localhost python3.9[240167]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=podman_exporter.json debug=False Dec 6 04:47:23 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51079 DF PROTO=TCP SPT=36754 DPT=9105 SEQ=1920759448 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52DB90AF0000000001030307) Dec 6 04:47:24 localhost nova_compute[230884]: 2025-12-06 09:47:24.192 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:47:24 localhost python3.9[240277]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Dec 6 04:47:25 localhost nova_compute[230884]: 
2025-12-06 09:47:25.390 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:47:25 localhost python3[240387]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=podman_exporter.json log_base_path=/var/log/containers/stdouts debug=False Dec 6 04:47:26 localhost sshd[240414]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:47:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15808 DF PROTO=TCP SPT=33906 DPT=9882 SEQ=2958895315 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52DB9D700000000001030307) Dec 6 04:47:27 localhost podman[240401]: 2025-12-06 09:47:25.915403834 +0000 UTC m=+0.045127678 image pull quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd Dec 6 04:47:27 localhost podman[240470]: Dec 6 04:47:28 localhost podman[240470]: 2025-12-06 09:47:28.006954297 +0000 UTC m=+0.084881624 container create b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, maintainer=Navid Yaghoobi , config_id=edpm, container_name=podman_exporter, managed_by=edpm_ansible) Dec 6 04:47:28 localhost podman[240470]: 2025-12-06 09:47:27.968552925 +0000 UTC m=+0.046480352 image pull quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd Dec 6 04:47:28 localhost python3[240387]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name podman_exporter --conmon-pidfile /run/podman_exporter.pid --env OS_ENDPOINT_TYPE=internal --env CONTAINER_HOST=unix:///run/podman/podman.sock --healthcheck-command /openstack/healthcheck podman_exporter --label config_id=edpm --label container_name=podman_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9882:9882 --user root --volume /run/podman/podman.sock:/run/podman/podman.sock:rw,z --volume /var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd Dec 6 04:47:28 localhost python3.9[240616]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False 
checksum_algorithm=sha1 Dec 6 04:47:29 localhost nova_compute[230884]: 2025-12-06 09:47:29.239 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:47:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38774 DF PROTO=TCP SPT=42688 DPT=9101 SEQ=1025857698 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52DBA7EF0000000001030307) Dec 6 04:47:29 localhost python3.9[240728]: ansible-file Invoked with path=/etc/systemd/system/edpm_podman_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:47:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5. Dec 6 04:47:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999. 
Dec 6 04:47:30 localhost podman[240839]: 2025-12-06 09:47:30.368704485 +0000 UTC m=+0.096968398 container health_status 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true) Dec 6 04:47:30 localhost podman[240839]: 2025-12-06 09:47:30.377138304 +0000 UTC 
m=+0.105402257 container exec_died 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team) Dec 6 04:47:30 localhost systemd[1]: 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.service: Deactivated successfully. 
Dec 6 04:47:30 localhost nova_compute[230884]: 2025-12-06 09:47:30.435 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:47:30 localhost python3.9[240837]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765014449.8067646-1959-241526902827975/source dest=/etc/systemd/system/edpm_podman_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:47:30 localhost podman[240838]: 2025-12-06 09:47:30.462640167 +0000 UTC m=+0.190600010 container health_status 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 04:47:30 localhost podman[240838]: 2025-12-06 09:47:30.495306118 +0000 UTC m=+0.223265941 container exec_died 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true) Dec 6 04:47:30 localhost systemd[1]: 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.service: Deactivated successfully. 
Dec 6 04:47:31 localhost python3.9[240933]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Dec 6 04:47:31 localhost systemd[1]: Reloading. Dec 6 04:47:31 localhost systemd-rc-local-generator[240954]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 6 04:47:31 localhost systemd-sysv-generator[240957]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 6 04:47:31 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:47:31 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 6 04:47:31 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:47:31 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:47:31 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Dec 6 04:47:31 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 6 04:47:31 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:47:31 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:47:31 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:47:31 localhost python3.9[241024]: ansible-systemd Invoked with state=restarted name=edpm_podman_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 6 04:47:32 localhost systemd[1]: Reloading. Dec 6 04:47:32 localhost systemd-rc-local-generator[241054]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 6 04:47:32 localhost systemd-sysv-generator[241057]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 6 04:47:32 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:47:32 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 6 04:47:32 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:47:32 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:47:32 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Dec 6 04:47:32 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 6 04:47:32 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:47:32 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:47:32 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:47:32 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51080 DF PROTO=TCP SPT=36754 DPT=9105 SEQ=1920759448 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52DBB1EF0000000001030307) Dec 6 04:47:32 localhost systemd[1]: Starting podman_exporter container... Dec 6 04:47:32 localhost systemd[1]: Started libcrun container. Dec 6 04:47:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941. 
Dec 6 04:47:32 localhost podman[241065]: 2025-12-06 09:47:32.48757635 +0000 UTC m=+0.128479003 container init b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Dec 6 04:47:32 localhost podman_exporter[241078]: ts=2025-12-06T09:47:32.501Z caller=exporter.go:68 level=info msg="Starting podman-prometheus-exporter" version="(version=1.10.1, branch=HEAD, revision=1)" Dec 6 04:47:32 localhost podman_exporter[241078]: ts=2025-12-06T09:47:32.501Z caller=exporter.go:69 level=info msg=metrics enhanced=false Dec 6 04:47:32 localhost podman_exporter[241078]: ts=2025-12-06T09:47:32.501Z caller=handler.go:94 level=info msg="enabled collectors" Dec 6 04:47:32 localhost podman_exporter[241078]: ts=2025-12-06T09:47:32.501Z caller=handler.go:105 level=info collector=container Dec 6 04:47:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941. Dec 6 04:47:32 localhost systemd[1]: Starting Podman API Service... 
Dec 6 04:47:32 localhost podman[241065]: 2025-12-06 09:47:32.522080208 +0000 UTC m=+0.162982871 container start b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Dec 6 04:47:32 localhost podman[241065]: podman_exporter Dec 6 04:47:32 localhost systemd[1]: Started Podman API Service. Dec 6 04:47:32 localhost systemd[1]: Started podman_exporter container. Dec 6 04:47:32 localhost podman[241090]: time="2025-12-06T09:47:32Z" level=info msg="/usr/bin/podman filtering at log level info" Dec 6 04:47:32 localhost podman[241090]: time="2025-12-06T09:47:32Z" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" Dec 6 04:47:32 localhost podman[241090]: time="2025-12-06T09:47:32Z" level=info msg="Setting parallel job count to 25" Dec 6 04:47:32 localhost podman[241090]: time="2025-12-06T09:47:32Z" level=info msg="Using systemd socket activation to determine API endpoint" Dec 6 04:47:32 localhost podman[241090]: time="2025-12-06T09:47:32Z" level=info msg="API service listening on \"/run/podman/podman.sock\". 
URI: \"/run/podman/podman.sock\"" Dec 6 04:47:32 localhost podman[241090]: @ - - [06/Dec/2025:09:47:32 +0000] "GET /v4.9.3/libpod/_ping HTTP/1.1" 200 2 "" "Go-http-client/1.1" Dec 6 04:47:32 localhost podman[241090]: time="2025-12-06T09:47:32Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 6 04:47:32 localhost podman[241089]: 2025-12-06 09:47:32.584065062 +0000 UTC m=+0.058162543 container health_status b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=starting, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Dec 6 04:47:32 localhost podman[241089]: 2025-12-06 09:47:32.667090516 +0000 UTC m=+0.141188017 container exec_died b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': 
{'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 6 04:47:32 localhost podman[241089]: unhealthy Dec 6 04:47:33 localhost systemd[1]: var-lib-containers-storage-overlay-d8311fd89fa9ff9a4d8824219b7d14d00721d421cc1a51c3601cb914a56f4bfc-merged.mount: Deactivated successfully. Dec 6 04:47:33 localhost systemd[1]: b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941.service: Main process exited, code=exited, status=1/FAILURE Dec 6 04:47:33 localhost systemd[1]: b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941.service: Failed with result 'exit-code'. Dec 6 04:47:33 localhost sshd[241143]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:47:34 localhost nova_compute[230884]: 2025-12-06 09:47:34.288 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:47:34 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Dec 6 04:47:34 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Dec 6 04:47:34 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. 
Dec 6 04:47:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58668 DF PROTO=TCP SPT=38750 DPT=9102 SEQ=2585128302 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52DBBB700000000001030307) Dec 6 04:47:34 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Dec 6 04:47:35 localhost python3.9[241237]: ansible-ansible.builtin.systemd Invoked with name=edpm_podman_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Dec 6 04:47:35 localhost systemd[1]: Stopping podman_exporter container... Dec 6 04:47:35 localhost podman[241090]: @ - - [06/Dec/2025:09:47:32 +0000] "GET /v4.9.3/libpod/events?filters=%7B%7D&since=&stream=true&until= HTTP/1.1" 200 2790 "" "Go-http-client/1.1" Dec 6 04:47:35 localhost systemd[1]: libpod-b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941.scope: Deactivated successfully. 
Dec 6 04:47:35 localhost podman[241241]: 2025-12-06 09:47:35.203942299 +0000 UTC m=+0.057811071 container died b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Dec 6 04:47:35 localhost systemd[1]: b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941.timer: Deactivated successfully. Dec 6 04:47:35 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941. Dec 6 04:47:35 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941-userdata-shm.mount: Deactivated successfully. Dec 6 04:47:35 localhost systemd[1]: var-lib-containers-storage-overlay-cd6425452938e99a947d98ed440c416f97c1a47fc1a973380479b4612f15ab3d-merged.mount: Deactivated successfully. 
Dec 6 04:47:35 localhost nova_compute[230884]: 2025-12-06 09:47:35.482 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:47:35 localhost systemd[1]: var-lib-containers-storage-overlay-fcbe7548a736ce5f8ebae55fdcfeeff017a268d646edfaeb837e7d7f4a13d780-merged.mount: Deactivated successfully. Dec 6 04:47:35 localhost podman[241241]: 2025-12-06 09:47:35.587633697 +0000 UTC m=+0.441502419 container cleanup b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Dec 6 04:47:35 localhost podman[241241]: podman_exporter Dec 6 04:47:35 localhost podman[241254]: 2025-12-06 09:47:35.60089392 +0000 UTC m=+0.395953620 container cleanup b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 
'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Dec 6 04:47:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094. Dec 6 04:47:37 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Dec 6 04:47:37 localhost systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully. Dec 6 04:47:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19323 DF PROTO=TCP SPT=48568 DPT=9102 SEQ=2118299854 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52DBC7EF0000000001030307) Dec 6 04:47:38 localhost systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully. 
Dec 6 04:47:38 localhost systemd[1]: edpm_podman_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT Dec 6 04:47:38 localhost podman[241269]: 2025-12-06 09:47:38.090642534 +0000 UTC m=+0.751047247 container health_status bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=starting, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125, tcib_managed=true, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0) Dec 6 
04:47:38 localhost podman[241281]: 2025-12-06 09:47:38.147532985 +0000 UTC m=+0.077445916 container cleanup b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Dec 6 04:47:38 localhost podman[241281]: podman_exporter Dec 6 04:47:38 localhost podman[241269]: 2025-12-06 09:47:38.175260838 +0000 UTC m=+0.835665541 container exec_died bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': 
['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, config_id=edpm, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 04:47:38 localhost podman[241269]: unhealthy Dec 6 04:47:39 localhost nova_compute[230884]: 2025-12-06 09:47:39.291 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:47:39 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Dec 6 04:47:39 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Dec 6 04:47:40 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Dec 6 04:47:40 localhost systemd[1]: edpm_podman_exporter.service: Failed with result 'exit-code'. Dec 6 04:47:40 localhost systemd[1]: Stopped podman_exporter container. 
Dec 6 04:47:40 localhost systemd[1]: bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094.service: Main process exited, code=exited, status=1/FAILURE Dec 6 04:47:40 localhost systemd[1]: bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094.service: Failed with result 'exit-code'. Dec 6 04:47:40 localhost systemd[1]: Starting podman_exporter container... Dec 6 04:47:40 localhost nova_compute[230884]: 2025-12-06 09:47:40.526 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:47:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58670 DF PROTO=TCP SPT=38750 DPT=9102 SEQ=2585128302 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52DBD32F0000000001030307) Dec 6 04:47:41 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Dec 6 04:47:41 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Dec 6 04:47:41 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Dec 6 04:47:41 localhost systemd[1]: Started libcrun container. Dec 6 04:47:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941. 
Dec 6 04:47:41 localhost podman[241299]: 2025-12-06 09:47:41.592996983 +0000 UTC m=+1.413586795 container init b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 6 04:47:41 localhost podman_exporter[241313]: ts=2025-12-06T09:47:41.606Z caller=exporter.go:68 level=info msg="Starting podman-prometheus-exporter" version="(version=1.10.1, branch=HEAD, revision=1)"
Dec 6 04:47:41 localhost podman_exporter[241313]: ts=2025-12-06T09:47:41.606Z caller=exporter.go:69 level=info msg=metrics enhanced=false
Dec 6 04:47:41 localhost podman_exporter[241313]: ts=2025-12-06T09:47:41.606Z caller=handler.go:94 level=info msg="enabled collectors"
Dec 6 04:47:41 localhost podman_exporter[241313]: ts=2025-12-06T09:47:41.607Z caller=handler.go:105 level=info collector=container
Dec 6 04:47:41 localhost podman[241090]: @ - - [06/Dec/2025:09:47:41 +0000] "GET /v4.9.3/libpod/_ping HTTP/1.1" 200 2 "" "Go-http-client/1.1"
Dec 6 04:47:41 localhost podman[241090]: time="2025-12-06T09:47:41Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 6 04:47:41 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 6 04:47:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941.
Dec 6 04:47:41 localhost podman[241299]: 2025-12-06 09:47:41.635961531 +0000 UTC m=+1.456551333 container start b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible)
Dec 6 04:47:41 localhost podman[241299]: podman_exporter
Dec 6 04:47:42 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 6 04:47:42 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 6 04:47:42 localhost systemd[1]: var-lib-containers-storage-overlay-d8311fd89fa9ff9a4d8824219b7d14d00721d421cc1a51c3601cb914a56f4bfc-merged.mount: Deactivated successfully.
Dec 6 04:47:42 localhost systemd[1]: Started podman_exporter container.
Dec 6 04:47:42 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 6 04:47:42 localhost podman[241323]: 2025-12-06 09:47:42.39463334 +0000 UTC m=+0.755722135 container health_status b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=starting, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible)
Dec 6 04:47:42 localhost podman[241323]: 2025-12-06 09:47:42.40907031 +0000 UTC m=+0.770159145 container exec_died b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible)
Dec 6 04:47:42 localhost podman[241323]: unhealthy
Dec 6 04:47:42 localhost systemd[1]: var-lib-containers-storage-overlay-d8311fd89fa9ff9a4d8824219b7d14d00721d421cc1a51c3601cb914a56f4bfc-merged.mount: Deactivated successfully.
Dec 6 04:47:42 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 6 04:47:42 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 6 04:47:43 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 6 04:47:43 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 6 04:47:43 localhost systemd[1]: b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941.service: Main process exited, code=exited, status=1/FAILURE
Dec 6 04:47:43 localhost systemd[1]: b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941.service: Failed with result 'exit-code'.
Dec 6 04:47:44 localhost systemd[1]: var-lib-containers-storage-overlay-d8311fd89fa9ff9a4d8824219b7d14d00721d421cc1a51c3601cb914a56f4bfc-merged.mount: Deactivated successfully.
Dec 6 04:47:44 localhost systemd[1]: var-lib-containers-storage-overlay-cd6425452938e99a947d98ed440c416f97c1a47fc1a973380479b4612f15ab3d-merged.mount: Deactivated successfully.
Dec 6 04:47:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24014 DF PROTO=TCP SPT=34684 DPT=9101 SEQ=3076688077 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52DBE0E00000000001030307)
Dec 6 04:47:44 localhost nova_compute[230884]: 2025-12-06 09:47:44.341 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 6 04:47:44 localhost python3.9[241456]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/openstack_network_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 6 04:47:45 localhost python3.9[241544]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/openstack_network_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765014464.0755699-2055-77208776298872/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 6 04:47:45 localhost systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully.
Dec 6 04:47:45 localhost systemd[1]: var-lib-containers-storage-overlay-7d7c22414f3b3b03ee747009a3ba1860a523968c599aef8234ba1bea94f6d58e-merged.mount: Deactivated successfully.
Dec 6 04:47:45 localhost systemd[1]: var-lib-containers-storage-overlay-7d7c22414f3b3b03ee747009a3ba1860a523968c599aef8234ba1bea94f6d58e-merged.mount: Deactivated successfully.
Dec 6 04:47:45 localhost nova_compute[230884]: 2025-12-06 09:47:45.564 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 6 04:47:45 localhost sshd[241562]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:47:46 localhost systemd[1]: var-lib-containers-storage-overlay-0dae0ae2501f0b947a8e64948b264823feec8c7ddb8b7849cb102fbfe0c75da8-merged.mount: Deactivated successfully.
Dec 6 04:47:46 localhost systemd[1]: var-lib-containers-storage-overlay-ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9-merged.mount: Deactivated successfully.
Dec 6 04:47:46 localhost systemd[1]: var-lib-containers-storage-overlay-06baa34adcac19ffd1cac321f0c14e5e32037c7b357d2eb54e065b4d177d72fd-merged.mount: Deactivated successfully.
Dec 6 04:47:46 localhost systemd[1]: var-lib-containers-storage-overlay-06baa34adcac19ffd1cac321f0c14e5e32037c7b357d2eb54e065b4d177d72fd-merged.mount: Deactivated successfully.
Dec 6 04:47:47 localhost python3.9[241656]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=openstack_network_exporter.json debug=False
Dec 6 04:47:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:47:47.277 160509 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 6 04:47:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:47:47.277 160509 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 6 04:47:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:47:47.279 160509 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 6 04:47:47 localhost systemd[1]: var-lib-containers-storage-overlay-cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa-merged.mount: Deactivated successfully.
Dec 6 04:47:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24016 DF PROTO=TCP SPT=34684 DPT=9101 SEQ=3076688077 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52DBECEF0000000001030307)
Dec 6 04:47:47 localhost systemd[1]: var-lib-containers-storage-overlay-cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa-merged.mount: Deactivated successfully.
Dec 6 04:47:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6.
Dec 6 04:47:47 localhost podman[241767]: 2025-12-06 09:47:47.927714806 +0000 UTC m=+0.123852935 container health_status 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 6 04:47:47 localhost podman[241767]: 2025-12-06 09:47:47.941239967 +0000 UTC m=+0.137378146 container exec_died 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, org.label-schema.schema-version=1.0)
Dec 6 04:47:47 localhost python3.9[241766]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 6 04:47:48 localhost systemd[1]: var-lib-containers-storage-overlay-0dae0ae2501f0b947a8e64948b264823feec8c7ddb8b7849cb102fbfe0c75da8-merged.mount: Deactivated successfully.
Dec 6 04:47:48 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 6 04:47:48 localhost systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully.
Dec 6 04:47:48 localhost systemd[1]: 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6.service: Deactivated successfully.
Dec 6 04:47:49 localhost sshd[241896]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:47:49 localhost nova_compute[230884]: 2025-12-06 09:47:49.381 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 6 04:47:49 localhost python3[241895]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=openstack_network_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Dec 6 04:47:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1129 DF PROTO=TCP SPT=51478 DPT=9105 SEQ=2814942310 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52DBF5EF0000000001030307)
Dec 6 04:47:49 localhost sshd[241910]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 04:47:50 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 6 04:47:50 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 6 04:47:50 localhost systemd[1]: var-lib-containers-storage-overlay-14ed6d3c1e7f0efbf3e5310f077b6fbf5a3cd333e0b5df7204752cd3df15a8b7-merged.mount: Deactivated successfully.
Dec 6 04:47:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538.
Dec 6 04:47:50 localhost systemd[1]: var-lib-containers-storage-overlay-5072aa4283df2440f817438926274b2ecc1fbb999174180268a40a1b62865efd-merged.mount: Deactivated successfully.
Dec 6 04:47:50 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 6 04:47:50 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 6 04:47:50 localhost podman[241911]: 2025-12-06 09:47:50.497281342 +0000 UTC m=+0.205790365 container health_status d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors )
Dec 6 04:47:50 localhost podman[241911]: 2025-12-06 09:47:50.533460994 +0000 UTC m=+0.241970007 container exec_died d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 6 04:47:50 localhost nova_compute[230884]: 2025-12-06 09:47:50.620 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 6 04:47:51 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 6 04:47:51 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 6 04:47:51 localhost podman[241090]: time="2025-12-06T09:47:51Z" level=error msg="Getting root fs size for \"0d3b158edc684f8600676f0dcf0cb5a14357db1593e587d02be14e52e3f8b304\": getting diffsize of layer \"3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae\" and its parent \"efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf\": unmounting layer 3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae: replacing mount point \"/var/lib/containers/storage/overlay/3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae/merged\": device or resource busy"
Dec 6 04:47:52 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 6 04:47:52 localhost systemd[1]: var-lib-containers-storage-overlay-14ed6d3c1e7f0efbf3e5310f077b6fbf5a3cd333e0b5df7204752cd3df15a8b7-merged.mount: Deactivated successfully.
Dec 6 04:47:52 localhost systemd[1]: var-lib-containers-storage-overlay-14ed6d3c1e7f0efbf3e5310f077b6fbf5a3cd333e0b5df7204752cd3df15a8b7-merged.mount: Deactivated successfully.
Dec 6 04:47:52 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 6 04:47:52 localhost systemd[1]: d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538.service: Deactivated successfully.
Dec 6 04:47:53 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 6 04:47:53 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 6 04:47:53 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1130 DF PROTO=TCP SPT=51478 DPT=9105 SEQ=2814942310 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52DC05AF0000000001030307)
Dec 6 04:47:54 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 6 04:47:54 localhost nova_compute[230884]: 2025-12-06 09:47:54.382 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 6 04:47:54 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 6 04:47:54 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 6 04:47:54 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 6 04:47:55 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 6 04:47:55 localhost systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully.
Dec 6 04:47:55 localhost systemd[1]: var-lib-containers-storage-overlay-7d7c22414f3b3b03ee747009a3ba1860a523968c599aef8234ba1bea94f6d58e-merged.mount: Deactivated successfully.
Dec 6 04:47:55 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 6 04:47:55 localhost nova_compute[230884]: 2025-12-06 09:47:55.621 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 6 04:47:55 localhost systemd[1]: var-lib-containers-storage-overlay-06baa34adcac19ffd1cac321f0c14e5e32037c7b357d2eb54e065b4d177d72fd-merged.mount: Deactivated successfully.
Dec 6 04:47:55 localhost systemd[1]: var-lib-containers-storage-overlay-0dae0ae2501f0b947a8e64948b264823feec8c7ddb8b7849cb102fbfe0c75da8-merged.mount: Deactivated successfully.
Dec 6 04:47:55 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15811 DF PROTO=TCP SPT=33906 DPT=9882 SEQ=2958895315 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52DC0DF00000000001030307)
Dec 6 04:47:56 localhost systemd[1]: var-lib-containers-storage-overlay-ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9-merged.mount: Deactivated successfully.
Dec 6 04:47:56 localhost systemd[1]: var-lib-containers-storage-overlay-06baa34adcac19ffd1cac321f0c14e5e32037c7b357d2eb54e065b4d177d72fd-merged.mount: Deactivated successfully.
Dec 6 04:47:56 localhost systemd[1]: var-lib-containers-storage-overlay-06baa34adcac19ffd1cac321f0c14e5e32037c7b357d2eb54e065b4d177d72fd-merged.mount: Deactivated successfully.
Dec 6 04:47:56 localhost nova_compute[230884]: 2025-12-06 09:47:56.460 230888 DEBUG oslo_service.periodic_task [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 6 04:47:56 localhost nova_compute[230884]: 2025-12-06 09:47:56.461 230888 DEBUG oslo_service.periodic_task [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 6 04:47:56 localhost nova_compute[230884]: 2025-12-06 09:47:56.481 230888 DEBUG oslo_service.periodic_task [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 6 04:47:56 localhost nova_compute[230884]: 2025-12-06 09:47:56.481 230888 DEBUG nova.compute.manager [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 6 04:47:56 localhost nova_compute[230884]: 2025-12-06 09:47:56.482 230888 DEBUG nova.compute.manager [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 6 04:47:57 localhost nova_compute[230884]: 2025-12-06 09:47:57.199 230888 DEBUG oslo_concurrency.lockutils [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Acquiring lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 6 04:47:57 localhost nova_compute[230884]: 2025-12-06 09:47:57.199 230888 DEBUG oslo_concurrency.lockutils [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Acquired lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 6 04:47:57 localhost nova_compute[230884]: 2025-12-06 09:47:57.199 230888 DEBUG nova.network.neutron [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec 6 04:47:57 localhost nova_compute[230884]: 2025-12-06 09:47:57.200 230888 DEBUG nova.objects.instance [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b7ed0a2e-9350-4933-9334-4e5e08d3e6aa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 6 04:47:57 localhost systemd[1]: var-lib-containers-storage-overlay-5072aa4283df2440f817438926274b2ecc1fbb999174180268a40a1b62865efd-merged.mount: Deactivated successfully.
Dec 6 04:47:57 localhost systemd[1]: var-lib-containers-storage-overlay-cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa-merged.mount: Deactivated successfully.
Dec 6 04:47:57 localhost systemd[1]: var-lib-containers-storage-overlay-0dae0ae2501f0b947a8e64948b264823feec8c7ddb8b7849cb102fbfe0c75da8-merged.mount: Deactivated successfully.
Dec 6 04:47:58 localhost nova_compute[230884]: 2025-12-06 09:47:58.270 230888 DEBUG nova.network.neutron [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Updating instance_info_cache with network_info: [{"id": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "address": "fa:16:3e:64:77:f3", "network": {"id": "652b6bdc-40ce-45b7-8aa5-3bca79987993", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.162", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d603431c0bb4967bafc7a0aa6108bfe", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86fc0b7a-fb", "ovs_interfaceid": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 6 04:47:58 localhost nova_compute[230884]: 2025-12-06 09:47:58.282 230888 DEBUG oslo_concurrency.lockutils [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Releasing lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 6 04:47:58 localhost nova_compute[230884]: 2025-12-06 09:47:58.282 230888 DEBUG nova.compute.manager [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec 6 04:47:58 localhost nova_compute[230884]: 2025-12-06 09:47:58.283 230888 DEBUG oslo_service.periodic_task [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 6 04:47:58 localhost nova_compute[230884]: 2025-12-06 09:47:58.283 230888 DEBUG oslo_service.periodic_task [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 6 04:47:58 localhost nova_compute[230884]: 2025-12-06 09:47:58.284 230888 DEBUG oslo_service.periodic_task [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 6 04:47:58 localhost nova_compute[230884]: 2025-12-06 09:47:58.284 230888 DEBUG oslo_service.periodic_task [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 6 04:47:58 localhost nova_compute[230884]: 2025-12-06 09:47:58.284 230888 DEBUG oslo_service.periodic_task [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 6 04:47:58 localhost nova_compute[230884]: 2025-12-06 09:47:58.284 230888 DEBUG oslo_service.periodic_task [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 6 04:47:58 localhost nova_compute[230884]: 2025-12-06 09:47:58.285 230888 DEBUG nova.compute.manager [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 6 04:47:58 localhost nova_compute[230884]: 2025-12-06 09:47:58.285 230888 DEBUG oslo_service.periodic_task [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 6 04:47:58 localhost nova_compute[230884]: 2025-12-06 09:47:58.299 230888 DEBUG oslo_concurrency.lockutils [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 6 04:47:58 localhost nova_compute[230884]: 2025-12-06 09:47:58.300 230888 DEBUG oslo_concurrency.lockutils [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 6 04:47:58 localhost nova_compute[230884]: 2025-12-06 09:47:58.300 230888 DEBUG oslo_concurrency.lockutils [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 6 04:47:58 localhost nova_compute[230884]: 2025-12-06 09:47:58.300 230888 DEBUG nova.compute.resource_tracker [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - -
- - -] Auditing locally available compute resources for np0005548789.localdomain (node: np0005548789.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 6 04:47:58 localhost nova_compute[230884]: 2025-12-06 09:47:58.301 230888 DEBUG oslo_concurrency.processutils [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 6 04:47:58 localhost nova_compute[230884]: 2025-12-06 09:47:58.773 230888 DEBUG oslo_concurrency.processutils [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 6 04:47:58 localhost nova_compute[230884]: 2025-12-06 09:47:58.842 230888 DEBUG nova.virt.libvirt.driver [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 6 04:47:58 localhost nova_compute[230884]: 2025-12-06 09:47:58.842 230888 DEBUG nova.virt.libvirt.driver [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 6 04:47:58 localhost nova_compute[230884]: 2025-12-06 09:47:58.991 230888 WARNING nova.virt.libvirt.driver [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 6 04:47:58 localhost nova_compute[230884]: 2025-12-06 09:47:58.992 230888 DEBUG nova.compute.resource_tracker [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Hypervisor/Node resource view: name=np0005548789.localdomain free_ram=12423MB free_disk=41.83721923828125GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": 
"1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 6 04:47:58 localhost nova_compute[230884]: 2025-12-06 09:47:58.992 230888 DEBUG oslo_concurrency.lockutils [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 04:47:58 localhost nova_compute[230884]: 2025-12-06 09:47:58.992 230888 DEBUG oslo_concurrency.lockutils [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 04:47:59 localhost nova_compute[230884]: 2025-12-06 09:47:59.078 230888 DEBUG nova.compute.resource_tracker [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Instance b7ed0a2e-9350-4933-9334-4e5e08d3e6aa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 6 04:47:59 localhost nova_compute[230884]: 2025-12-06 09:47:59.079 230888 DEBUG nova.compute.resource_tracker [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 6 04:47:59 localhost nova_compute[230884]: 2025-12-06 09:47:59.079 230888 DEBUG nova.compute.resource_tracker [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Final resource view: name=np0005548789.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 6 04:47:59 localhost nova_compute[230884]: 2025-12-06 09:47:59.135 230888 DEBUG oslo_concurrency.processutils [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 6 04:47:59 localhost nova_compute[230884]: 2025-12-06 09:47:59.424 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:47:59 localhost nova_compute[230884]: 2025-12-06 09:47:59.595 230888 DEBUG oslo_concurrency.processutils [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 6 04:47:59 localhost nova_compute[230884]: 2025-12-06 09:47:59.602 230888 DEBUG nova.compute.provider_tree [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Inventory has not changed in 
ProviderTree for provider: 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 6 04:47:59 localhost nova_compute[230884]: 2025-12-06 09:47:59.620 230888 DEBUG nova.scheduler.client.report [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Inventory has not changed for provider 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 6 04:47:59 localhost nova_compute[230884]: 2025-12-06 09:47:59.622 230888 DEBUG nova.compute.resource_tracker [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Compute_service record updated for np0005548789.localdomain:np0005548789.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 6 04:47:59 localhost nova_compute[230884]: 2025-12-06 09:47:59.623 230888 DEBUG oslo_concurrency.lockutils [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.631s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 04:47:59 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Dec 6 04:47:59 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. 
Dec 6 04:47:59 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24018 DF PROTO=TCP SPT=34684 DPT=9101 SEQ=3076688077 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52DC1DF00000000001030307) Dec 6 04:47:59 localhost systemd[1]: var-lib-containers-storage-overlay-14ed6d3c1e7f0efbf3e5310f077b6fbf5a3cd333e0b5df7204752cd3df15a8b7-merged.mount: Deactivated successfully. Dec 6 04:48:00 localhost systemd[1]: var-lib-containers-storage-overlay-5072aa4283df2440f817438926274b2ecc1fbb999174180268a40a1b62865efd-merged.mount: Deactivated successfully. Dec 6 04:48:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5. Dec 6 04:48:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999. Dec 6 04:48:00 localhost nova_compute[230884]: 2025-12-06 09:48:00.677 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:48:00 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. 
Dec 6 04:48:00 localhost podman[241991]: 2025-12-06 09:48:00.754956739 +0000 UTC m=+0.065503367 container health_status 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team) Dec 6 04:48:00 localhost systemd[1]: tmp-crun.SL3jlP.mount: Deactivated successfully. 
Dec 6 04:48:00 localhost podman[241992]: 2025-12-06 09:48:00.821698744 +0000 UTC m=+0.134642748 container health_status 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent) Dec 6 04:48:00 localhost podman[241992]: 2025-12-06 09:48:00.854124867 +0000 UTC 
m=+0.167068871 container exec_died 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, tcib_managed=true) Dec 6 04:48:00 localhost podman[241991]: 2025-12-06 09:48:00.873660019 +0000 UTC m=+0.184206667 container exec_died 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 
(image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller) Dec 6 04:48:01 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Dec 6 04:48:01 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. 
Dec 6 04:48:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1131 DF PROTO=TCP SPT=51478 DPT=9105 SEQ=2814942310 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52DC25EF0000000001030307) Dec 6 04:48:02 localhost systemd[1]: var-lib-containers-storage-overlay-14ed6d3c1e7f0efbf3e5310f077b6fbf5a3cd333e0b5df7204752cd3df15a8b7-merged.mount: Deactivated successfully. Dec 6 04:48:02 localhost systemd[1]: var-lib-containers-storage-overlay-14ed6d3c1e7f0efbf3e5310f077b6fbf5a3cd333e0b5df7204752cd3df15a8b7-merged.mount: Deactivated successfully. Dec 6 04:48:02 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Dec 6 04:48:03 localhost systemd[1]: 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.service: Deactivated successfully. Dec 6 04:48:03 localhost systemd[1]: 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.service: Deactivated successfully. Dec 6 04:48:04 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Dec 6 04:48:04 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. 
Dec 6 04:48:04 localhost nova_compute[230884]: 2025-12-06 09:48:04.472 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:48:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27375 DF PROTO=TCP SPT=36682 DPT=9102 SEQ=4003239329 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52DC30AF0000000001030307) Dec 6 04:48:04 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Dec 6 04:48:04 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Dec 6 04:48:05 localhost kernel: overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 6 04:48:05 localhost kernel: overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. 
Dec 6 04:48:05 localhost podman[241090]: time="2025-12-06T09:48:05Z" level=error msg="Getting root fs size for \"15b3289dfb7ed52ab0d10f0af104ac3227bc5ecb093aa34d613c9c265f6e2f89\": getting diffsize of layer \"efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf\" and its parent \"c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6\": unmounting layer efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf: replacing mount point \"/var/lib/containers/storage/overlay/efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf/merged\": device or resource busy" Dec 6 04:48:05 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 6 04:48:05 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Dec 6 04:48:05 localhost nova_compute[230884]: 2025-12-06 09:48:05.718 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:48:05 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Dec 6 04:48:05 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Dec 6 04:48:05 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Dec 6 04:48:07 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. 
Dec 6 04:48:07 localhost systemd[1]: var-lib-containers-storage-overlay-89ee8dead5a29d0553b978375c66d4bc010ba2732baca36dcfe5a54e3214c8ff-merged.mount: Deactivated successfully. Dec 6 04:48:08 localhost systemd[1]: var-lib-containers-storage-overlay-5072aa4283df2440f817438926274b2ecc1fbb999174180268a40a1b62865efd-merged.mount: Deactivated successfully. Dec 6 04:48:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10210 DF PROTO=TCP SPT=49382 DPT=9882 SEQ=3923200607 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52DC41EF0000000001030307) Dec 6 04:48:09 localhost nova_compute[230884]: 2025-12-06 09:48:09.507 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:48:10 localhost systemd[1]: var-lib-containers-storage-overlay-14ed6d3c1e7f0efbf3e5310f077b6fbf5a3cd333e0b5df7204752cd3df15a8b7-merged.mount: Deactivated successfully. Dec 6 04:48:10 localhost systemd[1]: var-lib-containers-storage-overlay-853ccb0b7aef1ea23933a0a39c3ed46ab9d9a29acf9ba782f87031dcfb79c247-merged.mount: Deactivated successfully. Dec 6 04:48:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094. Dec 6 04:48:10 localhost systemd[1]: var-lib-containers-storage-overlay-853ccb0b7aef1ea23933a0a39c3ed46ab9d9a29acf9ba782f87031dcfb79c247-merged.mount: Deactivated successfully. Dec 6 04:48:10 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. 
Dec 6 04:48:10 localhost nova_compute[230884]: 2025-12-06 09:48:10.757 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:48:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27377 DF PROTO=TCP SPT=36682 DPT=9102 SEQ=4003239329 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52DC486F0000000001030307) Dec 6 04:48:11 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Dec 6 04:48:11 localhost podman[242093]: 2025-12-06 09:48:11.147731904 +0000 UTC m=+0.909440243 container health_status bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=starting, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', 
'/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 04:48:11 localhost podman[242093]: 2025-12-06 09:48:11.177552756 +0000 UTC m=+0.939261115 container exec_died bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Dec 6 04:48:11 localhost podman[242093]: unhealthy Dec 6 04:55:38 localhost python3.9[264841]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 6 04:55:39 localhost rsyslogd[760]: imjournal: 8252 messages lost due to rate-limiting (20000 allowed within 600 seconds) Dec 6 04:55:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094. 
Dec 6 04:55:39 localhost podman[264942]: 2025-12-06 09:55:39.848852589 +0000 UTC m=+0.068774731 container health_status bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_id=edpm) Dec 6 04:55:39 localhost podman[264942]: 2025-12-06 09:55:39.86135096 +0000 UTC m=+0.081273162 container exec_died 
bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3) Dec 6 04:55:39 localhost systemd[1]: bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094.service: Deactivated successfully. 
Dec 6 04:55:40 localhost python3.9[264964]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 04:55:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52215 DF PROTO=TCP SPT=53674 DPT=9102 SEQ=1596883805 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52E326300000000001030307) Dec 6 04:55:40 localhost sshd[265082]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:55:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e. Dec 6 04:55:40 localhost python3.9[265081]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 6 04:55:40 localhost podman[265084]: 2025-12-06 09:55:40.925261496 +0000 UTC m=+0.087393068 container health_status 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vendor=Red Hat, Inc., vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, version=9.6, config_id=edpm, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, distribution-scope=public, io.buildah.version=1.33.7, name=ubi9-minimal, architecture=x86_64) Dec 6 04:55:40 localhost podman[265084]: 2025-12-06 09:55:40.939193182 +0000 UTC m=+0.101324754 container exec_died 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e 
(image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, distribution-scope=public, managed_by=edpm_ansible, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, version=9.6, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git) Dec 6 04:55:40 localhost systemd[1]: 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e.service: Deactivated successfully. Dec 6 04:55:42 localhost python3.9[265213]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:55:42 localhost nova_compute[230884]: 2025-12-06 09:55:42.337 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:55:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6. 
Dec 6 04:55:43 localhost podman[265324]: 2025-12-06 09:55:43.342062611 +0000 UTC m=+0.082965134 container health_status 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, tcib_managed=true) Dec 6 04:55:43 localhost podman[265324]: 2025-12-06 09:55:43.38073201 +0000 UTC m=+0.121634513 container exec_died 
44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Dec 6 04:55:43 localhost systemd[1]: 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6.service: Deactivated successfully. 
Dec 6 04:55:43 localhost python3.9[265323]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 6 04:55:43 localhost nova_compute[230884]: 2025-12-06 09:55:43.552 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:55:44 localhost sshd[265455]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:55:44 localhost python3.9[265454]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 6 04:55:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538. Dec 6 04:55:46 localhost podman[265568]: 2025-12-06 09:55:46.500483872 +0000 UTC m=+0.075935669 container health_status d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 
'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 6 04:55:46 localhost podman[265568]: 2025-12-06 09:55:46.532006374 +0000 UTC m=+0.107458171 container exec_died d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Dec 6 04:55:46 localhost systemd[1]: 
d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538.service: Deactivated successfully. Dec 6 04:55:46 localhost python3.9[265569]: ansible-ansible.builtin.service_facts Invoked Dec 6 04:55:46 localhost openstack_network_exporter[243110]: ERROR 09:55:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 04:55:46 localhost openstack_network_exporter[243110]: ERROR 09:55:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 04:55:46 localhost openstack_network_exporter[243110]: ERROR 09:55:46 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 6 04:55:46 localhost openstack_network_exporter[243110]: ERROR 09:55:46 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 6 04:55:46 localhost openstack_network_exporter[243110]: Dec 6 04:55:46 localhost openstack_network_exporter[243110]: ERROR 09:55:46 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 6 04:55:46 localhost openstack_network_exporter[243110]: Dec 6 04:55:46 localhost network[265608]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Dec 6 04:55:46 localhost network[265609]: 'network-scripts' will be removed from distribution in near future. Dec 6 04:55:46 localhost network[265610]: It is advised to switch to 'NetworkManager' instead for network management. 
Dec 6 04:55:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:55:47.285 160509 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 04:55:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:55:47.286 160509 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 04:55:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:55:47.287 160509 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 04:55:47 localhost nova_compute[230884]: 2025-12-06 09:55:47.382 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:55:48 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Dec 6 04:55:48 localhost nova_compute[230884]: 2025-12-06 09:55:48.558 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:55:48 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52216 DF PROTO=TCP SPT=53674 DPT=9102 SEQ=1596883805 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52E345EF0000000001030307) Dec 6 04:55:50 localhost ovn_controller[154851]: 2025-12-06T09:55:50Z|00051|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory Dec 6 04:55:52 localhost nova_compute[230884]: 2025-12-06 09:55:52.430 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:55:52 localhost nova_compute[230884]: 2025-12-06 09:55:52.500 230888 DEBUG oslo_service.periodic_task [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 04:55:53 localhost nova_compute[230884]: 2025-12-06 09:55:53.523 230888 DEBUG oslo_service.periodic_task [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 04:55:53 localhost nova_compute[230884]: 2025-12-06 09:55:53.523 230888 DEBUG nova.compute.manager [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m Dec 6 04:55:53 localhost nova_compute[230884]: 2025-12-06 09:55:53.558 230888 DEBUG nova.compute.manager [None 
req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m Dec 6 04:55:53 localhost nova_compute[230884]: 2025-12-06 09:55:53.587 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:55:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5. Dec 6 04:55:53 localhost podman[265752]: 2025-12-06 09:55:53.903814762 +0000 UTC m=+0.068630676 container health_status 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes 
Operator team) Dec 6 04:55:53 localhost podman[241090]: time="2025-12-06T09:55:53Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 6 04:55:53 localhost podman[241090]: @ - - [06/Dec/2025:09:55:53 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 150643 "" "Go-http-client/1.1" Dec 6 04:55:54 localhost podman[265752]: 2025-12-06 09:55:54.016519473 +0000 UTC m=+0.181335367 container exec_died 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0) Dec 6 04:55:54 localhost systemd[1]: 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.service: Deactivated successfully. 
Dec 6 04:55:54 localhost podman[241090]: @ - - [06/Dec/2025:09:55:53 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17712 "" "Go-http-client/1.1" Dec 6 04:55:54 localhost python3.9[265871]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None Dec 6 04:55:55 localhost python3.9[265981]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled Dec 6 04:55:56 localhost python3.9[266091]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:55:57 localhost python3.9[266148]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/modules-load.d/dm-multipath.conf _original_basename=module-load.conf.j2 recurse=False state=file path=/etc/modules-load.d/dm-multipath.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:55:57 localhost nova_compute[230884]: 2025-12-06 09:55:57.479 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:55:57 localhost python3.9[266258]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False 
firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:55:58 localhost nova_compute[230884]: 2025-12-06 09:55:58.537 230888 DEBUG oslo_service.periodic_task [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 04:55:58 localhost nova_compute[230884]: 2025-12-06 09:55:58.635 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:55:58 localhost python3.9[266368]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 6 04:55:59 localhost nova_compute[230884]: 2025-12-06 09:55:59.497 230888 DEBUG oslo_service.periodic_task [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 04:55:59 localhost nova_compute[230884]: 2025-12-06 09:55:59.533 230888 DEBUG oslo_service.periodic_task [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 04:55:59 localhost nova_compute[230884]: 2025-12-06 09:55:59.534 230888 DEBUG nova.compute.manager [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - 
- - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 6 04:55:59 localhost python3.9[266478]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 6 04:56:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999. Dec 6 04:56:00 localhost podman[266591]: 2025-12-06 09:56:00.456489236 +0000 UTC m=+0.077993612 container health_status 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, managed_by=edpm_ansible) Dec 6 04:56:00 localhost podman[266591]: 2025-12-06 09:56:00.466235423 +0000 UTC m=+0.087739809 container exec_died 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent) Dec 6 04:56:00 localhost systemd[1]: 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.service: Deactivated successfully. Dec 6 04:56:00 localhost nova_compute[230884]: 2025-12-06 09:56:00.500 230888 DEBUG oslo_service.periodic_task [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 04:56:00 localhost nova_compute[230884]: 2025-12-06 09:56:00.500 230888 DEBUG oslo_service.periodic_task [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 04:56:00 localhost nova_compute[230884]: 2025-12-06 09:56:00.501 230888 DEBUG nova.compute.manager [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 6 04:56:00 localhost nova_compute[230884]: 2025-12-06 09:56:00.501 230888 DEBUG nova.compute.manager [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 6 04:56:00 localhost python3.9[266590]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False 
checksum_algorithm=sha1 Dec 6 04:56:00 localhost nova_compute[230884]: 2025-12-06 09:56:00.592 230888 DEBUG oslo_concurrency.lockutils [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Acquiring lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 6 04:56:00 localhost nova_compute[230884]: 2025-12-06 09:56:00.592 230888 DEBUG oslo_concurrency.lockutils [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Acquired lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 6 04:56:00 localhost nova_compute[230884]: 2025-12-06 09:56:00.593 230888 DEBUG nova.network.neutron [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 6 04:56:00 localhost nova_compute[230884]: 2025-12-06 09:56:00.593 230888 DEBUG nova.objects.instance [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b7ed0a2e-9350-4933-9334-4e5e08d3e6aa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 6 04:56:01 localhost nova_compute[230884]: 2025-12-06 09:56:01.109 230888 DEBUG nova.network.neutron [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Updating instance_info_cache with network_info: [{"id": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "address": "fa:16:3e:64:77:f3", "network": {"id": "652b6bdc-40ce-45b7-8aa5-3bca79987993", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.162", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "3d603431c0bb4967bafc7a0aa6108bfe", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86fc0b7a-fb", "ovs_interfaceid": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 6 04:56:01 localhost nova_compute[230884]: 2025-12-06 09:56:01.129 230888 DEBUG oslo_concurrency.lockutils [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Releasing lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 6 04:56:01 localhost nova_compute[230884]: 2025-12-06 09:56:01.129 230888 DEBUG nova.compute.manager [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 6 04:56:01 localhost nova_compute[230884]: 2025-12-06 09:56:01.130 230888 DEBUG oslo_service.periodic_task [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 04:56:01 localhost nova_compute[230884]: 2025-12-06 09:56:01.130 230888 DEBUG oslo_service.periodic_task [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Running 
periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 04:56:01 localhost nova_compute[230884]: 2025-12-06 09:56:01.155 230888 DEBUG oslo_concurrency.lockutils [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 04:56:01 localhost nova_compute[230884]: 2025-12-06 09:56:01.156 230888 DEBUG oslo_concurrency.lockutils [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 04:56:01 localhost nova_compute[230884]: 2025-12-06 09:56:01.156 230888 DEBUG oslo_concurrency.lockutils [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 04:56:01 localhost nova_compute[230884]: 2025-12-06 09:56:01.156 230888 DEBUG nova.compute.resource_tracker [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Auditing locally available compute resources for np0005548789.localdomain (node: np0005548789.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 6 04:56:01 localhost nova_compute[230884]: 2025-12-06 09:56:01.157 230888 DEBUG oslo_concurrency.processutils [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute 
/usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 6 04:56:01 localhost python3.9[266721]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 04:56:01 localhost nova_compute[230884]: 2025-12-06 09:56:01.598 230888 DEBUG oslo_concurrency.processutils [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 6 04:56:01 localhost nova_compute[230884]: 2025-12-06 09:56:01.697 230888 DEBUG nova.virt.libvirt.driver [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 6 04:56:01 localhost nova_compute[230884]: 2025-12-06 09:56:01.698 230888 DEBUG nova.virt.libvirt.driver [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 6 04:56:01 localhost nova_compute[230884]: 2025-12-06 09:56:01.891 230888 WARNING nova.virt.libvirt.driver [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 6 04:56:01 localhost nova_compute[230884]: 2025-12-06 09:56:01.893 230888 DEBUG nova.compute.resource_tracker [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Hypervisor/Node resource view: name=np0005548789.localdomain free_ram=11866MB free_disk=41.83721923828125GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": 
"1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 6 04:56:01 localhost nova_compute[230884]: 2025-12-06 09:56:01.893 230888 DEBUG oslo_concurrency.lockutils [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 04:56:01 localhost nova_compute[230884]: 2025-12-06 09:56:01.894 230888 DEBUG oslo_concurrency.lockutils [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 04:56:02 localhost nova_compute[230884]: 2025-12-06 09:56:02.194 230888 DEBUG nova.compute.resource_tracker [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Instance b7ed0a2e-9350-4933-9334-4e5e08d3e6aa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 6 04:56:02 localhost nova_compute[230884]: 2025-12-06 09:56:02.194 230888 DEBUG nova.compute.resource_tracker [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 6 04:56:02 localhost nova_compute[230884]: 2025-12-06 09:56:02.194 230888 DEBUG nova.compute.resource_tracker [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Final resource view: name=np0005548789.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 6 04:56:02 localhost nova_compute[230884]: 2025-12-06 09:56:02.268 230888 DEBUG nova.scheduler.client.report [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Refreshing inventories for resource provider 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Dec 6 04:56:02 localhost nova_compute[230884]: 2025-12-06 09:56:02.333 230888 DEBUG nova.scheduler.client.report [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Updating ProviderTree inventory for provider 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Dec 6 04:56:02 
localhost nova_compute[230884]: 2025-12-06 09:56:02.333 230888 DEBUG nova.compute.provider_tree [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Updating inventory in ProviderTree for provider 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Dec 6 04:56:02 localhost nova_compute[230884]: 2025-12-06 09:56:02.349 230888 DEBUG nova.scheduler.client.report [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Refreshing aggregate associations for resource provider 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Dec 6 04:56:02 localhost nova_compute[230884]: 2025-12-06 09:56:02.385 230888 DEBUG nova.scheduler.client.report [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Refreshing trait associations for resource provider 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad, traits: 
COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE42,HW_CPU_X86_F16C,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE4A,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_AMD_SVM,HW_CPU_X86_CLMUL,HW_CPU_X86_AVX2,HW_CPU_X86_SSE2,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_AESNI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE41,HW_CPU_X86_BMI,HW_CPU_X86_FMA3,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SHA,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_ACCELERATORS,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_ABM,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_AVX,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NODE,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_CIRRUS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Dec 6 04:56:02 localhost python3.9[266853]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:56:02 localhost nova_compute[230884]: 2025-12-06 09:56:02.421 230888 DEBUG oslo_concurrency.processutils 
[None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 6 04:56:02 localhost systemd-journald[47810]: Field hash table of /run/log/journal/4b30904fc4748c16d0c72dbebcabab49/system.journal has a fill level at 75.7 (252 of 333 items), suggesting rotation. Dec 6 04:56:02 localhost systemd-journald[47810]: /run/log/journal/4b30904fc4748c16d0c72dbebcabab49/system.journal: Journal header limits reached or header out-of-date, rotating. Dec 6 04:56:02 localhost rsyslogd[760]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Dec 6 04:56:02 localhost rsyslogd[760]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Dec 6 04:56:02 localhost nova_compute[230884]: 2025-12-06 09:56:02.530 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:56:02 localhost nova_compute[230884]: 2025-12-06 09:56:02.870 230888 DEBUG oslo_concurrency.processutils [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 6 04:56:02 localhost nova_compute[230884]: 2025-12-06 09:56:02.877 230888 DEBUG nova.compute.provider_tree [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Inventory has not changed in ProviderTree for provider: 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 6 04:56:02 localhost nova_compute[230884]: 2025-12-06 09:56:02.903 230888 DEBUG nova.scheduler.client.report [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Inventory 
has not changed for provider 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 6 04:56:02 localhost nova_compute[230884]: 2025-12-06 09:56:02.906 230888 DEBUG nova.compute.resource_tracker [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Compute_service record updated for np0005548789.localdomain:np0005548789.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 6 04:56:02 localhost nova_compute[230884]: 2025-12-06 09:56:02.906 230888 DEBUG oslo_concurrency.lockutils [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.012s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 04:56:02 localhost nova_compute[230884]: 2025-12-06 09:56:02.907 230888 DEBUG oslo_service.periodic_task [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 04:56:02 localhost nova_compute[230884]: 2025-12-06 09:56:02.907 230888 DEBUG nova.compute.manager [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m Dec 6 04:56:03 localhost python3.9[266986]: 
ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line= find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:56:03 localhost nova_compute[230884]: 2025-12-06 09:56:03.641 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:56:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32400 DF PROTO=TCP SPT=53894 DPT=9102 SEQ=3828970947 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52E37FA80000000001030307) Dec 6 04:56:03 localhost python3.9[267096]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line= recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:56:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941. Dec 6 04:56:04 localhost systemd[1]: tmp-crun.z1xR0e.mount: Deactivated successfully. 
Dec 6 04:56:04 localhost podman[267207]: 2025-12-06 09:56:04.386993496 +0000 UTC m=+0.070925766 container health_status b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Dec 6 04:56:04 localhost podman[267207]: 2025-12-06 09:56:04.392790433 +0000 UTC m=+0.076722723 container exec_died b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 6 04:56:04 localhost systemd[1]: b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941.service: Deactivated successfully. Dec 6 04:56:04 localhost python3.9[267206]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line= skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:56:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32401 DF PROTO=TCP SPT=53894 DPT=9102 SEQ=3828970947 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52E383AF0000000001030307) Dec 6 04:56:05 localhost python3.9[267339]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line= user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:56:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52217 DF PROTO=TCP SPT=53674 DPT=9102 SEQ=1596883805 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52E385F00000000001030307) Dec 6 04:56:05 localhost python3.9[267449]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False 
get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 6 04:56:06 localhost nova_compute[230884]: 2025-12-06 09:56:06.294 230888 DEBUG oslo_service.periodic_task [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 04:56:06 localhost nova_compute[230884]: 2025-12-06 09:56:06.294 230888 DEBUG oslo_service.periodic_task [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 04:56:06 localhost nova_compute[230884]: 2025-12-06 09:56:06.295 230888 DEBUG oslo_service.periodic_task [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 04:56:06 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32402 DF PROTO=TCP SPT=53894 DPT=9102 SEQ=3828970947 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52E38BAF0000000001030307) Dec 6 04:56:07 localhost python3.9[267561]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 6 04:56:07 localhost nova_compute[230884]: 2025-12-06 09:56:07.574 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on 
fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:56:07 localhost python3.9[267671]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:56:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=439 DF PROTO=TCP SPT=32932 DPT=9102 SEQ=690632304 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52E38FF00000000001030307) Dec 6 04:56:08 localhost python3.9[267728]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 6 04:56:08 localhost nova_compute[230884]: 2025-12-06 09:56:08.697 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:56:08 localhost python3.9[267838]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:56:09 localhost python3.9[267895]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file 
path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 6 04:56:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094. Dec 6 04:56:10 localhost systemd[1]: tmp-crun.BsYvoK.mount: Deactivated successfully. Dec 6 04:56:10 localhost podman[268006]: 2025-12-06 09:56:10.125074002 +0000 UTC m=+0.098727704 container health_status bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true) Dec 6 04:56:10 localhost podman[268006]: 2025-12-06 09:56:10.136175601 +0000 UTC m=+0.109829313 container exec_died bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, 
container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=edpm, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS) Dec 6 04:56:10 localhost systemd[1]: bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094.service: Deactivated successfully. Dec 6 04:56:10 localhost python3.9[268005]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:56:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32403 DF PROTO=TCP SPT=53894 DPT=9102 SEQ=3828970947 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52E39B6F0000000001030307) Dec 6 04:56:10 localhost python3.9[268132]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:56:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e. 
Dec 6 04:56:11 localhost podman[268189]: 2025-12-06 09:56:11.26993942 +0000 UTC m=+0.099941492 container health_status 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, architecture=x86_64, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., version=9.6, config_id=edpm, name=ubi9-minimal, container_name=openstack_network_exporter, vcs-type=git, build-date=2025-08-20T13:12:41) Dec 6 04:56:11 localhost podman[268189]: 2025-12-06 09:56:11.285110683 +0000 UTC m=+0.115112695 container exec_died 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, config_id=edpm, io.buildah.version=1.33.7, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., managed_by=edpm_ansible, vcs-type=git, io.openshift.expose-services=, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6) Dec 6 04:56:11 localhost systemd[1]: 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e.service: Deactivated successfully. 
Dec 6 04:56:11 localhost python3.9[268190]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:56:12 localhost python3.9[268317]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:56:12 localhost nova_compute[230884]: 2025-12-06 09:56:12.619 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:56:12 localhost python3.9[268374]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:56:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6. Dec 6 04:56:13 localhost systemd[1]: tmp-crun.yTrVZh.mount: Deactivated successfully. 
Dec 6 04:56:13 localhost podman[268485]: 2025-12-06 09:56:13.680117312 +0000 UTC m=+0.086218353 container health_status 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS) Dec 6 04:56:13 localhost podman[268485]: 2025-12-06 09:56:13.691149648 +0000 UTC m=+0.097250689 container exec_died 
44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 04:56:13 localhost systemd[1]: 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6.service: Deactivated successfully. 
Dec 6 04:56:13 localhost nova_compute[230884]: 2025-12-06 09:56:13.736 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:56:13 localhost python3.9[268484]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None Dec 6 04:56:13 localhost systemd[1]: Reloading. Dec 6 04:56:14 localhost systemd-sysv-generator[268533]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 6 04:56:14 localhost systemd-rc-local-generator[268528]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 6 04:56:14 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:56:14 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 6 04:56:14 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:56:14 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:56:14 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Dec 6 04:56:14 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 6 04:56:14 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:56:14 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:56:14 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:56:16 localhost python3.9[268652]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:56:16 localhost python3.9[268709]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:56:16 localhost openstack_network_exporter[243110]: ERROR 09:56:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 04:56:16 localhost openstack_network_exporter[243110]: ERROR 09:56:16 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 6 04:56:16 localhost openstack_network_exporter[243110]: ERROR 09:56:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 04:56:16 localhost openstack_network_exporter[243110]: ERROR 09:56:16 appctl.go:174: 
call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 6 04:56:16 localhost openstack_network_exporter[243110]: Dec 6 04:56:16 localhost openstack_network_exporter[243110]: ERROR 09:56:16 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 6 04:56:16 localhost openstack_network_exporter[243110]: Dec 6 04:56:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538. Dec 6 04:56:16 localhost systemd[1]: tmp-crun.zdmXB7.mount: Deactivated successfully. Dec 6 04:56:16 localhost podman[268751]: 2025-12-06 09:56:16.925415905 +0000 UTC m=+0.085380498 container health_status d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', 
'/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 6 04:56:16 localhost podman[268751]: 2025-12-06 09:56:16.965068406 +0000 UTC m=+0.125033009 container exec_died d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Dec 6 04:56:16 localhost systemd[1]: d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538.service: Deactivated successfully. 
Dec 6 04:56:17 localhost python3.9[268842]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:56:17 localhost sshd[268900]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:56:17 localhost nova_compute[230884]: 2025-12-06 09:56:17.671 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:56:17 localhost python3.9[268899]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:56:18 localhost python3.9[269011]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None Dec 6 04:56:18 localhost systemd[1]: Reloading. Dec 6 04:56:18 localhost nova_compute[230884]: 2025-12-06 09:56:18.781 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:56:18 localhost systemd-rc-local-generator[269039]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 6 04:56:18 localhost systemd-sysv-generator[269043]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. 
Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 6 04:56:18 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:56:18 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 6 04:56:18 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:56:18 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:56:18 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 6 04:56:19 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 6 04:56:19 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:56:19 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:56:19 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:56:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32404 DF PROTO=TCP SPT=53894 DPT=9102 SEQ=3828970947 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52E3BBEF0000000001030307) Dec 6 04:56:19 localhost systemd[1]: Starting Create netns directory... Dec 6 04:56:19 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. 
Dec 6 04:56:19 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Dec 6 04:56:19 localhost systemd[1]: Finished Create netns directory. Dec 6 04:56:20 localhost python3.9[269164]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 6 04:56:20 localhost python3.9[269274]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:56:21 localhost python3.9[269331]: ansible-ansible.legacy.file Invoked with group=zuul mode=0700 owner=zuul setype=container_file_t dest=/var/lib/openstack/healthchecks/multipathd/ _original_basename=healthcheck recurse=False state=file path=/var/lib/openstack/healthchecks/multipathd/ force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 6 04:56:22 localhost python3.9[269441]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 6 04:56:22 localhost nova_compute[230884]: 2025-12-06 09:56:22.379 230888 DEBUG 
oslo_service.periodic_task [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 04:56:22 localhost nova_compute[230884]: 2025-12-06 09:56:22.411 230888 DEBUG nova.compute.manager [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Triggering sync for uuid b7ed0a2e-9350-4933-9334-4e5e08d3e6aa _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m Dec 6 04:56:22 localhost nova_compute[230884]: 2025-12-06 09:56:22.412 230888 DEBUG oslo_concurrency.lockutils [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Acquiring lock "b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" by "nova.compute.manager.ComputeManager._sync_power_states.._sync..query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 04:56:22 localhost nova_compute[230884]: 2025-12-06 09:56:22.412 230888 DEBUG oslo_concurrency.lockutils [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Lock "b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" acquired by "nova.compute.manager.ComputeManager._sync_power_states.._sync..query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 04:56:22 localhost nova_compute[230884]: 2025-12-06 09:56:22.471 230888 DEBUG oslo_concurrency.lockutils [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Lock "b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" "released" by "nova.compute.manager.ComputeManager._sync_power_states.._sync..query_driver_power_state_and_sync" :: held 0.059s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 04:56:22 localhost nova_compute[230884]: 2025-12-06 09:56:22.720 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:56:23 localhost python3.9[269551]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:56:23 localhost python3.9[269608]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/var/lib/kolla/config_files/multipathd.json _original_basename=.kh1soc9z recurse=False state=file path=/var/lib/kolla/config_files/multipathd.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:56:23 localhost nova_compute[230884]: 2025-12-06 09:56:23.814 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:56:23 localhost podman[241090]: time="2025-12-06T09:56:23Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 6 04:56:23 localhost podman[241090]: @ - - [06/Dec/2025:09:56:23 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 150643 "" "Go-http-client/1.1" Dec 6 04:56:23 localhost podman[241090]: @ - - [06/Dec/2025:09:56:23 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17722 "" "Go-http-client/1.1" Dec 6 04:56:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5. 
Dec 6 04:56:24 localhost podman[269718]: 2025-12-06 09:56:24.547360838 +0000 UTC m=+0.088308657 container health_status 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3) Dec 6 04:56:24 localhost podman[269718]: 2025-12-06 09:56:24.589089731 +0000 UTC m=+0.130037500 container exec_died 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, 
org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true) Dec 6 04:56:24 localhost systemd[1]: 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.service: Deactivated successfully. 
Dec 6 04:56:24 localhost python3.9[269719]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:56:27 localhost python3.9[270020]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False Dec 6 04:56:27 localhost nova_compute[230884]: 2025-12-06 09:56:27.780 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:56:28 localhost python3.9[270130]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Dec 6 04:56:28 localhost nova_compute[230884]: 2025-12-06 09:56:28.816 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:56:29 localhost python3.9[270240]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None Dec 6 04:56:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999. 
Dec 6 04:56:30 localhost podman[270285]: 2025-12-06 09:56:30.916541719 +0000 UTC m=+0.080474928 container health_status 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 04:56:30 localhost podman[270285]: 2025-12-06 09:56:30.950190266 +0000 UTC 
m=+0.114123455 container exec_died 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0) Dec 6 04:56:30 localhost systemd[1]: 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.service: Deactivated successfully. 
Dec 6 04:56:32 localhost nova_compute[230884]: 2025-12-06 09:56:32.823 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:56:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11733 DF PROTO=TCP SPT=45218 DPT=9102 SEQ=596911704 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52E3F4D70000000001030307) Dec 6 04:56:33 localhost nova_compute[230884]: 2025-12-06 09:56:33.818 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:56:33 localhost python3[270395]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False Dec 6 04:56:34 localhost python3[270395]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012 {#012 "Id": "9af6aa52ee187025bc25565b66d3eefb486acac26f9281e33f4cce76a40d21f7",#012 "Digest": "sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842",#012 "RepoTags": [#012 "quay.io/podified-antelope-centos9/openstack-multipathd:current-podified"#012 ],#012 "RepoDigests": [#012 "quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842"#012 ],#012 "Parent": "",#012 "Comment": "",#012 "Created": "2025-12-01T06:11:02.031267563Z",#012 "Config": {#012 "User": "root",#012 "Env": [#012 "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012 "LANG=en_US.UTF-8",#012 "TZ=UTC",#012 "container=oci"#012 ],#012 "Entrypoint": [#012 "dumb-init",#012 "--single-child",#012 "--"#012 ],#012 "Cmd": [#012 "kolla_start"#012 ],#012 "Labels": {#012 
"io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251125",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",#012 "tcib_managed": "true"#012 },#012 "StopSignal": "SIGTERM"#012 },#012 "Version": "",#012 "Author": "",#012 "Architecture": "amd64",#012 "Os": "linux",#012 "Size": 249482216,#012 "VirtualSize": 249482216,#012 "GraphDriver": {#012 "Name": "overlay",#012 "Data": {#012 "LowerDir": "/var/lib/containers/storage/overlay/ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9/diff:/var/lib/containers/storage/overlay/cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa/diff",#012 "UpperDir": "/var/lib/containers/storage/overlay/a6426b16bb5884060eaf559f46c5a81bf85811eff8d5d75aaee95a48f0b492cc/diff",#012 "WorkDir": "/var/lib/containers/storage/overlay/a6426b16bb5884060eaf559f46c5a81bf85811eff8d5d75aaee95a48f0b492cc/work"#012 }#012 },#012 "RootFS": {#012 "Type": "layers",#012 "Layers": [#012 "sha256:cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa",#012 "sha256:d26dbee55abfd9d572bfbbd4b765c5624affd9ef117ad108fb34be41e199a619",#012 "sha256:8c448567789503f6c5be645a12473dfc27734872532d528b6ee764c214f9f2f3"#012 ]#012 },#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251125",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",#012 "tcib_managed": "true"#012 },#012 "Annotations": {},#012 "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012 "User": "root",#012 
"History": [#012 {#012 "created": "2025-11-25T04:02:36.223494528Z",#012 "created_by": "/bin/sh -c #(nop) ADD file:cacf1a97b4abfca5db2db22f7ddbca8fd7daa5076a559639c109f09aaf55871d in / ",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-25T04:02:36.223562059Z",#012 "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\" org.label-schema.name=\"CentOS Stream 9 Base Image\" org.label-schema.vendor=\"CentOS\" org.label-schema.license=\"GPLv2\" org.label-schema.build-date=\"20251125\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-25T04:02:39.054452717Z",#012 "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"#012 },#012 {#012 "created": "2025-12-01T06:09:28.025707917Z",#012 "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",#012 "comment": "FROM quay.io/centos/centos:stream9",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025744608Z",#012 "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025767729Z",#012 "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025791379Z",#012 "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.02581523Z",#012 "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025867611Z",#012 "created_by": "/bin/sh -c #(nop) USER root",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.469442331Z",#012 "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:10:02.029095017Z",#012 "created_by": "/bin/sh -c dnf install -y crudini && crudini --del 
/etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:10:05.672474685Z",#012 "created_by": "/bin/sh -c dnf install -y ca-certificates dumb-init glibc-langpack-en procps-ng python3 sudo util-linux-user which python-tcib-containers",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:10:06.113425253Z",#012 Dec 6 04:56:34 localhost sshd[270540]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:56:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11734 DF PROTO=TCP SPT=45218 DPT=9102 SEQ=596911704 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52E3F8EF0000000001030307) Dec 6 04:56:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941. Dec 6 04:56:34 localhost systemd[1]: tmp-crun.6ocq2N.mount: Deactivated successfully. 
Dec 6 04:56:34 localhost podman[270548]: 2025-12-06 09:56:34.915648983 +0000 UTC m=+0.074110314 container health_status b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Dec 6 04:56:34 localhost podman[270548]: 2025-12-06 09:56:34.950671812 +0000 UTC m=+0.109133093 container exec_died b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': 
['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Dec 6 04:56:34 localhost systemd[1]: b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941.service: Deactivated successfully. Dec 6 04:56:35 localhost python3.9[270592]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 6 04:56:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32405 DF PROTO=TCP SPT=53894 DPT=9102 SEQ=3828970947 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52E3FBEF0000000001030307) Dec 6 04:56:36 localhost python3.9[270704]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:56:36 localhost python3.9[270759]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 6 04:56:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11735 DF PROTO=TCP SPT=45218 DPT=9102 SEQ=596911704 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52E400EF0000000001030307) Dec 6 04:56:37 localhost python3.9[270868]: ansible-copy Invoked with 
src=/home/zuul/.ansible/tmp/ansible-tmp-1765014996.5451448-1365-203454329109261/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:56:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52218 DF PROTO=TCP SPT=53674 DPT=9102 SEQ=1596883805 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52E403EF0000000001030307) Dec 6 04:56:37 localhost nova_compute[230884]: 2025-12-06 09:56:37.869 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:56:38 localhost python3.9[270923]: ansible-systemd Invoked with state=started name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 6 04:56:38 localhost nova_compute[230884]: 2025-12-06 09:56:38.821 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:56:39 localhost python3.9[271118]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 6 04:56:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094. Dec 6 04:56:40 localhost systemd[1]: tmp-crun.lyPeF1.mount: Deactivated successfully. 
Dec 6 04:56:40 localhost podman[271228]: 2025-12-06 09:56:40.681456737 +0000 UTC m=+0.103536092 container health_status bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=edpm) Dec 6 04:56:40 localhost podman[271228]: 2025-12-06 09:56:40.696135065 +0000 UTC m=+0.118214470 container exec_died 
bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible) Dec 6 04:56:40 localhost systemd[1]: bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094.service: Deactivated successfully. 
Dec 6 04:56:40 localhost python3.9[271229]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:56:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11736 DF PROTO=TCP SPT=45218 DPT=9102 SEQ=596911704 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52E410AF0000000001030307) Dec 6 04:56:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e. Dec 6 04:56:41 localhost systemd[1]: tmp-crun.mf2P2V.mount: Deactivated successfully. 
Dec 6 04:56:41 localhost podman[271357]: 2025-12-06 09:56:41.882672605 +0000 UTC m=+0.104601555 container health_status 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, distribution-scope=public, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, name=ubi9-minimal, build-date=2025-08-20T13:12:41, config_id=edpm, io.buildah.version=1.33.7, vendor=Red Hat, Inc., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, architecture=x86_64) Dec 6 04:56:41 localhost podman[271357]: 2025-12-06 09:56:41.903194871 +0000 UTC m=+0.125123781 container exec_died 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, config_id=edpm, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, architecture=x86_64, release=1755695350, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41) Dec 6 04:56:41 localhost systemd[1]: 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e.service: Deactivated successfully. 
Dec 6 04:56:41 localhost python3.9[271356]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None Dec 6 04:56:42 localhost python3.9[271486]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled Dec 6 04:56:42 localhost nova_compute[230884]: 2025-12-06 09:56:42.924 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:56:43 localhost python3.9[271596]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:56:43 localhost nova_compute[230884]: 2025-12-06 09:56:43.825 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:56:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6. 
Dec 6 04:56:43 localhost podman[271634]: 2025-12-06 09:56:43.932844936 +0000 UTC m=+0.088430750 container health_status 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3) Dec 6 04:56:44 localhost podman[271634]: 2025-12-06 09:56:44.004660188 +0000 UTC m=+0.160246182 container exec_died 
44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, managed_by=edpm_ansible) Dec 6 04:56:44 localhost systemd[1]: 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6.service: Deactivated successfully. 
Dec 6 04:56:44 localhost python3.9[271664]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/modules-load.d/nvme-fabrics.conf _original_basename=module-load.conf.j2 recurse=False state=file path=/etc/modules-load.d/nvme-fabrics.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:56:44 localhost python3.9[271780]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:56:45 localhost python3.9[271890]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Dec 6 04:56:46 localhost openstack_network_exporter[243110]: ERROR 09:56:46 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 6 04:56:46 localhost openstack_network_exporter[243110]: ERROR 09:56:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 04:56:46 localhost openstack_network_exporter[243110]: ERROR 09:56:46 appctl.go:174: 
call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 6 04:56:46 localhost openstack_network_exporter[243110]: Dec 6 04:56:46 localhost openstack_network_exporter[243110]: ERROR 09:56:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 04:56:46 localhost openstack_network_exporter[243110]: ERROR 09:56:46 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 6 04:56:46 localhost openstack_network_exporter[243110]: Dec 6 04:56:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:56:47.287 160509 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 04:56:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:56:47.287 160509 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 04:56:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:56:47.289 160509 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 04:56:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538. 
Dec 6 04:56:47 localhost podman[271893]: 2025-12-06 09:56:47.759504636 +0000 UTC m=+0.081136348 container health_status d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Dec 6 04:56:47 localhost podman[271893]: 2025-12-06 09:56:47.772210534 +0000 UTC m=+0.093842216 container exec_died d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 
'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Dec 6 04:56:47 localhost systemd[1]: d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538.service: Deactivated successfully. 
Dec 6 04:56:47 localhost nova_compute[230884]: 2025-12-06 09:56:47.927 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:56:48 localhost nova_compute[230884]: 2025-12-06 09:56:48.868 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:56:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11737 DF PROTO=TCP SPT=45218 DPT=9102 SEQ=596911704 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52E431EF0000000001030307) Dec 6 04:56:50 localhost python3.9[272021]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Dec 6 04:56:51 localhost python3.9[272135]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:56:52 localhost python3.9[272245]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Dec 6 04:56:52 localhost systemd[1]: Reloading. Dec 6 04:56:52 localhost systemd-rc-local-generator[272272]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 6 04:56:52 localhost systemd-sysv-generator[272277]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. 
Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 6 04:56:52 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:56:52 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 6 04:56:52 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:56:52 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:56:52 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 6 04:56:52 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 6 04:56:52 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:56:52 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:56:52 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:56:52 localhost nova_compute[230884]: 2025-12-06 09:56:52.964 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:56:53 localhost python3.9[272389]: ansible-ansible.builtin.service_facts Invoked Dec 6 04:56:53 localhost sshd[272390]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:56:53 localhost network[272408]: You are using 'network' service provided by 'network-scripts', which are now deprecated. 
Dec 6 04:56:53 localhost network[272409]: 'network-scripts' will be removed from distribution in near future. Dec 6 04:56:53 localhost network[272410]: It is advised to switch to 'NetworkManager' instead for network management. Dec 6 04:56:53 localhost nova_compute[230884]: 2025-12-06 09:56:53.871 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:56:53 localhost podman[241090]: time="2025-12-06T09:56:53Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 6 04:56:53 localhost podman[241090]: @ - - [06/Dec/2025:09:56:53 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 150643 "" "Go-http-client/1.1" Dec 6 04:56:53 localhost podman[241090]: @ - - [06/Dec/2025:09:56:53 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17723 "" "Go-http-client/1.1" Dec 6 04:56:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5. 
Dec 6 04:56:54 localhost podman[272433]: 2025-12-06 09:56:54.727477268 +0000 UTC m=+0.081029025 container health_status 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team) Dec 6 04:56:54 localhost podman[272433]: 2025-12-06 09:56:54.766091626 +0000 UTC m=+0.119643353 container exec_died 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125) Dec 6 04:56:54 localhost systemd[1]: 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.service: Deactivated successfully. Dec 6 04:56:55 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Dec 6 04:56:58 localhost nova_compute[230884]: 2025-12-06 09:56:58.004 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:56:58 localhost nova_compute[230884]: 2025-12-06 09:56:58.874 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:56:59 localhost python3.9[272668]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 6 04:57:00 localhost nova_compute[230884]: 2025-12-06 09:57:00.500 230888 DEBUG oslo_service.periodic_task [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 04:57:00 localhost nova_compute[230884]: 2025-12-06 09:57:00.501 230888 DEBUG nova.compute.manager [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 6 04:57:00 localhost nova_compute[230884]: 2025-12-06 09:57:00.501 230888 DEBUG nova.compute.manager [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 6 04:57:00 localhost python3.9[272779]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 6 04:57:00 localhost nova_compute[230884]: 2025-12-06 09:57:00.940 230888 DEBUG oslo_concurrency.lockutils [None 
req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Acquiring lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 6 04:57:00 localhost nova_compute[230884]: 2025-12-06 09:57:00.940 230888 DEBUG oslo_concurrency.lockutils [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Acquired lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 6 04:57:00 localhost nova_compute[230884]: 2025-12-06 09:57:00.941 230888 DEBUG nova.network.neutron [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 6 04:57:00 localhost nova_compute[230884]: 2025-12-06 09:57:00.941 230888 DEBUG nova.objects.instance [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b7ed0a2e-9350-4933-9334-4e5e08d3e6aa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 6 04:57:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999. Dec 6 04:57:01 localhost systemd[1]: tmp-crun.rpIhmt.mount: Deactivated successfully. 
Dec 6 04:57:01 localhost podman[272890]: 2025-12-06 09:57:01.303136142 +0000 UTC m=+0.109636648 container health_status 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Dec 6 04:57:01 localhost podman[272890]: 2025-12-06 09:57:01.332284691 +0000 UTC 
m=+0.138785217 container exec_died 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true) Dec 6 04:57:01 localhost systemd[1]: 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.service: Deactivated successfully. 
Dec 6 04:57:01 localhost nova_compute[230884]: 2025-12-06 09:57:01.399 230888 DEBUG nova.network.neutron [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Updating instance_info_cache with network_info: [{"id": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "address": "fa:16:3e:64:77:f3", "network": {"id": "652b6bdc-40ce-45b7-8aa5-3bca79987993", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.162", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "3d603431c0bb4967bafc7a0aa6108bfe", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86fc0b7a-fb", "ovs_interfaceid": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 6 04:57:01 localhost nova_compute[230884]: 2025-12-06 09:57:01.421 230888 DEBUG oslo_concurrency.lockutils [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Releasing lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 6 04:57:01 localhost nova_compute[230884]: 2025-12-06 09:57:01.422 230888 DEBUG nova.compute.manager [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] 
Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 6 04:57:01 localhost nova_compute[230884]: 2025-12-06 09:57:01.423 230888 DEBUG oslo_service.periodic_task [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 04:57:01 localhost nova_compute[230884]: 2025-12-06 09:57:01.423 230888 DEBUG oslo_service.periodic_task [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 04:57:01 localhost nova_compute[230884]: 2025-12-06 09:57:01.423 230888 DEBUG nova.compute.manager [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 6 04:57:01 localhost nova_compute[230884]: 2025-12-06 09:57:01.424 230888 DEBUG oslo_service.periodic_task [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 04:57:01 localhost nova_compute[230884]: 2025-12-06 09:57:01.453 230888 DEBUG oslo_concurrency.lockutils [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 04:57:01 localhost nova_compute[230884]: 2025-12-06 09:57:01.453 230888 DEBUG oslo_concurrency.lockutils [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Lock "compute_resources" acquired by 
"nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 04:57:01 localhost nova_compute[230884]: 2025-12-06 09:57:01.454 230888 DEBUG oslo_concurrency.lockutils [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 04:57:01 localhost nova_compute[230884]: 2025-12-06 09:57:01.454 230888 DEBUG nova.compute.resource_tracker [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Auditing locally available compute resources for np0005548789.localdomain (node: np0005548789.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 6 04:57:01 localhost nova_compute[230884]: 2025-12-06 09:57:01.455 230888 DEBUG oslo_concurrency.processutils [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 6 04:57:01 localhost python3.9[272891]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 6 04:57:01 localhost nova_compute[230884]: 2025-12-06 09:57:01.917 230888 DEBUG oslo_concurrency.processutils [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 6 04:57:02 localhost nova_compute[230884]: 2025-12-06 09:57:02.119 230888 DEBUG nova.virt.libvirt.driver 
[None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 6 04:57:02 localhost nova_compute[230884]: 2025-12-06 09:57:02.120 230888 DEBUG nova.virt.libvirt.driver [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 6 04:57:02 localhost nova_compute[230884]: 2025-12-06 09:57:02.281 230888 WARNING nova.virt.libvirt.driver [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 6 04:57:02 localhost nova_compute[230884]: 2025-12-06 09:57:02.282 230888 DEBUG nova.compute.resource_tracker [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Hypervisor/Node resource view: name=np0005548789.localdomain free_ram=11838MB free_disk=41.83721923828125GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 6 04:57:02 localhost nova_compute[230884]: 2025-12-06 09:57:02.283 230888 DEBUG oslo_concurrency.lockutils [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 04:57:02 localhost nova_compute[230884]: 2025-12-06 09:57:02.283 230888 DEBUG oslo_concurrency.lockutils [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 04:57:02 localhost python3.9[273039]: 
ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 6 04:57:02 localhost nova_compute[230884]: 2025-12-06 09:57:02.388 230888 DEBUG nova.compute.resource_tracker [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Instance b7ed0a2e-9350-4933-9334-4e5e08d3e6aa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 6 04:57:02 localhost nova_compute[230884]: 2025-12-06 09:57:02.389 230888 DEBUG nova.compute.resource_tracker [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 6 04:57:02 localhost nova_compute[230884]: 2025-12-06 09:57:02.389 230888 DEBUG nova.compute.resource_tracker [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Final resource view: name=np0005548789.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 6 04:57:02 localhost nova_compute[230884]: 2025-12-06 09:57:02.452 230888 DEBUG oslo_concurrency.processutils [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 6 04:57:02 localhost nova_compute[230884]: 2025-12-06 09:57:02.905 230888 DEBUG oslo_concurrency.processutils [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] CMD "ceph df --format=json --id openstack 
--conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 6 04:57:02 localhost nova_compute[230884]: 2025-12-06 09:57:02.912 230888 DEBUG nova.compute.provider_tree [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Inventory has not changed in ProviderTree for provider: 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 6 04:57:02 localhost nova_compute[230884]: 2025-12-06 09:57:02.934 230888 DEBUG nova.scheduler.client.report [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Inventory has not changed for provider 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 6 04:57:02 localhost nova_compute[230884]: 2025-12-06 09:57:02.937 230888 DEBUG nova.compute.resource_tracker [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Compute_service record updated for np0005548789.localdomain:np0005548789.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 6 04:57:02 localhost nova_compute[230884]: 2025-12-06 09:57:02.937 230888 DEBUG oslo_concurrency.lockutils [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.654s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 04:57:03 localhost 
nova_compute[230884]: 2025-12-06 09:57:03.007 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:57:03 localhost nova_compute[230884]: 2025-12-06 09:57:03.016 230888 DEBUG oslo_service.periodic_task [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 04:57:03 localhost nova_compute[230884]: 2025-12-06 09:57:03.016 230888 DEBUG oslo_service.periodic_task [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 04:57:03 localhost python3.9[273170]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 6 04:57:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48528 DF PROTO=TCP SPT=37038 DPT=9102 SEQ=3499426967 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52E46A070000000001030307) Dec 6 04:57:03 localhost nova_compute[230884]: 2025-12-06 09:57:03.876 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:57:03 localhost python3.9[273283]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 6 04:57:04 localhost nova_compute[230884]: 2025-12-06 09:57:04.500 230888 DEBUG oslo_service.periodic_task 
[None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 04:57:04 localhost python3.9[273394]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 6 04:57:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48529 DF PROTO=TCP SPT=37038 DPT=9102 SEQ=3499426967 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52E46E2F0000000001030307) Dec 6 04:57:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941. Dec 6 04:57:05 localhost systemd[1]: tmp-crun.fJQYTY.mount: Deactivated successfully. 
Dec 6 04:57:05 localhost podman[273505]: 2025-12-06 09:57:05.237074857 +0000 UTC m=+0.090233275 container health_status b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Dec 6 04:57:05 localhost podman[273505]: 2025-12-06 09:57:05.24631826 +0000 UTC m=+0.099476688 container exec_died b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 6 04:57:05 localhost systemd[1]: b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941.service: Deactivated successfully. Dec 6 04:57:05 localhost python3.9[273506]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 6 04:57:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11738 DF PROTO=TCP SPT=45218 DPT=9102 SEQ=596911704 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52E471EF0000000001030307) Dec 6 04:57:06 localhost nova_compute[230884]: 2025-12-06 09:57:06.501 230888 DEBUG oslo_service.periodic_task [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 04:57:06 localhost nova_compute[230884]: 2025-12-06 09:57:06.501 230888 DEBUG oslo_service.periodic_task [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 04:57:06 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48530 DF PROTO=TCP SPT=37038 DPT=9102 SEQ=3499426967 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52E4762F0000000001030307) Dec 6 04:57:07 localhost python3.9[273638]: ansible-ansible.builtin.file Invoked with 
path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:57:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32406 DF PROTO=TCP SPT=53894 DPT=9102 SEQ=3828970947 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52E479EF0000000001030307) Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.911 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'name': 'test', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'np0005548789.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'hostId': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228 Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.912 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.915 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.incoming.bytes volume: 9461 
_stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.918 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '62dfa7d2-f5e9-4bd7-b4b3-0ff7bbda66a3', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 9461, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T09:57:07.912306', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': 'ed207af6-d289-11f0-aaf2-fa163e118844', 'monotonic_time': 11446.161664054, 'message_signature': '3966393b3e25aac4ca3ed7b0834410c6fbdac21627e3958b0df6d6fe933a1a34'}]}, 'timestamp': '2025-12-06 09:57:07.917141', '_unique_id': '534bc53231c84fbb9013b6d570a86167'}: 
kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.918 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.918 12 ERROR oslo_messaging.notify.messaging yield Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.918 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.918 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.918 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 
09:57:07.918 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.918 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.918 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.918 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.918 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.918 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.918 12 ERROR oslo_messaging.notify.messaging Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.918 12 ERROR oslo_messaging.notify.messaging The above 
exception was the direct cause of the following exception: Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.918 12 ERROR oslo_messaging.notify.messaging Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.918 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.918 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.918 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.918 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.918 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 04:57:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.918 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.918 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.918 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.918 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.918 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 
2025-12-06 09:57:07.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.918 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.918 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.918 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.918 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.918 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.918 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] 
Connection refused Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.918 12 ERROR oslo_messaging.notify.messaging Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.920 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.933 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.934 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.936 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '5ee6098c-9ebb-489e-863e-7bd23d1d6d87', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T09:57:07.920453', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'ed231e28-d289-11f0-aaf2-fa163e118844', 'monotonic_time': 11446.169846523, 'message_signature': '22b905e7f1223ecd88b846e3e6617db996c512b3babd052d9f8855f0649f72a6'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T09:57:07.920453', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 
'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'ed233304-d289-11f0-aaf2-fa163e118844', 'monotonic_time': 11446.169846523, 'message_signature': 'b8308019fd51a3975d5701c03eb50c1125db62be760fd53770e40cb0729eeac2'}]}, 'timestamp': '2025-12-06 09:57:07.934800', '_unique_id': '0ddec6eebfa44eed980a5f74e51d90e1'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.936 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.936 12 ERROR oslo_messaging.notify.messaging yield Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.936 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.936 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.936 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.936 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.936 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.936 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.936 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 04:57:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.936 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.936 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.936 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.936 12 ERROR oslo_messaging.notify.messaging Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.936 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.936 12 ERROR oslo_messaging.notify.messaging Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.936 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.936 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.936 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.936 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.936 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.936 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.936 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 
2025-12-06 09:57:07.936 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.936 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.936 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.936 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.936 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.936 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.936 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.936 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.936 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.936 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.936 12 ERROR oslo_messaging.notify.messaging Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.937 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters Dec 6 04:57:07 localhost python3.9[273748]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.970 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.bytes volume: 29130240 _stats_to_sample 
/usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.970 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.bytes volume: 4300800 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.972 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '50586b76-fd9b-4f87-9d32-723ddcd924dd', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 29130240, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T09:57:07.937539', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'ed28b11c-d289-11f0-aaf2-fa163e118844', 'monotonic_time': 11446.186909024, 'message_signature': 
'ade3b3e8584038870ed9366e3528f1b372153be46135e12f5af981ec2e7f9089'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 4300800, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T09:57:07.937539', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'ed28c65c-d289-11f0-aaf2-fa163e118844', 'monotonic_time': 11446.186909024, 'message_signature': '838878e4faa21653b95a476d2214f81aa30d8e2e94a56af155ea179601da82b1'}]}, 'timestamp': '2025-12-06 09:57:07.971291', '_unique_id': '334c72e5bfe44343b40739a28088e5bc'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.972 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.972 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 
09:57:07.972 12 ERROR oslo_messaging.notify.messaging yield Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.972 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.972 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.972 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.972 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.972 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.972 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.972 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.972 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.972 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.972 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 
2025-12-06 09:57:07.972 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.972 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.972 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.972 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.972 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.972 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.972 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.972 12 ERROR oslo_messaging.notify.messaging Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.972 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.972 12 ERROR oslo_messaging.notify.messaging Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.972 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.972 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.972 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.972 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.972 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.972 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.972 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.972 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.972 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.972 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.972 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.972 12 
ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.972 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.972 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.972 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.972 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.972 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.972 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.972 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.972 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.972 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.972 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.972 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.972 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.972 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.972 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.972 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.972 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.972 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.972 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.972 12 ERROR oslo_messaging.notify.messaging Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.973 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.992 12 DEBUG ceilometer.compute.pollsters [-] 
b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/memory.usage volume: 52.37890625 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.994 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'fca32807-7ffb-48e5-819e-3cab18562a42', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 52.37890625, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'timestamp': '2025-12-06T09:57:07.974132', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': 'ed2c23c4-d289-11f0-aaf2-fa163e118844', 'monotonic_time': 11446.241890373, 'message_signature': '7ea4d356a2276580414d4ae3c52caad1ab69424a3ef4a0c0973a21eeea8d1e2f'}]}, 'timestamp': '2025-12-06 09:57:07.993353', '_unique_id': 'f8f5573a02cb4676bac67f7560ff2086'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 04:57:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.994 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.994 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.994 12 ERROR oslo_messaging.notify.messaging yield Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.994 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.994 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.994 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.994 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.994 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.994 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.994 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.994 12 ERROR oslo_messaging.notify.messaging conn = 
self.transport.establish_connection() Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.994 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.994 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.994 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.994 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.994 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.994 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.994 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.994 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.994 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.994 12 ERROR oslo_messaging.notify.messaging Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.994 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 
04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.994 12 ERROR oslo_messaging.notify.messaging Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.994 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.994 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.994 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.994 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.994 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.994 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.994 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.994 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.994 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.994 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.994 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.994 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.994 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.994 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.994 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.994 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.994 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.994 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 04:57:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.994 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.994 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.994 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.994 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.994 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.994 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.994 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.994 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.994 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.994 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.994 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.994 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 04:57:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.994 12 ERROR oslo_messaging.notify.messaging Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.996 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.996 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.996 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.allocation volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.998 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '7753410b-f092-42d9-8f32-f3309e3a3e9a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T09:57:07.996183', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'ed2ca650-d289-11f0-aaf2-fa163e118844', 'monotonic_time': 11446.169846523, 'message_signature': '9f3dc85f3a6ed3e9e29b54e07fb4fe67266996bc3712265124ef85fd1bfa45a8'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T09:57:07.996183', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 
'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'ed2cb866-d289-11f0-aaf2-fa163e118844', 'monotonic_time': 11446.169846523, 'message_signature': '22ea942adca4ce7360245f968422372e55d34a5a899bdb7eb5dc1126661dcc56'}]}, 'timestamp': '2025-12-06 09:57:07.997099', '_unique_id': 'faad32db3b6e47c2a7e6ec163d25d421'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.998 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.998 12 ERROR oslo_messaging.notify.messaging yield Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.998 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.998 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.998 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.998 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.998 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.998 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.998 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 04:57:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.998 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.998 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.998 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.998 12 ERROR oslo_messaging.notify.messaging Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.998 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.998 12 ERROR oslo_messaging.notify.messaging Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.998 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.998 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.998 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.998 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.998 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.998 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.998 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 09:57:07.998 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.998 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.998 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.998 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.998 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.998 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.998 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.998 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.998 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.998 12 ERROR oslo_messaging.notify.messaging Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.999 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:07.999 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.000 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.001 12 ERROR oslo_messaging.notify.messaging [-] Could not send 
notification to notifications. Payload={'message_id': '53da2765-f05c-4943-bed1-2bc6b8e286df', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T09:57:07.999665', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'ed2d30b6-d289-11f0-aaf2-fa163e118844', 'monotonic_time': 11446.169846523, 'message_signature': 'fae15123509e8891b7884ab14e2aab43a69fc34ed02075b092d5718b191e8de3'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T09:57:07.999665', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'ed2d434e-d289-11f0-aaf2-fa163e118844', 'monotonic_time': 11446.169846523, 'message_signature': '3a1447ae1541164cacd513252efd092271fbe93e8485ea763b08c3736c3c3e72'}]}, 'timestamp': '2025-12-06 09:57:08.000660', '_unique_id': 'cc7d07be4fa4443485a557e5bf845c80'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.001 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.001 12 ERROR oslo_messaging.notify.messaging yield Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.001 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.001 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.001 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.001 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.001 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.001 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.001 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 04:57:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.001 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.001 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.001 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.001 12 ERROR oslo_messaging.notify.messaging Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.001 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.001 12 ERROR oslo_messaging.notify.messaging Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.001 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.001 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.001 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.001 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.001 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.001 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.001 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 09:57:08.001 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.001 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.001 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.001 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.001 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.001 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.001 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.001 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.001 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.001 12 ERROR oslo_messaging.notify.messaging Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.002 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.003 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.requests volume: 524 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.003 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.005 12 ERROR oslo_messaging.notify.messaging [-] Could 
not send notification to notifications. Payload={'message_id': 'd4b9675a-ab2f-4be8-bfec-163236929546', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 524, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T09:57:08.003117', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'ed2db54a-d289-11f0-aaf2-fa163e118844', 'monotonic_time': 11446.186909024, 'message_signature': '30449e3ea13ebadacdf55be0157cb95ba812cb45d5e634cbe2bfca65ae69ab63'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T09:57:08.003117', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'ed2dc788-d289-11f0-aaf2-fa163e118844', 'monotonic_time': 11446.186909024, 'message_signature': 'bb35d5a5d0200399bdb58ce67d5a52b6e4ba8d29bdb7831575443105cd4a843b'}]}, 'timestamp': '2025-12-06 09:57:08.004074', '_unique_id': '550789307d9c42a0a8572f02502da1d9'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.005 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.005 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.005 12 ERROR oslo_messaging.notify.messaging     yield
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.005 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.005 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.005 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.005 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.005 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.005 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.005 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.005 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.005 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.005 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.005 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.005 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.005 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.005 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.005 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.005 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.005 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.005 12 ERROR oslo_messaging.notify.messaging
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.005 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.005 12 ERROR oslo_messaging.notify.messaging
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.005 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.005 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.005 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.005 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.005 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.005 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.005 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.005 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.005 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.005 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.005 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.005 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.005 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.005 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.005 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.005 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.005 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.005 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.005 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.005 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.005 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.005 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.005 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.005 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.005 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.005 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.005 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.005 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.005 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.005 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.005 12 ERROR oslo_messaging.notify.messaging
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.006 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.006 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.latency volume: 1043514478 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.006 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.latency volume: 200503964 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.008 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'ae02039d-eb4a-49b1-9a71-646c1bd97c05', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1043514478, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T09:57:08.006488', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'ed2e38e4-d289-11f0-aaf2-fa163e118844', 'monotonic_time': 11446.186909024, 'message_signature': '04287330ced4e155e8f57df4a3ff0c697c637e62bcf50b3e91257262c8f811a7'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 200503964, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T09:57:08.006488', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'ed2e4b54-d289-11f0-aaf2-fa163e118844', 'monotonic_time': 11446.186909024, 'message_signature': '77a3d0d7ae2b7b88a96cda1728edf1bbed26bb1c4cc30b0b86e8225559a4869b'}]}, 'timestamp': '2025-12-06 09:57:08.007414', '_unique_id': '0a2308ccd3c94a23bf1772751fe77581'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.008 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.008 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.008 12 ERROR oslo_messaging.notify.messaging     yield
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.008 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.008 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.008 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.008 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.008 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.008 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.008 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.008 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.008 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.008 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.008 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.008 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.008 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.008 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.008 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.008 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.008 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.008 12 ERROR oslo_messaging.notify.messaging
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.008 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.008 12 ERROR oslo_messaging.notify.messaging
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.008 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.008 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.008 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.008 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.008 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.008 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.008 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.008 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.008 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.008 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.008 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.008 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.008 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.008 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.008 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.008 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.008 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.008 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.008 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.008 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.008 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.008 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.008 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.008 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.008 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.008 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.008 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.008 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.008 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.008 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.008 12 ERROR oslo_messaging.notify.messaging
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.009 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.009 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.requests volume: 1064 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.010 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.requests volume: 222 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.011 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '7108e5b1-f667-4b01-a97b-9569f53c6c8b', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1064, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T09:57:08.009883', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'ed2ebdaa-d289-11f0-aaf2-fa163e118844', 'monotonic_time': 11446.186909024, 'message_signature': '3e6632db63175e1bbde779abf779fed9ca0c8104d6b2c07f060f417dc11ec07e'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 222, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T09:57:08.009883', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'ed2ed09c-d289-11f0-aaf2-fa163e118844', 'monotonic_time': 11446.186909024, 'message_signature': '301c1e4d3aae45e003c16d0ccb27e6cfa2711842c5b1ee3846ed2557f92d063d'}]}, 'timestamp': '2025-12-06 09:57:08.010895', '_unique_id': '3fa75abda40244869d8e836f98cc393d'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.011 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.011 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.011 12 ERROR oslo_messaging.notify.messaging     yield
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.011 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.011 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.011 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.011 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.011 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.011 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.011 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.011 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.011 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.011 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.011 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.011 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.011 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.011 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.011 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.011 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.011 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.011 12 ERROR oslo_messaging.notify.messaging
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.011 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.011 12 ERROR oslo_messaging.notify.messaging
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.011 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.011 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.011 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.011 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.011 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.011 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.011 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.011 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.011 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.011 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.011 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.011 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.011 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.011 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.011 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.011 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.011 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.011 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.011 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.011 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.011 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.011 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.011 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.011 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.011 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.011 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.011 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.011 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.011 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.011 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.011 12 ERROR oslo_messaging.notify.messaging
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.013 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.013 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.013 12 DEBUG ceilometer.compute.pollsters [-]
b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.015 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'a224913a-83bd-458c-ae76-edc8605a26f0', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T09:57:08.013802', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': 'ed2f5724-d289-11f0-aaf2-fa163e118844', 'monotonic_time': 11446.161664054, 'message_signature': '59817b026f94bb7e2428f849dc6b6546e00d5762cd85c343f9e0e1eb55591d9b'}]}, 'timestamp': 
'2025-12-06 09:57:08.014297', '_unique_id': '31f47b0abe134254b03de19e9d3a2e8d'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.015 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.015 12 ERROR oslo_messaging.notify.messaging yield Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.015 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.015 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.015 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in 
_establish_connection Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.015 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.015 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.015 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.015 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.015 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.015 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.015 12 ERROR oslo_messaging.notify.messaging Dec 6 04:57:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.015 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.015 12 ERROR oslo_messaging.notify.messaging Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.015 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.015 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.015 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.015 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.015 12 ERROR 
oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.015 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.015 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.015 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.015 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.015 
12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.015 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.015 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.015 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.015 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.015 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 04:57:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.015 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.015 12 ERROR oslo_messaging.notify.messaging Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.016 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.016 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.018 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'f1fb34b5-1544-4b08-b6c1-dabf32483eaf', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T09:57:08.016560', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 
'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': 'ed2fc506-d289-11f0-aaf2-fa163e118844', 'monotonic_time': 11446.161664054, 'message_signature': '758f20508ecc9d001dd8003f780b9d611f42a1f40678ef3c282717ea122644ac'}]}, 'timestamp': '2025-12-06 09:57:08.017114', '_unique_id': 'e4be80e9a0bc4037946daf182129f7f7'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.018 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.018 12 ERROR oslo_messaging.notify.messaging yield Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.018 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.018 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 
04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.018 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.018 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.018 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.018 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.018 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.018 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.018 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.018 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.018 12 ERROR oslo_messaging.notify.messaging Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.018 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.018 12 ERROR oslo_messaging.notify.messaging Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.018 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.018 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.018 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.018 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.018 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.018 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.018 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.018 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.018 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.018 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.018 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.018 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.018 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.018 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.018 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 
04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.018 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.018 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.018 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.018 12 ERROR oslo_messaging.notify.messaging Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.019 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.019 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.019 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.019 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.019 12 DEBUG 
ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.outgoing.bytes volume: 11272 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.021 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '9696c544-ca6e-478c-bc7a-8f150fdb23e7', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 11272, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T09:57:08.019922', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': 'ed30463e-d289-11f0-aaf2-fa163e118844', 'monotonic_time': 11446.161664054, 'message_signature': 'a3d0dafa47f014c2e657f1a38e135e0171c30579ead55422a94a4e3713274d75'}]}, 
'timestamp': '2025-12-06 09:57:08.020459', '_unique_id': '196390bf837c493eb40e5670fa022888'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.021 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.021 12 ERROR oslo_messaging.notify.messaging yield Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.021 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.021 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.021 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in 
_establish_connection Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.021 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.021 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.021 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.021 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.021 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.021 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.021 12 ERROR oslo_messaging.notify.messaging Dec 6 04:57:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.021 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.021 12 ERROR oslo_messaging.notify.messaging Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.021 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.021 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.021 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.021 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.021 12 ERROR 
oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.021 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.021 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.021 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.021 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.021 
12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.021 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.021 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.021 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.021 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.021 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 04:57:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.021 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.021 12 ERROR oslo_messaging.notify.messaging Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.023 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.023 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.026 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'a21aec9a-4818-4c1e-9b20-2ee31e077df8', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T09:57:08.023733', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 
'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': 'ed30e260-d289-11f0-aaf2-fa163e118844', 'monotonic_time': 11446.161664054, 'message_signature': '3939103945f8d4258791de5d6902e36b2c824d7a931c32853ea3cf529626bb7b'}]}, 'timestamp': '2025-12-06 09:57:08.024540', '_unique_id': '7aefa85d777247fe9ba183216cdb19a8'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.026 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.026 12 ERROR oslo_messaging.notify.messaging yield Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.026 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.026 12 ERROR oslo_messaging.notify.messaging return fun(*args, 
**kwargs) Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.026 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.026 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.026 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.026 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.026 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.026 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.026 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.026 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.026 12 ERROR oslo_messaging.notify.messaging Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.026 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.026 12 ERROR oslo_messaging.notify.messaging Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.026 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.026 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.026 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.026 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.026 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.026 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.026 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.026 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.026 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.026 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.026 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.026 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.026 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.026 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.026 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 
04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.026 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.026 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.026 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.026 12 ERROR oslo_messaging.notify.messaging Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.027 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.027 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.outgoing.packets volume: 129 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.029 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'ba304367-e8c3-4708-89cc-9c8881696cee', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 129, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T09:57:08.027744', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': 'ed317a54-d289-11f0-aaf2-fa163e118844', 'monotonic_time': 11446.161664054, 'message_signature': 'd899aa9f18cf85f74cf80df8f10a0574fe36fcbcd45ff3a2a512985879b6f5d4'}]}, 'timestamp': '2025-12-06 09:57:08.028434', '_unique_id': 'b7ae27dd944246b7a791eb8f0f9e40bb'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.029 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 04:57:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.029 12 ERROR oslo_messaging.notify.messaging yield Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.029 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.029 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.029 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.029 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.029 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.029 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.029 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.029 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.029 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.029 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.029 12 ERROR oslo_messaging.notify.messaging Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.029 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.029 12 ERROR oslo_messaging.notify.messaging Dec 6 04:57:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.029 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.029 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.029 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.029 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.029 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 
04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.029 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.029 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.029 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.029 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.029 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.029 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.029 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.029 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.029 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.029 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.029 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.029 12 ERROR oslo_messaging.notify.messaging
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.045 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.045 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 04:57:08 localhost nova_compute[230884]: 2025-12-06 09:57:08.047 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.048 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'e9d1e5a0-380d-4111-afa2-e4d48cebf4bd', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T09:57:08.045517', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': 'ed343000-d289-11f0-aaf2-fa163e118844', 'monotonic_time': 11446.161664054, 'message_signature': '2b381c4e940f98cda2b0b2d98e848ff8ecadae2f59dbc94a70aeed13f4102f6f'}]}, 'timestamp': '2025-12-06 09:57:08.046079', '_unique_id': '04bd5d079aef446e8602eebf53d5928f'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.048 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.048 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.048 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.048 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.048 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.048 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.048 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.048 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.048 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.048 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.048 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.048 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.048 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.048 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.048 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.048 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.048 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.048 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.048 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.048 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.048 12 ERROR oslo_messaging.notify.messaging
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.048 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.048 12 ERROR oslo_messaging.notify.messaging
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.048 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.048 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.048 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.048 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.048 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.048 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.048 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.048 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.048 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.048 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.048 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.048 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.048 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.048 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.048 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.048 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.048 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.048 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.048 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.048 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.048 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.048 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.048 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.048 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.048 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.048 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.048 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.048 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.048 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.048 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.048 12 ERROR oslo_messaging.notify.messaging
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.049 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.049 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.051 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'a3240774-79ca-4b99-87a0-89db06101102', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T09:57:08.049896', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': 'ed34d866-d289-11f0-aaf2-fa163e118844', 'monotonic_time': 11446.161664054, 'message_signature': '3d6839866218a66352c1b51a1c03c9624e957b6ecfcf1bd8957ca77e9b820167'}]}, 'timestamp': '2025-12-06 09:57:08.050378', '_unique_id': '204825a98a3541b9ad79f8ac75240bf8'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.051 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.051 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.051 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.051 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.051 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.051 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.051 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.051 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.051 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.051 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.051 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.051 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.051 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.051 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.051 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.051 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.051 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.051 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.051 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.051 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.051 12 ERROR oslo_messaging.notify.messaging
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.051 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.051 12 ERROR oslo_messaging.notify.messaging
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.051 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.051 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.051 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.051 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.051 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.051 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.051 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.051 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.051 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.051 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.051 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.051 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.051 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.051 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.051 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.051 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.051 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.051 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.051 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.051 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.051 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.051 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.051 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.051 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.051 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.051 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.051 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.051 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.051 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.051 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.051 12 ERROR oslo_messaging.notify.messaging
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.052 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.052 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.incoming.bytes.delta volume: 446 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.053 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '3a2bcb7f-a193-46c7-9a08-a4df9f9a8dcf', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 446, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T09:57:08.052631', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': 'ed354418-d289-11f0-aaf2-fa163e118844', 'monotonic_time': 11446.161664054, 'message_signature': '1fa61a8fb3184da8d21c5c1563bcbbffcd719a07174b688118637fbd2aa58896'}]}, 'timestamp': '2025-12-06 09:57:08.053053', '_unique_id': '2cdd3a7e492646e39e2f13d7f87df5da'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.053 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.053 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.053 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.053 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.053 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.053 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.053 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.053 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.053 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.053 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.053 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.053 12 ERROR oslo_messaging.notify.messaging
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.053 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.053 12 ERROR oslo_messaging.notify.messaging
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.053 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.053 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.053 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.053 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.053 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.053 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.053 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.053 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.053 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.053 12 ERROR oslo_messaging.notify.messaging
File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.053 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.053 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.053 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.053 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.053 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 04:57:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.053 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.053 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.053 12 ERROR oslo_messaging.notify.messaging Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.054 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.054 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.latency volume: 223261606 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.054 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.latency volume: 30984668 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.055 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '282d62f4-6d84-4f88-a99e-aa7de173c1a5', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 223261606, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T09:57:08.054572', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'ed358c7a-d289-11f0-aaf2-fa163e118844', 'monotonic_time': 11446.186909024, 'message_signature': 'f7bb277b6c202677ccb767432f8c63626a8309da89905353665c6f911db63983'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 30984668, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T09:57:08.054572', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'ed359774-d289-11f0-aaf2-fa163e118844', 'monotonic_time': 11446.186909024, 'message_signature': '7ac531f3dcfe11b06575d59f8a4957619f911ab9eb5adc509e93c6870caed47a'}]}, 'timestamp': '2025-12-06 09:57:08.055154', '_unique_id': '29630b38cfd045f5b8a11dffae61dd09'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.055 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.055 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.055 12 ERROR oslo_messaging.notify.messaging yield Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.055 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.055 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.055 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.055 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.055 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.055 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.055 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.055 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.055 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.055 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.055 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.055 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.055 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 04:57:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.055 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.055 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.055 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.055 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.055 12 ERROR oslo_messaging.notify.messaging Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.055 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.055 12 ERROR oslo_messaging.notify.messaging Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.055 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.055 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.055 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.055 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.055 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.055 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.055 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.055 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.055 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.055 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.055 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.055 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.055 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.055 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 09:57:08.055 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.055 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.055 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.055 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.055 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.055 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.055 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.055 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.055 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.055 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.055 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.055 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.055 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.055 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.055 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.055 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.055 12 ERROR oslo_messaging.notify.messaging Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.056 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.056 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.bytes volume: 73908224 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.057 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.057 12 ERROR oslo_messaging.notify.messaging [-] Could not 
send notification to notifications. Payload={'message_id': '8a7f2147-6af7-407d-b550-87547d54fb4d', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 73908224, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T09:57:08.056704', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'ed35e03a-d289-11f0-aaf2-fa163e118844', 'monotonic_time': 11446.186909024, 'message_signature': '46d0e153b97648f0abe486278f990dec6674c3b2145fecc8508284a16cba70cf'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T09:57:08.056704', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'ed35ea3a-d289-11f0-aaf2-fa163e118844', 'monotonic_time': 11446.186909024, 'message_signature': 'b132b75a7c77771c07fa77fdf85f0460e8b772fd39ffd39d33816348277b2782'}]}, 'timestamp': '2025-12-06 09:57:08.057272', '_unique_id': 'a605e28691324433a8b01a9e7953b056'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.057 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.057 12 ERROR oslo_messaging.notify.messaging yield Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.057 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.057 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.057 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.057 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.057 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.057 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.057 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 04:57:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.057 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.057 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.057 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.057 12 ERROR oslo_messaging.notify.messaging Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.057 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.057 12 ERROR oslo_messaging.notify.messaging Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.057 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.057 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.057 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.057 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.057 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.057 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.057 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 09:57:08.057 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.057 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.057 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.057 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.057 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.057 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.057 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.057 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.057 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.057 12 ERROR oslo_messaging.notify.messaging Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.058 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.058 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/cpu volume: 54990000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.059 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '59fdac27-99bc-44a4-8d85-97afbfc570bb', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 54990000000, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'timestamp': '2025-12-06T09:57:08.058618', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': 'ed3629f0-d289-11f0-aaf2-fa163e118844', 'monotonic_time': 11446.241890373, 'message_signature': 'e82db03b163d325c68f3993e718eac53f0147c2ed5b3c2d117ecea4151ba293f'}]}, 'timestamp': '2025-12-06 09:57:08.058915', '_unique_id': 'c2166662e9214158851d58d8240590e7'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.059 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.059 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in 
_reraise_as_library_errors Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.059 12 ERROR oslo_messaging.notify.messaging yield Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.059 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.059 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.059 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.059 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.059 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.059 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.059 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.059 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.059 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.059 12 ERROR 
oslo_messaging.notify.messaging conn.connect() Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.059 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.059 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.059 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.059 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.059 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.059 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.059 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.059 12 ERROR oslo_messaging.notify.messaging Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.059 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.059 12 ERROR oslo_messaging.notify.messaging Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.059 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.059 
12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.059 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.059 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.059 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.059 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.059 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.059 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.059 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.059 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.059 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 04:57:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.059 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.059 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.059 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.059 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.059 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.059 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.059 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.059 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.059 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.059 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 
09:57:08.059 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.059 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.059 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.059 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.059 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.059 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.059 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.059 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.059 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.059 12 ERROR oslo_messaging.notify.messaging Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.060 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 
09:57:08.060 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.incoming.packets volume: 93 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.061 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '83ebaf13-9ac5-477c-96c8-f4c561834867', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 93, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T09:57:08.060284', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': 'ed366a6e-d289-11f0-aaf2-fa163e118844', 'monotonic_time': 11446.161664054, 'message_signature': 
'66f205a0dae88bd8ac671c9831dbd2e3c61964b9c5fb6900433798cdf5c8ee30'}]}, 'timestamp': '2025-12-06 09:57:08.060609', '_unique_id': 'a976aca861ee47f0b5aff47b253f189c'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.061 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.061 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.061 12 ERROR oslo_messaging.notify.messaging yield Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.061 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.061 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.061 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.061 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.061 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.061 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.061 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.061 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.061 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.061 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.061 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.061 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.061 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.061 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.061 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.061 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.061 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.061 12 ERROR 
oslo_messaging.notify.messaging Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.061 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.061 12 ERROR oslo_messaging.notify.messaging Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.061 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.061 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.061 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.061 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.061 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.061 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.061 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.061 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 09:57:08.061 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.061 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.061 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.061 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.061 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.061 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.061 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.061 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.061 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.061 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 04:57:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.061 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.061 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.061 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.061 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.061 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.061 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.061 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.061 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.061 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.061 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.061 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) 
from exc Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.061 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 04:57:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:57:08.061 12 ERROR oslo_messaging.notify.messaging Dec 6 04:57:08 localhost python3.9[273858]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:57:08 localhost nova_compute[230884]: 2025-12-06 09:57:08.878 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:57:09 localhost python3.9[273968]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:57:09 localhost python3.9[274078]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:57:10 localhost python3.9[274188]: 
ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:57:10 localhost sshd[274245]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:57:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48531 DF PROTO=TCP SPT=37038 DPT=9102 SEQ=3499426967 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52E485F00000000001030307) Dec 6 04:57:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094. Dec 6 04:57:10 localhost podman[274276]: 2025-12-06 09:57:10.934024639 +0000 UTC m=+0.087722879 container health_status bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', 
'/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 04:57:10 localhost podman[274276]: 2025-12-06 09:57:10.968267864 +0000 UTC m=+0.121966084 container exec_died bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', 
'/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 04:57:10 localhost systemd[1]: bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094.service: Deactivated successfully. Dec 6 04:57:11 localhost python3.9[274319]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:57:11 localhost python3.9[274429]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:57:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e. 
Dec 6 04:57:12 localhost podman[274447]: 2025-12-06 09:57:12.054643086 +0000 UTC m=+0.081459817 container health_status 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vendor=Red Hat, Inc., vcs-type=git, managed_by=edpm_ansible, io.buildah.version=1.33.7, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, distribution-scope=public) Dec 6 04:57:12 localhost podman[274447]: 2025-12-06 09:57:12.068011454 +0000 UTC m=+0.094828215 container exec_died 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, container_name=openstack_network_exporter, release=1755695350, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, io.buildah.version=1.33.7, io.openshift.expose-services=, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6) Dec 6 04:57:12 localhost systemd[1]: 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e.service: Deactivated successfully. 
Dec 6 04:57:12 localhost python3.9[274559]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:57:13 localhost nova_compute[230884]: 2025-12-06 09:57:13.091 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:57:13 localhost python3.9[274669]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:57:13 localhost nova_compute[230884]: 2025-12-06 09:57:13.880 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:57:14 localhost python3.9[274779]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:57:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6. 
Dec 6 04:57:14 localhost podman[274890]: 2025-12-06 09:57:14.943974904 +0000 UTC m=+0.096153846 container health_status 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3) Dec 6 04:57:14 localhost podman[274890]: 2025-12-06 09:57:14.954807054 +0000 UTC m=+0.106985996 container exec_died 
44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd) Dec 6 04:57:14 localhost systemd[1]: 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6.service: Deactivated successfully. 
Dec 6 04:57:15 localhost python3.9[274889]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:57:15 localhost python3.9[275018]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:57:16 localhost python3.9[275128]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:57:16 localhost openstack_network_exporter[243110]: ERROR 09:57:16 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 6 04:57:16 localhost openstack_network_exporter[243110]: ERROR 09:57:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 04:57:16 localhost openstack_network_exporter[243110]: ERROR 09:57:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 04:57:16 localhost openstack_network_exporter[243110]: ERROR 
09:57:16 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 6 04:57:16 localhost openstack_network_exporter[243110]: Dec 6 04:57:16 localhost openstack_network_exporter[243110]: ERROR 09:57:16 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 6 04:57:16 localhost openstack_network_exporter[243110]: Dec 6 04:57:17 localhost python3.9[275238]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:57:17 localhost python3.9[275348]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:57:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538. Dec 6 04:57:17 localhost systemd[1]: tmp-crun.IcmrjZ.mount: Deactivated successfully. 
Dec 6 04:57:17 localhost podman[275366]: 2025-12-06 09:57:17.917923415 +0000 UTC m=+0.080622832 container health_status d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 6 04:57:17 localhost podman[275366]: 2025-12-06 09:57:17.926734634 +0000 UTC m=+0.089434061 container exec_died d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 
'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 6 04:57:17 localhost systemd[1]: d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538.service: Deactivated successfully. 
Dec 6 04:57:18 localhost nova_compute[230884]: 2025-12-06 09:57:18.139 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:57:18 localhost python3.9[275481]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012 systemctl disable --now certmonger.service#012 test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 04:57:18 localhost nova_compute[230884]: 2025-12-06 09:57:18.884 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:57:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48532 DF PROTO=TCP SPT=37038 DPT=9102 SEQ=3499426967 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52E4A5EF0000000001030307) Dec 6 04:57:19 localhost python3.9[275591]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None Dec 6 04:57:20 localhost python3.9[275701]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Dec 6 04:57:20 localhost systemd[1]: Reloading. Dec 6 04:57:20 localhost systemd-rc-local-generator[275728]: /etc/rc.d/rc.local is not marked executable, skipping. 
Dec 6 04:57:20 localhost systemd-sysv-generator[275732]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 6 04:57:20 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:57:20 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 6 04:57:20 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:57:20 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:57:20 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Dec 6 04:57:20 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 6 04:57:20 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:57:20 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:57:20 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 04:57:21 localhost python3.9[275847]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 04:57:21 localhost sshd[275849]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:57:22 localhost python3.9[275959]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 04:57:23 localhost python3.9[276070]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 04:57:23 localhost nova_compute[230884]: 2025-12-06 09:57:23.142 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:57:23 localhost python3.9[276181]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed 
tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 04:57:23 localhost nova_compute[230884]: 2025-12-06 09:57:23.887 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:57:23 localhost podman[241090]: time="2025-12-06T09:57:23Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 6 04:57:23 localhost podman[241090]: @ - - [06/Dec/2025:09:57:23 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 150643 "" "Go-http-client/1.1" Dec 6 04:57:23 localhost podman[241090]: @ - - [06/Dec/2025:09:57:23 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17717 "" "Go-http-client/1.1" Dec 6 04:57:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5. Dec 6 04:57:24 localhost systemd[1]: tmp-crun.LeOjCl.mount: Deactivated successfully. 
Dec 6 04:57:24 localhost podman[276204]: 2025-12-06 09:57:24.933295642 +0000 UTC m=+0.094275058 container health_status 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 04:57:24 localhost podman[276204]: 2025-12-06 09:57:24.995204811 +0000 UTC m=+0.156184227 container exec_died 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, 
org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 04:57:25 localhost systemd[1]: 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.service: Deactivated successfully. 
Dec 6 04:57:25 localhost python3.9[276317]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 04:57:26 localhost python3.9[276428]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 04:57:26 localhost python3.9[276539]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 04:57:27 localhost python3.9[276650]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 04:57:28 localhost nova_compute[230884]: 2025-12-06 09:57:28.187 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:57:28 localhost nova_compute[230884]: 2025-12-06 09:57:28.889 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:57:29 localhost sshd[276669]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:57:30 localhost python3.9[276762]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova 
setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 6 04:57:31 localhost python3.9[276872]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 6 04:57:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999. Dec 6 04:57:31 localhost systemd[1]: tmp-crun.emWgcX.mount: Deactivated successfully. 
Dec 6 04:57:31 localhost podman[276982]: 2025-12-06 09:57:31.638487831 +0000 UTC m=+0.089811353 container health_status 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Dec 6 04:57:31 localhost podman[276982]: 2025-12-06 09:57:31.646803835 +0000 UTC 
m=+0.098127367 container exec_died 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS) Dec 6 04:57:31 localhost systemd[1]: 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.service: Deactivated successfully. 
Dec 6 04:57:31 localhost python3.9[276983]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 6 04:57:32 localhost python3.9[277110]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 6 04:57:33 localhost python3.9[277220]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 6 04:57:33 localhost nova_compute[230884]: 2025-12-06 09:57:33.227 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:57:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1212 DF PROTO=TCP SPT=55404 DPT=9102 SEQ=3800488960 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52E4DF370000000001030307) Dec 6 04:57:33 localhost python3.9[277330]: 
ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 6 04:57:33 localhost nova_compute[230884]: 2025-12-06 09:57:33.893 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:57:34 localhost python3.9[277440]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Dec 6 04:57:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1213 DF PROTO=TCP SPT=55404 DPT=9102 SEQ=3800488960 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52E4E32F0000000001030307) Dec 6 04:57:35 localhost python3.9[277550]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Dec 6 04:57:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 
DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48533 DF PROTO=TCP SPT=37038 DPT=9102 SEQ=3499426967 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52E4E5EF0000000001030307) Dec 6 04:57:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941. Dec 6 04:57:35 localhost sshd[277667]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:57:35 localhost podman[277661]: 2025-12-06 09:57:35.670080047 +0000 UTC m=+0.082061506 container health_status b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Dec 6 04:57:35 localhost podman[277661]: 2025-12-06 09:57:35.679986169 +0000 UTC m=+0.091967648 container exec_died b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 
'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 6 04:57:35 localhost systemd[1]: b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941.service: Deactivated successfully. Dec 6 04:57:35 localhost python3.9[277660]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Dec 6 04:57:36 localhost python3.9[277792]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Dec 6 04:57:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1214 DF PROTO=TCP SPT=55404 DPT=9102 SEQ=3800488960 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52E4EB2F0000000001030307) Dec 6 04:57:37 localhost sshd[277810]: main: sshd: ssh-rsa algorithm is disabled Dec 6 
04:57:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11739 DF PROTO=TCP SPT=45218 DPT=9102 SEQ=596911704 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52E4EFEF0000000001030307) Dec 6 04:57:38 localhost nova_compute[230884]: 2025-12-06 09:57:38.276 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:57:38 localhost nova_compute[230884]: 2025-12-06 09:57:38.896 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:57:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1215 DF PROTO=TCP SPT=55404 DPT=9102 SEQ=3800488960 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52E4FAEF0000000001030307) Dec 6 04:57:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094. Dec 6 04:57:41 localhost systemd[1]: tmp-crun.IhTb5W.mount: Deactivated successfully. 
Dec 6 04:57:41 localhost podman[277897]: 2025-12-06 09:57:41.269974656 +0000 UTC m=+0.086719259 container health_status bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Dec 6 04:57:41 localhost podman[277897]: 2025-12-06 09:57:41.310360898 +0000 UTC m=+0.127105521 container exec_died 
bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, container_name=ceilometer_agent_compute, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 6 04:57:41 localhost systemd[1]: bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094.service: Deactivated successfully. Dec 6 04:57:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e. 
Dec 6 04:57:42 localhost systemd[1]: tmp-crun.CI4bZ5.mount: Deactivated successfully. Dec 6 04:57:42 localhost podman[278010]: 2025-12-06 09:57:42.430717587 +0000 UTC m=+0.091392600 container health_status 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, architecture=x86_64, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, release=1755695350, managed_by=edpm_ansible, vcs-type=git, version=9.6, name=ubi9-minimal, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7) Dec 6 04:57:42 localhost podman[278010]: 2025-12-06 09:57:42.472561375 +0000 UTC m=+0.133236428 container exec_died 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, config_id=edpm, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, distribution-scope=public, release=1755695350, vcs-type=git, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., 
build-date=2025-08-20T13:12:41) Dec 6 04:57:42 localhost systemd[1]: 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e.service: Deactivated successfully. Dec 6 04:57:42 localhost python3.9[278009]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None Dec 6 04:57:43 localhost nova_compute[230884]: 2025-12-06 09:57:43.313 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:57:43 localhost sshd[278048]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:57:43 localhost systemd-logind[766]: New session 61 of user zuul. Dec 6 04:57:43 localhost systemd[1]: Started Session 61 of User zuul. Dec 6 04:57:43 localhost nova_compute[230884]: 2025-12-06 09:57:43.899 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:57:44 localhost systemd[1]: session-61.scope: Deactivated successfully. Dec 6 04:57:44 localhost systemd-logind[766]: Session 61 logged out. Waiting for processes to exit. Dec 6 04:57:44 localhost systemd-logind[766]: Removed session 61. 
Dec 6 04:57:44 localhost python3.9[278159]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:57:45 localhost python3.9[278245]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765015064.289732-3038-260918501980785/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 6 04:57:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6. Dec 6 04:57:45 localhost python3.9[278353]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:57:45 localhost podman[278354]: 2025-12-06 09:57:45.917943106 +0000 UTC m=+0.075859716 container health_status 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': 
'/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3) Dec 6 04:57:45 localhost podman[278354]: 2025-12-06 09:57:45.936297557 +0000 UTC m=+0.094214197 container exec_died 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, tcib_managed=true, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 04:57:45 localhost systemd[1]: 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6.service: Deactivated successfully. 
Dec 6 04:57:46 localhost python3.9[278427]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 6 04:57:46 localhost openstack_network_exporter[243110]: ERROR 09:57:46 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 6 04:57:46 localhost openstack_network_exporter[243110]: ERROR 09:57:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 04:57:46 localhost openstack_network_exporter[243110]: ERROR 09:57:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 04:57:46 localhost openstack_network_exporter[243110]: ERROR 09:57:46 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 6 04:57:46 localhost openstack_network_exporter[243110]: Dec 6 04:57:46 localhost openstack_network_exporter[243110]: ERROR 09:57:46 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 6 04:57:46 localhost openstack_network_exporter[243110]: Dec 6 04:57:46 localhost python3.9[278535]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:57:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:57:47.288 160509 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by 
"neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 04:57:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:57:47.289 160509 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 04:57:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:57:47.290 160509 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 04:57:47 localhost python3.9[278621]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765015066.535572-3038-74208196040910/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 6 04:57:48 localhost python3.9[278729]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:57:48 localhost nova_compute[230884]: 2025-12-06 09:57:48.335 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:57:48 localhost python3.9[278815]: ansible-ansible.legacy.copy Invoked with 
dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765015067.6185415-3038-225793067926828/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=84cd402761cf817a5c030b63eb0a858a413df311 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 6 04:57:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538. Dec 6 04:57:48 localhost nova_compute[230884]: 2025-12-06 09:57:48.902 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:57:48 localhost podman[278849]: 2025-12-06 09:57:48.949075085 +0000 UTC m=+0.102352291 container health_status d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 
'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Dec 6 04:57:48 localhost podman[278849]: 2025-12-06 09:57:48.96350276 +0000 UTC m=+0.116780016 container exec_died d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 6 04:57:48 localhost systemd[1]: 
d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538.service: Deactivated successfully. Dec 6 04:57:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1216 DF PROTO=TCP SPT=55404 DPT=9102 SEQ=3800488960 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52E51BEF0000000001030307) Dec 6 04:57:49 localhost python3.9[278947]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:57:49 localhost python3.9[279033]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765015068.83889-3038-167063296716008/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 6 04:57:50 localhost python3.9[279141]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:57:51 localhost python3.9[279227]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765015069.9751801-3038-96652130942322/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False 
content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 6 04:57:52 localhost python3.9[279337]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:57:53 localhost python3.9[279447]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:57:53 localhost nova_compute[230884]: 2025-12-06 09:57:53.338 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:57:53 localhost python3.9[279557]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 6 04:57:53 localhost podman[241090]: time="2025-12-06T09:57:53Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 6 04:57:53 localhost nova_compute[230884]: 2025-12-06 09:57:53.935 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:57:53 localhost podman[241090]: @ - - [06/Dec/2025:09:57:53 +0000] "GET 
/v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 150643 "" "Go-http-client/1.1" Dec 6 04:57:53 localhost podman[241090]: @ - - [06/Dec/2025:09:57:53 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17727 "" "Go-http-client/1.1" Dec 6 04:57:54 localhost python3.9[279669]: ansible-ansible.builtin.file Invoked with group=nova mode=0400 owner=nova path=/var/lib/nova/compute_id state=file recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:57:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5. Dec 6 04:57:55 localhost systemd[1]: tmp-crun.SCOg4w.mount: Deactivated successfully. 
Dec 6 04:57:55 localhost podman[279769]: 2025-12-06 09:57:55.546634055 +0000 UTC m=+0.097168831 container health_status 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, container_name=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller) Dec 6 04:57:55 localhost podman[279769]: 2025-12-06 09:57:55.589126107 +0000 UTC m=+0.139660863 container exec_died 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, 
config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 04:57:55 localhost systemd[1]: 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.service: Deactivated successfully. 
Dec 6 04:57:55 localhost python3.9[279783]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 6 04:57:56 localhost python3.9[279913]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:57:56 localhost python3.9[279968]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/containers/nova_compute.json _original_basename=nova_compute.json.j2 recurse=False state=file path=/var/lib/openstack/config/containers/nova_compute.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 6 04:57:57 localhost python3.9[280076]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Dec 6 04:57:58 localhost python3.9[280131]: ansible-ansible.legacy.file Invoked with mode=0700 setype=container_file_t dest=/var/lib/openstack/config/containers/nova_compute_init.json _original_basename=nova_compute_init.json.j2 recurse=False state=file path=/var/lib/openstack/config/containers/nova_compute_init.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Dec 6 04:57:58 localhost nova_compute[230884]: 
2025-12-06 09:57:58.341 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:57:58 localhost nova_compute[230884]: 2025-12-06 09:57:58.936 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:57:59 localhost python3.9[280241]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False Dec 6 04:58:00 localhost python3.9[280351]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Dec 6 04:58:00 localhost nova_compute[230884]: 2025-12-06 09:58:00.500 230888 DEBUG oslo_service.periodic_task [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 04:58:00 localhost nova_compute[230884]: 2025-12-06 09:58:00.501 230888 DEBUG nova.compute.manager [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 6 04:58:00 localhost nova_compute[230884]: 2025-12-06 09:58:00.501 230888 DEBUG nova.compute.manager [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 6 04:58:00 localhost nova_compute[230884]: 2025-12-06 09:58:00.632 230888 DEBUG oslo_concurrency.lockutils [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Acquiring lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 6 04:58:00 localhost 
nova_compute[230884]: 2025-12-06 09:58:00.632 230888 DEBUG oslo_concurrency.lockutils [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Acquired lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 6 04:58:00 localhost nova_compute[230884]: 2025-12-06 09:58:00.632 230888 DEBUG nova.network.neutron [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 6 04:58:00 localhost nova_compute[230884]: 2025-12-06 09:58:00.632 230888 DEBUG nova.objects.instance [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b7ed0a2e-9350-4933-9334-4e5e08d3e6aa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 6 04:58:01 localhost nova_compute[230884]: 2025-12-06 09:58:01.070 230888 DEBUG nova.network.neutron [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Updating instance_info_cache with network_info: [{"id": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "address": "fa:16:3e:64:77:f3", "network": {"id": "652b6bdc-40ce-45b7-8aa5-3bca79987993", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.162", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "3d603431c0bb4967bafc7a0aa6108bfe", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": 
{"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86fc0b7a-fb", "ovs_interfaceid": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 6 04:58:01 localhost nova_compute[230884]: 2025-12-06 09:58:01.085 230888 DEBUG oslo_concurrency.lockutils [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Releasing lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 6 04:58:01 localhost nova_compute[230884]: 2025-12-06 09:58:01.085 230888 DEBUG nova.compute.manager [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 6 04:58:01 localhost nova_compute[230884]: 2025-12-06 09:58:01.086 230888 DEBUG oslo_service.periodic_task [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 04:58:01 localhost nova_compute[230884]: 2025-12-06 09:58:01.086 230888 DEBUG nova.compute.manager [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 6 04:58:01 localhost nova_compute[230884]: 2025-12-06 09:58:01.087 230888 DEBUG oslo_service.periodic_task [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 04:58:01 localhost nova_compute[230884]: 2025-12-06 09:58:01.104 230888 DEBUG oslo_concurrency.lockutils [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 04:58:01 localhost nova_compute[230884]: 2025-12-06 09:58:01.105 230888 DEBUG oslo_concurrency.lockutils [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 04:58:01 localhost nova_compute[230884]: 2025-12-06 09:58:01.105 230888 DEBUG oslo_concurrency.lockutils [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 04:58:01 localhost nova_compute[230884]: 2025-12-06 09:58:01.105 230888 DEBUG nova.compute.resource_tracker [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Auditing locally available compute resources for np0005548789.localdomain (node: np0005548789.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 6 04:58:01 localhost nova_compute[230884]: 2025-12-06 09:58:01.106 
230888 DEBUG oslo_concurrency.processutils [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 6 04:58:01 localhost python3[280461]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False Dec 6 04:58:01 localhost python3[280461]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012 {#012 "Id": "5571c1b2140c835f70406e4553b3b44135b9c9b4eb673345cbd571460c5d59a3",#012 "Digest": "sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5",#012 "RepoTags": [#012 "quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified"#012 ],#012 "RepoDigests": [#012 "quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5"#012 ],#012 "Parent": "",#012 "Comment": "",#012 "Created": "2025-12-01T06:31:10.62653219Z",#012 "Config": {#012 "User": "nova",#012 "Env": [#012 "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012 "LANG=en_US.UTF-8",#012 "TZ=UTC",#012 "container=oci"#012 ],#012 "Entrypoint": [#012 "dumb-init",#012 "--single-child",#012 "--"#012 ],#012 "Cmd": [#012 "kolla_start"#012 ],#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251125",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",#012 "tcib_managed": "true"#012 },#012 "StopSignal": "SIGTERM"#012 },#012 "Version": "",#012 "Author": "",#012 
"Architecture": "amd64",#012 "Os": "linux",#012 "Size": 1211779450,#012 "VirtualSize": 1211779450,#012 "GraphDriver": {#012 "Name": "overlay",#012 "Data": {#012 "LowerDir": "/var/lib/containers/storage/overlay/bb270959ea4f0d2c0dd791aa5a80a96b2d6621117349e00f19fca53fc0632a22/diff:/var/lib/containers/storage/overlay/11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60/diff:/var/lib/containers/storage/overlay/ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9/diff:/var/lib/containers/storage/overlay/cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa/diff",#012 "UpperDir": "/var/lib/containers/storage/overlay/45b05c829d68772ce6f113ebe908af5bcf8533af84d5ff30fea8dfca06e71a2d/diff",#012 "WorkDir": "/var/lib/containers/storage/overlay/45b05c829d68772ce6f113ebe908af5bcf8533af84d5ff30fea8dfca06e71a2d/work"#012 }#012 },#012 "RootFS": {#012 "Type": "layers",#012 "Layers": [#012 "sha256:cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa",#012 "sha256:d26dbee55abfd9d572bfbbd4b765c5624affd9ef117ad108fb34be41e199a619",#012 "sha256:86c2cd3987225f8a9bf38cc88e9c24b56bdf4a194f2301186519b4a7571b0c92",#012 "sha256:baa8e0bc73d6b505f07c40d4f69a464312cc41ae2045c7975dd4759c27721a22",#012 "sha256:d0cde44181262e43c105085c32a5af158b232f2e2ce4fe4b50530d7cdc5126cd"#012 ]#012 },#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251125",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",#012 "tcib_managed": "true"#012 },#012 "Annotations": {},#012 "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012 "User": "nova",#012 "History": [#012 {#012 "created": "2025-11-25T04:02:36.223494528Z",#012 "created_by": "/bin/sh -c #(nop) ADD 
file:cacf1a97b4abfca5db2db22f7ddbca8fd7daa5076a559639c109f09aaf55871d in / ",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-25T04:02:36.223562059Z",#012 "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\" org.label-schema.name=\"CentOS Stream 9 Base Image\" org.label-schema.vendor=\"CentOS\" org.label-schema.license=\"GPLv2\" org.label-schema.build-date=\"20251125\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-25T04:02:39.054452717Z",#012 "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"#012 },#012 {#012 "created": "2025-12-01T06:09:28.025707917Z",#012 "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",#012 "comment": "FROM quay.io/centos/centos:stream9",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025744608Z",#012 "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025767729Z",#012 "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025791379Z",#012 "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.02581523Z",#012 "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025867611Z",#012 "created_by": "/bin/sh -c #(nop) USER root",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.469442331Z",#012 "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:10:02.029095017Z",#012 "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove 
True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",#012 "empty_layer": true#012 },#012 {#012 Dec 6 04:58:01 localhost nova_compute[230884]: 2025-12-06 09:58:01.668 230888 DEBUG oslo_concurrency.processutils [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 6 04:58:01 localhost nova_compute[230884]: 2025-12-06 09:58:01.728 230888 DEBUG nova.virt.libvirt.driver [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 6 04:58:01 localhost nova_compute[230884]: 2025-12-06 09:58:01.728 230888 DEBUG nova.virt.libvirt.driver [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 6 04:58:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999. 
Dec 6 04:58:01 localhost nova_compute[230884]: 2025-12-06 09:58:01.908 230888 WARNING nova.virt.libvirt.driver [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 6 04:58:01 localhost nova_compute[230884]: 2025-12-06 09:58:01.909 230888 DEBUG nova.compute.resource_tracker [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Hypervisor/Node resource view: name=np0005548789.localdomain free_ram=11840MB free_disk=41.83721923828125GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": 
"pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 6 04:58:01 localhost nova_compute[230884]: 2025-12-06 09:58:01.909 230888 DEBUG oslo_concurrency.lockutils [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 04:58:01 localhost nova_compute[230884]: 2025-12-06 09:58:01.910 230888 DEBUG oslo_concurrency.lockutils [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 04:58:01 localhost systemd[1]: tmp-crun.sPJdzD.mount: Deactivated successfully. 
Dec 6 04:58:01 localhost podman[280603]: 2025-12-06 09:58:01.925606168 +0000 UTC m=+0.086661957 container health_status 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible) Dec 6 04:58:01 localhost podman[280603]: 2025-12-06 09:58:01.962116905 +0000 UTC 
m=+0.123172664 container exec_died 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible) Dec 6 04:58:01 localhost systemd[1]: 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.service: Deactivated successfully. 
Dec 6 04:58:01 localhost nova_compute[230884]: 2025-12-06 09:58:01.993 230888 DEBUG nova.compute.resource_tracker [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Instance b7ed0a2e-9350-4933-9334-4e5e08d3e6aa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 6 04:58:01 localhost nova_compute[230884]: 2025-12-06 09:58:01.994 230888 DEBUG nova.compute.resource_tracker [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 6 04:58:01 localhost nova_compute[230884]: 2025-12-06 09:58:01.994 230888 DEBUG nova.compute.resource_tracker [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Final resource view: name=np0005548789.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 6 04:58:02 localhost nova_compute[230884]: 2025-12-06 09:58:02.030 230888 DEBUG oslo_concurrency.processutils [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 6 04:58:02 localhost python3.9[280674]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 6 04:58:02 localhost nova_compute[230884]: 2025-12-06 09:58:02.431 230888 DEBUG oslo_concurrency.processutils [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] CMD 
"ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.400s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 6 04:58:02 localhost nova_compute[230884]: 2025-12-06 09:58:02.438 230888 DEBUG nova.compute.provider_tree [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Inventory has not changed in ProviderTree for provider: 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 6 04:58:02 localhost nova_compute[230884]: 2025-12-06 09:58:02.456 230888 DEBUG nova.scheduler.client.report [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Inventory has not changed for provider 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 6 04:58:02 localhost nova_compute[230884]: 2025-12-06 09:58:02.458 230888 DEBUG nova.compute.resource_tracker [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Compute_service record updated for np0005548789.localdomain:np0005548789.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 6 04:58:02 localhost nova_compute[230884]: 2025-12-06 09:58:02.458 230888 DEBUG oslo_concurrency.lockutils [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.548s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m 
Dec 6 04:58:02 localhost nova_compute[230884]: 2025-12-06 09:58:02.872 230888 DEBUG oslo_service.periodic_task [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 04:58:02 localhost nova_compute[230884]: 2025-12-06 09:58:02.873 230888 DEBUG oslo_service.periodic_task [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 04:58:03 localhost python3.9[280808]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False Dec 6 04:58:03 localhost nova_compute[230884]: 2025-12-06 09:58:03.383 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:58:03 localhost nova_compute[230884]: 2025-12-06 09:58:03.496 230888 DEBUG oslo_service.periodic_task [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 04:58:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6956 DF PROTO=TCP SPT=55472 DPT=9102 SEQ=2212927204 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52E554670000000001030307) Dec 6 04:58:03 localhost nova_compute[230884]: 2025-12-06 09:58:03.939 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:58:04 localhost python3.9[280918]: 
ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Dec 6 04:58:04 localhost nova_compute[230884]: 2025-12-06 09:58:04.495 230888 DEBUG oslo_service.periodic_task [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 04:58:04 localhost nova_compute[230884]: 2025-12-06 09:58:04.523 230888 DEBUG oslo_service.periodic_task [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 04:58:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6957 DF PROTO=TCP SPT=55472 DPT=9102 SEQ=2212927204 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52E5586F0000000001030307) Dec 6 04:58:05 localhost python3[281028]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False Dec 6 04:58:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1217 DF PROTO=TCP SPT=55404 DPT=9102 SEQ=3800488960 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52E55BEF0000000001030307) Dec 6 04:58:05 localhost python3[281028]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012 {#012 "Id": "5571c1b2140c835f70406e4553b3b44135b9c9b4eb673345cbd571460c5d59a3",#012 "Digest": "sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5",#012 "RepoTags": [#012 
"quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified"#012 ],#012 "RepoDigests": [#012 "quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5"#012 ],#012 "Parent": "",#012 "Comment": "",#012 "Created": "2025-12-01T06:31:10.62653219Z",#012 "Config": {#012 "User": "nova",#012 "Env": [#012 "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012 "LANG=en_US.UTF-8",#012 "TZ=UTC",#012 "container=oci"#012 ],#012 "Entrypoint": [#012 "dumb-init",#012 "--single-child",#012 "--"#012 ],#012 "Cmd": [#012 "kolla_start"#012 ],#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251125",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",#012 "tcib_managed": "true"#012 },#012 "StopSignal": "SIGTERM"#012 },#012 "Version": "",#012 "Author": "",#012 "Architecture": "amd64",#012 "Os": "linux",#012 "Size": 1211779450,#012 "VirtualSize": 1211779450,#012 "GraphDriver": {#012 "Name": "overlay",#012 "Data": {#012 "LowerDir": "/var/lib/containers/storage/overlay/bb270959ea4f0d2c0dd791aa5a80a96b2d6621117349e00f19fca53fc0632a22/diff:/var/lib/containers/storage/overlay/11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60/diff:/var/lib/containers/storage/overlay/ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9/diff:/var/lib/containers/storage/overlay/cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa/diff",#012 "UpperDir": "/var/lib/containers/storage/overlay/45b05c829d68772ce6f113ebe908af5bcf8533af84d5ff30fea8dfca06e71a2d/diff",#012 "WorkDir": "/var/lib/containers/storage/overlay/45b05c829d68772ce6f113ebe908af5bcf8533af84d5ff30fea8dfca06e71a2d/work"#012 }#012 
},#012 "RootFS": {#012 "Type": "layers",#012 "Layers": [#012 "sha256:cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa",#012 "sha256:d26dbee55abfd9d572bfbbd4b765c5624affd9ef117ad108fb34be41e199a619",#012 "sha256:86c2cd3987225f8a9bf38cc88e9c24b56bdf4a194f2301186519b4a7571b0c92",#012 "sha256:baa8e0bc73d6b505f07c40d4f69a464312cc41ae2045c7975dd4759c27721a22",#012 "sha256:d0cde44181262e43c105085c32a5af158b232f2e2ce4fe4b50530d7cdc5126cd"#012 ]#012 },#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251125",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",#012 "tcib_managed": "true"#012 },#012 "Annotations": {},#012 "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012 "User": "nova",#012 "History": [#012 {#012 "created": "2025-11-25T04:02:36.223494528Z",#012 "created_by": "/bin/sh -c #(nop) ADD file:cacf1a97b4abfca5db2db22f7ddbca8fd7daa5076a559639c109f09aaf55871d in / ",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-25T04:02:36.223562059Z",#012 "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\" org.label-schema.name=\"CentOS Stream 9 Base Image\" org.label-schema.vendor=\"CentOS\" org.label-schema.license=\"GPLv2\" org.label-schema.build-date=\"20251125\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-25T04:02:39.054452717Z",#012 "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"#012 },#012 {#012 "created": "2025-12-01T06:09:28.025707917Z",#012 "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",#012 "comment": "FROM quay.io/centos/centos:stream9",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025744608Z",#012 
"created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025767729Z",#012 "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025791379Z",#012 "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.02581523Z",#012 "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.025867611Z",#012 "created_by": "/bin/sh -c #(nop) USER root",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:09:28.469442331Z",#012 "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-12-01T06:10:02.029095017Z",#012 "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",#012 "empty_layer": true#012 },#012 {#012 Dec 6 04:58:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941. 
Dec 6 04:58:05 localhost podman[281092]: 2025-12-06 09:58:05.938961683 +0000 UTC m=+0.087268865 container health_status b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Dec 6 04:58:05 localhost podman[281092]: 2025-12-06 09:58:05.952203712 +0000 UTC m=+0.100510914 container exec_died b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 6 04:58:05 localhost systemd[1]: b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941.service: Deactivated successfully. Dec 6 04:58:06 localhost nova_compute[230884]: 2025-12-06 09:58:06.501 230888 DEBUG oslo_service.periodic_task [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 04:58:06 localhost python3.9[281224]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 6 04:58:06 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6958 DF PROTO=TCP SPT=55472 DPT=9102 SEQ=2212927204 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52E5606F0000000001030307) Dec 6 04:58:07 localhost python3.9[281336]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:58:07 localhost nova_compute[230884]: 2025-12-06 09:58:07.500 230888 DEBUG oslo_service.periodic_task [None req-f3bc090b-070e-4f32-9f6c-48ba00ae8240 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 04:58:07 
localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48534 DF PROTO=TCP SPT=37038 DPT=9102 SEQ=3499426967 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52E563EF0000000001030307) Dec 6 04:58:08 localhost python3.9[281445]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765015087.5055525-3716-125524085609437/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 04:58:08 localhost nova_compute[230884]: 2025-12-06 09:58:08.421 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:58:08 localhost python3.9[281500]: ansible-systemd Invoked with state=started name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 6 04:58:08 localhost nova_compute[230884]: 2025-12-06 09:58:08.941 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:58:10 localhost python3.9[281610]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 6 04:58:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6959 DF PROTO=TCP SPT=55472 DPT=9102 SEQ=2212927204 ACK=0 WINDOW=32640 
RES=0x00 SYN URGP=0 OPT (020405500402080A52E5702F0000000001030307) Dec 6 04:58:11 localhost python3.9[281718]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 6 04:58:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094. Dec 6 04:58:11 localhost podman[281789]: 2025-12-06 09:58:11.927797812 +0000 UTC m=+0.086382257 container health_status bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm) Dec 6 04:58:11 localhost podman[281789]: 2025-12-06 09:58:11.941115293 +0000 UTC m=+0.099699718 container exec_died bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Dec 6 04:58:11 localhost systemd[1]: bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094.service: Deactivated successfully. Dec 6 04:58:12 localhost python3.9[281844]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Dec 6 04:58:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e. Dec 6 04:58:12 localhost systemd[1]: tmp-crun.VDQoCU.mount: Deactivated successfully. Dec 6 04:58:12 localhost podman[281902]: 2025-12-06 09:58:12.920192748 +0000 UTC m=+0.084169789 container health_status 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., vcs-type=git, vendor=Red Hat, Inc., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, version=9.6, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b) Dec 6 04:58:12 localhost podman[281902]: 2025-12-06 09:58:12.964368232 +0000 UTC m=+0.128345063 container exec_died 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, container_name=openstack_network_exporter, vendor=Red 
Hat, Inc., distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, managed_by=edpm_ansible, release=1755695350, version=9.6, io.openshift.expose-services=, maintainer=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-minimal-container, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.) Dec 6 04:58:12 localhost systemd[1]: 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e.service: Deactivated successfully. Dec 6 04:58:13 localhost python3.9[281976]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None 
log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None Dec 6 04:58:13 localhost nova_compute[230884]: 2025-12-06 09:58:13.456 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:58:13 localhost systemd-journald[47810]: Field hash table of /run/log/journal/4b30904fc4748c16d0c72dbebcabab49/system.journal has a fill level at 102.7 (342 of 333 items), suggesting rotation. Dec 6 04:58:13 localhost systemd-journald[47810]: /run/log/journal/4b30904fc4748c16d0c72dbebcabab49/system.journal: Journal header limits reached or header out-of-date, rotating. Dec 6 04:58:13 localhost rsyslogd[760]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Dec 6 04:58:13 localhost rsyslogd[760]: imjournal: journal files changed, reloading... 
[v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Dec 6 04:58:13 localhost rsyslogd[760]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Dec 6 04:58:13 localhost nova_compute[230884]: 2025-12-06 09:58:13.944 230888 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:58:14 localhost python3.9[282109]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Dec 6 04:58:14 localhost systemd[1]: Stopping nova_compute container... Dec 6 04:58:14 localhost nova_compute[230884]: 2025-12-06 09:58:14.620 230888 DEBUG oslo_privsep.comm [-] EOF on privsep read channel _reader_main /usr/lib/python3.9/site-packages/oslo_privsep/comm.py:170#033[00m Dec 6 04:58:16 localhost nova_compute[230884]: 2025-12-06 09:58:16.398 230888 WARNING amqp [-] Received method (60, 30) during closing channel 1. 
This method will be ignored#033[00m Dec 6 04:58:16 localhost nova_compute[230884]: 2025-12-06 09:58:16.400 230888 DEBUG oslo_concurrency.lockutils [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 6 04:58:16 localhost nova_compute[230884]: 2025-12-06 09:58:16.400 230888 DEBUG oslo_concurrency.lockutils [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 6 04:58:16 localhost nova_compute[230884]: 2025-12-06 09:58:16.401 230888 DEBUG oslo_concurrency.lockutils [None req-c4502e63-c574-4851-9fe1-84d6480c5d52 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 6 04:58:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6. 
Dec 6 04:58:16 localhost openstack_network_exporter[243110]: ERROR 09:58:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 04:58:16 localhost openstack_network_exporter[243110]: ERROR 09:58:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 04:58:16 localhost openstack_network_exporter[243110]: ERROR 09:58:16 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 6 04:58:16 localhost openstack_network_exporter[243110]: ERROR 09:58:16 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 6 04:58:16 localhost openstack_network_exporter[243110]: Dec 6 04:58:16 localhost openstack_network_exporter[243110]: ERROR 09:58:16 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 6 04:58:16 localhost openstack_network_exporter[243110]: Dec 6 04:58:16 localhost podman[282126]: 2025-12-06 09:58:16.690306444 +0000 UTC m=+0.105263371 container health_status 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 04:58:16 localhost podman[282126]: 2025-12-06 09:58:16.732257829 +0000 UTC m=+0.147214746 container exec_died 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true) Dec 6 04:58:16 localhost systemd[1]: 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6.service: Deactivated successfully. Dec 6 04:58:16 localhost systemd[1]: libpod-6674d58fdb9d90e78bfb85f434c919baa1836ad3e98a097c0a64c1152f7163c8.scope: Deactivated successfully. Dec 6 04:58:16 localhost systemd[1]: libpod-6674d58fdb9d90e78bfb85f434c919baa1836ad3e98a097c0a64c1152f7163c8.scope: Consumed 20.370s CPU time. 
Dec 6 04:58:16 localhost journal[203911]: End of file while reading data: Input/output error Dec 6 04:58:16 localhost podman[282113]: 2025-12-06 09:58:16.805400617 +0000 UTC m=+2.258915126 container died 6674d58fdb9d90e78bfb85f434c919baa1836ad3e98a097c0a64c1152f7163c8 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, container_name=nova_compute, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125) Dec 6 04:58:16 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6674d58fdb9d90e78bfb85f434c919baa1836ad3e98a097c0a64c1152f7163c8-userdata-shm.mount: Deactivated successfully. 
Dec 6 04:58:16 localhost systemd[1]: var-lib-containers-storage-overlay-81aea125146c60e1b0ee38b0c4ee8c70ba3c42600b7bfc70695e2bff0e11c0ad-merged.mount: Deactivated successfully. Dec 6 04:58:16 localhost podman[282113]: 2025-12-06 09:58:16.968303936 +0000 UTC m=+2.421818415 container cleanup 6674d58fdb9d90e78bfb85f434c919baa1836ad3e98a097c0a64c1152f7163c8 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 6 04:58:16 localhost podman[282113]: nova_compute Dec 6 04:58:17 localhost podman[282176]: error opening file `/run/crun/6674d58fdb9d90e78bfb85f434c919baa1836ad3e98a097c0a64c1152f7163c8/status`: No 
such file or directory Dec 6 04:58:17 localhost podman[282163]: 2025-12-06 09:58:17.067968532 +0000 UTC m=+0.066473963 container cleanup 6674d58fdb9d90e78bfb85f434c919baa1836ad3e98a097c0a64c1152f7163c8 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible) Dec 6 04:58:17 localhost podman[282163]: nova_compute Dec 6 04:58:17 localhost systemd[1]: edpm_nova_compute.service: Deactivated successfully. Dec 6 04:58:17 localhost systemd[1]: Stopped nova_compute container. Dec 6 04:58:17 localhost systemd[1]: Starting nova_compute container... Dec 6 04:58:17 localhost systemd[1]: Started libcrun container. 
Dec 6 04:58:17 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81aea125146c60e1b0ee38b0c4ee8c70ba3c42600b7bfc70695e2bff0e11c0ad/merged/etc/nvme supports timestamps until 2038 (0x7fffffff) Dec 6 04:58:17 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81aea125146c60e1b0ee38b0c4ee8c70ba3c42600b7bfc70695e2bff0e11c0ad/merged/etc/multipath supports timestamps until 2038 (0x7fffffff) Dec 6 04:58:17 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81aea125146c60e1b0ee38b0c4ee8c70ba3c42600b7bfc70695e2bff0e11c0ad/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Dec 6 04:58:17 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81aea125146c60e1b0ee38b0c4ee8c70ba3c42600b7bfc70695e2bff0e11c0ad/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff) Dec 6 04:58:17 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81aea125146c60e1b0ee38b0c4ee8c70ba3c42600b7bfc70695e2bff0e11c0ad/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Dec 6 04:58:17 localhost podman[282178]: 2025-12-06 09:58:17.228162408 +0000 UTC m=+0.133638747 container init 6674d58fdb9d90e78bfb85f434c919baa1836ad3e98a097c0a64c1152f7163c8 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', 
'/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, container_name=nova_compute, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 04:58:17 localhost podman[282178]: 2025-12-06 09:58:17.23894211 +0000 UTC m=+0.144418469 container start 6674d58fdb9d90e78bfb85f434c919baa1836ad3e98a097c0a64c1152f7163c8 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.schema-version=1.0, container_name=nova_compute, tcib_managed=true, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', 
'/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 04:58:17 localhost podman[282178]: nova_compute Dec 6 04:58:17 localhost nova_compute[282193]: + sudo -E kolla_set_configs Dec 6 04:58:17 localhost systemd[1]: Started nova_compute container. Dec 6 04:58:17 localhost nova_compute[282193]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Dec 6 04:58:17 localhost nova_compute[282193]: INFO:__main__:Validating config file Dec 6 04:58:17 localhost nova_compute[282193]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Dec 6 04:58:17 localhost nova_compute[282193]: INFO:__main__:Copying service configuration files Dec 6 04:58:17 localhost nova_compute[282193]: INFO:__main__:Deleting /etc/nova/nova.conf Dec 6 04:58:17 localhost nova_compute[282193]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf Dec 6 04:58:17 localhost nova_compute[282193]: INFO:__main__:Setting permission for /etc/nova/nova.conf Dec 6 04:58:17 localhost nova_compute[282193]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf Dec 6 04:58:17 localhost nova_compute[282193]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf Dec 6 04:58:17 localhost nova_compute[282193]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf Dec 6 04:58:17 localhost nova_compute[282193]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf Dec 6 04:58:17 localhost nova_compute[282193]: INFO:__main__:Copying 
/var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf Dec 6 04:58:17 localhost nova_compute[282193]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf Dec 6 04:58:17 localhost nova_compute[282193]: INFO:__main__:Deleting /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf Dec 6 04:58:17 localhost nova_compute[282193]: INFO:__main__:Copying /var/lib/kolla/config_files/99-nova-compute-cells-workarounds.conf to /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf Dec 6 04:58:17 localhost nova_compute[282193]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf Dec 6 04:58:17 localhost nova_compute[282193]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf Dec 6 04:58:17 localhost nova_compute[282193]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf Dec 6 04:58:17 localhost nova_compute[282193]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf Dec 6 04:58:17 localhost nova_compute[282193]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf Dec 6 04:58:17 localhost nova_compute[282193]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf Dec 6 04:58:17 localhost nova_compute[282193]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf Dec 6 04:58:17 localhost nova_compute[282193]: INFO:__main__:Deleting /etc/ceph Dec 6 04:58:17 localhost nova_compute[282193]: INFO:__main__:Creating directory /etc/ceph Dec 6 04:58:17 localhost nova_compute[282193]: INFO:__main__:Setting permission for /etc/ceph Dec 6 04:58:17 localhost nova_compute[282193]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf Dec 6 04:58:17 localhost nova_compute[282193]: INFO:__main__:Setting permission for 
/etc/ceph/ceph.conf Dec 6 04:58:17 localhost nova_compute[282193]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring Dec 6 04:58:17 localhost nova_compute[282193]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring Dec 6 04:58:17 localhost nova_compute[282193]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey Dec 6 04:58:17 localhost nova_compute[282193]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey Dec 6 04:58:17 localhost nova_compute[282193]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey Dec 6 04:58:17 localhost nova_compute[282193]: INFO:__main__:Deleting /var/lib/nova/.ssh/config Dec 6 04:58:17 localhost nova_compute[282193]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config Dec 6 04:58:17 localhost nova_compute[282193]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config Dec 6 04:58:17 localhost nova_compute[282193]: INFO:__main__:Deleting /usr/sbin/iscsiadm Dec 6 04:58:17 localhost nova_compute[282193]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm Dec 6 04:58:17 localhost nova_compute[282193]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm Dec 6 04:58:17 localhost nova_compute[282193]: INFO:__main__:Writing out command to execute Dec 6 04:58:17 localhost nova_compute[282193]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf Dec 6 04:58:17 localhost nova_compute[282193]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring Dec 6 04:58:17 localhost nova_compute[282193]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ Dec 6 04:58:17 localhost nova_compute[282193]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey Dec 6 04:58:17 localhost nova_compute[282193]: INFO:__main__:Setting permission for 
/var/lib/nova/.ssh/config Dec 6 04:58:17 localhost nova_compute[282193]: ++ cat /run_command Dec 6 04:58:17 localhost nova_compute[282193]: + CMD=nova-compute Dec 6 04:58:17 localhost nova_compute[282193]: + ARGS= Dec 6 04:58:17 localhost nova_compute[282193]: + sudo kolla_copy_cacerts Dec 6 04:58:17 localhost nova_compute[282193]: + [[ ! -n '' ]] Dec 6 04:58:17 localhost nova_compute[282193]: + . kolla_extend_start Dec 6 04:58:17 localhost nova_compute[282193]: Running command: 'nova-compute' Dec 6 04:58:17 localhost nova_compute[282193]: + echo 'Running command: '\''nova-compute'\''' Dec 6 04:58:17 localhost nova_compute[282193]: + umask 0022 Dec 6 04:58:17 localhost nova_compute[282193]: + exec nova-compute Dec 6 04:58:18 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6960 DF PROTO=TCP SPT=55472 DPT=9102 SEQ=2212927204 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52E58FF00000000001030307) Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.027 282197 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.028 282197 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.028 282197 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.028 282197 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.150 282197 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F 
node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.172 282197 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.172 282197 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.647 282197 INFO nova.virt.driver [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.811 282197 INFO nova.compute.provider_config [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] No provider configs found in /etc/nova/provider_config/. 
If files are present, ensure the Nova process has access.#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.820 282197 DEBUG oslo_concurrency.lockutils [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.821 282197 DEBUG oslo_concurrency.lockutils [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.821 282197 DEBUG oslo_concurrency.lockutils [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.821 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.821 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.821 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.822 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] command line args: [] 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.822 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.822 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.822 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] allow_resize_to_same_host = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.822 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] arq_binding_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.822 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] backdoor_port = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.823 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] backdoor_socket = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.823 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] block_device_allocate_retries 
= 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.823 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.823 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] cert = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.823 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] compute_driver = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.823 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] compute_monitors = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.823 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] config_dir = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.824 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] config_drive_format = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.824 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] config_file = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.824 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] config_source = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.824 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] console_host = np0005548789.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.824 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] control_exchange = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.824 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] cpu_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.824 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] daemon = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.825 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] debug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.825 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost 
nova_compute[282193]: 2025-12-06 09:58:19.825 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] default_availability_zone = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.825 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] default_ephemeral_format = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.825 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.825 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] default_schedule_zone = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.825 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] disk_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.826 282197 DEBUG oslo_service.service 
[None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] enable_new_services = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.826 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] enabled_apis = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.826 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] enabled_ssl_apis = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.826 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] flat_injected = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.826 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] force_config_drive = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.826 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] force_raw_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.826 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] graceful_shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.827 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] 
heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.827 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] host = np0005548789.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.827 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] initial_cpu_allocation_ratio = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.827 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] initial_disk_allocation_ratio = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.827 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] initial_ram_allocation_ratio = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.827 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] injected_network_template = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.828 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] instance_build_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.828 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - 
- - - - -] instance_delete_interval = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.828 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] instance_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.828 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] instance_name_template = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.828 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] instance_usage_audit = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.828 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] instance_usage_audit_period = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.828 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] instance_uuid_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.829 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] instances_path = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.829 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] 
internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.829 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.829 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] live_migration_retry_count = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.829 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] log_config_append = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.829 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.829 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] log_dir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.830 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] log_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.830 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] log_options = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 
6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.830 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] log_rotate_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.830 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] log_rotate_interval_type = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.830 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] log_rotation_type = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.830 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.830 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.831 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538. Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.831 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.831 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.831 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] long_rpc_timeout = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.831 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] max_concurrent_builds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.831 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.831 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] max_concurrent_snapshots = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.832 282197 DEBUG oslo_service.service [None 
req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] max_local_block_devices = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.832 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] max_logfile_count = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.832 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] max_logfile_size_mb = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.832 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.832 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] metadata_listen = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.832 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] metadata_listen_port = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.833 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] metadata_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.833 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] migrate_max_retries = -1 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.833 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] mkisofs_cmd = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.833 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] my_block_storage_ip = 192.168.122.107 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.833 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] my_ip = 192.168.122.107 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.833 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] network_allocate_retries = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.833 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.834 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] osapi_compute_listen = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.834 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] osapi_compute_listen_port = 8774 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.834 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] osapi_compute_unique_server_name_scope = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.834 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] osapi_compute_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.834 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] password_length = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.834 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] periodic_enable = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.834 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] periodic_fuzzy_delay = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.835 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] pointer_model = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.835 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] preallocate_images = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost 
nova_compute[282193]: 2025-12-06 09:58:19.835 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] publish_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.835 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] pybasedir = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.835 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] ram_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.835 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] rate_limit_burst = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.835 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] rate_limit_except_level = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.836 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] rate_limit_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.836 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] reboot_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.836 282197 DEBUG oslo_service.service [None 
req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] reclaim_instance_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.836 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] record = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.836 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] reimage_timeout_per_gb = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.836 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] report_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.836 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] rescue_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.837 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] reserved_host_cpus = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.837 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] reserved_host_disk_mb = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.837 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] reserved_host_memory_mb = 512 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.837 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] reserved_huge_pages = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.837 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] resize_confirm_window = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.837 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] resize_fs_using_block_device = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.837 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.838 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] rootwrap_config = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.838 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] rpc_response_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.838 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] run_external_periodic_tasks = True log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.838 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.838 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.838 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.838 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.839 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] service_down_time = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.839 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] servicegroup_driver = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.839 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] shelved_offload_time = 0 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.839 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] shelved_poll_interval = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.839 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.839 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] source_is_ipv6 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.839 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] ssl_only = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.839 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] state_path = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.840 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] sync_power_state_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.840 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] sync_power_state_pool_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Dec 6 04:58:19 localhost 
nova_compute[282193]: 2025-12-06 09:58:19.840 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] syslog_log_facility = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.840 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] tempdir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.840 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] timeout_nbd = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.840 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.840 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] update_resources_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.841 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] use_cow_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.841 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] use_eventlog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.841 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] use_journal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.841 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] use_json = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.841 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] use_rootwrap_daemon = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.841 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] use_stderr = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.841 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] use_syslog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.842 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] vcpu_pin_set = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.842 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] vif_plugging_is_fatal = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.842 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] vif_plugging_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.842 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] virt_mkfs = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.842 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] volume_usage_poll_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.842 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] watch_log_file = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.843 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] web = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.843 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.843 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] oslo_concurrency.lock_path = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.843 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.843 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.843 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] oslo_messaging_metrics.metrics_process_name = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.844 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.844 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.844 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] api.auth_strategy = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.844 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] api.compute_link_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.844 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.844 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] api.dhcp_domain = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.844 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] api.enable_instance_password = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.845 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] api.glance_link_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.845 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.845 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.845 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.845 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.845 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] api.local_metadata_per_cell = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.845 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] api.max_limit = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.846 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] api.metadata_cache_expiration = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.846 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] api.neutron_default_tenant_id = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.846 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] api.use_forwarded_for = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.846 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] api.use_neutron_default_nets = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.846 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.846 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.846 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.847 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] api.vendordata_dynamic_ssl_certfile = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.847 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.847 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] api.vendordata_jsonfile_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.847 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] api.vendordata_providers = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.847 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] cache.backend = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.847 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] cache.backend_argument = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.847 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] cache.config_prefix = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.848 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] cache.dead_timeout = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.848 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] cache.debug_cache_backend = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.848 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] cache.enable_retry_client = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.848 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] cache.enable_socket_keepalive = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.848 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] cache.enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.848 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] cache.expiration_time = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.848 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.849 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] cache.hashclient_retry_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.849 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] cache.memcache_dead_retry = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.849 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] cache.memcache_password = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.849 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.849 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.849 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] cache.memcache_pool_maxsize = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.849 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.850 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] cache.memcache_sasl_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.850 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] cache.memcache_servers = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.850 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] cache.memcache_socket_timeout = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.850 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] cache.memcache_username = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.850 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] cache.proxies = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.850 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] cache.retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.850 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] cache.retry_delay = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.851 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] cache.socket_keepalive_count = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.851 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] cache.socket_keepalive_idle = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.851 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.851 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] cache.tls_allowed_ciphers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.851 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] cache.tls_cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.851 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] cache.tls_certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.851 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] cache.tls_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.852 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] cache.tls_keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.852 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] cinder.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.852 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] cinder.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.852 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] cinder.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.852 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] cinder.catalog_info = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.852 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] cinder.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.852 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] cinder.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.853 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] cinder.cross_az_attach = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.853 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] cinder.debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.853 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] cinder.endpoint_template = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.853 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] cinder.http_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.853 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] cinder.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.853 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] cinder.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.854 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] cinder.os_region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.854 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] cinder.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.854 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] cinder.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.854 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.854 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] compute.cpu_dedicated_set = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.854 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] compute.cpu_shared_set = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.854 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.854 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.855 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.855 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.855 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.855 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.855 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.855 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.856 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] compute.vmdk_allowed_types = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.856 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] conductor.workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.856 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] console.allowed_origins = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.856 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] console.ssl_ciphers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.856 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] console.ssl_minimum_version = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.856 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] consoleauth.token_ttl = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.856 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] cyborg.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.857 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] cyborg.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.857 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] cyborg.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.857 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] cyborg.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.857 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] cyborg.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.857 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] cyborg.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.857 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] cyborg.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.857 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] cyborg.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.858 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] cyborg.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.858 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] cyborg.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.858 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] cyborg.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.858 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] cyborg.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.858 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] cyborg.service_type = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.858 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] cyborg.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.858 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] cyborg.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.858 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.859 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] cyborg.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.859 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] cyborg.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.859 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] cyborg.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.859 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] database.backend = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.859 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] database.connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.859 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] database.connection_debug = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.860 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] database.connection_parameters = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.860 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.860 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] database.connection_trace = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.860 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.860 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] database.db_max_retries = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.860 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.861 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.861 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] database.max_overflow = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.861 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] database.max_pool_size = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.861 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] database.max_retries = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.861 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] database.mysql_enable_ndb = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.862 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] database.mysql_sql_mode = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.862 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.862 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] database.pool_timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.862 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] database.retry_interval = 10 log_opt_values
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.862 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] database.slave_connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.862 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.863 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] api_database.backend = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.863 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] api_database.connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.863 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] api_database.connection_debug = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.863 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] api_database.connection_parameters = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.863 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.863 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] api_database.connection_trace = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.863 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.864 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] api_database.db_max_retries = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.864 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.864 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.864 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] api_database.max_overflow = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.864 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] api_database.max_pool_size = 5 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.864 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] api_database.max_retries = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.865 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] api_database.mysql_enable_ndb = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.865 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] api_database.mysql_sql_mode = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.865 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.865 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] api_database.pool_timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.865 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] api_database.retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.865 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] api_database.slave_connection = **** log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.865 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.866 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] devices.enabled_mdev_types = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.866 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.866 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.866 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.866 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] glance.api_servers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.866 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] glance.cafile = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.866 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] glance.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.867 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] glance.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.867 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] glance.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.867 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] glance.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.867 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] glance.debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.867 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.867 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] glance.enable_certificate_validation = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.867 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] glance.enable_rbd_download = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.868 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] glance.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.868 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] glance.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.868 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] glance.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.868 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] glance.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.868 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] glance.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.868 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] glance.num_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost 
nova_compute[282193]: 2025-12-06 09:58:19.869 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] glance.rbd_ceph_conf = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.869 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] glance.rbd_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.869 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] glance.rbd_pool = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.869 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] glance.rbd_user = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.869 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] glance.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.869 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] glance.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.869 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] glance.service_type = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.870 282197 DEBUG oslo_service.service [None 
req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] glance.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.870 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] glance.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.870 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.870 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] glance.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.870 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] glance.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.870 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.871 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] glance.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.871 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] 
guestfs.debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.871 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] hyperv.config_drive_cdrom = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.871 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.871 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] hyperv.dynamic_memory_ratio = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.871 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.872 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] hyperv.enable_remotefx = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.872 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] hyperv.instances_path_share = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.872 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] hyperv.iscsi_initiator_list = [] 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.872 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] hyperv.limit_cpu_features = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.872 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.873 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.873 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.873 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.873 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] hyperv.qemu_img_cmd = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.873 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] hyperv.use_multipath_io = False 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.873 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.874 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.874 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] hyperv.vswitch_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.874 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.874 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] mks.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.874 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] mks.mksproxy_base_url = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.875 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] image_cache.manager_interval = 2400 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.875 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.875 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.875 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.875 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.875 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] image_cache.subdirectory_name = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.876 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] ironic.api_max_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.876 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] ironic.api_retry_interval = 
2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.876 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] ironic.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.876 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] ironic.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.876 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] ironic.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.876 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] ironic.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.877 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] ironic.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.877 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] ironic.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.877 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] ironic.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 
04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.877 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] ironic.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.877 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] ironic.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.877 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] ironic.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.878 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] ironic.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.878 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] ironic.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.878 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] ironic.partition_key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.878 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] ironic.peer_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.878 282197 DEBUG oslo_service.service [None 
req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] ironic.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.878 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.879 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] ironic.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.879 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] ironic.service_type = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.879 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] ironic.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.879 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] ironic.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.879 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.879 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] 
ironic.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.880 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] ironic.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.880 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] ironic.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.880 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] key_manager.backend = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.880 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] key_manager.fixed_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.880 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] barbican.auth_endpoint = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.881 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] barbican.barbican_api_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.881 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] barbican.barbican_endpoint = None 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.881 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.881 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] barbican.barbican_region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.881 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] barbican.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.881 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] barbican.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.882 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] barbican.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.882 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] barbican.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.882 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] barbican.keyfile = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.882 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] barbican.number_of_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.882 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] barbican.retry_delay = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.882 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.883 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] barbican.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.883 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] barbican.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.883 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] barbican.verify_ssl = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.883 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] barbican.verify_ssl_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 
04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.883 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.883 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.884 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] barbican_service_user.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.884 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.884 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.884 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.884 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] barbican_service_user.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 
localhost nova_compute[282193]: 2025-12-06 09:58:19.884 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.884 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] barbican_service_user.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.885 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] vault.approle_role_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.885 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] vault.approle_secret_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.885 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] vault.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.885 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] vault.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.885 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] vault.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.885 282197 DEBUG 
oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] vault.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.886 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] vault.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.886 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] vault.kv_mountpoint = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.886 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] vault.kv_version = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.886 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] vault.namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.886 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] vault.root_token_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.887 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] vault.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.887 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] vault.ssl_ca_crt_file = None 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.887 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] vault.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.887 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] vault.use_ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.887 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] vault.vault_url = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.887 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] keystone.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.888 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] keystone.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.888 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] keystone.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.888 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] keystone.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 
04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.888 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] keystone.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.888 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] keystone.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.888 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] keystone.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.888 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] keystone.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.889 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] keystone.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.889 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] keystone.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.889 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] keystone.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.889 282197 DEBUG 
oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] keystone.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.889 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] keystone.service_type = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.889 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] keystone.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.889 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] keystone.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.890 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.890 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] keystone.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.890 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] keystone.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.890 282197 DEBUG oslo_service.service [None 
req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] keystone.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.890 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] libvirt.connection_uri = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.890 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] libvirt.cpu_mode = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.890 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] libvirt.cpu_model_extra_flags = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.891 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] libvirt.cpu_models = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.891 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.891 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.891 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] 
libvirt.cpu_power_management = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.891 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.891 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.891 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] libvirt.device_detach_timeout = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.892 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] libvirt.disk_cachemodes = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.892 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] libvirt.disk_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.892 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] libvirt.enabled_perf_events = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.892 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] libvirt.file_backed_memory = 0 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.892 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] libvirt.gid_maps = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.892 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] libvirt.hw_disk_discard = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.893 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] libvirt.hw_machine_type = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.893 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] libvirt.images_rbd_ceph_conf = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.893 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.893 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.893 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] libvirt.images_rbd_glance_store_name = 
default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.893 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] libvirt.images_rbd_pool = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.893 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] libvirt.images_type = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.893 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] libvirt.images_volume_group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.894 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] libvirt.inject_key = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.894 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] libvirt.inject_partition = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.894 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] libvirt.inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.894 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] libvirt.iscsi_iface = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.894 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] libvirt.iser_use_multipath = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.894 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.895 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.895 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.895 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.895 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.895 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] libvirt.live_migration_inbound_addr = None 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.895 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.896 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.896 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] libvirt.live_migration_scheme = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.896 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.896 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.896 282197 WARNING oslo_config.cfg [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal ( Dec 6 04:58:19 localhost nova_compute[282193]: live_migration_uri is deprecated for removal in favor of two other options that Dec 6 04:58:19 localhost nova_compute[282193]: allow to change live migration scheme 
and target URI: ``live_migration_scheme`` Dec 6 04:58:19 localhost nova_compute[282193]: and ``live_migration_inbound_addr`` respectively. Dec 6 04:58:19 localhost nova_compute[282193]: ). Its value may be silently ignored in the future.#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.896 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] libvirt.live_migration_uri = qemu+ssh://nova@%s/system?keyfile=/var/lib/nova/.ssh/ssh-privatekey log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.897 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] libvirt.live_migration_with_native_tls = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.897 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] libvirt.max_queues = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.897 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.897 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] libvirt.nfs_mount_options = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.897 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] libvirt.nfs_mount_point_base = /var/lib/nova/mnt log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.897 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.897 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] libvirt.num_iser_scan_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.898 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.898 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.898 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] libvirt.num_pcie_ports = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.898 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] libvirt.num_volume_scan_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.898 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] libvirt.pmem_namespaces = [] log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.898 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] libvirt.quobyte_client_cfg = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.898 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.899 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] libvirt.rbd_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.899 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.899 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.900 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] libvirt.rbd_secret_uuid = 1939e851-b10c-5c3b-9bb7-8e7f380233e8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.900 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] libvirt.rbd_user = openstack 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.900 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.901 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.901 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] libvirt.rescue_image_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.901 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] libvirt.rescue_kernel_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.901 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] libvirt.rescue_ramdisk_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.902 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] libvirt.rng_dev_path = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.902 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] libvirt.rx_queue_size = 512 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.902 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] libvirt.smbfs_mount_options = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.903 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.903 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] libvirt.snapshot_compression = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.903 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] libvirt.snapshot_image_format = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.904 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] libvirt.snapshots_directory = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.905 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.905 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] libvirt.swtpm_enabled = True 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.905 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] libvirt.swtpm_group = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.905 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] libvirt.swtpm_user = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.905 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] libvirt.sysinfo_serial = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.905 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] libvirt.tx_queue_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.905 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] libvirt.uid_maps = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.906 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.906 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] libvirt.virt_type = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 
04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.906 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] libvirt.volume_clear = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.906 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] libvirt.volume_clear_size = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.906 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] libvirt.volume_use_multipath = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.906 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] libvirt.vzstorage_cache_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.906 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] libvirt.vzstorage_log_path = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.907 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] libvirt.vzstorage_mount_group = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.907 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] libvirt.vzstorage_mount_opts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 
04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.907 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] libvirt.vzstorage_mount_perms = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.907 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.907 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] libvirt.vzstorage_mount_user = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.907 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.908 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] neutron.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.908 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] neutron.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.908 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] neutron.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost 
nova_compute[282193]: 2025-12-06 09:58:19.908 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] neutron.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.908 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] neutron.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.908 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] neutron.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.908 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] neutron.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.909 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] neutron.default_floating_pool = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.909 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] neutron.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.909 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.909 282197 
DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] neutron.http_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.909 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] neutron.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.909 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] neutron.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.909 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] neutron.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.909 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.910 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] neutron.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.910 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] neutron.ovs_bridge = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.910 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - 
-] neutron.physnets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.910 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] neutron.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.910 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.910 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] neutron.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.910 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] neutron.service_type = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.911 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] neutron.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.911 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] neutron.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.911 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] neutron.status_code_retry_delay = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.911 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] neutron.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.911 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] neutron.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.911 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] neutron.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.911 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.912 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] notifications.default_level = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.912 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.912 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] notifications.notify_on_state_change = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.912 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.912 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] pci.alias = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.912 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] pci.device_spec = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.912 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] pci.report_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.913 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] placement.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.913 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] placement.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.913 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] placement.auth_url = http://keystone-internal.openstack.svc:5000 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.913 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] placement.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.913 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] placement.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.913 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] placement.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.913 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] placement.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.914 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] placement.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.914 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] placement.default_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.914 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] placement.default_domain_name = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.914 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] placement.domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.914 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] placement.domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.914 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] placement.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.914 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] placement.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.915 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] placement.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.915 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] placement.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.915 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] placement.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 
localhost nova_compute[282193]: 2025-12-06 09:58:19.915 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] placement.password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.915 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] placement.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.915 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] placement.project_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.916 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] placement.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.916 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] placement.project_name = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.916 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] placement.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.916 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] placement.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.916 
282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] placement.service_type = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.916 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] placement.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.916 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] placement.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.917 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.917 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] placement.system_scope = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.917 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] placement.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.917 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] placement.trust_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.917 282197 DEBUG oslo_service.service [None 
req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] placement.user_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.917 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] placement.user_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.917 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] placement.user_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.917 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] placement.username = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.918 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] placement.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.918 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] placement.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.918 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] quota.cores = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.918 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] 
quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.918 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] quota.driver = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.918 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.919 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.919 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] quota.injected_files = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.919 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] quota.instances = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.919 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] quota.key_pairs = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.919 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] quota.metadata_items = 128 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.919 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] quota.ram = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.919 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] quota.recheck_quota = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.920 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] quota.server_group_members = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.920 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] quota.server_groups = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.920 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] rdp.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.920 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] rdp.html5_proxy_base_url = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.920 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m 
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.920 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.921 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.921 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.921 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] scheduler.max_attempts = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.921 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.921 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.921 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.921 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.922 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.922 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] scheduler.workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.922 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.922 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.922 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.922 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.922 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.923 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.923 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.923 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.923 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.923 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.923 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.924 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.924 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.924 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.924 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] 
filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.924 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.924 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.924 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.925 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.925 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.925 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.925 282197 DEBUG 
oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.925 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.925 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.925 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] metrics.required = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.926 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] metrics.weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.926 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] metrics.weight_of_unavailable = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.926 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] metrics.weight_setting = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 
09:58:19.926 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] serial_console.base_url = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.926 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] serial_console.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.926 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] serial_console.port_range = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.926 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.927 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.927 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.927 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 
09:58:19.927 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] service_user.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.927 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] service_user.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.927 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.927 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.928 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.928 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] service_user.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.928 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.928 282197 DEBUG oslo_service.service [None 
req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.928 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] service_user.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.928 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] spice.agent_enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.928 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] spice.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.929 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] spice.html5proxy_base_url = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.929 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] spice.html5proxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.929 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] spice.html5proxy_port = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.929 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - 
- - - -] spice.image_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.929 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] spice.jpeg_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.929 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] spice.playback_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.929 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] spice.server_listen = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.930 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.930 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] spice.streaming_mode = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.930 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] spice.zlib_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.930 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] upgrade_levels.baseapi = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.930 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] upgrade_levels.cert = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.930 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] upgrade_levels.compute = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.930 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] upgrade_levels.conductor = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.930 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] upgrade_levels.scheduler = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.931 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.931 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.931 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.931 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.931 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.931 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.931 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.932 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.932 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.932 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] vmware.api_retry_count = 10 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.932 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] vmware.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.932 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] vmware.cache_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.932 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] vmware.cluster_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.932 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] vmware.connection_pool_size = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.932 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] vmware.console_delay_seconds = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.933 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] vmware.datastore_regex = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.933 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] vmware.host_ip = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 
localhost nova_compute[282193]: 2025-12-06 09:58:19.933 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] vmware.host_password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.933 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] vmware.host_port = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.933 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] vmware.host_username = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.933 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] vmware.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.933 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] vmware.integration_bridge = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.934 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] vmware.maximum_objects = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.934 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] vmware.pbm_default_policy = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.934 282197 DEBUG 
oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] vmware.pbm_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.934 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] vmware.pbm_wsdl_location = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.934 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] vmware.serial_log_dir = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.934 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] vmware.serial_port_proxy_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.934 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.935 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] vmware.task_poll_interval = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.935 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] vmware.use_linked_clone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.935 282197 DEBUG oslo_service.service [None 
req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] vmware.vnc_keymap = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.935 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] vmware.vnc_port = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.935 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] vmware.vnc_port_total = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.935 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] vnc.auth_schemes = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.936 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] vnc.enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.936 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] vnc.novncproxy_base_url = http://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.936 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] vnc.novncproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.936 282197 DEBUG oslo_service.service [None 
req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] vnc.novncproxy_port = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.936 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] vnc.server_listen = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.937 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] vnc.server_proxyclient_address = 192.168.122.107 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.937 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] vnc.vencrypt_ca_certs = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.937 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] vnc.vencrypt_client_cert = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.937 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] vnc.vencrypt_client_key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.938 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.938 282197 DEBUG oslo_service.service [None 
req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.938 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.938 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.938 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.938 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] workarounds.disable_rootwrap = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.939 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.939 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 
09:58:19.939 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.939 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.939 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.939 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.940 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.940 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.940 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.940 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.940 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.940 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.940 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.941 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.941 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.941 282197 DEBUG oslo_service.service [None 
req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] wsgi.api_paste_config = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.941 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] wsgi.client_socket_timeout = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.941 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] wsgi.default_pool_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.941 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] wsgi.keep_alive = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.941 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] wsgi.max_header_line = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.942 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] wsgi.secure_proxy_ssl_header = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.942 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] wsgi.ssl_ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.942 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] 
wsgi.ssl_cert_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.942 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] wsgi.ssl_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.942 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] wsgi.tcp_keepidle = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.942 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] wsgi.wsgi_log_format = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.942 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] zvm.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.942 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] zvm.cloud_connector_url = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.943 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] zvm.image_tmp_path = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.943 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - 
- - - -] zvm.reachable_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.943 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.943 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] oslo_policy.enforce_scope = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.943 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.943 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] oslo_policy.policy_dirs = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.944 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] oslo_policy.policy_file = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.944 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.944 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 
- - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.944 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.944 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.944 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost systemd[1]: tmp-crun.pXzmmy.mount: Deactivated successfully. 
Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.944 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.945 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.945 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] remote_debug.host = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.945 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] remote_debug.port = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.945 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.945 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.945 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.945 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.946 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.946 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.946 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.946 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.946 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.946 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] 
oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.946 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.946 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.947 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.947 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.947 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.947 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.947 282197 
DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.947 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.947 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.948 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.948 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.948 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.948 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.948 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.948 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.949 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] oslo_messaging_rabbit.ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.949 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] oslo_messaging_rabbit.ssl_ca_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.949 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] oslo_messaging_rabbit.ssl_cert_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.949 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.949 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] oslo_messaging_rabbit.ssl_key_file = 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.949 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] oslo_messaging_rabbit.ssl_version = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.949 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.950 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.950 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.950 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.950 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] oslo_limit.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.950 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] oslo_limit.auth_type = 
password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.950 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] oslo_limit.auth_url = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.950 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] oslo_limit.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.951 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] oslo_limit.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.951 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] oslo_limit.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.951 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] oslo_limit.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.951 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.951 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] oslo_limit.default_domain_id = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.951 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.951 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] oslo_limit.domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.952 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] oslo_limit.domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.952 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] oslo_limit.endpoint_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.952 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] oslo_limit.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost podman[282227]: 2025-12-06 09:58:19.952053696 +0000 UTC m=+0.109775020 container health_status d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.952 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] oslo_limit.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.952 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] oslo_limit.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.952 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] oslo_limit.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.952 282197 DEBUG oslo_service.service 
[None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] oslo_limit.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.953 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] oslo_limit.password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.953 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] oslo_limit.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.953 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.953 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] oslo_limit.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.953 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] oslo_limit.project_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.953 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] oslo_limit.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.954 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] 
oslo_limit.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.954 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] oslo_limit.service_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.954 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] oslo_limit.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.954 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.954 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.954 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] oslo_limit.system_scope = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.954 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] oslo_limit.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.955 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] oslo_limit.trust_id = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.955 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] oslo_limit.user_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.955 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] oslo_limit.user_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.955 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] oslo_limit.user_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.955 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] oslo_limit.username = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.955 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] oslo_limit.valid_interfaces = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.956 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] oslo_limit.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.956 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.956 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.956 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] oslo_reports.log_dir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.956 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.957 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.957 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.957 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.957 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] 
vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.957 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.958 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.958 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] vif_plug_ovs_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.958 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.958 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.958 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.958 282197 DEBUG oslo_service.service [None 
req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] vif_plug_ovs_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.959 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.959 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.959 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.959 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.960 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] os_vif_linux_bridge.iptables_top_regex = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.960 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.960 282197 DEBUG 
oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] os_vif_linux_bridge.use_ipv6 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.960 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.960 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] os_vif_ovs.isolate_vif = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.960 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] os_vif_ovs.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.960 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] os_vif_ovs.ovs_vsctl_timeout = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.961 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] os_vif_ovs.ovsdb_connection = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.961 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] os_vif_ovs.ovsdb_interface = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.961 282197 DEBUG 
oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] os_vif_ovs.per_port_bridge = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.961 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] os_brick.lock_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.961 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.961 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.961 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] privsep_osbrick.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.962 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] privsep_osbrick.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.962 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.962 282197 DEBUG oslo_service.service [None 
req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] privsep_osbrick.logger_name = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.962 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.962 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] privsep_osbrick.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.962 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] nova_sys_admin.capabilities = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.962 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] nova_sys_admin.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.962 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] nova_sys_admin.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.963 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] nova_sys_admin.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.963 282197 DEBUG oslo_service.service [None 
req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.963 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] nova_sys_admin.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.963 282197 DEBUG oslo_service.service [None req-26677738-55ed-441a-a4de-14131a4fde81 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.964 282197 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m Dec 6 04:58:19 localhost podman[282227]: 2025-12-06 09:58:19.965269174 +0000 UTC m=+0.122990508 container exec_died d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', 
'--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Dec 6 04:58:19 localhost systemd[1]: d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538.service: Deactivated successfully. Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.979 282197 INFO nova.virt.node [None req-c2cff4e5-3d1d-47f9-bef4-e7abce6c5152 - - - - - -] Determined node identity 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad from /var/lib/nova/compute_id#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.979 282197 DEBUG nova.virt.libvirt.host [None req-c2cff4e5-3d1d-47f9-bef4-e7abce6c5152 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.980 282197 DEBUG nova.virt.libvirt.host [None req-c2cff4e5-3d1d-47f9-bef4-e7abce6c5152 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.980 282197 DEBUG nova.virt.libvirt.host [None req-c2cff4e5-3d1d-47f9-bef4-e7abce6c5152 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.980 282197 DEBUG nova.virt.libvirt.host [None req-c2cff4e5-3d1d-47f9-bef4-e7abce6c5152 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.988 282197 DEBUG nova.virt.libvirt.host [None req-c2cff4e5-3d1d-47f9-bef4-e7abce6c5152 - - - - - -] Registering for lifecycle events _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.989 282197 DEBUG nova.virt.libvirt.host [None req-c2cff4e5-3d1d-47f9-bef4-e7abce6c5152 - - - - - -] Registering for connection events: _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.990 282197 INFO nova.virt.libvirt.driver [None req-c2cff4e5-3d1d-47f9-bef4-e7abce6c5152 - - - - - -] Connection event '1' reason 'None'#033[00m Dec 6 04:58:19 localhost nova_compute[282193]: 2025-12-06 09:58:19.994 282197 INFO nova.virt.libvirt.host [None req-c2cff4e5-3d1d-47f9-bef4-e7abce6c5152 - - - - - -] Libvirt host capabilities
[multi-line libvirt capabilities XML follows across continuation lines; the markup was stripped during log extraction. Recoverable values: host uuid 0b20d7bd-1341-4912-afa7-eec4e2b0c648; host CPU arch x86_64, model EPYC-Rome-v4, vendor AMD; migration transports tcp and rdma; NUMA cell memory 16116612 KiB with page counts 4029153, 0, 0; security models selinux (doi 0, base labels system_u:system_r:svirt_t:s0 and system_u:system_r:svirt_tcg_t:s0) and dac (doi 0, +107:+107); hvm guest support at wordsize 32 and 64 via emulator /usr/libexec/qemu-kvm with machine types pc-i440fx-rhel7.6.0 (canonical pc), pc-q35-rhel9.8.0 (canonical q35), pc-q35-rhel9.6.0, pc-q35-rhel8.6.0, pc-q35-rhel9.4.0, pc-q35-rhel8.5.0, pc-q35-rhel8.3.0, pc-q35-rhel7.6.0, pc-q35-rhel8.4.0, pc-q35-rhel9.2.0, pc-q35-rhel8.2.0, pc-q35-rhel9.0.0, pc-q35-rhel8.0.0, pc-q35-rhel8.1.0]
#033[00m Dec 6 04:58:20 localhost nova_compute[282193]: 2025-12-06 09:58:20.000 282197 DEBUG nova.virt.libvirt.host [None req-c2cff4e5-3d1d-47f9-bef4-e7abce6c5152 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m Dec 6 04:58:20 localhost nova_compute[282193]: 2025-12-06 09:58:20.003 282197 DEBUG nova.virt.libvirt.host [None req-c2cff4e5-3d1d-47f9-bef4-e7abce6c5152 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
[multi-line libvirt domainCapabilities XML follows across continuation lines; the markup was stripped during log extraction. Recoverable values: emulator path /usr/libexec/qemu-kvm, domain type kvm, machine pc-q35-rhel9.8.0, arch i686; OS loader /usr/share/OVMF/OVMF_CODE.secboot.fd with types rom and pflash, readonly yes/no, secure no, enum values on/off; host-model CPU EPYC-Rome, vendor AMD; supported custom CPU models beginning 486, 486-v1, Broadwell, Broadwell-IBRS, Broadwell-noTSX, Broadwell-noTSX-IBRS, Broadwell-v1, Broadwell-v2 (list continues)]
nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Broadwell-v3 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Broadwell-v4 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Cascadelake-Server Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Cascadelake-Server-noTSX Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost 
nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Cascadelake-Server-v1 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Cascadelake-Server-v2 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Cascadelake-Server-v3 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost 
nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Cascadelake-Server-v4 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Cascadelake-Server-v5 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Conroe Dec 6 04:58:20 localhost nova_compute[282193]: Conroe-v1 Dec 6 04:58:20 localhost nova_compute[282193]: Cooperlake Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost 
nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Cooperlake-v1 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Cooperlake-v2 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 
localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Denverton Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Denverton-v1 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Denverton-v2 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Denverton-v3 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dhyana Dec 6 04:58:20 localhost nova_compute[282193]: Dhyana-v1 Dec 6 04:58:20 localhost nova_compute[282193]: Dhyana-v2 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: EPYC Dec 6 04:58:20 localhost nova_compute[282193]: EPYC-Genoa Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost 
nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: EPYC-Genoa-v1 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost 
nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: EPYC-IBPB Dec 6 04:58:20 localhost nova_compute[282193]: EPYC-Milan Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: EPYC-Milan-v1 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: EPYC-Milan-v2 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: 
Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: EPYC-Rome Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: EPYC-Rome-v1 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: EPYC-Rome-v2 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: EPYC-Rome-v3 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: EPYC-Rome-v4 Dec 6 04:58:20 localhost nova_compute[282193]: EPYC-v1 Dec 6 04:58:20 localhost nova_compute[282193]: EPYC-v2 Dec 6 04:58:20 localhost nova_compute[282193]: EPYC-v3 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: EPYC-v4 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: GraniteRapids Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 
04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost 
nova_compute[282193]: GraniteRapids-v1 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost 
nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: GraniteRapids-v2 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost 
nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Haswell Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Haswell-IBRS Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Haswell-noTSX Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 
localhost nova_compute[282193]: Haswell-noTSX-IBRS Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Haswell-v1 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Haswell-v2 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Haswell-v3 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Haswell-v4 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Icelake-Server Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 
04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Icelake-Server-noTSX Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Icelake-Server-v1 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost 
Dec 6 04:58:20 localhost nova_compute[282193]: Icelake-Server-v2 Icelake-Server-v3 Icelake-Server-v4 Icelake-Server-v5 Icelake-Server-v6 Icelake-Server-v7
Dec 6 04:58:20 localhost nova_compute[282193]: IvyBridge IvyBridge-IBRS IvyBridge-v1 IvyBridge-v2 KnightsMill KnightsMill-v1
Dec 6 04:58:20 localhost nova_compute[282193]: Nehalem Nehalem-IBRS Nehalem-v1 Nehalem-v2 Opteron_G1 Opteron_G1-v1 Opteron_G2 Opteron_G2-v1 Opteron_G3 Opteron_G3-v1 Opteron_G4 Opteron_G4-v1 Opteron_G5 Opteron_G5-v1
Dec 6 04:58:20 localhost nova_compute[282193]: Penryn Penryn-v1 SandyBridge SandyBridge-IBRS SandyBridge-v1 SandyBridge-v2
Dec 6 04:58:20 localhost nova_compute[282193]: SapphireRapids SapphireRapids-v1 SapphireRapids-v2 SapphireRapids-v3 SierraForest SierraForest-v1
Dec 6 04:58:20 localhost nova_compute[282193]: Skylake-Client Skylake-Client-IBRS Skylake-Client-noTSX-IBRS Skylake-Client-v1 Skylake-Client-v2 Skylake-Client-v3 Skylake-Client-v4
Dec 6 04:58:20 localhost nova_compute[282193]: Skylake-Server Skylake-Server-IBRS Skylake-Server-noTSX-IBRS Skylake-Server-v1 Skylake-Server-v2 Skylake-Server-v3 Skylake-Server-v4 Skylake-Server-v5
Dec 6 04:58:20 localhost nova_compute[282193]: Snowridge Snowridge-v1 Snowridge-v2 Snowridge-v3 Snowridge-v4 Westmere Westmere-IBRS Westmere-v1 Westmere-v2
Dec 6 04:58:20 localhost nova_compute[282193]: athlon athlon-v1 core2duo core2duo-v1 coreduo coreduo-v1 kvm32 kvm32-v1 kvm64 kvm64-v1 n270 n270-v1 pentium pentium-v1 pentium2 pentium2-v1 pentium3 pentium3-v1 phenom phenom-v1 qemu32 qemu32-v1 qemu64 qemu64-v1
Dec 6 04:58:20 localhost nova_compute[282193]: file anonymous memfd
Dec 6 04:58:20 localhost nova_compute[282193]: disk cdrom floppy lun
Dec 6 04:58:20 localhost nova_compute[282193]: fdc scsi virtio usb sata
Dec 6 04:58:20 localhost nova_compute[282193]: virtio virtio-transitional virtio-non-transitional
Dec 6 04:58:20 localhost nova_compute[282193]: vnc egl-headless dbus
Dec 6 04:58:20 localhost nova_compute[282193]: subsystem
Dec 6 04:58:20 localhost nova_compute[282193]: default mandatory requisite optional
Dec 6 04:58:20 localhost nova_compute[282193]: usb pci scsi
Dec 6 04:58:20 localhost nova_compute[282193]: virtio virtio-transitional virtio-non-transitional
Dec 6 04:58:20 localhost nova_compute[282193]: random egd builtin
Dec 6 04:58:20 localhost nova_compute[282193]: path handle virtiofs
nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: tpm-tis Dec 6 04:58:20 localhost nova_compute[282193]: tpm-crb Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: emulator Dec 6 04:58:20 localhost nova_compute[282193]: external Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: 2.0 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: usb Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: pty Dec 6 04:58:20 localhost nova_compute[282193]: unix Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: qemu Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: builtin Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: default Dec 6 04:58:20 localhost nova_compute[282193]: passt Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: isa Dec 6 04:58:20 
localhost nova_compute[282193]: hyperv Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: null Dec 6 04:58:20 localhost nova_compute[282193]: vc Dec 6 04:58:20 localhost nova_compute[282193]: pty Dec 6 04:58:20 localhost nova_compute[282193]: dev Dec 6 04:58:20 localhost nova_compute[282193]: file Dec 6 04:58:20 localhost nova_compute[282193]: pipe Dec 6 04:58:20 localhost nova_compute[282193]: stdio Dec 6 04:58:20 localhost nova_compute[282193]: udp Dec 6 04:58:20 localhost nova_compute[282193]: tcp Dec 6 04:58:20 localhost nova_compute[282193]: unix Dec 6 04:58:20 localhost nova_compute[282193]: qemu-vdagent Dec 6 04:58:20 localhost nova_compute[282193]: dbus Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: relaxed Dec 6 04:58:20 localhost nova_compute[282193]: vapic Dec 6 04:58:20 localhost nova_compute[282193]: spinlocks Dec 6 04:58:20 localhost nova_compute[282193]: vpindex Dec 6 04:58:20 localhost nova_compute[282193]: runtime Dec 6 04:58:20 localhost nova_compute[282193]: synic Dec 6 04:58:20 localhost nova_compute[282193]: stimer Dec 6 04:58:20 localhost nova_compute[282193]: reset Dec 6 
04:58:20 localhost nova_compute[282193]: vendor_id Dec 6 04:58:20 localhost nova_compute[282193]: frequencies Dec 6 04:58:20 localhost nova_compute[282193]: reenlightenment Dec 6 04:58:20 localhost nova_compute[282193]: tlbflush Dec 6 04:58:20 localhost nova_compute[282193]: ipi Dec 6 04:58:20 localhost nova_compute[282193]: avic Dec 6 04:58:20 localhost nova_compute[282193]: emsr_bitmap Dec 6 04:58:20 localhost nova_compute[282193]: xmm_input Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: 4095 Dec 6 04:58:20 localhost nova_compute[282193]: on Dec 6 04:58:20 localhost nova_compute[282193]: off Dec 6 04:58:20 localhost nova_compute[282193]: off Dec 6 04:58:20 localhost nova_compute[282193]: Linux KVM Hv Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: tdx Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m Dec 6 04:58:20 localhost nova_compute[282193]: 2025-12-06 09:58:20.006 282197 DEBUG nova.virt.libvirt.host [None req-c2cff4e5-3d1d-47f9-bef4-e7abce6c5152 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: /usr/libexec/qemu-kvm Dec 6 04:58:20 localhost nova_compute[282193]: kvm Dec 6 04:58:20 localhost nova_compute[282193]: pc-i440fx-rhel7.6.0 Dec 6 04:58:20 localhost nova_compute[282193]: i686 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 
6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: /usr/share/OVMF/OVMF_CODE.secboot.fd Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: rom Dec 6 04:58:20 localhost nova_compute[282193]: pflash Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: yes Dec 6 04:58:20 localhost nova_compute[282193]: no Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: no Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: on Dec 6 04:58:20 localhost nova_compute[282193]: off Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: on Dec 6 04:58:20 localhost nova_compute[282193]: off Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: EPYC-Rome Dec 6 04:58:20 localhost nova_compute[282193]: AMD Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost 
nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: 486 Dec 6 04:58:20 localhost nova_compute[282193]: 486-v1 Dec 6 04:58:20 localhost nova_compute[282193]: Broadwell Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Broadwell-IBRS Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Broadwell-noTSX Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost 
nova_compute[282193]: Broadwell-noTSX-IBRS Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Broadwell-v1 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Broadwell-v2 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Broadwell-v3 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Broadwell-v4 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Cascadelake-Server Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 
04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Cascadelake-Server-noTSX Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Cascadelake-Server-v1 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Cascadelake-Server-v2 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 
04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Cascadelake-Server-v3 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Cascadelake-Server-v4 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Cascadelake-Server-v5 Dec 6 
04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Conroe Dec 6 04:58:20 localhost nova_compute[282193]: Conroe-v1 Dec 6 04:58:20 localhost nova_compute[282193]: Cooperlake Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Cooperlake-v1 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost 
nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Cooperlake-v2 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Denverton Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Denverton-v1 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Denverton-v2 Dec 6 04:58:20 localhost nova_compute[282193]: 
Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Denverton-v3 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dhyana Dec 6 04:58:20 localhost nova_compute[282193]: Dhyana-v1 Dec 6 04:58:20 localhost nova_compute[282193]: Dhyana-v2 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: EPYC Dec 6 04:58:20 localhost nova_compute[282193]: EPYC-Genoa Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 
localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: EPYC-Genoa-v1 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: EPYC-IBPB Dec 6 04:58:20 localhost nova_compute[282193]: EPYC-Milan Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 
04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: EPYC-Milan-v1 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: EPYC-Milan-v2 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: EPYC-Rome Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: EPYC-Rome-v1 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: EPYC-Rome-v2 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: EPYC-Rome-v3 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: 
Dec 6 04:58:20 localhost nova_compute[282193]: EPYC-Rome-v4 Dec 6 04:58:20 localhost nova_compute[282193]: EPYC-v1 Dec 6 04:58:20 localhost nova_compute[282193]: EPYC-v2 Dec 6 04:58:20 localhost nova_compute[282193]: EPYC-v3 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: EPYC-v4 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: GraniteRapids Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 
Dec 6 04:58:20 localhost nova_compute[282193]: [multi-line libvirt CPU capabilities output; XML markup lost in logging, leaving only the CPU model names, listed here in order: GraniteRapids-v1, GraniteRapids-v2, Haswell, Haswell-IBRS, Haswell-noTSX, Haswell-noTSX-IBRS, Haswell-v1, Haswell-v2, Haswell-v3, Haswell-v4, Icelake-Server, Icelake-Server-noTSX, Icelake-Server-v1, Icelake-Server-v2, Icelake-Server-v3, Icelake-Server-v4, Icelake-Server-v5, Icelake-Server-v6, Icelake-Server-v7, IvyBridge, IvyBridge-IBRS, IvyBridge-v1, IvyBridge-v2, KnightsMill, KnightsMill-v1, Nehalem, Nehalem-IBRS, Nehalem-v1, Nehalem-v2, Opteron_G1, Opteron_G1-v1, Opteron_G2, Opteron_G2-v1, Opteron_G3, Opteron_G3-v1, Opteron_G4, Opteron_G4-v1, Opteron_G5, Opteron_G5-v1, Penryn, Penryn-v1, SandyBridge, SandyBridge-IBRS, SandyBridge-v1, SandyBridge-v2, SapphireRapids, SapphireRapids-v1, SapphireRapids-v2, SapphireRapids-v3, SierraForest, SierraForest-v1, Skylake-Client, Skylake-Client-IBRS, Skylake-Client-noTSX-IBRS, Skylake-Client-v1, Skylake-Client-v2, Skylake-Client-v3, Skylake-Client-v4, Skylake-Server, Skylake-Server-IBRS, Skylake-Server-noTSX-IBRS, Skylake-Server-v1, Skylake-Server-v2, Skylake-Server-v3 (listing continues)]
nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Skylake-Server-v4 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Skylake-Server-v5 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Snowridge Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Snowridge-v1 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost 
nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Snowridge-v2 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Snowridge-v3 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Snowridge-v4 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Westmere Dec 6 04:58:20 localhost nova_compute[282193]: Westmere-IBRS Dec 6 04:58:20 localhost nova_compute[282193]: Westmere-v1 Dec 6 04:58:20 
localhost nova_compute[282193]: Westmere-v2 Dec 6 04:58:20 localhost nova_compute[282193]: athlon Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: athlon-v1 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: core2duo Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: core2duo-v1 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: coreduo Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: coreduo-v1 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: kvm32 Dec 6 04:58:20 localhost nova_compute[282193]: kvm32-v1 Dec 6 04:58:20 localhost nova_compute[282193]: kvm64 Dec 6 04:58:20 localhost nova_compute[282193]: kvm64-v1 Dec 6 04:58:20 localhost nova_compute[282193]: n270 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: n270-v1 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: pentium Dec 6 04:58:20 
localhost nova_compute[282193]: pentium-v1 Dec 6 04:58:20 localhost nova_compute[282193]: pentium2 Dec 6 04:58:20 localhost nova_compute[282193]: pentium2-v1 Dec 6 04:58:20 localhost nova_compute[282193]: pentium3 Dec 6 04:58:20 localhost nova_compute[282193]: pentium3-v1 Dec 6 04:58:20 localhost nova_compute[282193]: phenom Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: phenom-v1 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: qemu32 Dec 6 04:58:20 localhost nova_compute[282193]: qemu32-v1 Dec 6 04:58:20 localhost nova_compute[282193]: qemu64 Dec 6 04:58:20 localhost nova_compute[282193]: qemu64-v1 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: file Dec 6 04:58:20 localhost nova_compute[282193]: anonymous Dec 6 04:58:20 localhost nova_compute[282193]: memfd Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: disk Dec 6 04:58:20 localhost nova_compute[282193]: cdrom Dec 6 04:58:20 localhost nova_compute[282193]: floppy Dec 6 04:58:20 localhost nova_compute[282193]: lun Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: ide Dec 6 04:58:20 localhost nova_compute[282193]: fdc Dec 6 04:58:20 localhost 
nova_compute[282193]: scsi Dec 6 04:58:20 localhost nova_compute[282193]: virtio Dec 6 04:58:20 localhost nova_compute[282193]: usb Dec 6 04:58:20 localhost nova_compute[282193]: sata Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: virtio Dec 6 04:58:20 localhost nova_compute[282193]: virtio-transitional Dec 6 04:58:20 localhost nova_compute[282193]: virtio-non-transitional Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: vnc Dec 6 04:58:20 localhost nova_compute[282193]: egl-headless Dec 6 04:58:20 localhost nova_compute[282193]: dbus Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: subsystem Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: default Dec 6 04:58:20 localhost nova_compute[282193]: mandatory Dec 6 04:58:20 localhost nova_compute[282193]: requisite Dec 6 04:58:20 localhost nova_compute[282193]: optional Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: usb Dec 6 04:58:20 localhost nova_compute[282193]: pci Dec 6 04:58:20 localhost nova_compute[282193]: scsi Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost 
nova_compute[282193]: virtio Dec 6 04:58:20 localhost nova_compute[282193]: virtio-transitional Dec 6 04:58:20 localhost nova_compute[282193]: virtio-non-transitional Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: random Dec 6 04:58:20 localhost nova_compute[282193]: egd Dec 6 04:58:20 localhost nova_compute[282193]: builtin Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: path Dec 6 04:58:20 localhost nova_compute[282193]: handle Dec 6 04:58:20 localhost nova_compute[282193]: virtiofs Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: tpm-tis Dec 6 04:58:20 localhost nova_compute[282193]: tpm-crb Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: emulator Dec 6 04:58:20 localhost nova_compute[282193]: external Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: 2.0 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: usb Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: pty Dec 6 04:58:20 localhost nova_compute[282193]: unix Dec 6 04:58:20 localhost nova_compute[282193]: 
Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: qemu Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: builtin Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: default Dec 6 04:58:20 localhost nova_compute[282193]: passt Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: isa Dec 6 04:58:20 localhost nova_compute[282193]: hyperv Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: null Dec 6 04:58:20 localhost nova_compute[282193]: vc Dec 6 04:58:20 localhost nova_compute[282193]: pty Dec 6 04:58:20 localhost nova_compute[282193]: dev Dec 6 04:58:20 localhost nova_compute[282193]: file Dec 6 04:58:20 localhost nova_compute[282193]: pipe Dec 6 04:58:20 localhost nova_compute[282193]: stdio Dec 6 04:58:20 localhost nova_compute[282193]: udp Dec 6 04:58:20 localhost nova_compute[282193]: tcp Dec 6 04:58:20 localhost nova_compute[282193]: unix Dec 6 04:58:20 localhost nova_compute[282193]: qemu-vdagent Dec 6 04:58:20 localhost nova_compute[282193]: dbus Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 
localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: relaxed Dec 6 04:58:20 localhost nova_compute[282193]: vapic Dec 6 04:58:20 localhost nova_compute[282193]: spinlocks Dec 6 04:58:20 localhost nova_compute[282193]: vpindex Dec 6 04:58:20 localhost nova_compute[282193]: runtime Dec 6 04:58:20 localhost nova_compute[282193]: synic Dec 6 04:58:20 localhost nova_compute[282193]: stimer Dec 6 04:58:20 localhost nova_compute[282193]: reset Dec 6 04:58:20 localhost nova_compute[282193]: vendor_id Dec 6 04:58:20 localhost nova_compute[282193]: frequencies Dec 6 04:58:20 localhost nova_compute[282193]: reenlightenment Dec 6 04:58:20 localhost nova_compute[282193]: tlbflush Dec 6 04:58:20 localhost nova_compute[282193]: ipi Dec 6 04:58:20 localhost nova_compute[282193]: avic Dec 6 04:58:20 localhost nova_compute[282193]: emsr_bitmap Dec 6 04:58:20 localhost nova_compute[282193]: xmm_input Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: 4095 Dec 6 04:58:20 localhost nova_compute[282193]: on Dec 6 04:58:20 localhost nova_compute[282193]: off Dec 6 04:58:20 localhost nova_compute[282193]: off Dec 6 04:58:20 localhost nova_compute[282193]: Linux KVM Hv Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: tdx Dec 6 
04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m Dec 6 04:58:20 localhost nova_compute[282193]: 2025-12-06 09:58:20.032 282197 DEBUG nova.virt.libvirt.host [None req-c2cff4e5-3d1d-47f9-bef4-e7abce6c5152 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m Dec 6 04:58:20 localhost nova_compute[282193]: 2025-12-06 09:58:20.034 282197 DEBUG nova.virt.libvirt.volume.mount [None req-c2cff4e5-3d1d-47f9-bef4-e7abce6c5152 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m Dec 6 04:58:20 localhost nova_compute[282193]: 2025-12-06 09:58:20.037 282197 DEBUG nova.virt.libvirt.host [None req-c2cff4e5-3d1d-47f9-bef4-e7abce6c5152 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: /usr/libexec/qemu-kvm Dec 6 04:58:20 localhost nova_compute[282193]: kvm Dec 6 04:58:20 localhost nova_compute[282193]: pc-q35-rhel9.8.0 Dec 6 04:58:20 localhost nova_compute[282193]: x86_64 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: efi Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: /usr/share/edk2/ovmf/OVMF_CODE.secboot.fd Dec 6 04:58:20 localhost nova_compute[282193]: /usr/share/edk2/ovmf/OVMF_CODE.fd Dec 6 
04:58:20 localhost nova_compute[282193]: /usr/share/edk2/ovmf/OVMF.amdsev.fd Dec 6 04:58:20 localhost nova_compute[282193]: /usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: rom Dec 6 04:58:20 localhost nova_compute[282193]: pflash Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: yes Dec 6 04:58:20 localhost nova_compute[282193]: no Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: yes Dec 6 04:58:20 localhost nova_compute[282193]: no Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: on Dec 6 04:58:20 localhost nova_compute[282193]: off Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: on Dec 6 04:58:20 localhost nova_compute[282193]: off Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: EPYC-Rome Dec 6 04:58:20 localhost nova_compute[282193]: AMD Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost 
nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: 486 Dec 6 04:58:20 localhost nova_compute[282193]: 486-v1 Dec 6 04:58:20 localhost nova_compute[282193]: Broadwell Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Broadwell-IBRS Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Broadwell-noTSX Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost 
nova_compute[282193]: Broadwell-noTSX-IBRS Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Broadwell-v1 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Broadwell-v2 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Broadwell-v3 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Broadwell-v4 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Cascadelake-Server Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 
04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Cascadelake-Server-noTSX Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Cascadelake-Server-v1 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Cascadelake-Server-v2 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 
Dec 6 04:58:20 localhost nova_compute[282193]: Cascadelake-Server-v3
Dec 6 04:58:20 localhost nova_compute[282193]: Cascadelake-Server-v4
Dec 6 04:58:20 localhost nova_compute[282193]: Cascadelake-Server-v5
Dec 6 04:58:20 localhost nova_compute[282193]: Conroe
Dec 6 04:58:20 localhost nova_compute[282193]: Conroe-v1
Dec 6 04:58:20 localhost nova_compute[282193]: Cooperlake
Dec 6 04:58:20 localhost nova_compute[282193]: Cooperlake-v1
Dec 6 04:58:20 localhost nova_compute[282193]: Cooperlake-v2
Dec 6 04:58:20 localhost nova_compute[282193]: Denverton
Dec 6 04:58:20 localhost nova_compute[282193]: Denverton-v1
Dec 6 04:58:20 localhost nova_compute[282193]: Denverton-v2
Dec 6 04:58:20 localhost nova_compute[282193]: Denverton-v3
Dec 6 04:58:20 localhost nova_compute[282193]: Dhyana
Dec 6 04:58:20 localhost nova_compute[282193]: Dhyana-v1
Dec 6 04:58:20 localhost nova_compute[282193]: Dhyana-v2
Dec 6 04:58:20 localhost nova_compute[282193]: EPYC
Dec 6 04:58:20 localhost nova_compute[282193]: EPYC-Genoa
Dec 6 04:58:20 localhost nova_compute[282193]: EPYC-Genoa-v1
Dec 6 04:58:20 localhost nova_compute[282193]: EPYC-IBPB
Dec 6 04:58:20 localhost nova_compute[282193]: EPYC-Milan
Dec 6 04:58:20 localhost nova_compute[282193]: EPYC-Milan-v1
Dec 6 04:58:20 localhost nova_compute[282193]: EPYC-Milan-v2
Dec 6 04:58:20 localhost nova_compute[282193]: EPYC-Rome
Dec 6 04:58:20 localhost nova_compute[282193]: EPYC-Rome-v1
Dec 6 04:58:20 localhost nova_compute[282193]: EPYC-Rome-v2
Dec 6 04:58:20 localhost nova_compute[282193]: EPYC-Rome-v3
Dec 6 04:58:20 localhost nova_compute[282193]: EPYC-Rome-v4
Dec 6 04:58:20 localhost nova_compute[282193]: EPYC-v1
Dec 6 04:58:20 localhost nova_compute[282193]: EPYC-v2
Dec 6 04:58:20 localhost nova_compute[282193]: EPYC-v3
Dec 6 04:58:20 localhost nova_compute[282193]: EPYC-v4
Dec 6 04:58:20 localhost nova_compute[282193]: GraniteRapids
Dec 6 04:58:20 localhost nova_compute[282193]: GraniteRapids-v1
Dec 6 04:58:20 localhost nova_compute[282193]: GraniteRapids-v2
Dec 6 04:58:20 localhost nova_compute[282193]: Haswell
Dec 6 04:58:20 localhost nova_compute[282193]: Haswell-IBRS
Dec 6 04:58:20 localhost nova_compute[282193]: Haswell-noTSX
Dec 6 04:58:20 localhost nova_compute[282193]: Haswell-noTSX-IBRS
Dec 6 04:58:20 localhost nova_compute[282193]: Haswell-v1
Dec 6 04:58:20 localhost nova_compute[282193]: Haswell-v2
Dec 6 04:58:20 localhost nova_compute[282193]: Haswell-v3
Dec 6 04:58:20 localhost nova_compute[282193]: Haswell-v4
Dec 6 04:58:20 localhost nova_compute[282193]: Icelake-Server
Dec 6 04:58:20 localhost nova_compute[282193]: Icelake-Server-noTSX
Dec 6 04:58:20 localhost nova_compute[282193]: Icelake-Server-v1
Dec 6 04:58:20 localhost nova_compute[282193]: Icelake-Server-v2
Dec 6 04:58:20 localhost nova_compute[282193]: Icelake-Server-v3
Dec 6 04:58:20 localhost nova_compute[282193]: Icelake-Server-v4
Dec 6 04:58:20 localhost nova_compute[282193]: Icelake-Server-v5
Dec 6 04:58:20 localhost nova_compute[282193]: Icelake-Server-v6
Dec 6 04:58:20 localhost nova_compute[282193]: Icelake-Server-v7
Dec 6 04:58:20 localhost nova_compute[282193]: IvyBridge
Dec 6 04:58:20 localhost nova_compute[282193]: IvyBridge-IBRS
Dec 6 04:58:20 localhost nova_compute[282193]: IvyBridge-v1
Dec 6 04:58:20 localhost nova_compute[282193]: IvyBridge-v2
Dec 6 04:58:20 localhost nova_compute[282193]: KnightsMill
Dec 6 04:58:20 localhost nova_compute[282193]: KnightsMill-v1
Dec 6 04:58:20 localhost nova_compute[282193]: Nehalem
Dec 6 04:58:20 localhost nova_compute[282193]: Nehalem-IBRS
Dec 6 04:58:20 localhost nova_compute[282193]: Nehalem-v1
Dec 6 04:58:20 localhost nova_compute[282193]: Nehalem-v2
Dec 6 04:58:20 localhost nova_compute[282193]: Opteron_G1
Dec 6 04:58:20 localhost nova_compute[282193]: Opteron_G1-v1
Dec 6 04:58:20 localhost nova_compute[282193]: Opteron_G2
Dec 6 04:58:20 localhost nova_compute[282193]: Opteron_G2-v1
Dec 6 04:58:20 localhost nova_compute[282193]: Opteron_G3
Dec 6 04:58:20 localhost nova_compute[282193]: Opteron_G3-v1
Dec 6 04:58:20 localhost nova_compute[282193]: Opteron_G4
Dec 6 04:58:20 localhost nova_compute[282193]: Opteron_G4-v1
Dec 6 04:58:20 localhost nova_compute[282193]: Opteron_G5
Dec 6 04:58:20 localhost nova_compute[282193]: Opteron_G5-v1
Dec 6 04:58:20 localhost nova_compute[282193]: Penryn
Dec 6 04:58:20 localhost nova_compute[282193]: Penryn-v1
Dec 6 04:58:20 localhost nova_compute[282193]: SandyBridge
Dec 6 04:58:20 localhost nova_compute[282193]: SandyBridge-IBRS
Dec 6 04:58:20 localhost nova_compute[282193]: SandyBridge-v1
Dec 6 04:58:20 localhost nova_compute[282193]: SandyBridge-v2
Dec 6 04:58:20 localhost nova_compute[282193]: SapphireRapids
Dec 6 04:58:20 localhost nova_compute[282193]: SapphireRapids-v1
Dec 6 04:58:20 localhost nova_compute[282193]: SapphireRapids-v2
nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: SapphireRapids-v3 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost 
nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: SierraForest Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost 
nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: SierraForest-v1 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Skylake-Client Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 
localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Skylake-Client-IBRS Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Skylake-Client-noTSX-IBRS Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Skylake-Client-v1 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Skylake-Client-v2 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Skylake-Client-v3 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Skylake-Client-v4 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 
localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Skylake-Server Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Skylake-Server-IBRS Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Skylake-Server-noTSX-IBRS Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost 
nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Skylake-Server-v1 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Skylake-Server-v2 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Skylake-Server-v3 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost 
nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Skylake-Server-v4 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Skylake-Server-v5 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Snowridge Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Snowridge-v1 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost 
nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Snowridge-v2 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Snowridge-v3 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Snowridge-v4 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Westmere Dec 6 04:58:20 localhost nova_compute[282193]: Westmere-IBRS Dec 6 04:58:20 localhost nova_compute[282193]: Westmere-v1 Dec 6 04:58:20 
localhost nova_compute[282193]: Westmere-v2 Dec 6 04:58:20 localhost nova_compute[282193]: athlon Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: athlon-v1 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: core2duo Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: core2duo-v1 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: coreduo Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: coreduo-v1 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: kvm32 Dec 6 04:58:20 localhost nova_compute[282193]: kvm32-v1 Dec 6 04:58:20 localhost nova_compute[282193]: kvm64 Dec 6 04:58:20 localhost nova_compute[282193]: kvm64-v1 Dec 6 04:58:20 localhost nova_compute[282193]: n270 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: n270-v1 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: pentium Dec 6 04:58:20 
localhost nova_compute[282193]: pentium-v1 Dec 6 04:58:20 localhost nova_compute[282193]: pentium2 Dec 6 04:58:20 localhost nova_compute[282193]: pentium2-v1 Dec 6 04:58:20 localhost nova_compute[282193]: pentium3 Dec 6 04:58:20 localhost nova_compute[282193]: pentium3-v1 Dec 6 04:58:20 localhost nova_compute[282193]: phenom Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: phenom-v1 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: qemu32 Dec 6 04:58:20 localhost nova_compute[282193]: qemu32-v1 Dec 6 04:58:20 localhost nova_compute[282193]: qemu64 Dec 6 04:58:20 localhost nova_compute[282193]: qemu64-v1 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: file Dec 6 04:58:20 localhost nova_compute[282193]: anonymous Dec 6 04:58:20 localhost nova_compute[282193]: memfd Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: disk Dec 6 04:58:20 localhost nova_compute[282193]: cdrom Dec 6 04:58:20 localhost nova_compute[282193]: floppy Dec 6 04:58:20 localhost nova_compute[282193]: lun Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: fdc Dec 6 04:58:20 localhost nova_compute[282193]: scsi Dec 6 04:58:20 localhost 
nova_compute[282193]: virtio Dec 6 04:58:20 localhost nova_compute[282193]: usb Dec 6 04:58:20 localhost nova_compute[282193]: sata Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: virtio Dec 6 04:58:20 localhost nova_compute[282193]: virtio-transitional Dec 6 04:58:20 localhost nova_compute[282193]: virtio-non-transitional Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: vnc Dec 6 04:58:20 localhost nova_compute[282193]: egl-headless Dec 6 04:58:20 localhost nova_compute[282193]: dbus Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: subsystem Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: default Dec 6 04:58:20 localhost nova_compute[282193]: mandatory Dec 6 04:58:20 localhost nova_compute[282193]: requisite Dec 6 04:58:20 localhost nova_compute[282193]: optional Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: usb Dec 6 04:58:20 localhost nova_compute[282193]: pci Dec 6 04:58:20 localhost nova_compute[282193]: scsi Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: virtio Dec 6 04:58:20 localhost 
nova_compute[282193]: virtio-transitional Dec 6 04:58:20 localhost nova_compute[282193]: virtio-non-transitional Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: random Dec 6 04:58:20 localhost nova_compute[282193]: egd Dec 6 04:58:20 localhost nova_compute[282193]: builtin Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: path Dec 6 04:58:20 localhost nova_compute[282193]: handle Dec 6 04:58:20 localhost nova_compute[282193]: virtiofs Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: tpm-tis Dec 6 04:58:20 localhost nova_compute[282193]: tpm-crb Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: emulator Dec 6 04:58:20 localhost nova_compute[282193]: external Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: 2.0 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: usb Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: pty Dec 6 04:58:20 localhost nova_compute[282193]: unix Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 
04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: qemu Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: builtin Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: default Dec 6 04:58:20 localhost nova_compute[282193]: passt Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: isa Dec 6 04:58:20 localhost nova_compute[282193]: hyperv Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: null Dec 6 04:58:20 localhost nova_compute[282193]: vc Dec 6 04:58:20 localhost nova_compute[282193]: pty Dec 6 04:58:20 localhost nova_compute[282193]: dev Dec 6 04:58:20 localhost nova_compute[282193]: file Dec 6 04:58:20 localhost nova_compute[282193]: pipe Dec 6 04:58:20 localhost nova_compute[282193]: stdio Dec 6 04:58:20 localhost nova_compute[282193]: udp Dec 6 04:58:20 localhost nova_compute[282193]: tcp Dec 6 04:58:20 localhost nova_compute[282193]: unix Dec 6 04:58:20 localhost nova_compute[282193]: qemu-vdagent Dec 6 04:58:20 localhost nova_compute[282193]: dbus Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost 
Dec 6 04:58:20 localhost nova_compute[282193]: [libvirt domain capabilities XML; markup lost in capture, text values only] Hyper-V enlightenments: relaxed, vapic, spinlocks, vpindex, runtime, synic, stimer, reset, vendor_id, frequencies, reenlightenment, tlbflush, ipi, avic, emsr_bitmap, xmm_input; remaining values: 4095, on, off, off, Linux KVM Hv, tdx _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 6 04:58:20 localhost nova_compute[282193]: 2025-12-06 09:58:20.090 282197 DEBUG nova.virt.libvirt.host [None req-c2cff4e5-3d1d-47f9-bef4-e7abce6c5152 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc: [capabilities XML; markup lost in capture, text values only] emulator: /usr/libexec/qemu-kvm; domain type: kvm; machine: pc-i440fx-rhel7.6.0; arch: x86_64; firmware: /usr/share/OVMF/OVMF_CODE.secboot.fd (rom, pflash); remaining flag values: yes, no, no, on, off, on, off; host CPU model: EPYC-Rome (vendor AMD); CPU models: 486, 486-v1, Broadwell, Broadwell-IBRS, Broadwell-noTSX, Broadwell-noTSX-IBRS, Broadwell-v1, Broadwell-v2, Broadwell-v3, Broadwell-v4, Cascadelake-Server, Cascadelake-Server-noTSX, Cascadelake-Server-v1, Cascadelake-Server-v2, Cascadelake-Server-v3, Cascadelake-Server-v4, Cascadelake-Server-v5, Conroe, Conroe-v1, Cooperlake, Cooperlake-v1, Cooperlake-v2, Denverton, Denverton-v1, Denverton-v2, Denverton-v3, Dhyana, Dhyana-v1, Dhyana-v2, EPYC, EPYC-Genoa, EPYC-Genoa-v1, EPYC-IBPB, EPYC-Milan, EPYC-Milan-v1, EPYC-Milan-v2, EPYC-Rome, EPYC-Rome-v1, EPYC-Rome-v2, EPYC-Rome-v3, EPYC-Rome-v4, EPYC-v1, EPYC-v2, EPYC-v3, EPYC-v4, GraniteRapids, GraniteRapids-v1, GraniteRapids-v2, Haswell, Haswell-IBRS, Haswell-noTSX, Haswell-noTSX-IBRS, Haswell-v1, Haswell-v2, Haswell-v3, Haswell-v4, Icelake-Server, Icelake-Server-noTSX, Icelake-Server-v1, Icelake-Server-v2, Icelake-Server-v3, Icelake-Server-v4, Icelake-Server-v5, Icelake-Server-v6 [entry truncated in capture]
localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Icelake-Server-v7 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: IvyBridge Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 
04:58:20 localhost nova_compute[282193]: IvyBridge-IBRS Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: IvyBridge-v1 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: IvyBridge-v2 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: KnightsMill Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: KnightsMill-v1 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Nehalem Dec 6 04:58:20 localhost nova_compute[282193]: Nehalem-IBRS Dec 6 04:58:20 localhost nova_compute[282193]: Nehalem-v1 Dec 6 04:58:20 localhost nova_compute[282193]: Nehalem-v2 Dec 6 04:58:20 
localhost nova_compute[282193]: Opteron_G1 Dec 6 04:58:20 localhost nova_compute[282193]: Opteron_G1-v1 Dec 6 04:58:20 localhost nova_compute[282193]: Opteron_G2 Dec 6 04:58:20 localhost nova_compute[282193]: Opteron_G2-v1 Dec 6 04:58:20 localhost nova_compute[282193]: Opteron_G3 Dec 6 04:58:20 localhost nova_compute[282193]: Opteron_G3-v1 Dec 6 04:58:20 localhost nova_compute[282193]: Opteron_G4 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Opteron_G4-v1 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Opteron_G5 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Opteron_G5-v1 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Penryn Dec 6 04:58:20 localhost nova_compute[282193]: Penryn-v1 Dec 6 04:58:20 localhost nova_compute[282193]: SandyBridge Dec 6 04:58:20 localhost nova_compute[282193]: SandyBridge-IBRS Dec 6 04:58:20 localhost nova_compute[282193]: SandyBridge-v1 Dec 6 04:58:20 localhost nova_compute[282193]: SandyBridge-v2 Dec 6 04:58:20 localhost nova_compute[282193]: SapphireRapids Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 
04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: SapphireRapids-v1 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 
04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: SapphireRapids-v2 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 
04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: SapphireRapids-v3 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 
04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost 
nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: SierraForest Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: SierraForest-v1 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 
localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Skylake-Client Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Skylake-Client-IBRS Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Skylake-Client-noTSX-IBRS Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Skylake-Client-v1 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 
04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Skylake-Client-v2 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Skylake-Client-v3 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Skylake-Client-v4 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Skylake-Server Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Skylake-Server-IBRS Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost 
nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Skylake-Server-noTSX-IBRS Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Skylake-Server-v1 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Skylake-Server-v2 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost 
nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Skylake-Server-v3 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Skylake-Server-v4 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Skylake-Server-v5 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost 
nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Snowridge Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Snowridge-v1 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Snowridge-v2 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Snowridge-v3 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: 
Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Snowridge-v4 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Westmere Dec 6 04:58:20 localhost nova_compute[282193]: Westmere-IBRS Dec 6 04:58:20 localhost nova_compute[282193]: Westmere-v1 Dec 6 04:58:20 localhost nova_compute[282193]: Westmere-v2 Dec 6 04:58:20 localhost nova_compute[282193]: athlon Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: athlon-v1 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: core2duo Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: core2duo-v1 Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: coreduo Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 04:58:20 localhost nova_compute[282193]: Dec 6 
Dec 6 04:58:20 localhost nova_compute[282193]: [flattened libvirt domain capabilities XML, originally logged one value per fragment; recoverable values follow] CPU models: coreduo-v1, kvm32, kvm32-v1, kvm64, kvm64-v1, n270, n270-v1, pentium, pentium-v1, pentium2, pentium2-v1, pentium3, pentium3-v1, phenom, phenom-v1, qemu32, qemu32-v1, qemu64, qemu64-v1; memory backing source types: file, anonymous, memfd; disk devices: disk, cdrom, floppy, lun; disk buses: ide, fdc, scsi, virtio, usb, sata; disk models: virtio, virtio-transitional, virtio-non-transitional; graphics: vnc, egl-headless, dbus; hostdev mode: subsystem; startup policies: default, mandatory, requisite, optional; hostdev subsystem types: usb, pci, scsi; rng models: virtio, virtio-transitional, virtio-non-transitional; rng backends: random, egd, builtin; filesystem driver types: path, handle, virtiofs; TPM models: tpm-tis, tpm-crb; TPM backends: emulator, external; TPM version: 2.0; redirdev bus: usb; channel types: pty, unix; crypto backends: qemu, builtin; interface backends: default, passt; panic models: isa, hyperv; console/serial types: null, vc, pty, dev, file, pipe, stdio, udp, tcp, unix, qemu-vdagent, dbus; hyperv features: relaxed, vapic, spinlocks, vpindex, runtime, synic, stimer, reset, vendor_id, frequencies, reenlightenment, tlbflush, ipi, avic, emsr_bitmap, xmm_input; ungrouped values: 4095, on, off, off, Linux KVM Hv, tdx _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Dec 6 04:58:20 localhost nova_compute[282193]: 2025-12-06 09:58:20.147 282197 DEBUG nova.virt.libvirt.host [None req-c2cff4e5-3d1d-47f9-bef4-e7abce6c5152 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Dec 6 04:58:20 localhost nova_compute[282193]: 2025-12-06 09:58:20.147 282197 INFO nova.virt.libvirt.host [None req-c2cff4e5-3d1d-47f9-bef4-e7abce6c5152 - - - - - -] Secure Boot support detected#033[00m
Dec 6 04:58:20 localhost nova_compute[282193]: 2025-12-06 09:58:20.150 282197 INFO nova.virt.libvirt.driver [None req-c2cff4e5-3d1d-47f9-bef4-e7abce6c5152 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so
auto-converge will not be in use.#033[00m Dec 6 04:58:20 localhost nova_compute[282193]: 2025-12-06 09:58:20.163 282197 DEBUG nova.virt.libvirt.driver [None req-c2cff4e5-3d1d-47f9-bef4-e7abce6c5152 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m Dec 6 04:58:20 localhost nova_compute[282193]: 2025-12-06 09:58:20.197 282197 INFO nova.virt.node [None req-c2cff4e5-3d1d-47f9-bef4-e7abce6c5152 - - - - - -] Determined node identity 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad from /var/lib/nova/compute_id#033[00m Dec 6 04:58:20 localhost nova_compute[282193]: 2025-12-06 09:58:20.220 282197 DEBUG nova.compute.manager [None req-c2cff4e5-3d1d-47f9-bef4-e7abce6c5152 - - - - - -] Verified node 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad matches my host np0005548789.localdomain _check_for_host_rename /usr/lib/python3.9/site-packages/nova/compute/manager.py:1568#033[00m Dec 6 04:58:20 localhost nova_compute[282193]: 2025-12-06 09:58:20.267 282197 DEBUG nova.compute.manager [None req-c2cff4e5-3d1d-47f9-bef4-e7abce6c5152 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 6 04:58:20 localhost nova_compute[282193]: 2025-12-06 09:58:20.272 282197 DEBUG nova.virt.libvirt.vif [None req-c2cff4e5-3d1d-47f9-bef4-e7abce6c5152 - - - - - -] vif_type=ovs
instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T08:44:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=,disable_terminate=False,display_description='test',display_name='test',ec2_ids=,ephemeral_gb=1,ephemeral_key_uuid=None,fault=,flavor=,hidden=False,host='np0005548789.localdomain',hostname='test',id=2,image_ref='e0d06706-da90-478a-9829-34b75a3ce049',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=,launch_index=0,launched_at=2025-12-06T08:44:43Z,launched_on='np0005548789.localdomain',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=,new_flavor=,node='np0005548789.localdomain',numa_topology=None,old_flavor=,os_type=None,pci_devices=,pci_requests=,power_state=1,progress=0,project_id='3d603431c0bb4967bafc7a0aa6108bfe',ramdisk_id='',reservation_id='r-02dpupig',resources=,root_device_name='/dev/vda',root_gb=1,security_groups=,services=,shutdown_terminate=False,system_metadata=,tags=,task_state=None,terminated_at=None,trusted_certs=,updated_at=2025-12-06T08:44:43Z,user_data=None,user_id='ff0049f3313348bdb67886d170c1c765',uuid=b7ed0a2e-9350-4933-9334-4e5e08d3e6aa,vcpu_model=,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "address": "fa:16:3e:64:77:f3", "network": {"id": "652b6bdc-40ce-45b7-8aa5-3bca79987993", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.162", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": 
false, "tenant_id": "3d603431c0bb4967bafc7a0aa6108bfe", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86fc0b7a-fb", "ovs_interfaceid": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m Dec 6 04:58:20 localhost nova_compute[282193]: 2025-12-06 09:58:20.273 282197 DEBUG nova.network.os_vif_util [None req-c2cff4e5-3d1d-47f9-bef4-e7abce6c5152 - - - - - -] Converting VIF {"id": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "address": "fa:16:3e:64:77:f3", "network": {"id": "652b6bdc-40ce-45b7-8aa5-3bca79987993", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.162", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "3d603431c0bb4967bafc7a0aa6108bfe", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86fc0b7a-fb", "ovs_interfaceid": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Dec 6 04:58:20 localhost nova_compute[282193]: 2025-12-06 
09:58:20.274 282197 DEBUG nova.network.os_vif_util [None req-c2cff4e5-3d1d-47f9-bef4-e7abce6c5152 - - - - - -] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:64:77:f3,bridge_name='br-int',has_traffic_filtering=True,id=86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b,network=Network(652b6bdc-40ce-45b7-8aa5-3bca79987993),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86fc0b7a-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Dec 6 04:58:20 localhost nova_compute[282193]: 2025-12-06 09:58:20.275 282197 DEBUG os_vif [None req-c2cff4e5-3d1d-47f9-bef4-e7abce6c5152 - - - - - -] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:64:77:f3,bridge_name='br-int',has_traffic_filtering=True,id=86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b,network=Network(652b6bdc-40ce-45b7-8aa5-3bca79987993),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86fc0b7a-fb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m Dec 6 04:58:20 localhost nova_compute[282193]: 2025-12-06 09:58:20.319 282197 DEBUG ovsdbapp.backend.ovs_idl [None req-c2cff4e5-3d1d-47f9-bef4-e7abce6c5152 - - - - - -] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Dec 6 04:58:20 localhost nova_compute[282193]: 2025-12-06 09:58:20.319 282197 DEBUG ovsdbapp.backend.ovs_idl [None req-c2cff4e5-3d1d-47f9-bef4-e7abce6c5152 - - - - - -] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Dec 6 04:58:20 localhost nova_compute[282193]: 2025-12-06 09:58:20.319 282197 DEBUG ovsdbapp.backend.ovs_idl [None req-c2cff4e5-3d1d-47f9-bef4-e7abce6c5152 - - - - - -] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Dec 6 04:58:20 localhost 
nova_compute[282193]: 2025-12-06 09:58:20.320 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-c2cff4e5-3d1d-47f9-bef4-e7abce6c5152 - - - - - -] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 04:58:20 localhost nova_compute[282193]: 2025-12-06 09:58:20.320 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-c2cff4e5-3d1d-47f9-bef4-e7abce6c5152 - - - - - -] [POLLOUT] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:58:20 localhost nova_compute[282193]: 2025-12-06 09:58:20.320 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-c2cff4e5-3d1d-47f9-bef4-e7abce6c5152 - - - - - -] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 04:58:20 localhost nova_compute[282193]: 2025-12-06 09:58:20.321 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-c2cff4e5-3d1d-47f9-bef4-e7abce6c5152 - - - - - -] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:58:20 localhost nova_compute[282193]: 2025-12-06 09:58:20.322 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-c2cff4e5-3d1d-47f9-bef4-e7abce6c5152 - - - - - -] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:58:20 localhost nova_compute[282193]: 2025-12-06 09:58:20.325 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-c2cff4e5-3d1d-47f9-bef4-e7abce6c5152 - - - - - -] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:58:20 localhost nova_compute[282193]: 2025-12-06 09:58:20.342 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 18 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:58:20 localhost nova_compute[282193]: 2025-12-06 09:58:20.342 282197 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): 
AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 6 04:58:20 localhost nova_compute[282193]: 2025-12-06 09:58:20.342 282197 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Dec 6 04:58:20 localhost nova_compute[282193]: 2025-12-06 09:58:20.343 282197 INFO oslo.privsep.daemon [None req-c2cff4e5-3d1d-47f9-bef4-e7abce6c5152 - - - - - -] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmpg389lvne/privsep.sock']#033[00m Dec 6 04:58:20 localhost nova_compute[282193]: 2025-12-06 09:58:20.977 282197 INFO oslo.privsep.daemon [None req-c2cff4e5-3d1d-47f9-bef4-e7abce6c5152 - - - - - -] Spawned new privsep daemon via rootwrap#033[00m Dec 6 04:58:20 localhost nova_compute[282193]: 2025-12-06 09:58:20.873 282356 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m Dec 6 04:58:20 localhost nova_compute[282193]: 2025-12-06 09:58:20.878 282356 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m Dec 6 04:58:20 localhost nova_compute[282193]: 2025-12-06 09:58:20.882 282356 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none#033[00m Dec 6 04:58:20 localhost nova_compute[282193]: 2025-12-06 09:58:20.882 282356 INFO oslo.privsep.daemon [-] privsep daemon running as pid 282356#033[00m Dec 6 04:58:21 localhost python3.9[282368]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True 
debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None 
read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None Dec 6 04:58:21 localhost nova_compute[282193]: 2025-12-06 09:58:21.235 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 18 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:58:21 localhost nova_compute[282193]: 2025-12-06 09:58:21.236 282197 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap86fc0b7a-fb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 6 04:58:21 localhost nova_compute[282193]: 2025-12-06 09:58:21.237 282197 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap86fc0b7a-fb, col_values=(('external_ids', {'iface-id': '86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:64:77:f3', 'vm-uuid': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 6 04:58:21 localhost nova_compute[282193]: 2025-12-06 09:58:21.238 282197 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Dec 6 04:58:21 localhost nova_compute[282193]: 2025-12-06 09:58:21.238 282197 INFO 
os_vif [None req-c2cff4e5-3d1d-47f9-bef4-e7abce6c5152 - - - - - -] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:64:77:f3,bridge_name='br-int',has_traffic_filtering=True,id=86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b,network=Network(652b6bdc-40ce-45b7-8aa5-3bca79987993),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86fc0b7a-fb')#033[00m Dec 6 04:58:21 localhost nova_compute[282193]: 2025-12-06 09:58:21.239 282197 DEBUG nova.compute.manager [None req-c2cff4e5-3d1d-47f9-bef4-e7abce6c5152 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 6 04:58:21 localhost nova_compute[282193]: 2025-12-06 09:58:21.243 282197 DEBUG nova.compute.manager [None req-c2cff4e5-3d1d-47f9-bef4-e7abce6c5152 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Current state is 1, state in DB is 1. _init_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:1304#033[00m Dec 6 04:58:21 localhost nova_compute[282193]: 2025-12-06 09:58:21.244 282197 INFO nova.compute.manager [None req-c2cff4e5-3d1d-47f9-bef4-e7abce6c5152 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m Dec 6 04:58:21 localhost nova_compute[282193]: 2025-12-06 09:58:21.334 282197 DEBUG oslo_concurrency.lockutils [None req-c2cff4e5-3d1d-47f9-bef4-e7abce6c5152 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 04:58:21 localhost nova_compute[282193]: 2025-12-06 09:58:21.335 282197 DEBUG oslo_concurrency.lockutils [None req-c2cff4e5-3d1d-47f9-bef4-e7abce6c5152 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 04:58:21 localhost nova_compute[282193]: 2025-12-06 09:58:21.335 282197 DEBUG oslo_concurrency.lockutils [None req-c2cff4e5-3d1d-47f9-bef4-e7abce6c5152 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 04:58:21 localhost nova_compute[282193]: 2025-12-06 09:58:21.335 282197 DEBUG nova.compute.resource_tracker [None req-c2cff4e5-3d1d-47f9-bef4-e7abce6c5152 - - - - - -] Auditing locally available compute resources for np0005548789.localdomain (node: np0005548789.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 6 04:58:21 localhost nova_compute[282193]: 2025-12-06 09:58:21.336 282197 DEBUG oslo_concurrency.processutils [None req-c2cff4e5-3d1d-47f9-bef4-e7abce6c5152 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 6 04:58:21 localhost systemd[1]: Started libpod-conmon-a90088f7335f0424abe9208b181fba6d3fc6d1408325e575f0ba866a5d87ad9b.scope. Dec 6 04:58:21 localhost systemd[1]: Started libcrun container. 
Dec 6 04:58:21 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ceb10c7340fd1e23819ce0ca67ef154f878e8bf4b0dc6dca13387cdceeb0c7ee/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff) Dec 6 04:58:21 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ceb10c7340fd1e23819ce0ca67ef154f878e8bf4b0dc6dca13387cdceeb0c7ee/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Dec 6 04:58:21 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ceb10c7340fd1e23819ce0ca67ef154f878e8bf4b0dc6dca13387cdceeb0c7ee/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff) Dec 6 04:58:21 localhost podman[282395]: 2025-12-06 09:58:21.509247798 +0000 UTC m=+0.163644224 container init a90088f7335f0424abe9208b181fba6d3fc6d1408325e575f0ba866a5d87ad9b (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=nova_compute_init, config_id=edpm, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, 
tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible) Dec 6 04:58:21 localhost podman[282395]: 2025-12-06 09:58:21.521069232 +0000 UTC m=+0.175465648 container start a90088f7335f0424abe9208b181fba6d3fc6d1408325e575f0ba866a5d87ad9b (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=nova_compute_init, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Dec 6 04:58:21 localhost python3.9[282368]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init Dec 6 04:58:21 localhost nova_compute_init[282434]: INFO:nova_statedir:Applying nova statedir ownership Dec 6 04:58:21 localhost nova_compute_init[282434]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436 Dec 6 04:58:21 localhost nova_compute_init[282434]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/ Dec 6 04:58:21 localhost nova_compute_init[282434]: INFO:nova_statedir:Changing ownership of 
/var/lib/nova from 1000:1000 to 42436:42436 Dec 6 04:58:21 localhost nova_compute_init[282434]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0 Dec 6 04:58:21 localhost nova_compute_init[282434]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/ Dec 6 04:58:21 localhost nova_compute_init[282434]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436 Dec 6 04:58:21 localhost nova_compute_init[282434]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0 Dec 6 04:58:21 localhost nova_compute_init[282434]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/instances/b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/ Dec 6 04:58:21 localhost nova_compute_init[282434]: INFO:nova_statedir:Ownership of /var/lib/nova/instances/b7ed0a2e-9350-4933-9334-4e5e08d3e6aa already 42436:42436 Dec 6 04:58:21 localhost nova_compute_init[282434]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances/b7ed0a2e-9350-4933-9334-4e5e08d3e6aa to system_u:object_r:container_file_t:s0 Dec 6 04:58:21 localhost nova_compute_init[282434]: INFO:nova_statedir:Checking uid: 0 gid: 0 path: /var/lib/nova/instances/b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/console.log Dec 6 04:58:21 localhost nova_compute_init[282434]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/instances/_base/ Dec 6 04:58:21 localhost nova_compute_init[282434]: INFO:nova_statedir:Ownership of /var/lib/nova/instances/_base already 42436:42436 Dec 6 04:58:21 localhost nova_compute_init[282434]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances/_base to system_u:object_r:container_file_t:s0 Dec 6 04:58:21 localhost nova_compute_init[282434]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/instances/_base/55d01870b6a0ce0995b6b5844cf47638cdf46fbf Dec 6 04:58:21 localhost 
nova_compute_init[282434]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/instances/_base/ephemeral_1_0706d66 Dec 6 04:58:21 localhost nova_compute_init[282434]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/instances/locks/ Dec 6 04:58:21 localhost nova_compute_init[282434]: INFO:nova_statedir:Ownership of /var/lib/nova/instances/locks already 42436:42436 Dec 6 04:58:21 localhost nova_compute_init[282434]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances/locks to system_u:object_r:container_file_t:s0 Dec 6 04:58:21 localhost nova_compute_init[282434]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/instances/locks/nova-55d01870b6a0ce0995b6b5844cf47638cdf46fbf Dec 6 04:58:21 localhost nova_compute_init[282434]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/instances/locks/nova-ephemeral_1_0706d66 Dec 6 04:58:21 localhost nova_compute_init[282434]: INFO:nova_statedir:Checking uid: 0 gid: 0 path: /var/lib/nova/delay-nova-compute Dec 6 04:58:21 localhost nova_compute_init[282434]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ Dec 6 04:58:21 localhost nova_compute_init[282434]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436 Dec 6 04:58:21 localhost nova_compute_init[282434]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0 Dec 6 04:58:21 localhost nova_compute_init[282434]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey Dec 6 04:58:21 localhost nova_compute_init[282434]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config Dec 6 04:58:21 localhost nova_compute_init[282434]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/ Dec 6 04:58:21 localhost nova_compute_init[282434]: INFO:nova_statedir:Ownership of /var/lib/nova/.cache already 42436:42436 Dec 
6 04:58:21 localhost nova_compute_init[282434]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.cache to system_u:object_r:container_file_t:s0 Dec 6 04:58:21 localhost nova_compute_init[282434]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/ Dec 6 04:58:21 localhost nova_compute_init[282434]: INFO:nova_statedir:Ownership of /var/lib/nova/.cache/python-entrypoints already 42436:42436 Dec 6 04:58:21 localhost nova_compute_init[282434]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.cache/python-entrypoints to system_u:object_r:container_file_t:s0 Dec 6 04:58:21 localhost nova_compute_init[282434]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/b234715fc878456b41e32c4fbc669b417044dbe6c6684bbc9059e5c93396ffea Dec 6 04:58:21 localhost nova_compute_init[282434]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/20273498b7380904530133bcb3f720bd45f4f00b810dc4597d81d23acd8f9673 Dec 6 04:58:21 localhost nova_compute_init[282434]: INFO:nova_statedir:Nova statedir ownership complete Dec 6 04:58:21 localhost systemd[1]: libpod-a90088f7335f0424abe9208b181fba6d3fc6d1408325e575f0ba866a5d87ad9b.scope: Deactivated successfully. 
Dec 6 04:58:21 localhost podman[282435]: 2025-12-06 09:58:21.59356329 +0000 UTC m=+0.055397151 container died a90088f7335f0424abe9208b181fba6d3fc6d1408325e575f0ba866a5d87ad9b (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Dec 6 04:58:21 localhost podman[282446]: 2025-12-06 09:58:21.679663369 +0000 UTC m=+0.081038243 container cleanup a90088f7335f0424abe9208b181fba6d3fc6d1408325e575f0ba866a5d87ad9b (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=nova_compute_init, 
maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}) Dec 6 04:58:21 localhost systemd[1]: libpod-conmon-a90088f7335f0424abe9208b181fba6d3fc6d1408325e575f0ba866a5d87ad9b.scope: Deactivated successfully. Dec 6 04:58:21 localhost nova_compute[282193]: 2025-12-06 09:58:21.773 282197 DEBUG oslo_concurrency.processutils [None req-c2cff4e5-3d1d-47f9-bef4-e7abce6c5152 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 6 04:58:21 localhost nova_compute[282193]: 2025-12-06 09:58:21.871 282197 DEBUG nova.virt.libvirt.driver [None req-c2cff4e5-3d1d-47f9-bef4-e7abce6c5152 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 6 04:58:21 localhost nova_compute[282193]: 2025-12-06 09:58:21.872 282197 DEBUG nova.virt.libvirt.driver [None req-c2cff4e5-3d1d-47f9-bef4-e7abce6c5152 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 6 04:58:22 localhost nova_compute[282193]: 2025-12-06 09:58:22.072 282197 WARNING 
nova.virt.libvirt.driver [None req-c2cff4e5-3d1d-47f9-bef4-e7abce6c5152 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 6 04:58:22 localhost nova_compute[282193]: 2025-12-06 09:58:22.073 282197 DEBUG nova.compute.resource_tracker [None req-c2cff4e5-3d1d-47f9-bef4-e7abce6c5152 - - - - - -] Hypervisor/Node resource view: name=np0005548789.localdomain free_ram=11849MB free_disk=41.83721923828125GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", 
"numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 6 04:58:22 localhost nova_compute[282193]: 2025-12-06 09:58:22.074 282197 DEBUG oslo_concurrency.lockutils [None req-c2cff4e5-3d1d-47f9-bef4-e7abce6c5152 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 04:58:22 localhost nova_compute[282193]: 2025-12-06 09:58:22.074 282197 DEBUG oslo_concurrency.lockutils [None req-c2cff4e5-3d1d-47f9-bef4-e7abce6c5152 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 04:58:22 localhost nova_compute[282193]: 2025-12-06 09:58:22.268 282197 DEBUG nova.compute.resource_tracker [None req-c2cff4e5-3d1d-47f9-bef4-e7abce6c5152 - - - - - -] Instance b7ed0a2e-9350-4933-9334-4e5e08d3e6aa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 6 04:58:22 localhost nova_compute[282193]: 2025-12-06 09:58:22.268 282197 DEBUG nova.compute.resource_tracker [None req-c2cff4e5-3d1d-47f9-bef4-e7abce6c5152 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 6 04:58:22 localhost nova_compute[282193]: 2025-12-06 09:58:22.269 282197 DEBUG nova.compute.resource_tracker [None req-c2cff4e5-3d1d-47f9-bef4-e7abce6c5152 - - - - - -] Final resource view: name=np0005548789.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 6 04:58:22 localhost nova_compute[282193]: 2025-12-06 09:58:22.344 282197 DEBUG nova.scheduler.client.report [None req-c2cff4e5-3d1d-47f9-bef4-e7abce6c5152 - - - - - -] Refreshing inventories for resource provider 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Dec 6 04:58:22 localhost nova_compute[282193]: 2025-12-06 09:58:22.394 282197 DEBUG nova.scheduler.client.report [None req-c2cff4e5-3d1d-47f9-bef4-e7abce6c5152 - - - - - -] Updating ProviderTree inventory for provider 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Dec 6 04:58:22 
localhost nova_compute[282193]: 2025-12-06 09:58:22.394 282197 DEBUG nova.compute.provider_tree [None req-c2cff4e5-3d1d-47f9-bef4-e7abce6c5152 - - - - - -] Updating inventory in ProviderTree for provider 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Dec 6 04:58:22 localhost systemd[1]: var-lib-containers-storage-overlay-ceb10c7340fd1e23819ce0ca67ef154f878e8bf4b0dc6dca13387cdceeb0c7ee-merged.mount: Deactivated successfully. Dec 6 04:58:22 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a90088f7335f0424abe9208b181fba6d3fc6d1408325e575f0ba866a5d87ad9b-userdata-shm.mount: Deactivated successfully. Dec 6 04:58:22 localhost nova_compute[282193]: 2025-12-06 09:58:22.409 282197 DEBUG nova.scheduler.client.report [None req-c2cff4e5-3d1d-47f9-bef4-e7abce6c5152 - - - - - -] Refreshing aggregate associations for resource provider 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Dec 6 04:58:22 localhost systemd[1]: session-60.scope: Deactivated successfully. Dec 6 04:58:22 localhost systemd[1]: session-60.scope: Consumed 1min 30.613s CPU time. Dec 6 04:58:22 localhost systemd-logind[766]: Session 60 logged out. Waiting for processes to exit. 
Dec 6 04:58:22 localhost nova_compute[282193]: 2025-12-06 09:58:22.434 282197 DEBUG nova.scheduler.client.report [None req-c2cff4e5-3d1d-47f9-bef4-e7abce6c5152 - - - - - -] Refreshing trait associations for resource provider 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad, traits: HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_FDC,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_RESCUE_BFV,HW_CPU_X86_AVX2,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SHA,HW_CPU_X86_BMI2,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_CLMUL,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSSE3,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_AVX,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE4A,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AMD_SVM,HW_CPU_X86_FMA3,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_F16C,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_ABM,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_VIRTIO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Dec 6 04:58:22 localhost systemd-logind[766]: Removed session 60. 
Dec 6 04:58:22 localhost nova_compute[282193]: 2025-12-06 09:58:22.470 282197 DEBUG oslo_concurrency.processutils [None req-c2cff4e5-3d1d-47f9-bef4-e7abce6c5152 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 6 04:58:22 localhost nova_compute[282193]: 2025-12-06 09:58:22.908 282197 DEBUG oslo_concurrency.processutils [None req-c2cff4e5-3d1d-47f9-bef4-e7abce6c5152 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 6 04:58:22 localhost nova_compute[282193]: 2025-12-06 09:58:22.914 282197 DEBUG nova.virt.libvirt.host [None req-c2cff4e5-3d1d-47f9-bef4-e7abce6c5152 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N Dec 6 04:58:22 localhost nova_compute[282193]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803#033[00m Dec 6 04:58:22 localhost nova_compute[282193]: 2025-12-06 09:58:22.915 282197 INFO nova.virt.libvirt.host [None req-c2cff4e5-3d1d-47f9-bef4-e7abce6c5152 - - - - - -] kernel doesn't support AMD SEV#033[00m Dec 6 04:58:22 localhost nova_compute[282193]: 2025-12-06 09:58:22.917 282197 DEBUG nova.compute.provider_tree [None req-c2cff4e5-3d1d-47f9-bef4-e7abce6c5152 - - - - - -] Inventory has not changed in ProviderTree for provider: 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 6 04:58:22 localhost nova_compute[282193]: 2025-12-06 09:58:22.917 282197 DEBUG nova.virt.libvirt.driver [None req-c2cff4e5-3d1d-47f9-bef4-e7abce6c5152 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m Dec 6 04:58:22 localhost 
nova_compute[282193]: 2025-12-06 09:58:22.958 282197 DEBUG nova.scheduler.client.report [None req-c2cff4e5-3d1d-47f9-bef4-e7abce6c5152 - - - - - -] Inventory has not changed for provider 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 6 04:58:23 localhost nova_compute[282193]: 2025-12-06 09:58:23.021 282197 DEBUG nova.compute.resource_tracker [None req-c2cff4e5-3d1d-47f9-bef4-e7abce6c5152 - - - - - -] Compute_service record updated for np0005548789.localdomain:np0005548789.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 6 04:58:23 localhost nova_compute[282193]: 2025-12-06 09:58:23.022 282197 DEBUG oslo_concurrency.lockutils [None req-c2cff4e5-3d1d-47f9-bef4-e7abce6c5152 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.948s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 04:58:23 localhost nova_compute[282193]: 2025-12-06 09:58:23.022 282197 DEBUG nova.service [None req-c2cff4e5-3d1d-47f9-bef4-e7abce6c5152 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182#033[00m Dec 6 04:58:23 localhost nova_compute[282193]: 2025-12-06 09:58:23.056 282197 DEBUG nova.service [None req-c2cff4e5-3d1d-47f9-bef4-e7abce6c5152 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199#033[00m Dec 6 
04:58:23 localhost nova_compute[282193]: 2025-12-06 09:58:23.056 282197 DEBUG nova.servicegroup.drivers.db [None req-c2cff4e5-3d1d-47f9-bef4-e7abce6c5152 - - - - - -] DB_Driver: join new ServiceGroup member np0005548789.localdomain to the compute group, service = join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44#033[00m Dec 6 04:58:23 localhost podman[241090]: time="2025-12-06T09:58:23Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 6 04:58:23 localhost podman[241090]: @ - - [06/Dec/2025:09:58:23 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 150643 "" "Go-http-client/1.1" Dec 6 04:58:23 localhost nova_compute[282193]: 2025-12-06 09:58:23.950 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:58:23 localhost podman[241090]: @ - - [06/Dec/2025:09:58:23 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17716 "" "Go-http-client/1.1" Dec 6 04:58:25 localhost nova_compute[282193]: 2025-12-06 09:58:25.324 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:58:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5. 
Dec 6 04:58:25 localhost podman[282512]: 2025-12-06 09:58:25.939536603 +0000 UTC m=+0.092496776 container health_status 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Dec 6 04:58:26 localhost podman[282512]: 2025-12-06 09:58:26.050553521 +0000 UTC m=+0.203513664 container exec_died 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': 
['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0) Dec 6 04:58:26 localhost systemd[1]: 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.service: Deactivated successfully. Dec 6 04:58:28 localhost nova_compute[282193]: 2025-12-06 09:58:28.952 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:58:30 localhost nova_compute[282193]: 2025-12-06 09:58:30.327 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:58:32 localhost sshd[282538]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:58:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999. 
Dec 6 04:58:32 localhost podman[282540]: 2025-12-06 09:58:32.923511523 +0000 UTC m=+0.082695194 container health_status 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125) Dec 6 04:58:32 localhost podman[282540]: 2025-12-06 09:58:32.9285901 +0000 UTC 
m=+0.087773821 container exec_died 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 6 04:58:32 localhost systemd[1]: 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.service: Deactivated successfully. 
Dec 6 04:58:33 localhost sshd[282558]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:58:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8251 DF PROTO=TCP SPT=50026 DPT=9102 SEQ=3864606856 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52E5C9970000000001030307) Dec 6 04:58:33 localhost nova_compute[282193]: 2025-12-06 09:58:33.955 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:58:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8252 DF PROTO=TCP SPT=50026 DPT=9102 SEQ=3864606856 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52E5CDAF0000000001030307) Dec 6 04:58:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6961 DF PROTO=TCP SPT=55472 DPT=9102 SEQ=2212927204 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52E5CFEF0000000001030307) Dec 6 04:58:35 localhost nova_compute[282193]: 2025-12-06 09:58:35.328 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:58:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8253 DF PROTO=TCP SPT=50026 DPT=9102 SEQ=3864606856 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52E5D5AF0000000001030307) Dec 6 04:58:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941. 
Dec 6 04:58:36 localhost podman[282560]: 2025-12-06 09:58:36.920870564 +0000 UTC m=+0.082118956 container health_status b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Dec 6 04:58:36 localhost podman[282560]: 2025-12-06 09:58:36.929898262 +0000 UTC m=+0.091146634 container exec_died b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': 
['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Dec 6 04:58:36 localhost systemd[1]: b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941.service: Deactivated successfully. Dec 6 04:58:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1218 DF PROTO=TCP SPT=55404 DPT=9102 SEQ=3800488960 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52E5D9EF0000000001030307) Dec 6 04:58:37 localhost ovn_metadata_agent[160504]: 2025-12-06 09:58:37.914 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:6c:02', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:a8:2f:0c:cb:a1'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 04:58:37 localhost ovn_metadata_agent[160504]: 2025-12-06 09:58:37.915 160509 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Dec 6 04:58:37 localhost nova_compute[282193]: 2025-12-06 09:58:37.914 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:58:38 localhost nova_compute[282193]: 2025-12-06 09:58:38.958 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 
04:58:40 localhost nova_compute[282193]: 2025-12-06 09:58:40.331 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:58:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8254 DF PROTO=TCP SPT=50026 DPT=9102 SEQ=3864606856 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52E5E56F0000000001030307) Dec 6 04:58:42 localhost sshd[282651]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:58:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094. Dec 6 04:58:42 localhost podman[282653]: 2025-12-06 09:58:42.937709867 +0000 UTC m=+0.088747201 container health_status bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, config_id=edpm, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': 
['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Dec 6 04:58:42 localhost podman[282653]: 2025-12-06 09:58:42.974379169 +0000 UTC m=+0.125416503 container exec_died bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true) Dec 6 04:58:42 localhost systemd[1]: bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094.service: Deactivated successfully. Dec 6 04:58:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e. Dec 6 04:58:43 localhost podman[282672]: 2025-12-06 09:58:43.091709211 +0000 UTC m=+0.075455680 container health_status 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, build-date=2025-08-20T13:12:41, version=9.6, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, config_id=edpm, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., distribution-scope=public, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, architecture=x86_64, release=1755695350, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b) Dec 6 04:58:43 localhost podman[282672]: 2025-12-06 09:58:43.106036164 +0000 UTC m=+0.089782613 container exec_died 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, release=1755695350, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, container_name=openstack_network_exporter, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.buildah.version=1.33.7) Dec 6 04:58:43 localhost systemd[1]: 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e.service: Deactivated successfully. 
Dec 6 04:58:43 localhost nova_compute[282193]: 2025-12-06 09:58:43.962 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:58:44 localhost sshd[282711]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:58:45 localhost nova_compute[282193]: 2025-12-06 09:58:45.333 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:58:46 localhost openstack_network_exporter[243110]: ERROR 09:58:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 04:58:46 localhost openstack_network_exporter[243110]: ERROR 09:58:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 04:58:46 localhost openstack_network_exporter[243110]: ERROR 09:58:46 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 6 04:58:46 localhost openstack_network_exporter[243110]: ERROR 09:58:46 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 6 04:58:46 localhost openstack_network_exporter[243110]: Dec 6 04:58:46 localhost openstack_network_exporter[243110]: ERROR 09:58:46 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 6 04:58:46 localhost openstack_network_exporter[243110]: Dec 6 04:58:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6. 
Dec 6 04:58:46 localhost ovn_metadata_agent[160504]: 2025-12-06 09:58:46.917 160509 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b142a5ef-fbed-4e92-aa78-e3ad080c6370, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 6 04:58:46 localhost podman[282713]: 2025-12-06 09:58:46.931851239 +0000 UTC m=+0.093638452 container health_status 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base 
Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Dec 6 04:58:46 localhost podman[282713]: 2025-12-06 09:58:46.944145968 +0000 UTC m=+0.105933131 container exec_died 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=multipathd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, 
org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0) Dec 6 04:58:46 localhost systemd[1]: 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6.service: Deactivated successfully. Dec 6 04:58:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:58:47.289 160509 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 04:58:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:58:47.289 160509 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 04:58:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:58:47.291 160509 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 04:58:48 localhost nova_compute[282193]: 2025-12-06 09:58:48.965 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:58:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8255 DF PROTO=TCP SPT=50026 DPT=9102 SEQ=3864606856 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52E605EF0000000001030307) Dec 6 04:58:50 localhost nova_compute[282193]: 2025-12-06 09:58:50.334 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:58:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538. Dec 6 04:58:50 localhost podman[282731]: 2025-12-06 09:58:50.919389756 +0000 UTC m=+0.081553328 container health_status d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 6 04:58:50 localhost podman[282731]: 2025-12-06 09:58:50.933235954 +0000 UTC m=+0.095399516 container exec_died d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 
(image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Dec 6 04:58:50 localhost systemd[1]: d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538.service: Deactivated successfully. 
Dec 6 04:58:53 localhost podman[241090]: time="2025-12-06T09:58:53Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 6 04:58:53 localhost podman[241090]: @ - - [06/Dec/2025:09:58:53 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 150643 "" "Go-http-client/1.1" Dec 6 04:58:53 localhost nova_compute[282193]: 2025-12-06 09:58:53.969 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:58:53 localhost podman[241090]: @ - - [06/Dec/2025:09:58:53 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17728 "" "Go-http-client/1.1" Dec 6 04:58:54 localhost sshd[282755]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:58:55 localhost nova_compute[282193]: 2025-12-06 09:58:55.059 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 04:58:55 localhost nova_compute[282193]: 2025-12-06 09:58:55.337 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:58:55 localhost nova_compute[282193]: 2025-12-06 09:58:55.518 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Triggering sync for uuid b7ed0a2e-9350-4933-9334-4e5e08d3e6aa _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m Dec 6 04:58:55 localhost nova_compute[282193]: 2025-12-06 09:58:55.519 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" by 
"nova.compute.manager.ComputeManager._sync_power_states.._sync..query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 04:58:55 localhost nova_compute[282193]: 2025-12-06 09:58:55.519 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" acquired by "nova.compute.manager.ComputeManager._sync_power_states.._sync..query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 04:58:55 localhost nova_compute[282193]: 2025-12-06 09:58:55.519 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 04:58:55 localhost nova_compute[282193]: 2025-12-06 09:58:55.567 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" "released" by "nova.compute.manager.ComputeManager._sync_power_states.._sync..query_driver_power_state_and_sync" :: held 0.048s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 04:58:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5. Dec 6 04:58:56 localhost systemd[1]: tmp-crun.j9TwJW.mount: Deactivated successfully. 
Dec 6 04:58:56 localhost podman[282757]: 2025-12-06 09:58:56.453410266 +0000 UTC m=+0.088803003 container health_status 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS) Dec 6 04:58:56 localhost podman[282757]: 2025-12-06 09:58:56.490132809 +0000 UTC m=+0.125525526 container exec_died 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2) Dec 6 04:58:56 localhost systemd[1]: 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.service: Deactivated successfully. 
Dec 6 04:58:56 localhost nova_compute[282193]: 2025-12-06 09:58:56.950 282197 DEBUG nova.compute.manager [None req-ac182712-08dd-46f8-8abb-4b803f552cb2 ff0049f3313348bdb67886d170c1c765 3d603431c0bb4967bafc7a0aa6108bfe - - default default] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 6 04:58:56 localhost nova_compute[282193]: 2025-12-06 09:58:56.954 282197 INFO nova.compute.manager [None req-ac182712-08dd-46f8-8abb-4b803f552cb2 ff0049f3313348bdb67886d170c1c765 3d603431c0bb4967bafc7a0aa6108bfe - - default default] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Retrieving diagnostics#033[00m Dec 6 04:58:58 localhost nova_compute[282193]: 2025-12-06 09:58:58.970 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:59:00 localhost nova_compute[282193]: 2025-12-06 09:59:00.338 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:59:03 localhost nova_compute[282193]: 2025-12-06 09:59:03.267 282197 DEBUG oslo_concurrency.lockutils [None req-9994761a-656d-44be-9816-3ec8b7c0a5d2 ff0049f3313348bdb67886d170c1c765 3d603431c0bb4967bafc7a0aa6108bfe - - default default] Acquiring lock "b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" by "nova.compute.manager.ComputeManager.stop_instance..do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 04:59:03 localhost nova_compute[282193]: 2025-12-06 09:59:03.268 282197 DEBUG oslo_concurrency.lockutils [None req-9994761a-656d-44be-9816-3ec8b7c0a5d2 ff0049f3313348bdb67886d170c1c765 3d603431c0bb4967bafc7a0aa6108bfe - - default default] Lock "b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" acquired by "nova.compute.manager.ComputeManager.stop_instance..do_stop_instance" :: waited 0.001s inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 04:59:03 localhost nova_compute[282193]: 2025-12-06 09:59:03.268 282197 DEBUG nova.compute.manager [None req-9994761a-656d-44be-9816-3ec8b7c0a5d2 ff0049f3313348bdb67886d170c1c765 3d603431c0bb4967bafc7a0aa6108bfe - - default default] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 6 04:59:03 localhost nova_compute[282193]: 2025-12-06 09:59:03.273 282197 DEBUG nova.compute.manager [None req-9994761a-656d-44be-9816-3ec8b7c0a5d2 ff0049f3313348bdb67886d170c1c765 3d603431c0bb4967bafc7a0aa6108bfe - - default default] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m Dec 6 04:59:03 localhost nova_compute[282193]: 2025-12-06 09:59:03.278 282197 DEBUG nova.objects.instance [None req-9994761a-656d-44be-9816-3ec8b7c0a5d2 ff0049f3313348bdb67886d170c1c765 3d603431c0bb4967bafc7a0aa6108bfe - - default default] Lazy-loading 'flavor' on Instance uuid b7ed0a2e-9350-4933-9334-4e5e08d3e6aa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 6 04:59:03 localhost nova_compute[282193]: 2025-12-06 09:59:03.323 282197 DEBUG nova.virt.libvirt.driver [None req-9994761a-656d-44be-9816-3ec8b7c0a5d2 ff0049f3313348bdb67886d170c1c765 3d603431c0bb4967bafc7a0aa6108bfe - - default default] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m Dec 6 04:59:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46738 DF 
PROTO=TCP SPT=54848 DPT=9102 SEQ=227793754 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52E63EC70000000001030307) Dec 6 04:59:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999. Dec 6 04:59:03 localhost podman[282781]: 2025-12-06 09:59:03.911597285 +0000 UTC m=+0.079253018 container health_status 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true) Dec 6 04:59:03 localhost podman[282781]: 2025-12-06 09:59:03.9422102 +0000 UTC m=+0.109866003 container exec_died 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator 
team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent) Dec 6 04:59:03 localhost systemd[1]: 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.service: Deactivated successfully. Dec 6 04:59:03 localhost nova_compute[282193]: 2025-12-06 09:59:03.979 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:59:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46739 DF PROTO=TCP SPT=54848 DPT=9102 SEQ=227793754 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52E642EF0000000001030307) Dec 6 04:59:05 localhost nova_compute[282193]: 2025-12-06 09:59:05.340 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:59:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8256 DF PROTO=TCP SPT=50026 DPT=9102 SEQ=3864606856 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52E645EF0000000001030307) Dec 6 04:59:05 localhost kernel: device tap86fc0b7a-fb left promiscuous mode Dec 6 04:59:05 localhost NetworkManager[5973]: [1765015145.7730] device (tap86fc0b7a-fb): state change: disconnected -> unmanaged (reason 'unmanaged', sys-iface-state: 'removed') Dec 6 04:59:05 localhost ovn_controller[154851]: 2025-12-06T09:59:05Z|00052|binding|INFO|Releasing lport 86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b from this chassis (sb_readonly=0) Dec 6 04:59:05 localhost ovn_controller[154851]: 2025-12-06T09:59:05Z|00053|binding|INFO|Setting lport 86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b down in Southbound Dec 6 04:59:05 localhost nova_compute[282193]: 2025-12-06 09:59:05.780 282197 
DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:59:05 localhost ovn_controller[154851]: 2025-12-06T09:59:05Z|00054|binding|INFO|Removing iface tap86fc0b7a-fb ovn-installed in OVS Dec 6 04:59:05 localhost nova_compute[282193]: 2025-12-06 09:59:05.782 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:59:05 localhost nova_compute[282193]: 2025-12-06 09:59:05.791 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:59:05 localhost systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000002.scope: Deactivated successfully. Dec 6 04:59:05 localhost systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000002.scope: Consumed 3min 49.447s CPU time. Dec 6 04:59:05 localhost systemd-machined[84444]: Machine qemu-1-instance-00000002 terminated. 
Dec 6 04:59:05 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:05.929 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:64:77:f3 192.168.0.162'], port_security=['fa:16:3e:64:77:f3 192.168.0.162'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.0.162/24', 'neutron:device_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'neutron:device_owner': 'compute:nova', 'neutron:host_id': 'np0005548789.localdomain', 'neutron:mtu': '', 'neutron:network_name': 'neutron-652b6bdc-40ce-45b7-8aa5-3bca79987993', 'neutron:port_capabilities': '', 'neutron:port_fip': '192.168.122.20', 'neutron:port_name': '', 'neutron:project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'neutron:revision_number': '7', 'neutron:security_group_ids': '65e67ecb-ffcf-41e6-8b8b-ed491f2580ec 7ce08e20-be94-4509-a371-aa5c036416af', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7872d306-938e-4ee0-be61-57ba3983d747, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 04:59:05 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:05.932 160509 INFO neutron.agent.ovn.metadata.agent [-] Port 86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b in datapath 652b6bdc-40ce-45b7-8aa5-3bca79987993 unbound from our chassis#033[00m Dec 6 04:59:05 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:05.934 160509 DEBUG neutron.agent.ovn.metadata.agent [-] No 
valid VIF ports were found for network 652b6bdc-40ce-45b7-8aa5-3bca79987993, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 6 04:59:05 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:05.935 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[39edd12a-91dc-4645-91d6-31a216bde723]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 04:59:05 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:05.936 160509 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-652b6bdc-40ce-45b7-8aa5-3bca79987993 namespace which is not needed anymore#033[00m Dec 6 04:59:06 localhost systemd[1]: libpod-12ba6c9101b1d507aed708fcb0f1f5958064f89edf86d2af4b3dde5856898445.scope: Deactivated successfully. Dec 6 04:59:06 localhost podman[282833]: 2025-12-06 09:59:06.133520607 +0000 UTC m=+0.080613370 container died 12ba6c9101b1d507aed708fcb0f1f5958064f89edf86d2af4b3dde5856898445 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=neutron-haproxy-ovnmeta-652b6bdc-40ce-45b7-8aa5-3bca79987993, vcs-type=git, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., url=https://www.redhat.com, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:14:25Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, release=1761123044, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 6 04:59:06 localhost nova_compute[282193]: 2025-12-06 09:59:06.162 282197 DEBUG nova.compute.manager [req-d65108c9-0142-483f-8f47-53d9963f6f31 req-60ed4ced-25b4-415d-9320-569c668981a5 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Received event network-vif-unplugged-86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Dec 6 04:59:06 localhost nova_compute[282193]: 2025-12-06 09:59:06.164 282197 DEBUG oslo_concurrency.lockutils [req-d65108c9-0142-483f-8f47-53d9963f6f31 req-60ed4ced-25b4-415d-9320-569c668981a5 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Acquiring lock "b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 04:59:06 localhost nova_compute[282193]: 2025-12-06 09:59:06.165 282197 DEBUG oslo_concurrency.lockutils [req-d65108c9-0142-483f-8f47-53d9963f6f31 req-60ed4ced-25b4-415d-9320-569c668981a5 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Lock "b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 04:59:06 localhost nova_compute[282193]: 2025-12-06 
09:59:06.165 282197 DEBUG oslo_concurrency.lockutils [req-d65108c9-0142-483f-8f47-53d9963f6f31 req-60ed4ced-25b4-415d-9320-569c668981a5 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Lock "b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 04:59:06 localhost nova_compute[282193]: 2025-12-06 09:59:06.166 282197 DEBUG nova.compute.manager [req-d65108c9-0142-483f-8f47-53d9963f6f31 req-60ed4ced-25b4-415d-9320-569c668981a5 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] No waiting events found dispatching network-vif-unplugged-86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Dec 6 04:59:06 localhost nova_compute[282193]: 2025-12-06 09:59:06.166 282197 WARNING nova.compute.manager [req-d65108c9-0142-483f-8f47-53d9963f6f31 req-60ed4ced-25b4-415d-9320-569c668981a5 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Received unexpected event network-vif-unplugged-86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b for instance with vm_state active and task_state powering-off.#033[00m Dec 6 04:59:06 localhost podman[282833]: 2025-12-06 09:59:06.283673623 +0000 UTC m=+0.230766356 container cleanup 12ba6c9101b1d507aed708fcb0f1f5958064f89edf86d2af4b3dde5856898445 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=neutron-haproxy-ovnmeta-652b6bdc-40ce-45b7-8aa5-3bca79987993, tcib_managed=true, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, vendor=Red Hat, Inc., 
com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1761123044, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, build-date=2025-11-19T00:14:25Z, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Dec 6 04:59:06 localhost podman[282847]: 2025-12-06 09:59:06.298420837 +0000 UTC m=+0.154844770 container cleanup 12ba6c9101b1d507aed708fcb0f1f5958064f89edf86d2af4b3dde5856898445 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=neutron-haproxy-ovnmeta-652b6bdc-40ce-45b7-8aa5-3bca79987993, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, release=1761123044, vendor=Red 
Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, version=17.1.12, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c) Dec 6 04:59:06 localhost systemd[1]: libpod-conmon-12ba6c9101b1d507aed708fcb0f1f5958064f89edf86d2af4b3dde5856898445.scope: Deactivated successfully. Dec 6 04:59:06 localhost nova_compute[282193]: 2025-12-06 09:59:06.345 282197 INFO nova.virt.libvirt.driver [None req-9994761a-656d-44be-9816-3ec8b7c0a5d2 ff0049f3313348bdb67886d170c1c765 3d603431c0bb4967bafc7a0aa6108bfe - - default default] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Instance shutdown successfully after 3 seconds.#033[00m Dec 6 04:59:06 localhost nova_compute[282193]: 2025-12-06 09:59:06.352 282197 INFO nova.virt.libvirt.driver [-] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Instance destroyed successfully.#033[00m Dec 6 04:59:06 localhost nova_compute[282193]: 2025-12-06 09:59:06.353 282197 DEBUG nova.objects.instance [None req-9994761a-656d-44be-9816-3ec8b7c0a5d2 ff0049f3313348bdb67886d170c1c765 3d603431c0bb4967bafc7a0aa6108bfe - - default default] Lazy-loading 'numa_topology' on Instance uuid b7ed0a2e-9350-4933-9334-4e5e08d3e6aa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 6 04:59:06 localhost podman[282864]: 2025-12-06 09:59:06.371904526 +0000 UTC m=+0.067027961 container remove 12ba6c9101b1d507aed708fcb0f1f5958064f89edf86d2af4b3dde5856898445 
(image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=neutron-haproxy-ovnmeta-652b6bdc-40ce-45b7-8aa5-3bca79987993, vcs-type=git, build-date=2025-11-19T00:14:25Z, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, url=https://www.redhat.com, release=1761123044, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, io.buildah.version=1.41.4, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team) Dec 6 04:59:06 localhost nova_compute[282193]: 2025-12-06 09:59:06.377 282197 DEBUG nova.compute.manager [None req-9994761a-656d-44be-9816-3ec8b7c0a5d2 ff0049f3313348bdb67886d170c1c765 3d603431c0bb4967bafc7a0aa6108bfe - - default default] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 6 04:59:06 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:06.377 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[1b594e9c-e502-486a-b75e-4758075ccdba]: (4, ('Sat Dec 6 09:59:06 AM UTC 
2025 Stopping container neutron-haproxy-ovnmeta-652b6bdc-40ce-45b7-8aa5-3bca79987993 (12ba6c9101b1d507aed708fcb0f1f5958064f89edf86d2af4b3dde5856898445)\n12ba6c9101b1d507aed708fcb0f1f5958064f89edf86d2af4b3dde5856898445\nSat Dec 6 09:59:06 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-652b6bdc-40ce-45b7-8aa5-3bca79987993 (12ba6c9101b1d507aed708fcb0f1f5958064f89edf86d2af4b3dde5856898445)\n12ba6c9101b1d507aed708fcb0f1f5958064f89edf86d2af4b3dde5856898445\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 04:59:06 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:06.383 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[548e6120-0748-4c2a-bfa5-82a9a964071a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 04:59:06 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:06.388 160509 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap652b6bdc-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 6 04:59:06 localhost nova_compute[282193]: 2025-12-06 09:59:06.391 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:59:06 localhost kernel: device tap652b6bdc-40 left promiscuous mode Dec 6 04:59:06 localhost nova_compute[282193]: 2025-12-06 09:59:06.399 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:59:06 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:06.403 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[f4182768-2be8-4d62-9cb4-9c92c4f47b77]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 04:59:06 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:06.420 160674 DEBUG 
oslo.privsep.daemon [-] privsep: reply[b7a661a2-56f3-444a-acde-c631a950e5b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 04:59:06 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:06.421 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[4ad10031-8146-4312-bfdc-f04ca7e6d6d0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 04:59:06 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:06.435 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[13b2df6a-5855-449b-857e-61cbe3aa8012]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 
'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 710075, 'reachable_time': 38110, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], 
['IFLA_INET6_STATS', {'num': 37, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1356, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282883, 'error': None, 'target': 'ovnmeta-652b6bdc-40ce-45b7-8aa5-3bca79987993', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 04:59:06 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:06.445 160720 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-652b6bdc-40ce-45b7-8aa5-3bca79987993 deleted. 
remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m Dec 6 04:59:06 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:06.446 160720 DEBUG oslo.privsep.daemon [-] privsep: reply[cf46cea4-461b-4221-892f-87a4b245b8fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 04:59:06 localhost nova_compute[282193]: 2025-12-06 09:59:06.475 282197 DEBUG oslo_concurrency.lockutils [None req-9994761a-656d-44be-9816-3ec8b7c0a5d2 ff0049f3313348bdb67886d170c1c765 3d603431c0bb4967bafc7a0aa6108bfe - - default default] Lock "b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" "released" by "nova.compute.manager.ComputeManager.stop_instance..do_stop_instance" :: held 3.207s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 04:59:06 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46740 DF PROTO=TCP SPT=54848 DPT=9102 SEQ=227793754 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52E64AF00000000001030307) Dec 6 04:59:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941. Dec 6 04:59:07 localhost systemd[1]: var-lib-containers-storage-overlay-31fbdb956fdb20faf0121dfd2c519c9e748cc292d5fc54ebad7f5d80f477ded1-merged.mount: Deactivated successfully. Dec 6 04:59:07 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-12ba6c9101b1d507aed708fcb0f1f5958064f89edf86d2af4b3dde5856898445-userdata-shm.mount: Deactivated successfully. Dec 6 04:59:07 localhost systemd[1]: run-netns-ovnmeta\x2d652b6bdc\x2d40ce\x2d45b7\x2d8aa5\x2d3bca79987993.mount: Deactivated successfully. 
Dec 6 04:59:07 localhost podman[282885]: 2025-12-06 09:59:07.17831114 +0000 UTC m=+0.083335974 container health_status b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Dec 6 04:59:07 localhost podman[282885]: 2025-12-06 09:59:07.186455202 +0000 UTC m=+0.091480066 container exec_died b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Dec 6 04:59:07 localhost systemd[1]: b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941.service: Deactivated successfully. Dec 6 04:59:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6962 DF PROTO=TCP SPT=55472 DPT=9102 SEQ=2212927204 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52E64DEF0000000001030307) Dec 6 04:59:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:59:07.911 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'name': 'test', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'np0005548789.localdomain', 'OS-EXT-STS:vm_state': 'shutdown', 'tenant_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'hostId': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'status': 'stopped', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228 Dec 6 04:59:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:59:07.912 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters Dec 6 04:59:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:59:07.913 12 DEBUG ceilometer.compute.pollsters [-] Instance b7ed0a2e-9350-4933-9334-4e5e08d3e6aa was shut off while getting sample of disk.device.read.bytes: Failed to inspect data of instance , domain state is SHUTOFF. 
get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:152 Dec 6 04:59:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:59:07.913 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 6 04:59:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:59:07.913 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters Dec 6 04:59:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:59:07.914 12 DEBUG ceilometer.compute.pollsters [-] Instance b7ed0a2e-9350-4933-9334-4e5e08d3e6aa was shut off while getting sample of network.incoming.packets.drop: Failed to inspect data of instance , domain state is SHUTOFF. get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:152 Dec 6 04:59:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:59:07.914 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters Dec 6 04:59:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:59:07.915 12 DEBUG ceilometer.compute.pollsters [-] Instance b7ed0a2e-9350-4933-9334-4e5e08d3e6aa was shut off while getting sample of disk.device.usage: Failed to inspect data of instance , domain state is SHUTOFF. 
get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:152 Dec 6 04:59:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:59:07.915 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters Dec 6 04:59:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:59:07.916 12 DEBUG ceilometer.compute.pollsters [-] Instance b7ed0a2e-9350-4933-9334-4e5e08d3e6aa was shut off while getting sample of network.incoming.bytes: Failed to inspect data of instance , domain state is SHUTOFF. get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:152 Dec 6 04:59:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:59:07.916 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 6 04:59:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:59:07.917 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters Dec 6 04:59:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:59:07.917 12 DEBUG ceilometer.compute.pollsters [-] Instance b7ed0a2e-9350-4933-9334-4e5e08d3e6aa was shut off while getting sample of disk.device.write.bytes: Failed to inspect data of instance , domain state is SHUTOFF. get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:152 Dec 6 04:59:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:59:07.918 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters Dec 6 04:59:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:59:07.919 12 DEBUG ceilometer.compute.pollsters [-] Instance b7ed0a2e-9350-4933-9334-4e5e08d3e6aa was shut off while getting sample of network.incoming.packets: Failed to inspect data of instance , domain state is SHUTOFF. 
get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:152 Dec 6 04:59:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:59:07.919 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 6 04:59:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:59:07.919 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters Dec 6 04:59:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:59:07.920 12 DEBUG ceilometer.compute.pollsters [-] Instance b7ed0a2e-9350-4933-9334-4e5e08d3e6aa was shut off while getting sample of memory.usage: Failed to inspect data of instance , domain state is SHUTOFF. get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:152 Dec 6 04:59:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:59:07.920 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters Dec 6 04:59:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:59:07.921 12 DEBUG ceilometer.compute.pollsters [-] Instance b7ed0a2e-9350-4933-9334-4e5e08d3e6aa was shut off while getting sample of network.outgoing.packets: Failed to inspect data of instance , domain state is SHUTOFF. get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:152 Dec 6 04:59:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:59:07.921 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters Dec 6 04:59:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:59:07.922 12 DEBUG ceilometer.compute.pollsters [-] Instance b7ed0a2e-9350-4933-9334-4e5e08d3e6aa was shut off while getting sample of network.incoming.bytes.delta: Failed to inspect data of instance , domain state is SHUTOFF. 
get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:152 Dec 6 04:59:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:59:07.922 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters Dec 6 04:59:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:59:07.923 12 DEBUG ceilometer.compute.pollsters [-] Instance b7ed0a2e-9350-4933-9334-4e5e08d3e6aa was shut off while getting sample of network.incoming.packets.error: Failed to inspect data of instance , domain state is SHUTOFF. get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:152 Dec 6 04:59:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:59:07.924 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 6 04:59:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:59:07.924 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters Dec 6 04:59:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:59:07.925 12 DEBUG ceilometer.compute.pollsters [-] Instance b7ed0a2e-9350-4933-9334-4e5e08d3e6aa was shut off while getting sample of network.outgoing.bytes: Failed to inspect data of instance , domain state is SHUTOFF. 
get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:152 Dec 6 04:59:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:59:07.925 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters Dec 6 04:59:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:59:07.926 12 DEBUG ceilometer.compute.pollsters [-] Instance b7ed0a2e-9350-4933-9334-4e5e08d3e6aa was shut off while getting sample of disk.device.capacity: Failed to inspect data of instance , domain state is SHUTOFF. get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:152 Dec 6 04:59:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:59:07.926 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters Dec 6 04:59:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:59:07.927 12 DEBUG ceilometer.compute.pollsters [-] Instance b7ed0a2e-9350-4933-9334-4e5e08d3e6aa was shut off while getting sample of disk.device.read.latency: Failed to inspect data of instance , domain state is SHUTOFF. get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:152 Dec 6 04:59:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:59:07.927 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters Dec 6 04:59:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:59:07.928 12 DEBUG ceilometer.compute.pollsters [-] Instance b7ed0a2e-9350-4933-9334-4e5e08d3e6aa was shut off while getting sample of network.outgoing.packets.drop: Failed to inspect data of instance , domain state is SHUTOFF. 
get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:152 Dec 6 04:59:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:59:07.928 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters Dec 6 04:59:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:59:07.929 12 DEBUG ceilometer.compute.pollsters [-] Instance b7ed0a2e-9350-4933-9334-4e5e08d3e6aa was shut off while getting sample of disk.device.write.latency: Failed to inspect data of instance , domain state is SHUTOFF. get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:152 Dec 6 04:59:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:59:07.929 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters Dec 6 04:59:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:59:07.930 12 DEBUG ceilometer.compute.pollsters [-] Instance b7ed0a2e-9350-4933-9334-4e5e08d3e6aa was shut off while getting sample of disk.device.read.requests: Failed to inspect data of instance , domain state is SHUTOFF. get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:152 Dec 6 04:59:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:59:07.930 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters Dec 6 04:59:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:59:07.931 12 DEBUG ceilometer.compute.pollsters [-] Instance b7ed0a2e-9350-4933-9334-4e5e08d3e6aa was shut off while getting sample of network.outgoing.bytes.delta: Failed to inspect data of instance , domain state is SHUTOFF. 
get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:152 Dec 6 04:59:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:59:07.931 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters Dec 6 04:59:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:59:07.932 12 DEBUG ceilometer.compute.pollsters [-] Instance b7ed0a2e-9350-4933-9334-4e5e08d3e6aa was shut off while getting sample of network.outgoing.packets.error: Failed to inspect data of instance , domain state is SHUTOFF. get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:152 Dec 6 04:59:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:59:07.932 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters Dec 6 04:59:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:59:07.933 12 DEBUG ceilometer.compute.pollsters [-] Instance b7ed0a2e-9350-4933-9334-4e5e08d3e6aa was shut off while getting sample of disk.device.allocation: Failed to inspect data of instance , domain state is SHUTOFF. get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:152 Dec 6 04:59:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:59:07.933 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters Dec 6 04:59:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:59:07.934 12 DEBUG ceilometer.compute.pollsters [-] Instance b7ed0a2e-9350-4933-9334-4e5e08d3e6aa was shut off while getting sample of disk.device.write.requests: Failed to inspect data of instance , domain state is SHUTOFF. 
get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:152 Dec 6 04:59:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:59:07.934 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters Dec 6 04:59:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 09:59:07.935 12 DEBUG ceilometer.compute.pollsters [-] Instance b7ed0a2e-9350-4933-9334-4e5e08d3e6aa was shut off while getting sample of cpu: Failed to inspect data of instance , domain state is SHUTOFF. get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:152 Dec 6 04:59:08 localhost nova_compute[282193]: 2025-12-06 09:59:08.204 282197 DEBUG nova.compute.manager [req-b3db8e6d-df2a-4a07-89dc-5e53ac7d2c74 req-50397831-ef50-49d3-b98b-3969cce41c9f 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Received event network-vif-plugged-86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Dec 6 04:59:08 localhost nova_compute[282193]: 2025-12-06 09:59:08.205 282197 DEBUG oslo_concurrency.lockutils [req-b3db8e6d-df2a-4a07-89dc-5e53ac7d2c74 req-50397831-ef50-49d3-b98b-3969cce41c9f 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Acquiring lock "b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 04:59:08 localhost nova_compute[282193]: 2025-12-06 09:59:08.205 282197 DEBUG oslo_concurrency.lockutils [req-b3db8e6d-df2a-4a07-89dc-5e53ac7d2c74 req-50397831-ef50-49d3-b98b-3969cce41c9f 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Lock "b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-events" acquired by 
"nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 04:59:08 localhost nova_compute[282193]: 2025-12-06 09:59:08.205 282197 DEBUG oslo_concurrency.lockutils [req-b3db8e6d-df2a-4a07-89dc-5e53ac7d2c74 req-50397831-ef50-49d3-b98b-3969cce41c9f 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Lock "b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 04:59:08 localhost nova_compute[282193]: 2025-12-06 09:59:08.206 282197 DEBUG nova.compute.manager [req-b3db8e6d-df2a-4a07-89dc-5e53ac7d2c74 req-50397831-ef50-49d3-b98b-3969cce41c9f 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] No waiting events found dispatching network-vif-plugged-86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Dec 6 04:59:08 localhost nova_compute[282193]: 2025-12-06 09:59:08.206 282197 WARNING nova.compute.manager [req-b3db8e6d-df2a-4a07-89dc-5e53ac7d2c74 req-50397831-ef50-49d3-b98b-3969cce41c9f 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Received unexpected event network-vif-plugged-86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b for instance with vm_state stopped and task_state None.#033[00m Dec 6 04:59:08 localhost nova_compute[282193]: 2025-12-06 09:59:08.713 282197 DEBUG nova.compute.manager [None req-0127c97c-bea6-4504-bf33-a61bf9bd186a ff0049f3313348bdb67886d170c1c765 3d603431c0bb4967bafc7a0aa6108bfe - - default default] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Checking state _get_power_state 
/usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 6 04:59:08 localhost nova_compute[282193]: 2025-12-06 09:59:08.740 282197 ERROR oslo_messaging.rpc.server [None req-0127c97c-bea6-4504-bf33-a61bf9bd186a ff0049f3313348bdb67886d170c1c765 3d603431c0bb4967bafc7a0aa6108bfe - - default default] Exception during message handling: nova.exception.InstanceInvalidState: Instance b7ed0a2e-9350-4933-9334-4e5e08d3e6aa in power state shutdown. Cannot get_diagnostics while the instance is in this state. Dec 6 04:59:08 localhost nova_compute[282193]: 2025-12-06 09:59:08.740 282197 ERROR oslo_messaging.rpc.server Traceback (most recent call last): Dec 6 04:59:08 localhost nova_compute[282193]: 2025-12-06 09:59:08.740 282197 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming Dec 6 04:59:08 localhost nova_compute[282193]: 2025-12-06 09:59:08.740 282197 ERROR oslo_messaging.rpc.server res = self.dispatcher.dispatch(message) Dec 6 04:59:08 localhost nova_compute[282193]: 2025-12-06 09:59:08.740 282197 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 309, in dispatch Dec 6 04:59:08 localhost nova_compute[282193]: 2025-12-06 09:59:08.740 282197 ERROR oslo_messaging.rpc.server return self._do_dispatch(endpoint, method, ctxt, args) Dec 6 04:59:08 localhost nova_compute[282193]: 2025-12-06 09:59:08.740 282197 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 229, in _do_dispatch Dec 6 04:59:08 localhost nova_compute[282193]: 2025-12-06 09:59:08.740 282197 ERROR oslo_messaging.rpc.server result = func(ctxt, **new_args) Dec 6 04:59:08 localhost nova_compute[282193]: 2025-12-06 09:59:08.740 282197 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.9/site-packages/nova/exception_wrapper.py", line 71, in wrapped Dec 6 04:59:08 localhost nova_compute[282193]: 
2025-12-06 09:59:08.740 282197 ERROR oslo_messaging.rpc.server _emit_versioned_exception_notification( Dec 6 04:59:08 localhost nova_compute[282193]: 2025-12-06 09:59:08.740 282197 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__ Dec 6 04:59:08 localhost nova_compute[282193]: 2025-12-06 09:59:08.740 282197 ERROR oslo_messaging.rpc.server self.force_reraise() Dec 6 04:59:08 localhost nova_compute[282193]: 2025-12-06 09:59:08.740 282197 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise Dec 6 04:59:08 localhost nova_compute[282193]: 2025-12-06 09:59:08.740 282197 ERROR oslo_messaging.rpc.server raise self.value Dec 6 04:59:08 localhost nova_compute[282193]: 2025-12-06 09:59:08.740 282197 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.9/site-packages/nova/exception_wrapper.py", line 63, in wrapped Dec 6 04:59:08 localhost nova_compute[282193]: 2025-12-06 09:59:08.740 282197 ERROR oslo_messaging.rpc.server return f(self, context, *args, **kw) Dec 6 04:59:08 localhost nova_compute[282193]: 2025-12-06 09:59:08.740 282197 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 214, in decorated_function Dec 6 04:59:08 localhost nova_compute[282193]: 2025-12-06 09:59:08.740 282197 ERROR oslo_messaging.rpc.server compute_utils.add_instance_fault_from_exc(context, Dec 6 04:59:08 localhost nova_compute[282193]: 2025-12-06 09:59:08.740 282197 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__ Dec 6 04:59:08 localhost nova_compute[282193]: 2025-12-06 09:59:08.740 282197 ERROR oslo_messaging.rpc.server self.force_reraise() Dec 6 04:59:08 localhost nova_compute[282193]: 2025-12-06 09:59:08.740 282197 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise 
Dec 6 04:59:08 localhost nova_compute[282193]: 2025-12-06 09:59:08.740 282197 ERROR oslo_messaging.rpc.server raise self.value Dec 6 04:59:08 localhost nova_compute[282193]: 2025-12-06 09:59:08.740 282197 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 203, in decorated_function Dec 6 04:59:08 localhost nova_compute[282193]: 2025-12-06 09:59:08.740 282197 ERROR oslo_messaging.rpc.server return function(self, context, *args, **kwargs) Dec 6 04:59:08 localhost nova_compute[282193]: 2025-12-06 09:59:08.740 282197 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 6739, in get_instance_diagnostics Dec 6 04:59:08 localhost nova_compute[282193]: 2025-12-06 09:59:08.740 282197 ERROR oslo_messaging.rpc.server raise exception.InstanceInvalidState( Dec 6 04:59:08 localhost nova_compute[282193]: 2025-12-06 09:59:08.740 282197 ERROR oslo_messaging.rpc.server nova.exception.InstanceInvalidState: Instance b7ed0a2e-9350-4933-9334-4e5e08d3e6aa in power state shutdown. Cannot get_diagnostics while the instance is in this state. 
Dec 6 04:59:08 localhost nova_compute[282193]: 2025-12-06 09:59:08.740 282197 ERROR oslo_messaging.rpc.server #033[00m Dec 6 04:59:09 localhost nova_compute[282193]: 2025-12-06 09:59:09.008 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:59:10 localhost nova_compute[282193]: 2025-12-06 09:59:10.341 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:59:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46741 DF PROTO=TCP SPT=54848 DPT=9102 SEQ=227793754 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52E65AAF0000000001030307) Dec 6 04:59:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e. Dec 6 04:59:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094. 
Dec 6 04:59:13 localhost podman[282909]: 2025-12-06 09:59:13.913673015 +0000 UTC m=+0.069923410 container health_status bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 6 04:59:13 localhost podman[282909]: 2025-12-06 09:59:13.923192329 +0000 UTC m=+0.079405713 container exec_died 
bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=edpm, container_name=ceilometer_agent_compute, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 04:59:13 localhost systemd[1]: bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094.service: Deactivated successfully. Dec 6 04:59:13 localhost systemd[1]: tmp-crun.2ZLy1p.mount: Deactivated successfully. 
Dec 6 04:59:13 localhost podman[282908]: 2025-12-06 09:59:13.989006 +0000 UTC m=+0.146855354 container health_status 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, config_id=edpm, maintainer=Red Hat, Inc., version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, io.openshift.expose-services=, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal) Dec 6 04:59:14 localhost podman[282908]: 2025-12-06 09:59:14.00519206 +0000 UTC m=+0.163041794 container exec_died 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, managed_by=edpm_ansible, release=1755695350, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, 
vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., io.buildah.version=1.33.7, version=9.6, config_id=edpm, io.openshift.expose-services=, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Dec 6 04:59:14 localhost nova_compute[282193]: 2025-12-06 09:59:14.011 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:59:14 localhost systemd[1]: 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e.service: Deactivated successfully. Dec 6 04:59:15 localhost nova_compute[282193]: 2025-12-06 09:59:15.342 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:59:16 localhost openstack_network_exporter[243110]: ERROR 09:59:16 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 6 04:59:16 localhost openstack_network_exporter[243110]: ERROR 09:59:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 04:59:16 localhost openstack_network_exporter[243110]: ERROR 09:59:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 04:59:16 localhost openstack_network_exporter[243110]: ERROR 09:59:16 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 6 04:59:16 localhost openstack_network_exporter[243110]: Dec 6 04:59:16 localhost openstack_network_exporter[243110]: ERROR 09:59:16 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 6 04:59:16 localhost openstack_network_exporter[243110]: Dec 6 04:59:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6. Dec 6 04:59:17 localhost systemd[1]: tmp-crun.43vKQk.mount: Deactivated successfully. 
Dec 6 04:59:17 localhost podman[282947]: 2025-12-06 09:59:17.729519363 +0000 UTC m=+0.066088942 container health_status 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Dec 6 04:59:17 localhost podman[282947]: 2025-12-06 09:59:17.771151627 +0000 UTC m=+0.107721176 container exec_died 
44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125) Dec 6 04:59:17 localhost systemd[1]: 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6.service: Deactivated successfully. 
Dec 6 04:59:19 localhost nova_compute[282193]: 2025-12-06 09:59:19.053 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:59:19 localhost nova_compute[282193]: 2025-12-06 09:59:19.213 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 04:59:19 localhost nova_compute[282193]: 2025-12-06 09:59:19.213 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 04:59:19 localhost nova_compute[282193]: 2025-12-06 09:59:19.214 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 6 04:59:19 localhost nova_compute[282193]: 2025-12-06 09:59:19.214 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 6 04:59:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46742 DF PROTO=TCP SPT=54848 DPT=9102 SEQ=227793754 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52E67BEF0000000001030307) Dec 6 04:59:20 localhost nova_compute[282193]: 2025-12-06 09:59:20.343 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:59:20 localhost nova_compute[282193]: 2025-12-06 09:59:20.711 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 6 04:59:20 localhost nova_compute[282193]: 2025-12-06 09:59:20.711 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquired lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 6 04:59:20 localhost nova_compute[282193]: 2025-12-06 09:59:20.712 282197 DEBUG nova.network.neutron [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 6 04:59:20 localhost nova_compute[282193]: 2025-12-06 09:59:20.713 282197 DEBUG nova.objects.instance [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b7ed0a2e-9350-4933-9334-4e5e08d3e6aa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 6 04:59:21 localhost nova_compute[282193]: 2025-12-06 09:59:21.018 282197 DEBUG nova.virt.driver [-] Emitting event Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Dec 6 04:59:21 localhost nova_compute[282193]: 2025-12-06 09:59:21.019 282197 INFO nova.compute.manager [-] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] VM Stopped (Lifecycle Event)#033[00m Dec 6 04:59:21 localhost nova_compute[282193]: 2025-12-06 09:59:21.150 282197 DEBUG nova.compute.manager [None req-e2983e19-cca8-48fc-8597-763cb6b84e91 - - - - - -] [instance: 
b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 6 04:59:21 localhost nova_compute[282193]: 2025-12-06 09:59:21.153 282197 DEBUG nova.compute.manager [None req-e2983e19-cca8-48fc-8597-763cb6b84e91 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: stopped, current task_state: None, current DB power_state: 4, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m Dec 6 04:59:21 localhost nova_compute[282193]: 2025-12-06 09:59:21.700 282197 DEBUG nova.network.neutron [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Updating instance_info_cache with network_info: [{"id": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "address": "fa:16:3e:64:77:f3", "network": {"id": "652b6bdc-40ce-45b7-8aa5-3bca79987993", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.162", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "3d603431c0bb4967bafc7a0aa6108bfe", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86fc0b7a-fb", "ovs_interfaceid": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] 
update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 6 04:59:21 localhost nova_compute[282193]: 2025-12-06 09:59:21.730 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Releasing lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 6 04:59:21 localhost nova_compute[282193]: 2025-12-06 09:59:21.730 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 6 04:59:21 localhost nova_compute[282193]: 2025-12-06 09:59:21.731 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 04:59:21 localhost nova_compute[282193]: 2025-12-06 09:59:21.732 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 04:59:21 localhost nova_compute[282193]: 2025-12-06 09:59:21.732 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 04:59:21 localhost nova_compute[282193]: 2025-12-06 09:59:21.733 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit 
run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 04:59:21 localhost nova_compute[282193]: 2025-12-06 09:59:21.733 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 04:59:21 localhost nova_compute[282193]: 2025-12-06 09:59:21.734 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 04:59:21 localhost nova_compute[282193]: 2025-12-06 09:59:21.734 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 6 04:59:21 localhost nova_compute[282193]: 2025-12-06 09:59:21.734 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 04:59:21 localhost nova_compute[282193]: 2025-12-06 09:59:21.750 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 04:59:21 localhost nova_compute[282193]: 2025-12-06 09:59:21.750 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" acquired by 
"nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 04:59:21 localhost nova_compute[282193]: 2025-12-06 09:59:21.751 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 04:59:21 localhost nova_compute[282193]: 2025-12-06 09:59:21.751 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Auditing locally available compute resources for np0005548789.localdomain (node: np0005548789.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 6 04:59:21 localhost nova_compute[282193]: 2025-12-06 09:59:21.752 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 6 04:59:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538. 
Dec 6 04:59:21 localhost podman[282967]: 2025-12-06 09:59:21.922888704 +0000 UTC m=+0.087354637 container health_status d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Dec 6 04:59:21 localhost podman[282967]: 2025-12-06 09:59:21.930142628 +0000 UTC m=+0.094608581 container exec_died d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Dec 6 04:59:21 localhost systemd[1]: d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538.service: Deactivated successfully. 
Dec 6 04:59:22 localhost nova_compute[282193]: 2025-12-06 09:59:22.229 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 6 04:59:22 localhost nova_compute[282193]: 2025-12-06 09:59:22.313 282197 DEBUG nova.virt.libvirt.driver [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 6 04:59:22 localhost nova_compute[282193]: 2025-12-06 09:59:22.313 282197 DEBUG nova.virt.libvirt.driver [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 6 04:59:22 localhost nova_compute[282193]: 2025-12-06 09:59:22.522 282197 WARNING nova.virt.libvirt.driver [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 6 04:59:22 localhost nova_compute[282193]: 2025-12-06 09:59:22.523 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Hypervisor/Node resource view: name=np0005548789.localdomain free_ram=12294MB free_disk=41.837059020996094GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": 
"1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 6 04:59:22 localhost nova_compute[282193]: 2025-12-06 09:59:22.524 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 04:59:22 localhost nova_compute[282193]: 2025-12-06 09:59:22.524 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 04:59:22 localhost nova_compute[282193]: 2025-12-06 09:59:22.627 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Instance b7ed0a2e-9350-4933-9334-4e5e08d3e6aa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 6 04:59:22 localhost nova_compute[282193]: 2025-12-06 09:59:22.628 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 6 04:59:22 localhost nova_compute[282193]: 2025-12-06 09:59:22.628 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Final resource view: name=np0005548789.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 6 04:59:22 localhost nova_compute[282193]: 2025-12-06 09:59:22.686 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 6 04:59:23 localhost nova_compute[282193]: 2025-12-06 09:59:23.157 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 6 04:59:23 localhost nova_compute[282193]: 2025-12-06 09:59:23.164 282197 DEBUG nova.compute.provider_tree [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Inventory has not changed in ProviderTree for provider: 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 6 04:59:23 localhost nova_compute[282193]: 2025-12-06 
09:59:23.185 282197 DEBUG nova.scheduler.client.report [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Inventory has not changed for provider 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 6 04:59:23 localhost nova_compute[282193]: 2025-12-06 09:59:23.212 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Compute_service record updated for np0005548789.localdomain:np0005548789.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 6 04:59:23 localhost nova_compute[282193]: 2025-12-06 09:59:23.212 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 04:59:23 localhost podman[241090]: time="2025-12-06T09:59:23Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 6 04:59:23 localhost podman[241090]: @ - - [06/Dec/2025:09:59:23 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 148368 "" "Go-http-client/1.1" Dec 6 04:59:23 localhost podman[241090]: @ - - [06/Dec/2025:09:59:23 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17247 "" "Go-http-client/1.1" Dec 6 04:59:24 
localhost nova_compute[282193]: 2025-12-06 09:59:24.056 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:59:25 localhost nova_compute[282193]: 2025-12-06 09:59:25.346 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:59:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5. Dec 6 04:59:26 localhost podman[283033]: 2025-12-06 09:59:26.919581206 +0000 UTC m=+0.081135716 container health_status 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
config_id=ovn_controller, tcib_managed=true) Dec 6 04:59:26 localhost podman[283033]: 2025-12-06 09:59:26.952868553 +0000 UTC m=+0.114423003 container exec_died 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Dec 6 04:59:26 localhost systemd[1]: 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.service: Deactivated successfully. 
Dec 6 04:59:28 localhost nova_compute[282193]: 2025-12-06 09:59:28.760 282197 DEBUG nova.compute.manager [None req-52434a90-0e32-4809-b1dd-44e10953cee5 ff0049f3313348bdb67886d170c1c765 3d603431c0bb4967bafc7a0aa6108bfe - - default default] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 6 04:59:28 localhost nova_compute[282193]: 2025-12-06 09:59:28.784 282197 ERROR oslo_messaging.rpc.server [None req-52434a90-0e32-4809-b1dd-44e10953cee5 ff0049f3313348bdb67886d170c1c765 3d603431c0bb4967bafc7a0aa6108bfe - - default default] Exception during message handling: nova.exception.InstanceInvalidState: Instance b7ed0a2e-9350-4933-9334-4e5e08d3e6aa in power state shutdown. Cannot get_diagnostics while the instance is in this state. Dec 6 04:59:28 localhost nova_compute[282193]: 2025-12-06 09:59:28.784 282197 ERROR oslo_messaging.rpc.server Traceback (most recent call last): Dec 6 04:59:28 localhost nova_compute[282193]: 2025-12-06 09:59:28.784 282197 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming Dec 6 04:59:28 localhost nova_compute[282193]: 2025-12-06 09:59:28.784 282197 ERROR oslo_messaging.rpc.server res = self.dispatcher.dispatch(message) Dec 6 04:59:28 localhost nova_compute[282193]: 2025-12-06 09:59:28.784 282197 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 309, in dispatch Dec 6 04:59:28 localhost nova_compute[282193]: 2025-12-06 09:59:28.784 282197 ERROR oslo_messaging.rpc.server return self._do_dispatch(endpoint, method, ctxt, args) Dec 6 04:59:28 localhost nova_compute[282193]: 2025-12-06 09:59:28.784 282197 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 229, in _do_dispatch Dec 6 04:59:28 localhost nova_compute[282193]: 2025-12-06 09:59:28.784 
282197 ERROR oslo_messaging.rpc.server result = func(ctxt, **new_args) Dec 6 04:59:28 localhost nova_compute[282193]: 2025-12-06 09:59:28.784 282197 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.9/site-packages/nova/exception_wrapper.py", line 71, in wrapped Dec 6 04:59:28 localhost nova_compute[282193]: 2025-12-06 09:59:28.784 282197 ERROR oslo_messaging.rpc.server _emit_versioned_exception_notification( Dec 6 04:59:28 localhost nova_compute[282193]: 2025-12-06 09:59:28.784 282197 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__ Dec 6 04:59:28 localhost nova_compute[282193]: 2025-12-06 09:59:28.784 282197 ERROR oslo_messaging.rpc.server self.force_reraise() Dec 6 04:59:28 localhost nova_compute[282193]: 2025-12-06 09:59:28.784 282197 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise Dec 6 04:59:28 localhost nova_compute[282193]: 2025-12-06 09:59:28.784 282197 ERROR oslo_messaging.rpc.server raise self.value Dec 6 04:59:28 localhost nova_compute[282193]: 2025-12-06 09:59:28.784 282197 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.9/site-packages/nova/exception_wrapper.py", line 63, in wrapped Dec 6 04:59:28 localhost nova_compute[282193]: 2025-12-06 09:59:28.784 282197 ERROR oslo_messaging.rpc.server return f(self, context, *args, **kw) Dec 6 04:59:28 localhost nova_compute[282193]: 2025-12-06 09:59:28.784 282197 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 214, in decorated_function Dec 6 04:59:28 localhost nova_compute[282193]: 2025-12-06 09:59:28.784 282197 ERROR oslo_messaging.rpc.server compute_utils.add_instance_fault_from_exc(context, Dec 6 04:59:28 localhost nova_compute[282193]: 2025-12-06 09:59:28.784 282197 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__ Dec 6 04:59:28 
localhost nova_compute[282193]: 2025-12-06 09:59:28.784 282197 ERROR oslo_messaging.rpc.server self.force_reraise() Dec 6 04:59:28 localhost nova_compute[282193]: 2025-12-06 09:59:28.784 282197 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise Dec 6 04:59:28 localhost nova_compute[282193]: 2025-12-06 09:59:28.784 282197 ERROR oslo_messaging.rpc.server raise self.value Dec 6 04:59:28 localhost nova_compute[282193]: 2025-12-06 09:59:28.784 282197 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 203, in decorated_function Dec 6 04:59:28 localhost nova_compute[282193]: 2025-12-06 09:59:28.784 282197 ERROR oslo_messaging.rpc.server return function(self, context, *args, **kwargs) Dec 6 04:59:28 localhost nova_compute[282193]: 2025-12-06 09:59:28.784 282197 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 6739, in get_instance_diagnostics Dec 6 04:59:28 localhost nova_compute[282193]: 2025-12-06 09:59:28.784 282197 ERROR oslo_messaging.rpc.server raise exception.InstanceInvalidState( Dec 6 04:59:28 localhost nova_compute[282193]: 2025-12-06 09:59:28.784 282197 ERROR oslo_messaging.rpc.server nova.exception.InstanceInvalidState: Instance b7ed0a2e-9350-4933-9334-4e5e08d3e6aa in power state shutdown. Cannot get_diagnostics while the instance is in this state. 
Dec 6 04:59:28 localhost nova_compute[282193]: 2025-12-06 09:59:28.784 282197 ERROR oslo_messaging.rpc.server #033[00m Dec 6 04:59:29 localhost nova_compute[282193]: 2025-12-06 09:59:29.093 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:59:30 localhost nova_compute[282193]: 2025-12-06 09:59:30.349 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:59:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13457 DF PROTO=TCP SPT=54536 DPT=9102 SEQ=735695711 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52E6B3F60000000001030307) Dec 6 04:59:33 localhost nova_compute[282193]: 2025-12-06 09:59:33.900 282197 DEBUG nova.objects.instance [None req-ca3c1125-e182-4a1b-89a6-1f4388da9e90 ff0049f3313348bdb67886d170c1c765 3d603431c0bb4967bafc7a0aa6108bfe - - default default] Lazy-loading 'flavor' on Instance uuid b7ed0a2e-9350-4933-9334-4e5e08d3e6aa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 6 04:59:33 localhost nova_compute[282193]: 2025-12-06 09:59:33.926 282197 DEBUG oslo_concurrency.lockutils [None req-ca3c1125-e182-4a1b-89a6-1f4388da9e90 ff0049f3313348bdb67886d170c1c765 3d603431c0bb4967bafc7a0aa6108bfe - - default default] Acquiring lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 6 04:59:33 localhost nova_compute[282193]: 2025-12-06 09:59:33.927 282197 DEBUG oslo_concurrency.lockutils [None req-ca3c1125-e182-4a1b-89a6-1f4388da9e90 ff0049f3313348bdb67886d170c1c765 3d603431c0bb4967bafc7a0aa6108bfe - - default default] Acquired lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 6 04:59:33 localhost nova_compute[282193]: 2025-12-06 09:59:33.927 282197 DEBUG nova.network.neutron [None req-ca3c1125-e182-4a1b-89a6-1f4388da9e90 ff0049f3313348bdb67886d170c1c765 3d603431c0bb4967bafc7a0aa6108bfe - - default default] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m Dec 6 04:59:33 localhost nova_compute[282193]: 2025-12-06 09:59:33.928 282197 DEBUG nova.objects.instance [None req-ca3c1125-e182-4a1b-89a6-1f4388da9e90 ff0049f3313348bdb67886d170c1c765 3d603431c0bb4967bafc7a0aa6108bfe - - default default] Lazy-loading 'info_cache' on Instance uuid b7ed0a2e-9350-4933-9334-4e5e08d3e6aa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 6 04:59:34 localhost nova_compute[282193]: 2025-12-06 09:59:34.138 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:59:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13458 DF PROTO=TCP SPT=54536 DPT=9102 SEQ=735695711 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52E6B7EF0000000001030307) Dec 6 04:59:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999. 
Dec 6 04:59:34 localhost podman[283059]: 2025-12-06 09:59:34.918060884 +0000 UTC m=+0.082181068 container health_status 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent) Dec 6 04:59:34 localhost podman[283059]: 2025-12-06 09:59:34.956381537 +0000 UTC 
m=+0.120501791 container exec_died 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2) Dec 6 04:59:34 localhost systemd[1]: 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.service: Deactivated successfully. 
Dec 6 04:59:35 localhost nova_compute[282193]: 2025-12-06 09:59:35.351 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:59:35 localhost nova_compute[282193]: 2025-12-06 09:59:35.674 282197 DEBUG nova.network.neutron [None req-ca3c1125-e182-4a1b-89a6-1f4388da9e90 ff0049f3313348bdb67886d170c1c765 3d603431c0bb4967bafc7a0aa6108bfe - - default default] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Updating instance_info_cache with network_info: [{"id": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "address": "fa:16:3e:64:77:f3", "network": {"id": "652b6bdc-40ce-45b7-8aa5-3bca79987993", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.162", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "3d603431c0bb4967bafc7a0aa6108bfe", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86fc0b7a-fb", "ovs_interfaceid": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 6 04:59:35 localhost nova_compute[282193]: 2025-12-06 09:59:35.697 282197 DEBUG oslo_concurrency.lockutils [None req-ca3c1125-e182-4a1b-89a6-1f4388da9e90 ff0049f3313348bdb67886d170c1c765 3d603431c0bb4967bafc7a0aa6108bfe - - default default] Releasing 
lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 6 04:59:35 localhost nova_compute[282193]: 2025-12-06 09:59:35.727 282197 INFO nova.virt.libvirt.driver [-] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Instance destroyed successfully.#033[00m Dec 6 04:59:35 localhost nova_compute[282193]: 2025-12-06 09:59:35.727 282197 DEBUG nova.objects.instance [None req-ca3c1125-e182-4a1b-89a6-1f4388da9e90 ff0049f3313348bdb67886d170c1c765 3d603431c0bb4967bafc7a0aa6108bfe - - default default] Lazy-loading 'numa_topology' on Instance uuid b7ed0a2e-9350-4933-9334-4e5e08d3e6aa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 6 04:59:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46743 DF PROTO=TCP SPT=54848 DPT=9102 SEQ=227793754 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52E6BBEF0000000001030307) Dec 6 04:59:35 localhost nova_compute[282193]: 2025-12-06 09:59:35.742 282197 DEBUG nova.objects.instance [None req-ca3c1125-e182-4a1b-89a6-1f4388da9e90 ff0049f3313348bdb67886d170c1c765 3d603431c0bb4967bafc7a0aa6108bfe - - default default] Lazy-loading 'resources' on Instance uuid b7ed0a2e-9350-4933-9334-4e5e08d3e6aa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 6 04:59:35 localhost nova_compute[282193]: 2025-12-06 09:59:35.777 282197 DEBUG nova.virt.libvirt.vif [None req-ca3c1125-e182-4a1b-89a6-1f4388da9e90 ff0049f3313348bdb67886d170c1c765 3d603431c0bb4967bafc7a0aa6108bfe - - default default] vif_type=ovs 
instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T08:44:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=,disable_terminate=False,display_description='test',display_name='test',ec2_ids=,ephemeral_gb=1,ephemeral_key_uuid=None,fault=,flavor=Flavor(2),hidden=False,host='np0005548789.localdomain',hostname='test',id=2,image_ref='e0d06706-da90-478a-9829-34b75a3ce049',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=,launch_index=0,launched_at=2025-12-06T08:44:43Z,launched_on='np0005548789.localdomain',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=,new_flavor=None,node='np0005548789.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=,pci_requests=,power_state=4,progress=0,project_id='3d603431c0bb4967bafc7a0aa6108bfe',ramdisk_id='',reservation_id='r-02dpupig',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=,services=,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='e0d06706-da90-478a-9829-34b75a3ce049',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='pc-q35-rhel9.0.0',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=,task_state='powering-on',terminated_at=None,trusted_certs=,updated_at=2025-12-06T09:59:06Z,user_data=None,user_id='ff0049f3313348bdb67886d170c1c765',uuid=b7ed0a2e-9350-4933-9334-4e5e08d3e6aa,vcpu_model=,vcpus=1,vm_mode=None,vm_state='
stopped') vif={"id": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "address": "fa:16:3e:64:77:f3", "network": {"id": "652b6bdc-40ce-45b7-8aa5-3bca79987993", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.162", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "3d603431c0bb4967bafc7a0aa6108bfe", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86fc0b7a-fb", "ovs_interfaceid": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m Dec 6 04:59:35 localhost nova_compute[282193]: 2025-12-06 09:59:35.778 282197 DEBUG nova.network.os_vif_util [None req-ca3c1125-e182-4a1b-89a6-1f4388da9e90 ff0049f3313348bdb67886d170c1c765 3d603431c0bb4967bafc7a0aa6108bfe - - default default] Converting VIF {"id": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "address": "fa:16:3e:64:77:f3", "network": {"id": "652b6bdc-40ce-45b7-8aa5-3bca79987993", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.162", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, 
"dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "3d603431c0bb4967bafc7a0aa6108bfe", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86fc0b7a-fb", "ovs_interfaceid": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Dec 6 04:59:35 localhost nova_compute[282193]: 2025-12-06 09:59:35.779 282197 DEBUG nova.network.os_vif_util [None req-ca3c1125-e182-4a1b-89a6-1f4388da9e90 ff0049f3313348bdb67886d170c1c765 3d603431c0bb4967bafc7a0aa6108bfe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:64:77:f3,bridge_name='br-int',has_traffic_filtering=True,id=86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b,network=Network(652b6bdc-40ce-45b7-8aa5-3bca79987993),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86fc0b7a-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Dec 6 04:59:35 localhost nova_compute[282193]: 2025-12-06 09:59:35.780 282197 DEBUG os_vif [None req-ca3c1125-e182-4a1b-89a6-1f4388da9e90 ff0049f3313348bdb67886d170c1c765 3d603431c0bb4967bafc7a0aa6108bfe - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:64:77:f3,bridge_name='br-int',has_traffic_filtering=True,id=86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b,network=Network(652b6bdc-40ce-45b7-8aa5-3bca79987993),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86fc0b7a-fb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m Dec 6 04:59:35 localhost nova_compute[282193]: 2025-12-06 09:59:35.782 282197 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 18 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:59:35 localhost nova_compute[282193]: 2025-12-06 09:59:35.783 282197 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap86fc0b7a-fb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 6 04:59:35 localhost nova_compute[282193]: 2025-12-06 09:59:35.785 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:59:35 localhost nova_compute[282193]: 2025-12-06 09:59:35.788 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:59:35 localhost nova_compute[282193]: 2025-12-06 09:59:35.791 282197 INFO os_vif [None req-ca3c1125-e182-4a1b-89a6-1f4388da9e90 ff0049f3313348bdb67886d170c1c765 3d603431c0bb4967bafc7a0aa6108bfe - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:64:77:f3,bridge_name='br-int',has_traffic_filtering=True,id=86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b,network=Network(652b6bdc-40ce-45b7-8aa5-3bca79987993),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86fc0b7a-fb')#033[00m Dec 6 04:59:35 localhost nova_compute[282193]: 2025-12-06 09:59:35.795 282197 DEBUG nova.virt.libvirt.host [None req-ca3c1125-e182-4a1b-89a6-1f4388da9e90 ff0049f3313348bdb67886d170c1c765 3d603431c0bb4967bafc7a0aa6108bfe - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754#033[00m Dec 6 04:59:35 localhost nova_compute[282193]: 2025-12-06 09:59:35.795 282197 INFO nova.virt.libvirt.host [None req-ca3c1125-e182-4a1b-89a6-1f4388da9e90 ff0049f3313348bdb67886d170c1c765 
3d603431c0bb4967bafc7a0aa6108bfe - - default default] UEFI support detected#033[00m Dec 6 04:59:35 localhost nova_compute[282193]: 2025-12-06 09:59:35.803 282197 DEBUG nova.virt.libvirt.driver [None req-ca3c1125-e182-4a1b-89a6-1f4388da9e90 ff0049f3313348bdb67886d170c1c765 3d603431c0bb4967bafc7a0aa6108bfe - - default default] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Start _get_guest_xml network_info=[{"id": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "address": "fa:16:3e:64:77:f3", "network": {"id": "652b6bdc-40ce-45b7-8aa5-3bca79987993", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.162", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "3d603431c0bb4967bafc7a0aa6108bfe", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86fc0b7a-fb", "ovs_interfaceid": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}}} 
image_meta=ImageMeta(checksum=,container_format='bare',created_at=,direct_url=,disk_format='qcow2',id=e0d06706-da90-478a-9829-34b75a3ce049,min_disk=1,min_ram=0,name=,owner=,properties=ImageMetaProps,protected=,size=,status=,tags=,updated_at=,virtual_size=,visibility=) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'boot_index': 0, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_format': None, 'device_name': '/dev/vda', 'size': 0, 'device_type': 'disk', 'image_id': 'e0d06706-da90-478a-9829-34b75a3ce049'}], 'ephemerals': [{'encryption_options': None, 'encryption_secret_uuid': None, 'guest_format': None, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_format': None, 'device_name': '/dev/vdb', 'size': 1, 'device_type': 'disk'}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m Dec 6 04:59:35 localhost nova_compute[282193]: 2025-12-06 09:59:35.808 282197 WARNING nova.virt.libvirt.driver [None req-ca3c1125-e182-4a1b-89a6-1f4388da9e90 ff0049f3313348bdb67886d170c1c765 3d603431c0bb4967bafc7a0aa6108bfe - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 6 04:59:35 localhost nova_compute[282193]: 2025-12-06 09:59:35.810 282197 DEBUG nova.virt.libvirt.host [None req-ca3c1125-e182-4a1b-89a6-1f4388da9e90 ff0049f3313348bdb67886d170c1c765 3d603431c0bb4967bafc7a0aa6108bfe - - default default] Searching host: 'np0005548789.localdomain' for CPU controller through CGroups V1... 
_has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m Dec 6 04:59:35 localhost nova_compute[282193]: 2025-12-06 09:59:35.811 282197 DEBUG nova.virt.libvirt.host [None req-ca3c1125-e182-4a1b-89a6-1f4388da9e90 ff0049f3313348bdb67886d170c1c765 3d603431c0bb4967bafc7a0aa6108bfe - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m Dec 6 04:59:35 localhost nova_compute[282193]: 2025-12-06 09:59:35.813 282197 DEBUG nova.virt.libvirt.host [None req-ca3c1125-e182-4a1b-89a6-1f4388da9e90 ff0049f3313348bdb67886d170c1c765 3d603431c0bb4967bafc7a0aa6108bfe - - default default] Searching host: 'np0005548789.localdomain' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m Dec 6 04:59:35 localhost nova_compute[282193]: 2025-12-06 09:59:35.814 282197 DEBUG nova.virt.libvirt.host [None req-ca3c1125-e182-4a1b-89a6-1f4388da9e90 ff0049f3313348bdb67886d170c1c765 3d603431c0bb4967bafc7a0aa6108bfe - - default default] CPU controller found on host. 
_has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m Dec 6 04:59:35 localhost nova_compute[282193]: 2025-12-06 09:59:35.815 282197 DEBUG nova.virt.libvirt.driver [None req-ca3c1125-e182-4a1b-89a6-1f4388da9e90 ff0049f3313348bdb67886d170c1c765 3d603431c0bb4967bafc7a0aa6108bfe - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m Dec 6 04:59:35 localhost nova_compute[282193]: 2025-12-06 09:59:35.816 282197 DEBUG nova.virt.hardware [None req-ca3c1125-e182-4a1b-89a6-1f4388da9e90 ff0049f3313348bdb67886d170c1c765 3d603431c0bb4967bafc7a0aa6108bfe - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T08:43:35Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='3b9dcd46-fa1b-4714-ba2b-665da2f67af6',id=2,is_public=True,memory_mb=512,name='m1.small',projects=,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=,container_format='bare',created_at=,direct_url=,disk_format='qcow2',id=e0d06706-da90-478a-9829-34b75a3ce049,min_disk=1,min_ram=0,name=,owner=,properties=ImageMetaProps,protected=,size=,status=,tags=,updated_at=,virtual_size=,visibility=), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m Dec 6 04:59:35 localhost nova_compute[282193]: 2025-12-06 09:59:35.817 282197 DEBUG nova.virt.hardware [None req-ca3c1125-e182-4a1b-89a6-1f4388da9e90 ff0049f3313348bdb67886d170c1c765 3d603431c0bb4967bafc7a0aa6108bfe - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m Dec 6 04:59:35 localhost nova_compute[282193]: 2025-12-06 09:59:35.817 282197 DEBUG nova.virt.hardware [None req-ca3c1125-e182-4a1b-89a6-1f4388da9e90 
ff0049f3313348bdb67886d170c1c765 3d603431c0bb4967bafc7a0aa6108bfe - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m Dec 6 04:59:35 localhost nova_compute[282193]: 2025-12-06 09:59:35.818 282197 DEBUG nova.virt.hardware [None req-ca3c1125-e182-4a1b-89a6-1f4388da9e90 ff0049f3313348bdb67886d170c1c765 3d603431c0bb4967bafc7a0aa6108bfe - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m Dec 6 04:59:35 localhost nova_compute[282193]: 2025-12-06 09:59:35.818 282197 DEBUG nova.virt.hardware [None req-ca3c1125-e182-4a1b-89a6-1f4388da9e90 ff0049f3313348bdb67886d170c1c765 3d603431c0bb4967bafc7a0aa6108bfe - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m Dec 6 04:59:35 localhost nova_compute[282193]: 2025-12-06 09:59:35.819 282197 DEBUG nova.virt.hardware [None req-ca3c1125-e182-4a1b-89a6-1f4388da9e90 ff0049f3313348bdb67886d170c1c765 3d603431c0bb4967bafc7a0aa6108bfe - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m Dec 6 04:59:35 localhost nova_compute[282193]: 2025-12-06 09:59:35.819 282197 DEBUG nova.virt.hardware [None req-ca3c1125-e182-4a1b-89a6-1f4388da9e90 ff0049f3313348bdb67886d170c1c765 3d603431c0bb4967bafc7a0aa6108bfe - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m Dec 6 04:59:35 localhost nova_compute[282193]: 2025-12-06 09:59:35.820 282197 DEBUG nova.virt.hardware [None req-ca3c1125-e182-4a1b-89a6-1f4388da9e90 ff0049f3313348bdb67886d170c1c765 
3d603431c0bb4967bafc7a0aa6108bfe - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m Dec 6 04:59:35 localhost nova_compute[282193]: 2025-12-06 09:59:35.820 282197 DEBUG nova.virt.hardware [None req-ca3c1125-e182-4a1b-89a6-1f4388da9e90 ff0049f3313348bdb67886d170c1c765 3d603431c0bb4967bafc7a0aa6108bfe - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m Dec 6 04:59:35 localhost nova_compute[282193]: 2025-12-06 09:59:35.820 282197 DEBUG nova.virt.hardware [None req-ca3c1125-e182-4a1b-89a6-1f4388da9e90 ff0049f3313348bdb67886d170c1c765 3d603431c0bb4967bafc7a0aa6108bfe - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m Dec 6 04:59:35 localhost nova_compute[282193]: 2025-12-06 09:59:35.821 282197 DEBUG nova.virt.hardware [None req-ca3c1125-e182-4a1b-89a6-1f4388da9e90 ff0049f3313348bdb67886d170c1c765 3d603431c0bb4967bafc7a0aa6108bfe - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m Dec 6 04:59:35 localhost nova_compute[282193]: 2025-12-06 09:59:35.821 282197 DEBUG nova.objects.instance [None req-ca3c1125-e182-4a1b-89a6-1f4388da9e90 ff0049f3313348bdb67886d170c1c765 3d603431c0bb4967bafc7a0aa6108bfe - - default default] Lazy-loading 'vcpu_model' on Instance uuid b7ed0a2e-9350-4933-9334-4e5e08d3e6aa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 6 04:59:35 localhost nova_compute[282193]: 2025-12-06 09:59:35.844 282197 DEBUG nova.privsep.utils [None req-ca3c1125-e182-4a1b-89a6-1f4388da9e90 ff0049f3313348bdb67886d170c1c765 3d603431c0bb4967bafc7a0aa6108bfe - - default default] 
Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m Dec 6 04:59:35 localhost nova_compute[282193]: 2025-12-06 09:59:35.844 282197 DEBUG oslo_concurrency.processutils [None req-ca3c1125-e182-4a1b-89a6-1f4388da9e90 ff0049f3313348bdb67886d170c1c765 3d603431c0bb4967bafc7a0aa6108bfe - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 6 04:59:35 localhost ovn_controller[154851]: 2025-12-06T09:59:35Z|00055|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory Dec 6 04:59:36 localhost nova_compute[282193]: 2025-12-06 09:59:36.267 282197 DEBUG oslo_concurrency.processutils [None req-ca3c1125-e182-4a1b-89a6-1f4388da9e90 ff0049f3313348bdb67886d170c1c765 3d603431c0bb4967bafc7a0aa6108bfe - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 6 04:59:36 localhost nova_compute[282193]: 2025-12-06 09:59:36.268 282197 DEBUG oslo_concurrency.processutils [None req-ca3c1125-e182-4a1b-89a6-1f4388da9e90 ff0049f3313348bdb67886d170c1c765 3d603431c0bb4967bafc7a0aa6108bfe - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 6 04:59:36 localhost nova_compute[282193]: 2025-12-06 09:59:36.722 282197 DEBUG oslo_concurrency.processutils [None req-ca3c1125-e182-4a1b-89a6-1f4388da9e90 ff0049f3313348bdb67886d170c1c765 3d603431c0bb4967bafc7a0aa6108bfe - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute 
/usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 6 04:59:36 localhost nova_compute[282193]: 2025-12-06 09:59:36.724 282197 DEBUG nova.virt.libvirt.vif [None req-ca3c1125-e182-4a1b-89a6-1f4388da9e90 ff0049f3313348bdb67886d170c1c765 3d603431c0bb4967bafc7a0aa6108bfe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T08:44:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=,disable_terminate=False,display_description='test',display_name='test',ec2_ids=,ephemeral_gb=1,ephemeral_key_uuid=None,fault=,flavor=Flavor(2),hidden=False,host='np0005548789.localdomain',hostname='test',id=2,image_ref='e0d06706-da90-478a-9829-34b75a3ce049',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=,launch_index=0,launched_at=2025-12-06T08:44:43Z,launched_on='np0005548789.localdomain',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=,new_flavor=None,node='np0005548789.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=,pci_requests=,power_state=4,progress=0,project_id='3d603431c0bb4967bafc7a0aa6108bfe',ramdisk_id='',reservation_id='r-02dpupig',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=,services=,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='e0d06706-da90-478a-9829-34b75a3ce049',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='pc-q35-rhel9.0.0',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',ima
ge_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=,task_state='powering-on',terminated_at=None,trusted_certs=,updated_at=2025-12-06T09:59:06Z,user_data=None,user_id='ff0049f3313348bdb67886d170c1c765',uuid=b7ed0a2e-9350-4933-9334-4e5e08d3e6aa,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "address": "fa:16:3e:64:77:f3", "network": {"id": "652b6bdc-40ce-45b7-8aa5-3bca79987993", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.162", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "3d603431c0bb4967bafc7a0aa6108bfe", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86fc0b7a-fb", "ovs_interfaceid": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m Dec 6 04:59:36 localhost nova_compute[282193]: 2025-12-06 09:59:36.725 282197 DEBUG nova.network.os_vif_util [None req-ca3c1125-e182-4a1b-89a6-1f4388da9e90 ff0049f3313348bdb67886d170c1c765 3d603431c0bb4967bafc7a0aa6108bfe - - default default] Converting VIF {"id": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "address": "fa:16:3e:64:77:f3", "network": {"id": "652b6bdc-40ce-45b7-8aa5-3bca79987993", "bridge": "br-int", "label": "private", "subnets": 
[{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.162", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "3d603431c0bb4967bafc7a0aa6108bfe", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86fc0b7a-fb", "ovs_interfaceid": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Dec 6 04:59:36 localhost nova_compute[282193]: 2025-12-06 09:59:36.725 282197 DEBUG nova.network.os_vif_util [None req-ca3c1125-e182-4a1b-89a6-1f4388da9e90 ff0049f3313348bdb67886d170c1c765 3d603431c0bb4967bafc7a0aa6108bfe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:64:77:f3,bridge_name='br-int',has_traffic_filtering=True,id=86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b,network=Network(652b6bdc-40ce-45b7-8aa5-3bca79987993),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86fc0b7a-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Dec 6 04:59:36 localhost nova_compute[282193]: 2025-12-06 09:59:36.727 282197 DEBUG nova.objects.instance [None req-ca3c1125-e182-4a1b-89a6-1f4388da9e90 ff0049f3313348bdb67886d170c1c765 3d603431c0bb4967bafc7a0aa6108bfe - - default default] Lazy-loading 'pci_devices' on Instance uuid b7ed0a2e-9350-4933-9334-4e5e08d3e6aa obj_load_attr 
/usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 6 04:59:36 localhost nova_compute[282193]: 2025-12-06 09:59:36.744 282197 DEBUG nova.virt.libvirt.driver [None req-ca3c1125-e182-4a1b-89a6-1f4388da9e90 ff0049f3313348bdb67886d170c1c765 3d603431c0bb4967bafc7a0aa6108bfe - - default default] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] End _get_guest_xml xml= [libvirt guest XML dump; markup stripped during log capture — recoverable values: uuid b7ed0a2e-9350-4933-9334-4e5e08d3e6aa, name instance-00000002, title test, creation time 2025-12-06 09:59:35, memory 524288, vcpus 1, flavor memory 512 / vcpus 1 / swap 0 / disk 1 / ephemeral 1, owner admin/admin, sysinfo RDO / OpenStack Compute / 27.5.2-0.20250829104910.6f8decf.el9, product Virtual Machine, os type hvm, rng backend /dev/urandom] _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m Dec 6 04:59:36 localhost nova_compute[282193]: 2025-12-06 09:59:36.745 282197 DEBUG nova.virt.libvirt.driver [None req-ca3c1125-e182-4a1b-89a6-1f4388da9e90 ff0049f3313348bdb67886d170c1c765 3d603431c0bb4967bafc7a0aa6108bfe - - default default] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 6 04:59:36 localhost nova_compute[282193]: 2025-12-06 09:59:36.746 282197 DEBUG nova.virt.libvirt.driver [None req-ca3c1125-e182-4a1b-89a6-1f4388da9e90 ff0049f3313348bdb67886d170c1c765 3d603431c0bb4967bafc7a0aa6108bfe - - default default] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 6 04:59:36 localhost nova_compute[282193]: 2025-12-06 09:59:36.747 282197 DEBUG nova.virt.libvirt.vif [None req-ca3c1125-e182-4a1b-89a6-1f4388da9e90 ff0049f3313348bdb67886d170c1c765 3d603431c0bb4967bafc7a0aa6108bfe - - default default] vif_type=ovs 
instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T08:44:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=,disable_terminate=False,display_description='test',display_name='test',ec2_ids=,ephemeral_gb=1,ephemeral_key_uuid=None,fault=,flavor=Flavor(2),hidden=False,host='np0005548789.localdomain',hostname='test',id=2,image_ref='e0d06706-da90-478a-9829-34b75a3ce049',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=,launch_index=0,launched_at=2025-12-06T08:44:43Z,launched_on='np0005548789.localdomain',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=,new_flavor=None,node='np0005548789.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=,power_state=4,progress=0,project_id='3d603431c0bb4967bafc7a0aa6108bfe',ramdisk_id='',reservation_id='r-02dpupig',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=,services=,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='e0d06706-da90-478a-9829-34b75a3ce049',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='pc-q35-rhel9.0.0',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=,task_state='powering-on',terminated_at=None,trusted_certs=,updated_at=2025-12-06T09:59:06Z,user_data=None,user_id='ff0049f3313348bdb67886d170c1c765',uuid=b7ed0a2e-9350-4933-9334-4e5e08d3e6aa,vcpu_model=VirtCPUModel,vcpus=
1,vm_mode=None,vm_state='stopped') vif={"id": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "address": "fa:16:3e:64:77:f3", "network": {"id": "652b6bdc-40ce-45b7-8aa5-3bca79987993", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.162", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "3d603431c0bb4967bafc7a0aa6108bfe", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86fc0b7a-fb", "ovs_interfaceid": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m Dec 6 04:59:36 localhost nova_compute[282193]: 2025-12-06 09:59:36.747 282197 DEBUG nova.network.os_vif_util [None req-ca3c1125-e182-4a1b-89a6-1f4388da9e90 ff0049f3313348bdb67886d170c1c765 3d603431c0bb4967bafc7a0aa6108bfe - - default default] Converting VIF {"id": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "address": "fa:16:3e:64:77:f3", "network": {"id": "652b6bdc-40ce-45b7-8aa5-3bca79987993", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.162", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "3d603431c0bb4967bafc7a0aa6108bfe", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86fc0b7a-fb", "ovs_interfaceid": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Dec 6 04:59:36 localhost nova_compute[282193]: 2025-12-06 09:59:36.748 282197 DEBUG nova.network.os_vif_util [None req-ca3c1125-e182-4a1b-89a6-1f4388da9e90 ff0049f3313348bdb67886d170c1c765 3d603431c0bb4967bafc7a0aa6108bfe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:64:77:f3,bridge_name='br-int',has_traffic_filtering=True,id=86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b,network=Network(652b6bdc-40ce-45b7-8aa5-3bca79987993),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86fc0b7a-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Dec 6 04:59:36 localhost nova_compute[282193]: 2025-12-06 09:59:36.748 282197 DEBUG os_vif [None req-ca3c1125-e182-4a1b-89a6-1f4388da9e90 ff0049f3313348bdb67886d170c1c765 3d603431c0bb4967bafc7a0aa6108bfe - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:64:77:f3,bridge_name='br-int',has_traffic_filtering=True,id=86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b,network=Network(652b6bdc-40ce-45b7-8aa5-3bca79987993),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86fc0b7a-fb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m Dec 6 04:59:36 localhost nova_compute[282193]: 2025-12-06 09:59:36.749 282197 
DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 18 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:59:36 localhost nova_compute[282193]: 2025-12-06 09:59:36.749 282197 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 6 04:59:36 localhost nova_compute[282193]: 2025-12-06 09:59:36.749 282197 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Dec 6 04:59:36 localhost nova_compute[282193]: 2025-12-06 09:59:36.752 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 18 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:59:36 localhost nova_compute[282193]: 2025-12-06 09:59:36.752 282197 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap86fc0b7a-fb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 6 04:59:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13459 DF PROTO=TCP SPT=54536 DPT=9102 SEQ=735695711 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52E6BFEF0000000001030307) Dec 6 04:59:36 localhost nova_compute[282193]: 2025-12-06 09:59:36.752 282197 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap86fc0b7a-fb, col_values=(('external_ids', {'iface-id': '86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:64:77:f3', 'vm-uuid': 
'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 6 04:59:36 localhost nova_compute[282193]: 2025-12-06 09:59:36.754 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:59:36 localhost nova_compute[282193]: 2025-12-06 09:59:36.756 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 04:59:36 localhost nova_compute[282193]: 2025-12-06 09:59:36.760 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:59:36 localhost nova_compute[282193]: 2025-12-06 09:59:36.760 282197 INFO os_vif [None req-ca3c1125-e182-4a1b-89a6-1f4388da9e90 ff0049f3313348bdb67886d170c1c765 3d603431c0bb4967bafc7a0aa6108bfe - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:64:77:f3,bridge_name='br-int',has_traffic_filtering=True,id=86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b,network=Network(652b6bdc-40ce-45b7-8aa5-3bca79987993),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86fc0b7a-fb')#033[00m Dec 6 04:59:36 localhost systemd[1]: Started libvirt secret daemon. Dec 6 04:59:36 localhost kernel: device tap86fc0b7a-fb entered promiscuous mode Dec 6 04:59:36 localhost ovn_controller[154851]: 2025-12-06T09:59:36Z|00056|binding|INFO|Claiming lport 86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b for this chassis. 
Dec 6 04:59:36 localhost ovn_controller[154851]: 2025-12-06T09:59:36Z|00057|binding|INFO|86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b: Claiming fa:16:3e:64:77:f3 192.168.0.162 Dec 6 04:59:36 localhost nova_compute[282193]: 2025-12-06 09:59:36.871 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:59:36 localhost NetworkManager[5973]: [1765015176.8739] manager: (tap86fc0b7a-fb): new Tun device (/org/freedesktop/NetworkManager/Devices/16) Dec 6 04:59:36 localhost systemd-udevd[283151]: Network interface NamePolicy= disabled on kernel command line. Dec 6 04:59:36 localhost ovn_controller[154851]: 2025-12-06T09:59:36Z|00058|binding|INFO|Setting lport 86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b ovn-installed in OVS Dec 6 04:59:36 localhost nova_compute[282193]: 2025-12-06 09:59:36.888 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:59:36 localhost ovn_controller[154851]: 2025-12-06T09:59:36Z|00059|binding|INFO|Setting lport 86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b up in Southbound Dec 6 04:59:36 localhost nova_compute[282193]: 2025-12-06 09:59:36.889 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:59:36 localhost nova_compute[282193]: 2025-12-06 09:59:36.889 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:59:36 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:36.887 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:64:77:f3 192.168.0.162'], port_security=['fa:16:3e:64:77:f3 192.168.0.162'], type=, nat_addresses=[], 
virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.0.162/24', 'neutron:device_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-652b6bdc-40ce-45b7-8aa5-3bca79987993', 'neutron:port_capabilities': '', 'neutron:port_fip': '192.168.122.20', 'neutron:port_name': '', 'neutron:project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'neutron:revision_number': '8', 'neutron:security_group_ids': '65e67ecb-ffcf-41e6-8b8b-ed491f2580ec 7ce08e20-be94-4509-a371-aa5c036416af', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7872d306-938e-4ee0-be61-57ba3983d747, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 04:59:36 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:36.890 160509 INFO neutron.agent.ovn.metadata.agent [-] Port 86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b in datapath 652b6bdc-40ce-45b7-8aa5-3bca79987993 bound to our chassis#033[00m Dec 6 04:59:36 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:36.892 160509 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 652b6bdc-40ce-45b7-8aa5-3bca79987993#033[00m Dec 6 04:59:36 localhost NetworkManager[5973]: [1765015176.8987] device (tap86fc0b7a-fb): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external') Dec 6 04:59:36 localhost NetworkManager[5973]: [1765015176.8994] device (tap86fc0b7a-fb): state change: unavailable -> disconnected (reason 'none', sys-iface-state: 'external') Dec 6 04:59:36 
localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:36.901 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[4477694e-f633-4a6e-896d-8f816e3a3a80]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 04:59:36 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:36.903 160509 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap652b6bdc-41 in ovnmeta-652b6bdc-40ce-45b7-8aa5-3bca79987993 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m Dec 6 04:59:36 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:36.904 160674 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap652b6bdc-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m Dec 6 04:59:36 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:36.905 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[93134fe5-4989-4f27-bbb8-bf0289360e0e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 04:59:36 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:36.906 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[7a81297a-5cb9-4c25-9050-8d2de27fa905]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 04:59:36 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:36.919 160720 DEBUG oslo.privsep.daemon [-] privsep: reply[1e61e7b0-bc1a-44af-a307-cb90a96c3609]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 04:59:36 localhost nova_compute[282193]: 2025-12-06 09:59:36.923 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:59:36 localhost nova_compute[282193]: 2025-12-06 09:59:36.931 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:59:36 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:36.933 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[bb124a56-b3d0-435e-8e6a-9e2d1de501de]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 04:59:36 localhost systemd-machined[84444]: New machine qemu-2-instance-00000002. Dec 6 04:59:36 localhost systemd[1]: Started Virtual Machine qemu-2-instance-00000002. Dec 6 04:59:36 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:36.961 160700 DEBUG oslo.privsep.daemon [-] privsep: reply[62dfe517-ba9a-440a-9031-c1104aeb992f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 04:59:36 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:36.968 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[5b3ada8c-5467-496d-9efd-5fe02cec1b18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 04:59:36 localhost NetworkManager[5973]: [1765015176.9704] manager: (tap652b6bdc-40): new Veth device (/org/freedesktop/NetworkManager/Devices/17) Dec 6 04:59:37 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:37.001 160700 DEBUG oslo.privsep.daemon [-] privsep: reply[8868444f-3a65-4b55-adc2-911de08d4d5a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 04:59:37 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:37.006 160700 DEBUG oslo.privsep.daemon [-] privsep: reply[3bb230e0-83b3-437a-9fc0-7aded87f7305]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 04:59:37 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap652b6bdc-41: link becomes ready Dec 6 04:59:37 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap652b6bdc-40: link becomes ready Dec 6 04:59:37 localhost NetworkManager[5973]: [1765015177.0270] device 
(tap652b6bdc-40): carrier: link connected Dec 6 04:59:37 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:37.032 160700 DEBUG oslo.privsep.daemon [-] privsep: reply[1065fe97-5b45-4138-be02-3423e131d2dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 04:59:37 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:37.052 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[75949468-8ef4-4f8b-859d-b99db8ac5f35]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap652b6bdc-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:b4:a7:0c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 
0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1159520, 'reachable_time': 26990, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 
'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283188, 'error': None, 'target': 'ovnmeta-652b6bdc-40ce-45b7-8aa5-3bca79987993', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 04:59:37 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:37.065 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[61f3dcf6-7a49-4e0c-ae38-43a51e2f640f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb4:a70c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1159520, 'tstamp': 1159520}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 283189, 'error': None, 'target': 'ovnmeta-652b6bdc-40ce-45b7-8aa5-3bca79987993', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 04:59:37 localhost nova_compute[282193]: 2025-12-06 09:59:37.065 282197 DEBUG nova.compute.manager [req-8684fbc5-7c31-402f-831f-167241e84181 
req-802597e0-7f32-4c4c-9049-f6b1a959ea2d 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Received event network-vif-plugged-86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Dec 6 04:59:37 localhost nova_compute[282193]: 2025-12-06 09:59:37.066 282197 DEBUG oslo_concurrency.lockutils [req-8684fbc5-7c31-402f-831f-167241e84181 req-802597e0-7f32-4c4c-9049-f6b1a959ea2d 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Acquiring lock "b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 04:59:37 localhost nova_compute[282193]: 2025-12-06 09:59:37.067 282197 DEBUG oslo_concurrency.lockutils [req-8684fbc5-7c31-402f-831f-167241e84181 req-802597e0-7f32-4c4c-9049-f6b1a959ea2d 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Lock "b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 04:59:37 localhost nova_compute[282193]: 2025-12-06 09:59:37.067 282197 DEBUG oslo_concurrency.lockutils [req-8684fbc5-7c31-402f-831f-167241e84181 req-802597e0-7f32-4c4c-9049-f6b1a959ea2d 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Lock "b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 04:59:37 localhost nova_compute[282193]: 2025-12-06 09:59:37.067 282197 DEBUG nova.compute.manager 
[req-8684fbc5-7c31-402f-831f-167241e84181 req-802597e0-7f32-4c4c-9049-f6b1a959ea2d 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] No waiting events found dispatching network-vif-plugged-86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Dec 6 04:59:37 localhost nova_compute[282193]: 2025-12-06 09:59:37.068 282197 WARNING nova.compute.manager [req-8684fbc5-7c31-402f-831f-167241e84181 req-802597e0-7f32-4c4c-9049-f6b1a959ea2d 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Received unexpected event network-vif-plugged-86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b for instance with vm_state stopped and task_state powering-on.#033[00m Dec 6 04:59:37 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:37.083 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[8d0d2403-0160-4bed-95b9-e7145865de01]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap652b6bdc-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:b4:a7:0c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 
'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1159520, 'reachable_time': 26990, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 
4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 283197, 'error': None, 'target': 'ovnmeta-652b6bdc-40ce-45b7-8aa5-3bca79987993', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 04:59:37 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:37.113 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[676e856f-f794-4ae6-8736-df26f1af5038]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 04:59:37 localhost ovn_metadata_agent[160504]: 
2025-12-06 09:59:37.179 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[9d3d588d-7bce-46eb-b163-55de9d6d705f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 04:59:37 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:37.181 160509 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap652b6bdc-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 6 04:59:37 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:37.182 160509 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Dec 6 04:59:37 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:37.183 160509 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap652b6bdc-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 6 04:59:37 localhost nova_compute[282193]: 2025-12-06 09:59:37.186 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:59:37 localhost kernel: device tap652b6bdc-40 entered promiscuous mode Dec 6 04:59:37 localhost nova_compute[282193]: 2025-12-06 09:59:37.190 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:59:37 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:37.196 160509 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap652b6bdc-40, col_values=(('external_ids', {'iface-id': '4fb81ffd-e198-4628-9bd0-0c0f0c89c33a'}),)) do_commit 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 6 04:59:37 localhost ovn_controller[154851]: 2025-12-06T09:59:37Z|00060|binding|INFO|Releasing lport 4fb81ffd-e198-4628-9bd0-0c0f0c89c33a from this chassis (sb_readonly=0) Dec 6 04:59:37 localhost nova_compute[282193]: 2025-12-06 09:59:37.199 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:59:37 localhost nova_compute[282193]: 2025-12-06 09:59:37.212 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:59:37 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:37.214 160509 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/652b6bdc-40ce-45b7-8aa5-3bca79987993.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/652b6bdc-40ce-45b7-8aa5-3bca79987993.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m Dec 6 04:59:37 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:37.216 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[33da2e55-6231-4c66-aa7e-a1dfd131a766]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 04:59:37 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:37.217 160509 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = Dec 6 04:59:37 localhost ovn_metadata_agent[160504]: global Dec 6 04:59:37 localhost ovn_metadata_agent[160504]: log /dev/log local0 debug Dec 6 04:59:37 localhost ovn_metadata_agent[160504]: log-tag haproxy-metadata-proxy-652b6bdc-40ce-45b7-8aa5-3bca79987993 Dec 6 04:59:37 localhost ovn_metadata_agent[160504]: user root Dec 6 04:59:37 localhost ovn_metadata_agent[160504]: group root Dec 6 04:59:37 localhost ovn_metadata_agent[160504]: maxconn 1024 Dec 6 04:59:37 localhost 
ovn_metadata_agent[160504]: pidfile /var/lib/neutron/external/pids/652b6bdc-40ce-45b7-8aa5-3bca79987993.pid.haproxy Dec 6 04:59:37 localhost ovn_metadata_agent[160504]: daemon Dec 6 04:59:37 localhost ovn_metadata_agent[160504]: Dec 6 04:59:37 localhost ovn_metadata_agent[160504]: defaults Dec 6 04:59:37 localhost ovn_metadata_agent[160504]: log global Dec 6 04:59:37 localhost ovn_metadata_agent[160504]: mode http Dec 6 04:59:37 localhost ovn_metadata_agent[160504]: option httplog Dec 6 04:59:37 localhost ovn_metadata_agent[160504]: option dontlognull Dec 6 04:59:37 localhost ovn_metadata_agent[160504]: option http-server-close Dec 6 04:59:37 localhost ovn_metadata_agent[160504]: option forwardfor Dec 6 04:59:37 localhost ovn_metadata_agent[160504]: retries 3 Dec 6 04:59:37 localhost ovn_metadata_agent[160504]: timeout http-request 30s Dec 6 04:59:37 localhost ovn_metadata_agent[160504]: timeout connect 30s Dec 6 04:59:37 localhost ovn_metadata_agent[160504]: timeout client 32s Dec 6 04:59:37 localhost ovn_metadata_agent[160504]: timeout server 32s Dec 6 04:59:37 localhost ovn_metadata_agent[160504]: timeout http-keep-alive 30s Dec 6 04:59:37 localhost ovn_metadata_agent[160504]: Dec 6 04:59:37 localhost ovn_metadata_agent[160504]: Dec 6 04:59:37 localhost ovn_metadata_agent[160504]: listen listener Dec 6 04:59:37 localhost ovn_metadata_agent[160504]: bind 169.254.169.254:80 Dec 6 04:59:37 localhost ovn_metadata_agent[160504]: server metadata /var/lib/neutron/metadata_proxy Dec 6 04:59:37 localhost ovn_metadata_agent[160504]: http-request add-header X-OVN-Network-ID 652b6bdc-40ce-45b7-8aa5-3bca79987993 Dec 6 04:59:37 localhost ovn_metadata_agent[160504]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m Dec 6 04:59:37 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:37.218 160509 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 
'exec', 'ovnmeta-652b6bdc-40ce-45b7-8aa5-3bca79987993', 'env', 'PROCESS_TAG=haproxy-652b6bdc-40ce-45b7-8aa5-3bca79987993', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/652b6bdc-40ce-45b7-8aa5-3bca79987993.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m Dec 6 04:59:37 localhost nova_compute[282193]: 2025-12-06 09:59:37.330 282197 DEBUG nova.virt.driver [None req-c2cff4e5-3d1d-47f9-bef4-e7abce6c5152 - - - - - -] Emitting event Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Dec 6 04:59:37 localhost nova_compute[282193]: 2025-12-06 09:59:37.330 282197 INFO nova.compute.manager [None req-c2cff4e5-3d1d-47f9-bef4-e7abce6c5152 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] VM Resumed (Lifecycle Event)#033[00m Dec 6 04:59:37 localhost nova_compute[282193]: 2025-12-06 09:59:37.354 282197 DEBUG nova.compute.manager [None req-c2cff4e5-3d1d-47f9-bef4-e7abce6c5152 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 6 04:59:37 localhost nova_compute[282193]: 2025-12-06 09:59:37.359 282197 DEBUG nova.compute.manager [None req-c2cff4e5-3d1d-47f9-bef4-e7abce6c5152 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m Dec 6 04:59:37 localhost nova_compute[282193]: 2025-12-06 09:59:37.360 282197 DEBUG nova.compute.manager [None req-ca3c1125-e182-4a1b-89a6-1f4388da9e90 ff0049f3313348bdb67886d170c1c765 3d603431c0bb4967bafc7a0aa6108bfe - - default default] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Instance event wait completed in 0 seconds for wait_for_instance_event 
/usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m Dec 6 04:59:37 localhost nova_compute[282193]: 2025-12-06 09:59:37.364 282197 INFO nova.virt.libvirt.driver [-] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Instance rebooted successfully.#033[00m Dec 6 04:59:37 localhost nova_compute[282193]: 2025-12-06 09:59:37.364 282197 DEBUG nova.compute.manager [None req-ca3c1125-e182-4a1b-89a6-1f4388da9e90 ff0049f3313348bdb67886d170c1c765 3d603431c0bb4967bafc7a0aa6108bfe - - default default] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 6 04:59:37 localhost nova_compute[282193]: 2025-12-06 09:59:37.398 282197 INFO nova.compute.manager [None req-c2cff4e5-3d1d-47f9-bef4-e7abce6c5152 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] During sync_power_state the instance has a pending task (powering-on). Skip.#033[00m Dec 6 04:59:37 localhost nova_compute[282193]: 2025-12-06 09:59:37.398 282197 DEBUG nova.virt.driver [None req-c2cff4e5-3d1d-47f9-bef4-e7abce6c5152 - - - - - -] Emitting event Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Dec 6 04:59:37 localhost nova_compute[282193]: 2025-12-06 09:59:37.398 282197 INFO nova.compute.manager [None req-c2cff4e5-3d1d-47f9-bef4-e7abce6c5152 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] VM Started (Lifecycle Event)#033[00m Dec 6 04:59:37 localhost nova_compute[282193]: 2025-12-06 09:59:37.437 282197 DEBUG nova.compute.manager [None req-c2cff4e5-3d1d-47f9-bef4-e7abce6c5152 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 6 04:59:37 localhost nova_compute[282193]: 2025-12-06 09:59:37.441 282197 DEBUG nova.compute.manager [None req-c2cff4e5-3d1d-47f9-bef4-e7abce6c5152 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] 
Synchronizing instance power state after lifecycle event "Started"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m Dec 6 04:59:37 localhost podman[283265]: Dec 6 04:59:37 localhost podman[283265]: 2025-12-06 09:59:37.708853657 +0000 UTC m=+0.094238429 container create 09754e96bffe808c203c680a69a65deae010c5f97ae8e7bcaef645b11fa10ca7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-652b6bdc-40ce-45b7-8aa5-3bca79987993, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2) Dec 6 04:59:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941. Dec 6 04:59:37 localhost podman[283265]: 2025-12-06 09:59:37.66260197 +0000 UTC m=+0.047986792 image pull quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified Dec 6 04:59:37 localhost systemd[1]: Started libpod-conmon-09754e96bffe808c203c680a69a65deae010c5f97ae8e7bcaef645b11fa10ca7.scope. Dec 6 04:59:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8257 DF PROTO=TCP SPT=50026 DPT=9102 SEQ=3864606856 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52E6C3EF0000000001030307) Dec 6 04:59:37 localhost systemd[1]: Started libcrun container. 
Dec 6 04:59:37 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c614c2452f63581ed05d0d387559645c496c93a80ca0ed66fe42b66557922bf7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 6 04:59:37 localhost podman[283278]: 2025-12-06 09:59:37.845812735 +0000 UTC m=+0.103472935 container health_status b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Dec 6 04:59:37 localhost podman[283265]: 2025-12-06 09:59:37.855784374 +0000 UTC m=+0.241169156 container init 09754e96bffe808c203c680a69a65deae010c5f97ae8e7bcaef645b11fa10ca7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-652b6bdc-40ce-45b7-8aa5-3bca79987993, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team) Dec 6 04:59:37 
localhost podman[283265]: 2025-12-06 09:59:37.866881996 +0000 UTC m=+0.252266778 container start 09754e96bffe808c203c680a69a65deae010c5f97ae8e7bcaef645b11fa10ca7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-652b6bdc-40ce-45b7-8aa5-3bca79987993, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 04:59:37 localhost podman[283278]: 2025-12-06 09:59:37.883096877 +0000 UTC m=+0.140757057 container exec_died b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Dec 6 04:59:37 localhost neutron-haproxy-ovnmeta-652b6bdc-40ce-45b7-8aa5-3bca79987993[283289]: [NOTICE] (283303) : New worker (283305) forked Dec 6 04:59:37 localhost neutron-haproxy-ovnmeta-652b6bdc-40ce-45b7-8aa5-3bca79987993[283289]: [NOTICE] (283303) : Loading success. 
Dec 6 04:59:37 localhost systemd[1]: b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941.service: Deactivated successfully. Dec 6 04:59:39 localhost nova_compute[282193]: 2025-12-06 09:59:39.107 282197 DEBUG nova.compute.manager [req-67f2f013-ad7a-4a0d-93ae-076de7106831 req-c73f1e4d-3992-4560-aef4-596064f1b75c 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Received event network-vif-plugged-86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Dec 6 04:59:39 localhost nova_compute[282193]: 2025-12-06 09:59:39.108 282197 DEBUG oslo_concurrency.lockutils [req-67f2f013-ad7a-4a0d-93ae-076de7106831 req-c73f1e4d-3992-4560-aef4-596064f1b75c 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Acquiring lock "b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 04:59:39 localhost nova_compute[282193]: 2025-12-06 09:59:39.109 282197 DEBUG oslo_concurrency.lockutils [req-67f2f013-ad7a-4a0d-93ae-076de7106831 req-c73f1e4d-3992-4560-aef4-596064f1b75c 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Lock "b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 04:59:39 localhost nova_compute[282193]: 2025-12-06 09:59:39.109 282197 DEBUG oslo_concurrency.lockutils [req-67f2f013-ad7a-4a0d-93ae-076de7106831 req-c73f1e4d-3992-4560-aef4-596064f1b75c 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Lock "b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-events" "released" by 
"nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 04:59:39 localhost nova_compute[282193]: 2025-12-06 09:59:39.110 282197 DEBUG nova.compute.manager [req-67f2f013-ad7a-4a0d-93ae-076de7106831 req-c73f1e4d-3992-4560-aef4-596064f1b75c 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] No waiting events found dispatching network-vif-plugged-86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Dec 6 04:59:39 localhost nova_compute[282193]: 2025-12-06 09:59:39.110 282197 WARNING nova.compute.manager [req-67f2f013-ad7a-4a0d-93ae-076de7106831 req-c73f1e4d-3992-4560-aef4-596064f1b75c 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Received unexpected event network-vif-plugged-86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b for instance with vm_state active and task_state None.#033[00m Dec 6 04:59:39 localhost nova_compute[282193]: 2025-12-06 09:59:39.197 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:59:39 localhost snmpd[67279]: IfIndex of an interface changed. Such interfaces will appear multiple times in IF-MIB. 
Dec 6 04:59:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13460 DF PROTO=TCP SPT=54536 DPT=9102 SEQ=735695711 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52E6CFAF0000000001030307) Dec 6 04:59:41 localhost nova_compute[282193]: 2025-12-06 09:59:41.756 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:59:44 localhost nova_compute[282193]: 2025-12-06 09:59:44.226 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:59:44 localhost sshd[283392]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:59:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e. Dec 6 04:59:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094. Dec 6 04:59:44 localhost podman[283404]: 2025-12-06 09:59:44.717832649 +0000 UTC m=+0.082831448 container health_status 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, version=9.6, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, vcs-type=git, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.) 
Dec 6 04:59:44 localhost podman[283404]: 2025-12-06 09:59:44.734399721 +0000 UTC m=+0.099398570 container exec_died 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, version=9.6, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, managed_by=edpm_ansible, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, config_id=edpm, release=1755695350, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git) Dec 6 04:59:44 localhost systemd[1]: tmp-crun.8mKM99.mount: Deactivated successfully. Dec 6 04:59:44 localhost podman[283405]: 2025-12-06 09:59:44.754288304 +0000 UTC m=+0.118663814 container health_status bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 
'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 04:59:44 localhost systemd[1]: 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e.service: Deactivated successfully. Dec 6 04:59:44 localhost podman[283405]: 2025-12-06 09:59:44.792217686 +0000 UTC m=+0.156593226 container exec_died bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', 
'/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true) Dec 6 04:59:44 localhost systemd[1]: bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094.service: Deactivated successfully. Dec 6 04:59:46 localhost openstack_network_exporter[243110]: ERROR 09:59:46 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 6 04:59:46 localhost openstack_network_exporter[243110]: ERROR 09:59:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 04:59:46 localhost openstack_network_exporter[243110]: ERROR 09:59:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 04:59:46 localhost openstack_network_exporter[243110]: ERROR 09:59:46 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 6 04:59:46 localhost openstack_network_exporter[243110]: Dec 6 04:59:46 localhost openstack_network_exporter[243110]: ERROR 09:59:46 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 6 04:59:46 localhost openstack_network_exporter[243110]: Dec 6 04:59:46 localhost nova_compute[282193]: 2025-12-06 
09:59:46.758 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:59:46 localhost sshd[283476]: main: sshd: ssh-rsa algorithm is disabled Dec 6 04:59:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:47.290 160509 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 04:59:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:47.292 160509 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 04:59:47 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:47.293 160509 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 04:59:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6. 
Dec 6 04:59:47 localhost podman[283478]: 2025-12-06 09:59:47.92251519 +0000 UTC m=+0.076916075 container health_status 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Dec 6 04:59:47 localhost podman[283478]: 2025-12-06 09:59:47.940013511 +0000 UTC m=+0.094414416 container exec_died 
44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3) Dec 6 04:59:47 localhost systemd[1]: 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6.service: Deactivated successfully. 
Dec 6 04:59:49 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13461 DF PROTO=TCP SPT=54536 DPT=9102 SEQ=735695711 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52E6EFEF0000000001030307) Dec 6 04:59:49 localhost nova_compute[282193]: 2025-12-06 09:59:49.271 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:59:50 localhost ovn_controller[154851]: 2025-12-06T09:59:50Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:64:77:f3 192.168.0.162 Dec 6 04:59:51 localhost nova_compute[282193]: 2025-12-06 09:59:51.806 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:59:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538. 
Dec 6 04:59:52 localhost podman[283515]: 2025-12-06 09:59:52.927304681 +0000 UTC m=+0.078757513 container health_status d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 6 04:59:52 localhost podman[283515]: 2025-12-06 09:59:52.96322085 +0000 UTC m=+0.114673702 container exec_died d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 6 04:59:52 localhost systemd[1]: d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538.service: Deactivated successfully. 
Dec 6 04:59:53 localhost podman[241090]: time="2025-12-06T09:59:53Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 6 04:59:53 localhost podman[241090]: @ - - [06/Dec/2025:09:59:53 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 149555 "" "Go-http-client/1.1" Dec 6 04:59:53 localhost podman[241090]: @ - - [06/Dec/2025:09:59:53 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17733 "" "Go-http-client/1.1" Dec 6 04:59:54 localhost nova_compute[282193]: 2025-12-06 09:59:54.302 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:59:55 localhost nova_compute[282193]: 2025-12-06 09:59:55.914 282197 DEBUG nova.compute.manager [None req-8782af70-4f70-461c-ad40-bbf0accb7649 ff0049f3313348bdb67886d170c1c765 3d603431c0bb4967bafc7a0aa6108bfe - - default default] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 6 04:59:55 localhost nova_compute[282193]: 2025-12-06 09:59:55.919 282197 INFO nova.compute.manager [None req-8782af70-4f70-461c-ad40-bbf0accb7649 ff0049f3313348bdb67886d170c1c765 3d603431c0bb4967bafc7a0aa6108bfe - - default default] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Retrieving diagnostics#033[00m Dec 6 04:59:56 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:56.090 160637 DEBUG eventlet.wsgi.server [-] (160637) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m Dec 6 04:59:56 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:56.093 160637 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /2009-04-04/meta-data/instance-id HTTP/1.0#015 Dec 6 04:59:56 localhost ovn_metadata_agent[160504]: Accept: */*#015 Dec 6 04:59:56 localhost 
ovn_metadata_agent[160504]: Connection: close#015 Dec 6 04:59:56 localhost ovn_metadata_agent[160504]: Content-Type: text/plain#015 Dec 6 04:59:56 localhost ovn_metadata_agent[160504]: Host: 169.254.169.254#015 Dec 6 04:59:56 localhost ovn_metadata_agent[160504]: User-Agent: curl/7.84.0#015 Dec 6 04:59:56 localhost ovn_metadata_agent[160504]: X-Forwarded-For: 192.168.0.162#015 Dec 6 04:59:56 localhost ovn_metadata_agent[160504]: X-Ovn-Network-Id: 652b6bdc-40ce-45b7-8aa5-3bca79987993 __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m Dec 6 04:59:56 localhost nova_compute[282193]: 2025-12-06 09:59:56.839 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 04:59:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5. Dec 6 04:59:57 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:57.831 160637 DEBUG neutron.agent.ovn.metadata.server [-] _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m Dec 6 04:59:57 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:57.832 160637 INFO eventlet.wsgi.server [-] 192.168.0.162, "GET /2009-04-04/meta-data/instance-id HTTP/1.1" status: 200 len: 146 time: 1.7391040#033[00m Dec 6 04:59:57 localhost haproxy-metadata-proxy-652b6bdc-40ce-45b7-8aa5-3bca79987993[283305]: 192.168.0.162:32776 [06/Dec/2025:09:59:56.089] listener listener/metadata 0/0/0/1742/1742 200 130 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/instance-id HTTP/1.1" Dec 6 04:59:57 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:57.850 160637 DEBUG eventlet.wsgi.server [-] (160637) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m Dec 6 04:59:57 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:57.852 160637 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET 
/2009-04-04/meta-data/public-keys HTTP/1.0#015 Dec 6 04:59:57 localhost ovn_metadata_agent[160504]: Accept: */*#015 Dec 6 04:59:57 localhost ovn_metadata_agent[160504]: Connection: close#015 Dec 6 04:59:57 localhost ovn_metadata_agent[160504]: Content-Type: text/plain#015 Dec 6 04:59:57 localhost ovn_metadata_agent[160504]: Host: 169.254.169.254#015 Dec 6 04:59:57 localhost ovn_metadata_agent[160504]: User-Agent: curl/7.84.0#015 Dec 6 04:59:57 localhost ovn_metadata_agent[160504]: X-Forwarded-For: 192.168.0.162#015 Dec 6 04:59:57 localhost ovn_metadata_agent[160504]: X-Ovn-Network-Id: 652b6bdc-40ce-45b7-8aa5-3bca79987993 __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m Dec 6 04:59:57 localhost haproxy-metadata-proxy-652b6bdc-40ce-45b7-8aa5-3bca79987993[283305]: 192.168.0.162:32790 [06/Dec/2025:09:59:57.850] listener listener/metadata 0/0/0/24/24 404 281 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/public-keys HTTP/1.1" Dec 6 04:59:57 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:57.874 160637 INFO eventlet.wsgi.server [-] 192.168.0.162, "GET /2009-04-04/meta-data/public-keys HTTP/1.1" status: 404 len: 297 time: 0.0227432#033[00m Dec 6 04:59:57 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:57.894 160637 DEBUG eventlet.wsgi.server [-] (160637) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m Dec 6 04:59:57 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:57.895 160637 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /2009-04-04/meta-data/instance-id HTTP/1.0#015 Dec 6 04:59:57 localhost ovn_metadata_agent[160504]: Accept: */*#015 Dec 6 04:59:57 localhost ovn_metadata_agent[160504]: Connection: close#015 Dec 6 04:59:57 localhost ovn_metadata_agent[160504]: Content-Type: text/plain#015 Dec 6 04:59:57 localhost ovn_metadata_agent[160504]: Host: 169.254.169.254#015 Dec 6 04:59:57 localhost ovn_metadata_agent[160504]: User-Agent: curl/7.84.0#015 Dec 6 04:59:57 
localhost ovn_metadata_agent[160504]: X-Forwarded-For: 192.168.0.162#015 Dec 6 04:59:57 localhost ovn_metadata_agent[160504]: X-Ovn-Network-Id: 652b6bdc-40ce-45b7-8aa5-3bca79987993 __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m Dec 6 04:59:57 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:57.908 160637 DEBUG neutron.agent.ovn.metadata.server [-] _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m Dec 6 04:59:57 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:57.908 160637 INFO eventlet.wsgi.server [-] 192.168.0.162, "GET /2009-04-04/meta-data/instance-id HTTP/1.1" status: 200 len: 146 time: 0.0138922#033[00m Dec 6 04:59:57 localhost haproxy-metadata-proxy-652b6bdc-40ce-45b7-8aa5-3bca79987993[283305]: 192.168.0.162:32792 [06/Dec/2025:09:59:57.893] listener listener/metadata 0/0/0/15/15 200 130 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/instance-id HTTP/1.1" Dec 6 04:59:57 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:57.919 160637 DEBUG eventlet.wsgi.server [-] (160637) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m Dec 6 04:59:57 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:57.920 160637 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /2009-04-04/meta-data/ami-launch-index HTTP/1.0#015 Dec 6 04:59:57 localhost ovn_metadata_agent[160504]: Accept: */*#015 Dec 6 04:59:57 localhost ovn_metadata_agent[160504]: Connection: close#015 Dec 6 04:59:57 localhost ovn_metadata_agent[160504]: Content-Type: text/plain#015 Dec 6 04:59:57 localhost ovn_metadata_agent[160504]: Host: 169.254.169.254#015 Dec 6 04:59:57 localhost ovn_metadata_agent[160504]: User-Agent: curl/7.84.0#015 Dec 6 04:59:57 localhost ovn_metadata_agent[160504]: X-Forwarded-For: 192.168.0.162#015 Dec 6 04:59:57 localhost ovn_metadata_agent[160504]: X-Ovn-Network-Id: 652b6bdc-40ce-45b7-8aa5-3bca79987993 __call__ 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m Dec 6 04:59:57 localhost podman[283539]: 2025-12-06 09:59:57.923503285 +0000 UTC m=+0.081852169 container health_status 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 04:59:57 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:57.935 160637 DEBUG neutron.agent.ovn.metadata.server [-] _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m Dec 6 04:59:57 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:57.936 160637 INFO eventlet.wsgi.server [-] 192.168.0.162, "GET /2009-04-04/meta-data/ami-launch-index HTTP/1.1" status: 200 len: 136 time: 
0.0155873#033[00m Dec 6 04:59:57 localhost haproxy-metadata-proxy-652b6bdc-40ce-45b7-8aa5-3bca79987993[283305]: 192.168.0.162:32802 [06/Dec/2025:09:59:57.918] listener listener/metadata 0/0/0/17/17 200 120 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/ami-launch-index HTTP/1.1" Dec 6 04:59:57 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:57.940 160637 DEBUG eventlet.wsgi.server [-] (160637) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m Dec 6 04:59:57 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:57.940 160637 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /2009-04-04/meta-data/instance-type HTTP/1.0#015 Dec 6 04:59:57 localhost ovn_metadata_agent[160504]: Accept: */*#015 Dec 6 04:59:57 localhost ovn_metadata_agent[160504]: Connection: close#015 Dec 6 04:59:57 localhost ovn_metadata_agent[160504]: Content-Type: text/plain#015 Dec 6 04:59:57 localhost ovn_metadata_agent[160504]: Host: 169.254.169.254#015 Dec 6 04:59:57 localhost ovn_metadata_agent[160504]: User-Agent: curl/7.84.0#015 Dec 6 04:59:57 localhost ovn_metadata_agent[160504]: X-Forwarded-For: 192.168.0.162#015 Dec 6 04:59:57 localhost ovn_metadata_agent[160504]: X-Ovn-Network-Id: 652b6bdc-40ce-45b7-8aa5-3bca79987993 __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m Dec 6 04:59:57 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:57.955 160637 DEBUG neutron.agent.ovn.metadata.server [-] _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m Dec 6 04:59:57 localhost haproxy-metadata-proxy-652b6bdc-40ce-45b7-8aa5-3bca79987993[283305]: 192.168.0.162:32810 [06/Dec/2025:09:59:57.939] listener listener/metadata 0/0/0/16/16 200 127 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/instance-type HTTP/1.1" Dec 6 04:59:57 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:57.956 160637 INFO eventlet.wsgi.server [-] 192.168.0.162, "GET 
/2009-04-04/meta-data/instance-type HTTP/1.1" status: 200 len: 143 time: 0.0151703#033[00m Dec 6 04:59:57 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:57.959 160637 DEBUG eventlet.wsgi.server [-] (160637) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m Dec 6 04:59:57 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:57.960 160637 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /2009-04-04/meta-data/local-ipv4 HTTP/1.0#015 Dec 6 04:59:57 localhost ovn_metadata_agent[160504]: Accept: */*#015 Dec 6 04:59:57 localhost ovn_metadata_agent[160504]: Connection: close#015 Dec 6 04:59:57 localhost ovn_metadata_agent[160504]: Content-Type: text/plain#015 Dec 6 04:59:57 localhost ovn_metadata_agent[160504]: Host: 169.254.169.254#015 Dec 6 04:59:57 localhost ovn_metadata_agent[160504]: User-Agent: curl/7.84.0#015 Dec 6 04:59:57 localhost ovn_metadata_agent[160504]: X-Forwarded-For: 192.168.0.162#015 Dec 6 04:59:57 localhost ovn_metadata_agent[160504]: X-Ovn-Network-Id: 652b6bdc-40ce-45b7-8aa5-3bca79987993 __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m Dec 6 04:59:57 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:57.980 160637 DEBUG neutron.agent.ovn.metadata.server [-] _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m Dec 6 04:59:57 localhost haproxy-metadata-proxy-652b6bdc-40ce-45b7-8aa5-3bca79987993[283305]: 192.168.0.162:32820 [06/Dec/2025:09:59:57.959] listener listener/metadata 0/0/0/21/21 200 133 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/local-ipv4 HTTP/1.1" Dec 6 04:59:57 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:57.980 160637 INFO eventlet.wsgi.server [-] 192.168.0.162, "GET /2009-04-04/meta-data/local-ipv4 HTTP/1.1" status: 200 len: 149 time: 0.0202470#033[00m Dec 6 04:59:57 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:57.984 160637 DEBUG eventlet.wsgi.server [-] (160637) accepted '' 
server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m Dec 6 04:59:57 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:57.985 160637 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /2009-04-04/meta-data/public-ipv4 HTTP/1.0#015 Dec 6 04:59:57 localhost ovn_metadata_agent[160504]: Accept: */*#015 Dec 6 04:59:57 localhost ovn_metadata_agent[160504]: Connection: close#015 Dec 6 04:59:57 localhost ovn_metadata_agent[160504]: Content-Type: text/plain#015 Dec 6 04:59:57 localhost ovn_metadata_agent[160504]: Host: 169.254.169.254#015 Dec 6 04:59:57 localhost ovn_metadata_agent[160504]: User-Agent: curl/7.84.0#015 Dec 6 04:59:57 localhost ovn_metadata_agent[160504]: X-Forwarded-For: 192.168.0.162#015 Dec 6 04:59:57 localhost ovn_metadata_agent[160504]: X-Ovn-Network-Id: 652b6bdc-40ce-45b7-8aa5-3bca79987993 __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m Dec 6 04:59:57 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:57.998 160637 DEBUG neutron.agent.ovn.metadata.server [-] _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m Dec 6 04:59:57 localhost haproxy-metadata-proxy-652b6bdc-40ce-45b7-8aa5-3bca79987993[283305]: 192.168.0.162:32824 [06/Dec/2025:09:59:57.983] listener listener/metadata 0/0/0/15/15 200 134 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/public-ipv4 HTTP/1.1" Dec 6 04:59:57 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:57.999 160637 INFO eventlet.wsgi.server [-] 192.168.0.162, "GET /2009-04-04/meta-data/public-ipv4 HTTP/1.1" status: 200 len: 150 time: 0.0142181#033[00m Dec 6 04:59:58 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:58.002 160637 DEBUG eventlet.wsgi.server [-] (160637) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m Dec 6 04:59:58 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:58.003 160637 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET 
/2009-04-04/meta-data/hostname HTTP/1.0#015 Dec 6 04:59:58 localhost ovn_metadata_agent[160504]: Accept: */*#015 Dec 6 04:59:58 localhost ovn_metadata_agent[160504]: Connection: close#015 Dec 6 04:59:58 localhost ovn_metadata_agent[160504]: Content-Type: text/plain#015 Dec 6 04:59:58 localhost ovn_metadata_agent[160504]: Host: 169.254.169.254#015 Dec 6 04:59:58 localhost ovn_metadata_agent[160504]: User-Agent: curl/7.84.0#015 Dec 6 04:59:58 localhost ovn_metadata_agent[160504]: X-Forwarded-For: 192.168.0.162#015 Dec 6 04:59:58 localhost ovn_metadata_agent[160504]: X-Ovn-Network-Id: 652b6bdc-40ce-45b7-8aa5-3bca79987993 __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m Dec 6 04:59:58 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:58.015 160637 DEBUG neutron.agent.ovn.metadata.server [-] _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m Dec 6 04:59:58 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:58.016 160637 INFO eventlet.wsgi.server [-] 192.168.0.162, "GET /2009-04-04/meta-data/hostname HTTP/1.1" status: 200 len: 139 time: 0.0128081#033[00m Dec 6 04:59:58 localhost haproxy-metadata-proxy-652b6bdc-40ce-45b7-8aa5-3bca79987993[283305]: 192.168.0.162:32828 [06/Dec/2025:09:59:58.002] listener listener/metadata 0/0/0/13/13 200 123 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/hostname HTTP/1.1" Dec 6 04:59:58 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:58.019 160637 DEBUG eventlet.wsgi.server [-] (160637) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m Dec 6 04:59:58 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:58.020 160637 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /2009-04-04/meta-data/local-hostname HTTP/1.0#015 Dec 6 04:59:58 localhost ovn_metadata_agent[160504]: Accept: */*#015 Dec 6 04:59:58 localhost ovn_metadata_agent[160504]: Connection: close#015 Dec 6 04:59:58 localhost 
ovn_metadata_agent[160504]: Content-Type: text/plain#015 Dec 6 04:59:58 localhost ovn_metadata_agent[160504]: Host: 169.254.169.254#015 Dec 6 04:59:58 localhost ovn_metadata_agent[160504]: User-Agent: curl/7.84.0#015 Dec 6 04:59:58 localhost ovn_metadata_agent[160504]: X-Forwarded-For: 192.168.0.162#015 Dec 6 04:59:58 localhost ovn_metadata_agent[160504]: X-Ovn-Network-Id: 652b6bdc-40ce-45b7-8aa5-3bca79987993 __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m Dec 6 04:59:58 localhost podman[283539]: 2025-12-06 09:59:58.022378939 +0000 UTC m=+0.180727823 container exec_died 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Dec 6 04:59:58 localhost 
ovn_metadata_agent[160504]: 2025-12-06 09:59:58.033 160637 DEBUG neutron.agent.ovn.metadata.server [-] _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m Dec 6 04:59:58 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:58.034 160637 INFO eventlet.wsgi.server [-] 192.168.0.162, "GET /2009-04-04/meta-data/local-hostname HTTP/1.1" status: 200 len: 139 time: 0.0140586#033[00m Dec 6 04:59:58 localhost haproxy-metadata-proxy-652b6bdc-40ce-45b7-8aa5-3bca79987993[283305]: 192.168.0.162:32844 [06/Dec/2025:09:59:58.019] listener listener/metadata 0/0/0/15/15 200 123 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/local-hostname HTTP/1.1" Dec 6 04:59:58 localhost systemd[1]: 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.service: Deactivated successfully. Dec 6 04:59:58 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:58.038 160637 DEBUG eventlet.wsgi.server [-] (160637) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m Dec 6 04:59:58 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:58.039 160637 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /2009-04-04/user-data HTTP/1.0#015 Dec 6 04:59:58 localhost ovn_metadata_agent[160504]: Accept: */*#015 Dec 6 04:59:58 localhost ovn_metadata_agent[160504]: Connection: close#015 Dec 6 04:59:58 localhost ovn_metadata_agent[160504]: Content-Type: text/plain#015 Dec 6 04:59:58 localhost ovn_metadata_agent[160504]: Host: 169.254.169.254#015 Dec 6 04:59:58 localhost ovn_metadata_agent[160504]: User-Agent: curl/7.84.0#015 Dec 6 04:59:58 localhost ovn_metadata_agent[160504]: X-Forwarded-For: 192.168.0.162#015 Dec 6 04:59:58 localhost ovn_metadata_agent[160504]: X-Ovn-Network-Id: 652b6bdc-40ce-45b7-8aa5-3bca79987993 __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m Dec 6 04:59:58 localhost haproxy-metadata-proxy-652b6bdc-40ce-45b7-8aa5-3bca79987993[283305]: 192.168.0.162:32858 
[06/Dec/2025:09:59:58.037] listener listener/metadata 0/0/0/17/17 404 281 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/user-data HTTP/1.1" Dec 6 04:59:58 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:58.055 160637 INFO eventlet.wsgi.server [-] 192.168.0.162, "GET /2009-04-04/user-data HTTP/1.1" status: 404 len: 297 time: 0.0154448#033[00m Dec 6 04:59:58 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:58.061 160637 DEBUG eventlet.wsgi.server [-] (160637) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m Dec 6 04:59:58 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:58.062 160637 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /2009-04-04/meta-data/block-device-mapping HTTP/1.0#015 Dec 6 04:59:58 localhost ovn_metadata_agent[160504]: Accept: */*#015 Dec 6 04:59:58 localhost ovn_metadata_agent[160504]: Connection: close#015 Dec 6 04:59:58 localhost ovn_metadata_agent[160504]: Content-Type: text/plain#015 Dec 6 04:59:58 localhost ovn_metadata_agent[160504]: Host: 169.254.169.254#015 Dec 6 04:59:58 localhost ovn_metadata_agent[160504]: User-Agent: curl/7.84.0#015 Dec 6 04:59:58 localhost ovn_metadata_agent[160504]: X-Forwarded-For: 192.168.0.162#015 Dec 6 04:59:58 localhost ovn_metadata_agent[160504]: X-Ovn-Network-Id: 652b6bdc-40ce-45b7-8aa5-3bca79987993 __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m Dec 6 04:59:58 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:58.080 160637 DEBUG neutron.agent.ovn.metadata.server [-] _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m Dec 6 04:59:58 localhost haproxy-metadata-proxy-652b6bdc-40ce-45b7-8aa5-3bca79987993[283305]: 192.168.0.162:32870 [06/Dec/2025:09:59:58.061] listener listener/metadata 0/0/0/19/19 200 139 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/block-device-mapping HTTP/1.1" Dec 6 04:59:58 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:58.080 160637 
INFO eventlet.wsgi.server [-] 192.168.0.162, "GET /2009-04-04/meta-data/block-device-mapping HTTP/1.1" status: 200 len: 155 time: 0.0182438#033[00m Dec 6 04:59:58 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:58.086 160637 DEBUG eventlet.wsgi.server [-] (160637) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m Dec 6 04:59:58 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:58.087 160637 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /2009-04-04/meta-data/block-device-mapping/ami HTTP/1.0#015 Dec 6 04:59:58 localhost ovn_metadata_agent[160504]: Accept: */*#015 Dec 6 04:59:58 localhost ovn_metadata_agent[160504]: Connection: close#015 Dec 6 04:59:58 localhost ovn_metadata_agent[160504]: Content-Type: text/plain#015 Dec 6 04:59:58 localhost ovn_metadata_agent[160504]: Host: 169.254.169.254#015 Dec 6 04:59:58 localhost ovn_metadata_agent[160504]: User-Agent: curl/7.84.0#015 Dec 6 04:59:58 localhost ovn_metadata_agent[160504]: X-Forwarded-For: 192.168.0.162#015 Dec 6 04:59:58 localhost ovn_metadata_agent[160504]: X-Ovn-Network-Id: 652b6bdc-40ce-45b7-8aa5-3bca79987993 __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m Dec 6 04:59:58 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:58.100 160637 DEBUG neutron.agent.ovn.metadata.server [-] _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m Dec 6 04:59:58 localhost haproxy-metadata-proxy-652b6bdc-40ce-45b7-8aa5-3bca79987993[283305]: 192.168.0.162:32872 [06/Dec/2025:09:59:58.085] listener listener/metadata 0/0/0/15/15 200 122 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/block-device-mapping/ami HTTP/1.1" Dec 6 04:59:58 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:58.101 160637 INFO eventlet.wsgi.server [-] 192.168.0.162, "GET /2009-04-04/meta-data/block-device-mapping/ami HTTP/1.1" status: 200 len: 138 time: 0.0138178#033[00m Dec 6 04:59:58 localhost 
ovn_metadata_agent[160504]: 2025-12-06 09:59:58.106 160637 DEBUG eventlet.wsgi.server [-] (160637) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m Dec 6 04:59:58 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:58.107 160637 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /2009-04-04/meta-data/block-device-mapping/ephemeral0 HTTP/1.0#015 Dec 6 04:59:58 localhost ovn_metadata_agent[160504]: Accept: */*#015 Dec 6 04:59:58 localhost ovn_metadata_agent[160504]: Connection: close#015 Dec 6 04:59:58 localhost ovn_metadata_agent[160504]: Content-Type: text/plain#015 Dec 6 04:59:58 localhost ovn_metadata_agent[160504]: Host: 169.254.169.254#015 Dec 6 04:59:58 localhost ovn_metadata_agent[160504]: User-Agent: curl/7.84.0#015 Dec 6 04:59:58 localhost ovn_metadata_agent[160504]: X-Forwarded-For: 192.168.0.162#015 Dec 6 04:59:58 localhost ovn_metadata_agent[160504]: X-Ovn-Network-Id: 652b6bdc-40ce-45b7-8aa5-3bca79987993 __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m Dec 6 04:59:58 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:58.121 160637 DEBUG neutron.agent.ovn.metadata.server [-] _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m Dec 6 04:59:58 localhost haproxy-metadata-proxy-652b6bdc-40ce-45b7-8aa5-3bca79987993[283305]: 192.168.0.162:32878 [06/Dec/2025:09:59:58.106] listener listener/metadata 0/0/0/15/15 200 127 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/block-device-mapping/ephemeral0 HTTP/1.1" Dec 6 04:59:58 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:58.121 160637 INFO eventlet.wsgi.server [-] 192.168.0.162, "GET /2009-04-04/meta-data/block-device-mapping/ephemeral0 HTTP/1.1" status: 200 len: 143 time: 0.0143259#033[00m Dec 6 04:59:58 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:58.127 160637 DEBUG eventlet.wsgi.server [-] (160637) accepted '' server 
/usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m Dec 6 04:59:58 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:58.127 160637 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /2009-04-04/meta-data/block-device-mapping/root HTTP/1.0#015 Dec 6 04:59:58 localhost ovn_metadata_agent[160504]: Accept: */*#015 Dec 6 04:59:58 localhost ovn_metadata_agent[160504]: Connection: close#015 Dec 6 04:59:58 localhost ovn_metadata_agent[160504]: Content-Type: text/plain#015 Dec 6 04:59:58 localhost ovn_metadata_agent[160504]: Host: 169.254.169.254#015 Dec 6 04:59:58 localhost ovn_metadata_agent[160504]: User-Agent: curl/7.84.0#015 Dec 6 04:59:58 localhost ovn_metadata_agent[160504]: X-Forwarded-For: 192.168.0.162#015 Dec 6 04:59:58 localhost ovn_metadata_agent[160504]: X-Ovn-Network-Id: 652b6bdc-40ce-45b7-8aa5-3bca79987993 __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m Dec 6 04:59:58 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:58.141 160637 DEBUG neutron.agent.ovn.metadata.server [-] _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m Dec 6 04:59:58 localhost haproxy-metadata-proxy-652b6bdc-40ce-45b7-8aa5-3bca79987993[283305]: 192.168.0.162:32886 [06/Dec/2025:09:59:58.126] listener listener/metadata 0/0/0/15/15 200 127 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/block-device-mapping/root HTTP/1.1" Dec 6 04:59:58 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:58.141 160637 INFO eventlet.wsgi.server [-] 192.168.0.162, "GET /2009-04-04/meta-data/block-device-mapping/root HTTP/1.1" status: 200 len: 143 time: 0.0139818#033[00m Dec 6 04:59:58 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:58.148 160637 DEBUG eventlet.wsgi.server [-] (160637) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m Dec 6 04:59:58 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:58.149 160637 DEBUG 
neutron.agent.ovn.metadata.server [-] Request: GET /2009-04-04/meta-data/public-hostname HTTP/1.0#015 Dec 6 04:59:58 localhost ovn_metadata_agent[160504]: Accept: */*#015 Dec 6 04:59:58 localhost ovn_metadata_agent[160504]: Connection: close#015 Dec 6 04:59:58 localhost ovn_metadata_agent[160504]: Content-Type: text/plain#015 Dec 6 04:59:58 localhost ovn_metadata_agent[160504]: Host: 169.254.169.254#015 Dec 6 04:59:58 localhost ovn_metadata_agent[160504]: User-Agent: curl/7.84.0#015 Dec 6 04:59:58 localhost ovn_metadata_agent[160504]: X-Forwarded-For: 192.168.0.162#015 Dec 6 04:59:58 localhost ovn_metadata_agent[160504]: X-Ovn-Network-Id: 652b6bdc-40ce-45b7-8aa5-3bca79987993 __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m Dec 6 04:59:58 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:58.163 160637 DEBUG neutron.agent.ovn.metadata.server [-] _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m Dec 6 04:59:58 localhost haproxy-metadata-proxy-652b6bdc-40ce-45b7-8aa5-3bca79987993[283305]: 192.168.0.162:32890 [06/Dec/2025:09:59:58.148] listener listener/metadata 0/0/0/15/15 200 123 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/public-hostname HTTP/1.1" Dec 6 04:59:58 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:58.164 160637 INFO eventlet.wsgi.server [-] 192.168.0.162, "GET /2009-04-04/meta-data/public-hostname HTTP/1.1" status: 200 len: 139 time: 0.0147219#033[00m Dec 6 04:59:58 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:58.170 160637 DEBUG eventlet.wsgi.server [-] (160637) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m Dec 6 04:59:58 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:58.171 160637 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /2009-04-04/meta-data/placement/availability-zone HTTP/1.0#015 Dec 6 04:59:58 localhost ovn_metadata_agent[160504]: Accept: */*#015 Dec 6 04:59:58 localhost 
ovn_metadata_agent[160504]: Connection: close#015 Dec 6 04:59:58 localhost ovn_metadata_agent[160504]: Content-Type: text/plain#015 Dec 6 04:59:58 localhost ovn_metadata_agent[160504]: Host: 169.254.169.254#015 Dec 6 04:59:58 localhost ovn_metadata_agent[160504]: User-Agent: curl/7.84.0#015 Dec 6 04:59:58 localhost ovn_metadata_agent[160504]: X-Forwarded-For: 192.168.0.162#015 Dec 6 04:59:58 localhost ovn_metadata_agent[160504]: X-Ovn-Network-Id: 652b6bdc-40ce-45b7-8aa5-3bca79987993 __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m Dec 6 04:59:58 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:58.188 160637 DEBUG neutron.agent.ovn.metadata.server [-] _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m Dec 6 04:59:58 localhost haproxy-metadata-proxy-652b6bdc-40ce-45b7-8aa5-3bca79987993[283305]: 192.168.0.162:32906 [06/Dec/2025:09:59:58.170] listener listener/metadata 0/0/0/18/18 200 123 - - ---- 1/1/0/0/0 0/0 "GET /2009-04-04/meta-data/placement/availability-zone HTTP/1.1" Dec 6 04:59:58 localhost ovn_metadata_agent[160504]: 2025-12-06 09:59:58.189 160637 INFO eventlet.wsgi.server [-] 192.168.0.162, "GET /2009-04-04/meta-data/placement/availability-zone HTTP/1.1" status: 200 len: 139 time: 0.0174277#033[00m Dec 6 04:59:59 localhost nova_compute[282193]: 2025-12-06 09:59:59.339 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:00:01 localhost nova_compute[282193]: 2025-12-06 10:00:01.879 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:00:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6026 DF PROTO=TCP SPT=43920 DPT=9102 SEQ=1837799133 ACK=0 
WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52E729270000000001030307) Dec 6 05:00:04 localhost nova_compute[282193]: 2025-12-06 10:00:04.376 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:00:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6027 DF PROTO=TCP SPT=43920 DPT=9102 SEQ=1837799133 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52E72D2F0000000001030307) Dec 6 05:00:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13462 DF PROTO=TCP SPT=54536 DPT=9102 SEQ=735695711 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52E72FEF0000000001030307) Dec 6 05:00:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999. 
Dec 6 05:00:05 localhost podman[283565]: 2025-12-06 10:00:05.904919862 +0000 UTC m=+0.068230688 container health_status 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125) Dec 6 05:00:05 localhost podman[283565]: 2025-12-06 10:00:05.914025544 +0000 UTC 
m=+0.077336380 container exec_died 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:00:05 localhost systemd[1]: 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.service: Deactivated successfully. 
Dec 6 05:00:06 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6028 DF PROTO=TCP SPT=43920 DPT=9102 SEQ=1837799133 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52E7352F0000000001030307) Dec 6 05:00:06 localhost ovn_controller[154851]: 2025-12-06T10:00:06Z|00061|memory_trim|INFO|Detected inactivity (last active 30010 ms ago): trimming memory Dec 6 05:00:06 localhost nova_compute[282193]: 2025-12-06 10:00:06.926 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:00:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46744 DF PROTO=TCP SPT=54848 DPT=9102 SEQ=227793754 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52E739EF0000000001030307) Dec 6 05:00:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941. 
Dec 6 05:00:08 localhost podman[283584]: 2025-12-06 10:00:08.916025581 +0000 UTC m=+0.078010821 container health_status b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 6 05:00:08 localhost podman[283584]: 2025-12-06 10:00:08.952200299 +0000 UTC m=+0.114185539 container exec_died b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Dec 6 05:00:08 localhost systemd[1]: b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941.service: Deactivated successfully. Dec 6 05:00:09 localhost ceph-osd[31726]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Dec 6 05:00:09 localhost ceph-osd[31726]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7200.1 total, 600.0 interval#012Cumulative writes: 5849 writes, 25K keys, 5849 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5849 writes, 797 syncs, 7.34 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 88 writes, 255 keys, 88 commit groups, 1.0 writes per commit group, ingest: 0.39 MB, 0.00 MB/s#012Interval WAL: 88 writes, 37 syncs, 2.38 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Dec 6 05:00:09 localhost nova_compute[282193]: 2025-12-06 10:00:09.415 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:00:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6029 DF PROTO=TCP SPT=43920 DPT=9102 SEQ=1837799133 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52E744EF0000000001030307) Dec 6 05:00:11 localhost nova_compute[282193]: 2025-12-06 10:00:11.962 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:00:12 localhost sshd[283607]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:00:12 localhost ceph-osd[32665]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS 
------- Dec 6 05:00:12 localhost ceph-osd[32665]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7200.2 total, 600.0 interval#012Cumulative writes: 4914 writes, 22K keys, 4914 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 4914 writes, 686 syncs, 7.16 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 35 writes, 105 keys, 35 commit groups, 1.0 writes per commit group, ingest: 0.11 MB, 0.00 MB/s#012Interval WAL: 35 writes, 17 syncs, 2.06 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Dec 6 05:00:14 localhost snmpd[67279]: empty variable list in _query Dec 6 05:00:14 localhost snmpd[67279]: empty variable list in _query Dec 6 05:00:14 localhost nova_compute[282193]: 2025-12-06 10:00:14.467 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:00:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e. Dec 6 05:00:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094. 
Dec 6 05:00:14 localhost podman[283609]: 2025-12-06 10:00:14.931317049 +0000 UTC m=+0.082584662 container health_status 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, config_id=edpm, build-date=2025-08-20T13:12:41, release=1755695350, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, com.redhat.component=ubi9-minimal-container, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, distribution-scope=public, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter) Dec 6 05:00:14 localhost podman[283609]: 2025-12-06 10:00:14.946320442 +0000 UTC m=+0.097588135 container exec_died 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, config_id=edpm, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', 
'/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, managed_by=edpm_ansible, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., name=ubi9-minimal) Dec 6 05:00:14 localhost systemd[1]: 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e.service: Deactivated successfully. Dec 6 05:00:15 localhost systemd[1]: tmp-crun.CbqpIS.mount: Deactivated successfully. 
Dec 6 05:00:15 localhost podman[283610]: 2025-12-06 10:00:15.043944578 +0000 UTC m=+0.192856718 container health_status bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true) Dec 6 05:00:15 localhost podman[283610]: 2025-12-06 10:00:15.056155965 +0000 UTC m=+0.205068115 container exec_died 
bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:00:15 localhost systemd[1]: bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094.service: Deactivated successfully. 
Dec 6 05:00:16 localhost openstack_network_exporter[243110]: ERROR 10:00:16 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 6 05:00:16 localhost openstack_network_exporter[243110]: ERROR 10:00:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:00:16 localhost openstack_network_exporter[243110]: ERROR 10:00:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:00:16 localhost openstack_network_exporter[243110]: ERROR 10:00:16 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 6 05:00:16 localhost openstack_network_exporter[243110]: Dec 6 05:00:16 localhost openstack_network_exporter[243110]: ERROR 10:00:16 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 6 05:00:16 localhost openstack_network_exporter[243110]: Dec 6 05:00:17 localhost nova_compute[282193]: 2025-12-06 10:00:17.007 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:00:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6. 
Dec 6 05:00:18 localhost podman[283648]: 2025-12-06 10:00:18.903550528 +0000 UTC m=+0.070574911 container health_status 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 6 05:00:18 localhost podman[283648]: 2025-12-06 10:00:18.916270751 +0000 UTC m=+0.083295134 container exec_died 
44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:00:18 localhost systemd[1]: 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6.service: Deactivated successfully. 
Dec 6 05:00:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6030 DF PROTO=TCP SPT=43920 DPT=9102 SEQ=1837799133 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52E765EF0000000001030307) Dec 6 05:00:19 localhost nova_compute[282193]: 2025-12-06 10:00:19.505 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:00:22 localhost nova_compute[282193]: 2025-12-06 10:00:22.049 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:00:22 localhost sshd[283667]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:00:23 localhost nova_compute[282193]: 2025-12-06 10:00:23.175 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:00:23 localhost nova_compute[282193]: 2025-12-06 10:00:23.177 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:00:23 localhost nova_compute[282193]: 2025-12-06 10:00:23.252 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:00:23 localhost nova_compute[282193]: 2025-12-06 10:00:23.253 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Starting 
heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 6 05:00:23 localhost nova_compute[282193]: 2025-12-06 10:00:23.253 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 6 05:00:23 localhost sshd[283669]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:00:23 localhost nova_compute[282193]: 2025-12-06 10:00:23.798 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 6 05:00:23 localhost nova_compute[282193]: 2025-12-06 10:00:23.798 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquired lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 6 05:00:23 localhost nova_compute[282193]: 2025-12-06 10:00:23.799 282197 DEBUG nova.network.neutron [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 6 05:00:23 localhost nova_compute[282193]: 2025-12-06 10:00:23.799 282197 DEBUG nova.objects.instance [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b7ed0a2e-9350-4933-9334-4e5e08d3e6aa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 6 05:00:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538. 
Dec 6 05:00:23 localhost podman[241090]: time="2025-12-06T10:00:23Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 6 05:00:23 localhost podman[283671]: 2025-12-06 10:00:23.917537574 +0000 UTC m=+0.079996782 container health_status d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Dec 6 05:00:23 localhost podman[241090]: @ - - [06/Dec/2025:10:00:23 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 149555 "" "Go-http-client/1.1" Dec 6 05:00:23 localhost podman[283671]: 2025-12-06 10:00:23.999495456 +0000 UTC 
m=+0.161954614 container exec_died d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 6 05:00:24 localhost systemd[1]: d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538.service: Deactivated successfully. 
Dec 6 05:00:24 localhost podman[241090]: @ - - [06/Dec/2025:10:00:23 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17731 "" "Go-http-client/1.1" Dec 6 05:00:24 localhost nova_compute[282193]: 2025-12-06 10:00:24.556 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:00:25 localhost nova_compute[282193]: 2025-12-06 10:00:25.085 282197 DEBUG nova.network.neutron [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Updating instance_info_cache with network_info: [{"id": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "address": "fa:16:3e:64:77:f3", "network": {"id": "652b6bdc-40ce-45b7-8aa5-3bca79987993", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.162", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "3d603431c0bb4967bafc7a0aa6108bfe", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86fc0b7a-fb", "ovs_interfaceid": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 6 05:00:25 localhost nova_compute[282193]: 2025-12-06 10:00:25.118 282197 DEBUG oslo_concurrency.lockutils [None 
req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Releasing lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 6 05:00:25 localhost nova_compute[282193]: 2025-12-06 10:00:25.119 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 6 05:00:25 localhost nova_compute[282193]: 2025-12-06 10:00:25.119 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:00:25 localhost nova_compute[282193]: 2025-12-06 10:00:25.120 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:00:25 localhost nova_compute[282193]: 2025-12-06 10:00:25.120 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:00:25 localhost nova_compute[282193]: 2025-12-06 10:00:25.121 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:00:25 localhost nova_compute[282193]: 2025-12-06 10:00:25.121 282197 DEBUG oslo_service.periodic_task [None 
req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:00:25 localhost nova_compute[282193]: 2025-12-06 10:00:25.122 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:00:25 localhost nova_compute[282193]: 2025-12-06 10:00:25.122 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 6 05:00:25 localhost nova_compute[282193]: 2025-12-06 10:00:25.123 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:00:25 localhost nova_compute[282193]: 2025-12-06 10:00:25.148 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 05:00:25 localhost nova_compute[282193]: 2025-12-06 10:00:25.149 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 05:00:25 localhost nova_compute[282193]: 2025-12-06 10:00:25.149 282197 DEBUG 
oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 05:00:25 localhost nova_compute[282193]: 2025-12-06 10:00:25.149 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Auditing locally available compute resources for np0005548789.localdomain (node: np0005548789.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 6 05:00:25 localhost nova_compute[282193]: 2025-12-06 10:00:25.150 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 6 05:00:25 localhost nova_compute[282193]: 2025-12-06 10:00:25.612 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 6 05:00:25 localhost nova_compute[282193]: 2025-12-06 10:00:25.698 282197 DEBUG nova.virt.libvirt.driver [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 6 05:00:25 localhost nova_compute[282193]: 2025-12-06 10:00:25.698 282197 DEBUG nova.virt.libvirt.driver [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 6 05:00:25 localhost nova_compute[282193]: 2025-12-06 10:00:25.909 282197 WARNING nova.virt.libvirt.driver [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 6 05:00:25 localhost nova_compute[282193]: 2025-12-06 10:00:25.910 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Hypervisor/Node resource view: name=np0005548789.localdomain free_ram=12019MB free_disk=41.83699035644531GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": 
null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 6 05:00:25 localhost nova_compute[282193]: 2025-12-06 10:00:25.911 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 05:00:25 localhost nova_compute[282193]: 2025-12-06 10:00:25.911 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 05:00:26 localhost nova_compute[282193]: 2025-12-06 10:00:26.150 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Instance b7ed0a2e-9350-4933-9334-4e5e08d3e6aa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 6 05:00:26 localhost nova_compute[282193]: 2025-12-06 10:00:26.151 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 6 05:00:26 localhost nova_compute[282193]: 2025-12-06 10:00:26.151 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Final resource view: name=np0005548789.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 6 05:00:26 localhost nova_compute[282193]: 2025-12-06 10:00:26.201 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 6 05:00:26 localhost nova_compute[282193]: 2025-12-06 10:00:26.636 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 6 05:00:26 localhost nova_compute[282193]: 2025-12-06 10:00:26.644 282197 DEBUG nova.compute.provider_tree [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Inventory has not changed in ProviderTree for provider: 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 6 05:00:26 localhost nova_compute[282193]: 2025-12-06 
10:00:26.688 282197 DEBUG nova.scheduler.client.report [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Inventory has not changed for provider 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 6 05:00:26 localhost nova_compute[282193]: 2025-12-06 10:00:26.716 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Compute_service record updated for np0005548789.localdomain:np0005548789.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 6 05:00:26 localhost nova_compute[282193]: 2025-12-06 10:00:26.716 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.805s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 05:00:27 localhost nova_compute[282193]: 2025-12-06 10:00:27.087 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:00:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5. Dec 6 05:00:28 localhost systemd[1]: tmp-crun.JDWJJz.mount: Deactivated successfully. 
Dec 6 05:00:28 localhost podman[283742]: 2025-12-06 10:00:28.900390239 +0000 UTC m=+0.065641378 container health_status 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller) Dec 6 05:00:28 localhost podman[283742]: 2025-12-06 10:00:28.962444976 +0000 UTC m=+0.127696085 container exec_died 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS) Dec 6 05:00:28 localhost systemd[1]: 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.service: Deactivated successfully. 
Dec 6 05:00:29 localhost nova_compute[282193]: 2025-12-06 10:00:29.558 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:00:32 localhost nova_compute[282193]: 2025-12-06 10:00:32.123 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:00:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39150 DF PROTO=TCP SPT=53032 DPT=9102 SEQ=1349501459 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52E79E570000000001030307) Dec 6 05:00:34 localhost nova_compute[282193]: 2025-12-06 10:00:34.560 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:00:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39151 DF PROTO=TCP SPT=53032 DPT=9102 SEQ=1349501459 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52E7A26F0000000001030307) Dec 6 05:00:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6031 DF PROTO=TCP SPT=43920 DPT=9102 SEQ=1837799133 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52E7A5EF0000000001030307) Dec 6 05:00:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39152 DF PROTO=TCP SPT=53032 DPT=9102 SEQ=1349501459 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52E7AA6F0000000001030307) Dec 6 05:00:36 localhost 
systemd[1]: Started /usr/bin/podman healthcheck run 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999. Dec 6 05:00:36 localhost podman[283765]: 2025-12-06 10:00:36.920854663 +0000 UTC m=+0.083233673 container health_status 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Dec 6 05:00:36 localhost podman[283765]: 2025-12-06 10:00:36.952068727 +0000 UTC m=+0.114447737 container exec_died 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:00:36 localhost systemd[1]: 
5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.service: Deactivated successfully. Dec 6 05:00:37 localhost nova_compute[282193]: 2025-12-06 10:00:37.178 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:00:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13463 DF PROTO=TCP SPT=54536 DPT=9102 SEQ=735695711 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52E7ADEF0000000001030307) Dec 6 05:00:39 localhost nova_compute[282193]: 2025-12-06 10:00:39.581 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:00:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941. Dec 6 05:00:39 localhost systemd[1]: tmp-crun.hJbOKn.mount: Deactivated successfully. 
Dec 6 05:00:39 localhost podman[283783]: 2025-12-06 10:00:39.914631088 +0000 UTC m=+0.081430006 container health_status b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Dec 6 05:00:39 localhost podman[283783]: 2025-12-06 10:00:39.924169722 +0000 UTC m=+0.090968600 container exec_died b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 6 05:00:39 localhost systemd[1]: b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941.service: Deactivated successfully. Dec 6 05:00:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39153 DF PROTO=TCP SPT=53032 DPT=9102 SEQ=1349501459 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52E7BA2F0000000001030307) Dec 6 05:00:42 localhost nova_compute[282193]: 2025-12-06 10:00:42.214 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:00:44 localhost nova_compute[282193]: 2025-12-06 10:00:44.584 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:00:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e. Dec 6 05:00:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094. 
Dec 6 05:00:45 localhost podman[283806]: 2025-12-06 10:00:45.917773939 +0000 UTC m=+0.083733118 container health_status 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., config_id=edpm, managed_by=edpm_ansible, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, distribution-scope=public, build-date=2025-08-20T13:12:41, vcs-type=git, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, com.redhat.component=ubi9-minimal-container, release=1755695350) Dec 6 05:00:45 localhost podman[283806]: 2025-12-06 10:00:45.933291157 +0000 UTC m=+0.099249936 container exec_died 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, maintainer=Red Hat, Inc., config_id=edpm, version=9.6, vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck 
openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, name=ubi9-minimal, architecture=x86_64, distribution-scope=public, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.) Dec 6 05:00:45 localhost systemd[1]: tmp-crun.8tQovD.mount: Deactivated successfully. 
Dec 6 05:00:45 localhost podman[283807]: 2025-12-06 10:00:45.970558578 +0000 UTC m=+0.130631255 container health_status bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=edpm) Dec 6 05:00:45 localhost podman[283807]: 2025-12-06 10:00:45.982095605 +0000 UTC m=+0.142168242 container exec_died 
bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251125) Dec 6 05:00:45 localhost systemd[1]: bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094.service: Deactivated successfully. Dec 6 05:00:46 localhost systemd[1]: 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e.service: Deactivated successfully. 
Dec 6 05:00:46 localhost openstack_network_exporter[243110]: ERROR 10:00:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:00:46 localhost openstack_network_exporter[243110]: ERROR 10:00:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:00:46 localhost openstack_network_exporter[243110]: ERROR 10:00:46 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 6 05:00:46 localhost openstack_network_exporter[243110]: ERROR 10:00:46 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 6 05:00:46 localhost openstack_network_exporter[243110]: Dec 6 05:00:46 localhost openstack_network_exporter[243110]: ERROR 10:00:46 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 6 05:00:46 localhost openstack_network_exporter[243110]: Dec 6 05:00:47 localhost nova_compute[282193]: 2025-12-06 10:00:47.266 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:00:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:00:47.291 160509 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 05:00:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:00:47.291 160509 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 05:00:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:00:47.291 160509 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by 
"neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 05:00:48 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39154 DF PROTO=TCP SPT=53032 DPT=9102 SEQ=1349501459 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52E7D9EF0000000001030307) Dec 6 05:00:49 localhost nova_compute[282193]: 2025-12-06 10:00:49.619 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:00:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6. Dec 6 05:00:49 localhost systemd[1]: tmp-crun.y6F8uA.mount: Deactivated successfully. Dec 6 05:00:49 localhost podman[283912]: 2025-12-06 10:00:49.945970836 +0000 UTC m=+0.100269869 container health_status 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:00:49 localhost podman[283912]: 2025-12-06 10:00:49.961155744 +0000 UTC m=+0.115454817 container exec_died 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:00:49 localhost systemd[1]: 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6.service: Deactivated successfully. Dec 6 05:00:50 localhost sshd[283931]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:00:52 localhost nova_compute[282193]: 2025-12-06 10:00:52.299 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:00:53 localhost podman[241090]: time="2025-12-06T10:00:53Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 6 05:00:53 localhost podman[241090]: @ - - [06/Dec/2025:10:00:53 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 149555 "" "Go-http-client/1.1" Dec 6 05:00:53 localhost podman[241090]: @ - - [06/Dec/2025:10:00:53 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17736 "" "Go-http-client/1.1" Dec 6 05:00:54 localhost nova_compute[282193]: 2025-12-06 10:00:54.621 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:00:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538. 
Dec 6 05:00:54 localhost podman[283951]: 2025-12-06 10:00:54.917006485 +0000 UTC m=+0.081554771 container health_status d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Dec 6 05:00:54 localhost podman[283951]: 2025-12-06 10:00:54.926900571 +0000 UTC m=+0.091448807 container exec_died d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 
'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 6 05:00:54 localhost systemd[1]: d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538.service: Deactivated successfully. Dec 6 05:00:57 localhost nova_compute[282193]: 2025-12-06 10:00:57.345 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:00:59 localhost nova_compute[282193]: 2025-12-06 10:00:59.624 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:00:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5. 
Dec 6 05:00:59 localhost podman[283975]: 2025-12-06 10:00:59.935946995 +0000 UTC m=+0.096347318 container health_status 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:01:00 localhost podman[283975]: 2025-12-06 10:01:00.00185382 +0000 UTC m=+0.162254103 container exec_died 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true) Dec 6 05:01:00 localhost systemd[1]: 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.service: Deactivated successfully. 
Dec 6 05:01:02 localhost nova_compute[282193]: 2025-12-06 10:01:02.401 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:01:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22833 DF PROTO=TCP SPT=52292 DPT=9102 SEQ=1660943468 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52E813860000000001030307) Dec 6 05:01:04 localhost nova_compute[282193]: 2025-12-06 10:01:04.627 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:01:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22834 DF PROTO=TCP SPT=52292 DPT=9102 SEQ=1660943468 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52E817AF0000000001030307) Dec 6 05:01:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39155 DF PROTO=TCP SPT=53032 DPT=9102 SEQ=1349501459 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52E819EF0000000001030307) Dec 6 05:01:06 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22835 DF PROTO=TCP SPT=52292 DPT=9102 SEQ=1660943468 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52E81FB00000000001030307) Dec 6 05:01:07 localhost nova_compute[282193]: 2025-12-06 10:01:07.447 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:01:07 localhost 
systemd[1]: Started /usr/bin/podman healthcheck run 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999. Dec 6 05:01:07 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6032 DF PROTO=TCP SPT=43920 DPT=9102 SEQ=1837799133 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52E823EF0000000001030307) Dec 6 05:01:07 localhost podman[284011]: 2025-12-06 10:01:07.910550802 +0000 UTC m=+0.070409625 container health_status 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible) Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.911 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'name': 'test', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'np0005548789.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'hostId': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228 Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.912 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters Dec 6 05:01:07 localhost podman[284011]: 2025-12-06 10:01:07.915133984 +0000 UTC m=+0.074992747 container exec_died 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, managed_by=edpm_ansible, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.916 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.918 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '258a44cc-0198-4e0d-aa9d-e683f9c0f5e2', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:01:07.912407', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '7c2d8e8c-d28a-11f0-aaf2-fa163e118844', 'monotonic_time': 11686.161738003, 'message_signature': '218ef46d3f8b5b1adca4585be53b3fd032ee0b13d5db33b80438104f618e5130'}]}, 'timestamp': '2025-12-06 10:01:07.916841', '_unique_id': '38b03ccf8e754a4f9275c59fe3754dc6'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.918 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:01:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.918 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.918 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.918 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.918 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.918 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.918 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.918 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.918 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.918 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.918 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.918 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.918 12 ERROR oslo_messaging.notify.messaging Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.918 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.918 12 ERROR oslo_messaging.notify.messaging Dec 6 05:01:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.918 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.918 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.918 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.918 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.918 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.918 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.918 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.918 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.918 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.918 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.918 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.918 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.918 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.918 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.918 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.918 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.918 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.918 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.919 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.920 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.outgoing.packets volume: 114 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.920 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'c05dbeb8-8783-4317-a66d-46d4d9fe6313', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 114, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:01:07.920004', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '7c2e1b86-d28a-11f0-aaf2-fa163e118844', 'monotonic_time': 11686.161738003, 'message_signature': 'e1cc059dfbcf2dbda5dbd81dc37502002ff56ce05390c8159f9f584adafe5585'}]}, 'timestamp': '2025-12-06 10:01:07.920325', '_unique_id': 'fd2f61003f0044c8be74f976da5e1a62'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.920 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.920 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.920 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.920 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.920 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.920 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.920 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.920 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.920 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.920 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.920 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.920 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.920 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.920 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.920 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.920 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.920 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.920 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.920 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.920 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.920 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.920 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.920 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.920 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.920 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.920 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.920 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.920 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.920 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.920 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.920 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.920 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.921 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec 6 05:01:07 localhost systemd[1]: 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.service: Deactivated successfully.
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.953 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.requests volume: 47 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.954 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.955 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'bc84c27e-e785-4019-bec4-9ecb35fc5741', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 47, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:01:07.921793', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '7c3353ee-d28a-11f0-aaf2-fa163e118844', 'monotonic_time': 11686.171142004, 'message_signature': 'e75a594705bf3e98bbebba385732c12fb028a60f8767387d20e88fb12e161ca1'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:01:07.921793', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '7c335ede-d28a-11f0-aaf2-fa163e118844', 'monotonic_time': 11686.171142004, 'message_signature': '8dcae01d3ae4720f4c192e96f7b36dad358257978dfd77310b83f896923cb4a3'}]}, 'timestamp': '2025-12-06 10:01:07.954789', '_unique_id': '8727fa6882d041ef919bdcbd514202b9'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.955 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.955 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.955 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.955 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.955 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.955 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.955 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.955 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.955 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.955 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.955 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.955 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.955 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.955 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.955 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.955 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.955 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.955 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.955 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.955 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.955 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.955 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.955 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.955 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.955 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.955 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.955 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.955 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.955 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.955 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.955 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.955 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.956 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.956 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.outgoing.bytes volume: 9770 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.957 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'e4f45fb8-de66-4fa1-93c8-01f1e2da95d8', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 9770, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:01:07.956385', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '7c33a6f0-d28a-11f0-aaf2-fa163e118844', 'monotonic_time': 11686.161738003, 'message_signature': '22dbe1f09cf79d61e4804913fa382f87b6b905b8a859ed99be277388e14abd1f'}]}, 'timestamp': '2025-12-06 10:01:07.956646', '_unique_id': '6e4f6b65901d4a4d888625877cee88ce'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.957 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.957 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.957 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.957 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.957 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.957 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.957 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.957 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.957 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.957 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.957 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.957 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.957 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.957 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.957 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.957 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.957 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.957 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.957 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6
05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.957 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.957 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.957 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.957 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.957 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:01:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.957 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.957 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.957 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.957 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.957 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.957 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.957 12 ERROR oslo_messaging.notify.messaging Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:01:07.957 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.973 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/memory.usage volume: 51.80859375 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.974 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '0ec405a9-9bcd-4760-8e65-4cf14da70755', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 51.80859375, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'timestamp': '2025-12-06T10:01:07.957655', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': '7c365436-d28a-11f0-aaf2-fa163e118844', 'monotonic_time': 11686.222934014, 'message_signature': 
'f5fe25f0a97990b5ed58da243ab42732494746e38d0953790f84c23538e36667'}]}, 'timestamp': '2025-12-06 10:01:07.974161', '_unique_id': '85bf1305c7254a8889c2717cbb1c71dc'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.974 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.974 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.974 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.974 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.974 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.974 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.974 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.974 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.974 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.974 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.974 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.974 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.974 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.974 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.974 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.974 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.974 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.974 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.974 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.974 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.974 12 ERROR 
oslo_messaging.notify.messaging Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.974 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.974 12 ERROR oslo_messaging.notify.messaging Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.974 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.974 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.974 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.974 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.974 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.974 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.974 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.974 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:01:07.974 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.974 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.974 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.974 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.974 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.974 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.974 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.974 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.974 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.974 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:01:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.974 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.974 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.974 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.974 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.974 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.974 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.974 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.974 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.974 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.974 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.974 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) 
from exc Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.974 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.974 12 ERROR oslo_messaging.notify.messaging Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.975 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.975 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.outgoing.bytes.delta volume: 9770 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.975 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'e96dfaf2-16c2-47bf-89ca-5862ce23c416', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 9770, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:01:07.975344', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 
'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '7c368b2c-d28a-11f0-aaf2-fa163e118844', 'monotonic_time': 11686.161738003, 'message_signature': '31490a084a3d1d0029c9e6c3619b08757223386c34f28b6c09fa5352d136549c'}]}, 'timestamp': '2025-12-06 10:01:07.975562', '_unique_id': 'd95017d72184418f94a3204624264dba'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.975 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.975 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.975 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.975 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.975 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.975 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.975 12 ERROR 
oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.975 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.975 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.975 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.975 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.975 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.975 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.975 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.975 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.975 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.975 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:01:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.975 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.975 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.975 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.975 12 ERROR oslo_messaging.notify.messaging Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.975 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.975 12 ERROR oslo_messaging.notify.messaging Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.975 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.975 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.975 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.975 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.975 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.975 
12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.975 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.975 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.975 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.975 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.975 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.975 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.975 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.975 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.975 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.975 12 
ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.975 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.975 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.975 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.975 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.975 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.975 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.975 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.975 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.975 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.975 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.975 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.975 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.975 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.975 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.975 12 ERROR oslo_messaging.notify.messaging Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.976 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.976 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.bytes volume: 35560448 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.976 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.bytes volume: 2154496 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.977 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'f7d2c608-a94b-43b6-aea5-573a84894a00', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 35560448, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:01:07.976602', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '7c36bc0a-d28a-11f0-aaf2-fa163e118844', 'monotonic_time': 11686.171142004, 'message_signature': '5c05028a6669809bfe6fa9d08db579e2df435fdfaffed62db04667538cb39dda'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 2154496, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:01:07.976602', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '7c36c420-d28a-11f0-aaf2-fa163e118844', 'monotonic_time': 11686.171142004, 'message_signature': 'eb060a96ea608b113031fe0cf588102812bd06ba6813e9909d945bef16251c9b'}]}, 'timestamp': '2025-12-06 10:01:07.977005', '_unique_id': '63835b498f714a458b725daae53ef434'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.977 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.977 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.977 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.977 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.977 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.977 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.977 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.977 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.977 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.977 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.977 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.977 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.977 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.977 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.977 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.977 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:01:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.977 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.977 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.977 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.977 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.977 12 ERROR oslo_messaging.notify.messaging Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.977 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.977 12 ERROR oslo_messaging.notify.messaging Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.977 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.977 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.977 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.977 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.977 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.977 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.977 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.977 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.977 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.977 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.977 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.977 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.977 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.977 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:01:07.977 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.977 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.977 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.977 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.977 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.977 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.977 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.977 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.977 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.977 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.977 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.977 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.977 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.977 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.977 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.977 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.977 12 ERROR oslo_messaging.notify.messaging Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.977 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.978 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.978 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '977803ec-e483-463b-907d-f2ae1160d114', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:01:07.977985', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '7c36f224-d28a-11f0-aaf2-fa163e118844', 'monotonic_time': 11686.161738003, 'message_signature': 'e41074f8b085de0fcc4b675c0abdf0e4f55f7300b24dec06b73dc51d2becc8c3'}]}, 'timestamp': '2025-12-06 10:01:07.978195', '_unique_id': '87521b1ec2e2452f9de42e014eef9e5e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.978 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:01:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.978 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.978 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.978 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.978 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.978 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.978 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.978 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.978 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.978 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.978 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.978 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.978 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.978 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.978 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.978 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.978 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.978 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.978 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.978 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.978 12 ERROR oslo_messaging.notify.messaging Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.978 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.978 12 ERROR oslo_messaging.notify.messaging Dec 6 05:01:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.978 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.978 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.978 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.978 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.978 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.978 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.978 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.978 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.978 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.978 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 
05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.978 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.978 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.978 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.978 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.978 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.978 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.978 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.978 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.978 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.978 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:01:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.978 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.978 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.978 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.978 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.978 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.978 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.978 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.978 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.978 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.978 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.978 12 ERROR oslo_messaging.notify.messaging Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:01:07.979 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.979 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.incoming.packets volume: 60 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.979 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '4338034a-9555-47fd-8eeb-a738b220558f', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 60, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:01:07.979179', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 
'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '7c3720c8-d28a-11f0-aaf2-fa163e118844', 'monotonic_time': 11686.161738003, 'message_signature': '01abf6026e8f7dad628f9285ee9b974534a69b42b71ce676559634b3ed72de4d'}]}, 'timestamp': '2025-12-06 10:01:07.979389', '_unique_id': 'ecbb7cae1df64de78d0596f472a91bd2'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.979 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.979 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.979 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.979 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.979 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.979 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.979 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.979 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.979 12 ERROR oslo_messaging.notify.messaging self._connection 
= self._establish_connection() Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.979 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.979 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.979 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.979 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.979 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.979 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.979 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.979 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.979 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.979 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.979 12 ERROR oslo_messaging.notify.messaging 
ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.979 12 ERROR oslo_messaging.notify.messaging Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.979 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.979 12 ERROR oslo_messaging.notify.messaging Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.979 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.979 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.979 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.979 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.979 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.979 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.979 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.979 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.979 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.979 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.979 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.979 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.979 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.979 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.979 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.979 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.979 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.979 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.979 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.979 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.979 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.979 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.979 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.979 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.979 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.979 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.979 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.979 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:01:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.979 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.979 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.979 12 ERROR oslo_messaging.notify.messaging Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.980 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.980 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.980 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.bytes volume: 389120 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.980 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.981 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'f8c83166-16de-497a-86c2-b9180b910ce7', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 389120, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:01:07.980537', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '7c375598-d28a-11f0-aaf2-fa163e118844', 'monotonic_time': 11686.171142004, 'message_signature': 'a8b9208f8f470b06e90281c7ec42162bd8339eefb4e346fd3c460d27c64c9402'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:01:07.980537', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '7c375d68-d28a-11f0-aaf2-fa163e118844', 'monotonic_time': 11686.171142004, 'message_signature': '0bb702d7990f755920f394245e0e3c13376f486ee1af4a8a3be04e4b5415965d'}]}, 'timestamp': '2025-12-06 10:01:07.980929', '_unique_id': 'c2ad183fec0d49a8b295bf560776c8d4'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.981 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.981 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.981 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.981 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.981 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.981 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.981 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.981 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.981 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:01:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.981 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.981 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.981 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.981 12 ERROR oslo_messaging.notify.messaging Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.981 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.981 12 ERROR oslo_messaging.notify.messaging Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.981 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.981 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.981 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.981 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.981 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.981 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.981 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:01:07.981 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.981 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.981 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.981 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.981 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.981 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.981 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.981 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.981 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.981 12 ERROR oslo_messaging.notify.messaging Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.982 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.982 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.requests volume: 1272 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.982 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.983 12 ERROR oslo_messaging.notify.messaging [-] Could 
not send notification to notifications. Payload={'message_id': '0df406af-c3d5-410d-8173-83926715636b', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1272, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:01:07.982091', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '7c379364-d28a-11f0-aaf2-fa163e118844', 'monotonic_time': 11686.171142004, 'message_signature': '2f5a8c024e7628e0ee59fbd6b5faf58d655d13e79ae66ca811b68b90f6247374'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 124, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:01:07.982091', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 
'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '7c379cc4-d28a-11f0-aaf2-fa163e118844', 'monotonic_time': 11686.171142004, 'message_signature': '3b09e4f648a11454ef75c57b9d2e1009a85851bf985db4936eeecb003a876f1b'}]}, 'timestamp': '2025-12-06 10:01:07.982566', '_unique_id': '19f4ce6a674a45f3b40f4d972839aec5'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.983 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.983 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.983 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.983 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.983 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.983 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.983 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.983 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.983 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.983 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.983 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.983 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.983 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.983 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.983 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.983 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:01:07 
localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.983 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.983 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.983 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.983 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.983 12 ERROR oslo_messaging.notify.messaging Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.983 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.983 12 ERROR oslo_messaging.notify.messaging Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.983 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.983 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.983 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.983 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 
10:01:07.983 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.983 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.983 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.983 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.983 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.983 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.983 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.983 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.983 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.983 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:01:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.983 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.983 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.983 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.983 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.983 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.983 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.983 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.983 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.983 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.983 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.983 12 
ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.983 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.983 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.983 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.983 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.983 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.983 12 ERROR oslo_messaging.notify.messaging Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.983 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.990 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.990 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.991 12 ERROR 
oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '4899d366-c016-4e18-bb47-12debfabc43a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:01:07.983723', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '7c38d8dc-d28a-11f0-aaf2-fa163e118844', 'monotonic_time': 11686.233069796, 'message_signature': 'b27ba5a66df63be68303264a188a6d249a5445c0a0b9994114187733d4c568f1'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:01:07.983723', 'resource_metadata': {'display_name': 'test', 'name': 
'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '7c38e084-d28a-11f0-aaf2-fa163e118844', 'monotonic_time': 11686.233069796, 'message_signature': '6e0cd521f6ac0a1b939677a4f3f86f879bdfc561a3ec19a3fc53c19481a6bad2'}]}, 'timestamp': '2025-12-06 10:01:07.990853', '_unique_id': 'e21da0b3e8394cda918c99b9bd99e093'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.991 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.991 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.991 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.991 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.991 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.991 12 
ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.991 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.991 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.991 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.991 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.991 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.991 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.991 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.991 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.991 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.991 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 
05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.991 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.991 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.991 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.991 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.991 12 ERROR oslo_messaging.notify.messaging Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.991 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.991 12 ERROR oslo_messaging.notify.messaging Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.991 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.991 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.991 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.991 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:01:07.991 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.991 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.991 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.991 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.991 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.991 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.991 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.991 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.991 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.991 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:01:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.991 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.991 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.991 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.991 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.991 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.991 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.991 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.991 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.991 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.991 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.991 12 
ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.991 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.991 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.991 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.991 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.991 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.991 12 ERROR oslo_messaging.notify.messaging Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.991 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.991 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.991 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.992 12 ERROR 
oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'fce535c4-a4ce-46e7-92e9-18e62254ca13', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:01:07.991940', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '7c39134c-d28a-11f0-aaf2-fa163e118844', 'monotonic_time': 11686.161738003, 'message_signature': '9f265208bfe4156d43391e13f0ef8ebdb8fba468357ec59f76dc951690302d27'}]}, 'timestamp': '2025-12-06 10:01:07.992152', '_unique_id': '214d15edb8fe4412b61de00fc0133448'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.992 12 ERROR oslo_messaging.notify.messaging 
Traceback (most recent call last): Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.992 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.992 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.992 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.992 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.992 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.992 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.992 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.992 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.992 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.992 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.992 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.992 12 ERROR oslo_messaging.notify.messaging Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.992 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.992 12 ERROR oslo_messaging.notify.messaging Dec 6 05:01:07 
localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.992 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.992 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.992 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.992 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.992 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection 
Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.992 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.992 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.992 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.992 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.992 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:01:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.992 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.992 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.992 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.992 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.992 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.992 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.992 12 ERROR oslo_messaging.notify.messaging Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:01:07.993 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.993 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.993 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.incoming.bytes volume: 6809 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.993 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '8e6cbfbb-a981-4168-8793-906089c3d5ca', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 6809, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:01:07.993282', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 
'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '7c39479a-d28a-11f0-aaf2-fa163e118844', 'monotonic_time': 11686.161738003, 'message_signature': '378bd04092d04ba23a4c0a4a001a684cf7cca52e5a132e7a1d7be34e5dc545e6'}]}, 'timestamp': '2025-12-06 10:01:07.993491', '_unique_id': '5781091bd6fd4632b53952730e938604'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.993 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.993 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.993 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.993 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.993 
12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.993 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.993 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.993 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.993 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.993 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in 
_connect Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.993 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.993 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.993 12 ERROR oslo_messaging.notify.messaging Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.993 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.993 12 ERROR oslo_messaging.notify.messaging Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.993 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.993 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.993 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:01:07 
localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.993 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.993 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.993 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.993 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.993 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:01:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.993 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:01:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.993 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.993 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.993 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.993 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.993 12 ERROR oslo_messaging.notify.messaging 
self.gen.throw(typ, value, traceback) Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.993 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.993 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.993 12 ERROR oslo_messaging.notify.messaging Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.994 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.994 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.latency volume: 1525105336 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.994 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.latency volume: 106716064 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.995 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '89490718-648d-41fb-b555-d0820b76c29c', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1525105336, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:01:07.994663', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '7c397e4a-d28a-11f0-aaf2-fa163e118844', 'monotonic_time': 11686.171142004, 'message_signature': '7c77c99c2554394e226ebc600603a8da765e212f443fa58a166991595e5ad7f9'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 106716064, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:01:07.994663', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '7c398598-d28a-11f0-aaf2-fa163e118844', 'monotonic_time': 11686.171142004, 'message_signature': '2aafd242e34f07fd3d9a96c543a1819502552bb9500cb3bbad0261696a838b33'}]}, 'timestamp': '2025-12-06 10:01:07.995065', '_unique_id': '402fe09a43324b4fa4cf5dba15df3e15'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.995 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.995 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.995 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.995 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.995 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.995 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.995 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.995 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.995 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.995 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.995 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.995 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.995 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.995 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.995 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.995 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:01:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.995 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.995 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.995 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.995 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.995 12 ERROR oslo_messaging.notify.messaging Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.995 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.995 12 ERROR oslo_messaging.notify.messaging Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.995 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.995 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.995 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.995 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.995 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.995 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.995 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.995 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.995 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.995 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.995 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.995 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.995 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.995 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:01:07.995 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.995 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.995 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.995 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.995 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.995 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.995 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.995 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.995 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.995 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.995 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.995 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.995 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.995 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.995 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.995 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.995 12 ERROR oslo_messaging.notify.messaging Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.995 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.996 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.incoming.bytes.delta volume: 6809 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.996 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '086e0e08-1baa-4ac2-91b1-c62d3587cb9b', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 6809, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:01:07.996050', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '7c39b3ec-d28a-11f0-aaf2-fa163e118844', 'monotonic_time': 11686.161738003, 'message_signature': '965963769ae0c57eacd8d001051f8f37c85ad970da5fb50c9a689257a4a68e01'}]}, 'timestamp': '2025-12-06 10:01:07.996265', '_unique_id': '6e61b0f000d34aeb8b0943a2a7b10c92'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.996 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:01:07.996 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.996 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.996 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.996 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.996 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.996 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.996 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.996 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.996 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.996 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.996 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.996 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.996 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.996 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.996 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.996 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.996 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.996 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.996 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.996 12 ERROR oslo_messaging.notify.messaging Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.996 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.996 12 ERROR oslo_messaging.notify.messaging Dec 6 05:01:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.996 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.996 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.996 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.996 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.996 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.996 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.996 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.996 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.996 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.996 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 
05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.996 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.996 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.996 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.996 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.996 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.996 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.996 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.996 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.996 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.996 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:01:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.996 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.996 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.996 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.996 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.996 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.996 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.996 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.996 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.996 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.996 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.996 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.997 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.997 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.997 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'f44b54da-1cdd-4984-9c52-5c964c83f225', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:01:07.997227', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '7c39e1c8-d28a-11f0-aaf2-fa163e118844', 'monotonic_time': 11686.161738003, 'message_signature': 'ac0f84c4b99eb1849e702da2b72899f1baff7179024792496a1c66cea9fea687'}]}, 'timestamp': '2025-12-06 10:01:07.997438', '_unique_id': '47d5b35acd3d4c5985af2e79b9938453'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.997 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.997 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.997 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.997 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.997 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.997 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.997 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.997 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.997 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.997 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.997 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.997 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.997 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.997 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.997 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.997 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.997 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.997 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.997 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.997 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.997 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.997 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.997 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.997 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.997 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.997 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.997 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.997 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.997 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.997 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.997 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.998 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.998 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/cpu volume: 11760000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.999 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '50307a0f-958d-4550-b21c-ff02bbd50f79', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 11760000000, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'timestamp': '2025-12-06T10:01:07.998415', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': '7c3a101c-d28a-11f0-aaf2-fa163e118844', 'monotonic_time': 11686.222934014, 'message_signature': '4c76916e4c317cb7f7cc1d2ae8379b0eefc5191030432e6111fe410941fec034'}]}, 'timestamp': '2025-12-06 10:01:07.998617', '_unique_id': 'c3eb6bf6e4fc4d84825eb6c98c282f22'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.999 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.999 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.999 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.999 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.999 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.999 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.999 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.999 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.999 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.999 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.999 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.999 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.999 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.999 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.999 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.999 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.999 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.999 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.999 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.999 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.999 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.999 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.999 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.999 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.999 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.999 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.999 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.999 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.999 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.999 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.999 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.999 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.999 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.latency volume: 1252245154 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:07.999 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.latency volume: 27668224 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.000 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '528f5847-a6f1-4f29-ba44-e033ecd120d1', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1252245154, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:01:07.999575', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '7c3a3d62-d28a-11f0-aaf2-fa163e118844', 'monotonic_time': 11686.171142004, 'message_signature': '91fb821197ecde32b6c4260ff8162193e20d2a3264531c10167ba58eb41edd98'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 27668224, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:01:07.999575', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '7c3a45d2-d28a-11f0-aaf2-fa163e118844', 'monotonic_time': 11686.171142004, 'message_signature': 'c5f4d17585cdd705d52193bc022b2f1e6df1b8cf825909fcea021b794d49cdc0'}]}, 'timestamp': '2025-12-06 10:01:07.999987', '_unique_id': '2f081456da324ea4a0fcfe6f7c7de996'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.000 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.000 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.000 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.000 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.000 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.000 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.000 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.000 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.000 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.000 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.000 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.000 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.000 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.000 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.000 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.000 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.000 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.000 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.000 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.000 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.000 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.000 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.000 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.000 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.000 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.000 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.000 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.000 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.000 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.000 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.000 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.000 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.000 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.000 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.000 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.000 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.000 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.000 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.000 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.000 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.000 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.000 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.000 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.000 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.000 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.000 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.000 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.000 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.000 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.000 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.000 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.000 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.000 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.000 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.000 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec 6 
05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.000 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.001 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.001 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'bcb9829c-8c9b-4557-bc62-2a77966dde46', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:01:08.000966', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 
512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '7c3a73cc-d28a-11f0-aaf2-fa163e118844', 'monotonic_time': 11686.233069796, 'message_signature': '1d85d4e8afb7f15a440576d9233a2e124abca719dbe6dffbcfafb7493bba799f'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:01:08.000966', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '7c3a7b74-d28a-11f0-aaf2-fa163e118844', 'monotonic_time': 11686.233069796, 'message_signature': '6341ba641e9c03787021d727108f1453868f09df3a77804854e9b6aabfbf5c2c'}]}, 'timestamp': '2025-12-06 10:01:08.001360', '_unique_id': '39899824bb7b4aa9b0bfc61be9be778f'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.001 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.001 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.001 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.001 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.001 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.001 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.001 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in 
establish_connection Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.001 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.001 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.001 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.001 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.001 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.001 12 ERROR oslo_messaging.notify.messaging Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.001 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.001 12 ERROR oslo_messaging.notify.messaging Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.001 12 ERROR oslo_messaging.notify.messaging 
Traceback (most recent call last): Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.001 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.001 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.001 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.001 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.001 12 ERROR 
oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.001 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.001 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.001 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.001 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.001 12 ERROR 
oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.001 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.001 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.001 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.001 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.001 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.001 12 ERROR oslo_messaging.notify.messaging Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.002 12 INFO ceilometer.polling.manager [-] Polling 
pollster disk.device.allocation in the context of pollsters Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.002 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.002 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.003 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '2083999c-c44d-4804-8041-16e94c43e55d', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:01:08.002364', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '7c3aaa68-d28a-11f0-aaf2-fa163e118844', 'monotonic_time': 11686.233069796, 'message_signature': '7d03e7738215526997dcdbe242e5860ea60c67106c7e2c43fcd9f2d126133fb5'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:01:08.002364', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '7c3ab1ac-d28a-11f0-aaf2-fa163e118844', 'monotonic_time': 11686.233069796, 'message_signature': '607dd674feaaf3079d7fb824a5311c1289c4fa3d381cb1b8eff5c0f3c2b2d47b'}]}, 'timestamp': '2025-12-06 10:01:08.002747', '_unique_id': 'f501c860a9534fa889c6e57ca7463a69'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.003 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:01:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.003 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.003 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.003 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.003 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.003 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.003 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.003 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.003 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.003 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.003 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.003 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.003 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.003 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.003 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.003 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.003 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.003 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.003 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.003 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.003 12 ERROR oslo_messaging.notify.messaging Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.003 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.003 12 ERROR oslo_messaging.notify.messaging Dec 6 05:01:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.003 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.003 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.003 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.003 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.003 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.003 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.003 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.003 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.003 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.003 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 
05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.003 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.003 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.003 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.003 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.003 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.003 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.003 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.003 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.003 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.003 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:01:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.003 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.003 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.003 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.003 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.003 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.003 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.003 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.003 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.003 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.003 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:01:08.003 12 ERROR oslo_messaging.notify.messaging Dec 6 05:01:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:01:08.003 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 6 05:01:09 localhost nova_compute[282193]: 2025-12-06 10:01:09.630 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:01:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941. Dec 6 05:01:10 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22836 DF PROTO=TCP SPT=52292 DPT=9102 SEQ=1660943468 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52E82F6F0000000001030307) Dec 6 05:01:10 localhost podman[284030]: 2025-12-06 10:01:10.921015374 +0000 UTC m=+0.082148568 container health_status b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 6 05:01:10 
localhost podman[284030]: 2025-12-06 10:01:10.929743784 +0000 UTC m=+0.090877048 container exec_died b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Dec 6 05:01:10 localhost systemd[1]: b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941.service: Deactivated successfully. Dec 6 05:01:12 localhost sshd[284053]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:01:12 localhost systemd-logind[766]: New session 62 of user zuul. Dec 6 05:01:12 localhost systemd[1]: Started Session 62 of User zuul. 
Dec 6 05:01:12 localhost nova_compute[282193]: 2025-12-06 10:01:12.493 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:01:12 localhost python3[284075]: ansible-ansible.legacy.command Invoked with _raw_params=subscription-manager unregister _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 05:01:13 localhost subscription-manager[284076]: Unregistered machine with identity: 49b9d3d6-359c-4738-9880-6751941cc8f8 Dec 6 05:01:13 localhost systemd-journald[47810]: Field hash table of /run/log/journal/4b30904fc4748c16d0c72dbebcabab49/system.journal has a fill level at 75.4 (251 of 333 items), suggesting rotation. Dec 6 05:01:13 localhost systemd-journald[47810]: /run/log/journal/4b30904fc4748c16d0c72dbebcabab49/system.journal: Journal header limits reached or header out-of-date, rotating. Dec 6 05:01:13 localhost rsyslogd[760]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Dec 6 05:01:13 localhost rsyslogd[760]: imjournal: journal files changed, reloading... 
[v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Dec 6 05:01:14 localhost nova_compute[282193]: 2025-12-06 10:01:14.634 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:01:16 localhost openstack_network_exporter[243110]: ERROR 10:01:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:01:16 localhost openstack_network_exporter[243110]: ERROR 10:01:16 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 6 05:01:16 localhost openstack_network_exporter[243110]: ERROR 10:01:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:01:16 localhost openstack_network_exporter[243110]: ERROR 10:01:16 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 6 05:01:16 localhost openstack_network_exporter[243110]: Dec 6 05:01:16 localhost openstack_network_exporter[243110]: ERROR 10:01:16 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 6 05:01:16 localhost openstack_network_exporter[243110]: Dec 6 05:01:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e. Dec 6 05:01:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094. Dec 6 05:01:16 localhost systemd[1]: tmp-crun.PAJDiN.mount: Deactivated successfully. 
Dec 6 05:01:16 localhost podman[284079]: 2025-12-06 10:01:16.920860854 +0000 UTC m=+0.079426055 container health_status 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, container_name=openstack_network_exporter, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., release=1755695350, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container) Dec 6 05:01:16 localhost podman[284079]: 2025-12-06 10:01:16.932961508 +0000 UTC m=+0.091526649 container exec_died 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, vcs-type=git, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, version=9.6, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, vendor=Red Hat, Inc., config_id=edpm, distribution-scope=public, release=1755695350, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.) Dec 6 05:01:16 localhost systemd[1]: 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e.service: Deactivated successfully. 
Dec 6 05:01:16 localhost podman[284080]: 2025-12-06 10:01:16.991119215 +0000 UTC m=+0.145238628 container health_status bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, config_id=edpm, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Dec 6 05:01:17 localhost podman[284080]: 2025-12-06 10:01:17.005154398 +0000 UTC m=+0.159273821 container exec_died 
bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125) Dec 6 05:01:17 localhost systemd[1]: bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094.service: Deactivated successfully. 
Dec 6 05:01:17 localhost nova_compute[282193]: 2025-12-06 10:01:17.539 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:01:19 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22837 DF PROTO=TCP SPT=52292 DPT=9102 SEQ=1660943468 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52E84FEF0000000001030307) Dec 6 05:01:19 localhost nova_compute[282193]: 2025-12-06 10:01:19.635 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:01:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6. Dec 6 05:01:20 localhost podman[284119]: 2025-12-06 10:01:20.915835845 +0000 UTC m=+0.065098203 container health_status 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd) Dec 6 05:01:20 localhost podman[284119]: 2025-12-06 10:01:20.955273433 +0000 UTC m=+0.104535821 container exec_died 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible) Dec 6 05:01:20 localhost systemd[1]: 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6.service: Deactivated successfully. Dec 6 05:01:21 localhost sshd[284139]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:01:22 localhost nova_compute[282193]: 2025-12-06 10:01:22.576 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:01:23 localhost podman[241090]: time="2025-12-06T10:01:23Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 6 05:01:23 localhost podman[241090]: @ - - [06/Dec/2025:10:01:23 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 149555 "" "Go-http-client/1.1" Dec 6 05:01:23 localhost podman[241090]: @ - - [06/Dec/2025:10:01:23 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17742 "" "Go-http-client/1.1" Dec 6 05:01:24 localhost nova_compute[282193]: 2025-12-06 10:01:24.638 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:01:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538. 
Dec 6 05:01:25 localhost podman[284141]: 2025-12-06 10:01:25.138680934 +0000 UTC m=+0.079689892 container health_status d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 6 05:01:25 localhost podman[284141]: 2025-12-06 10:01:25.151372376 +0000 UTC m=+0.092381334 container exec_died d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 6 05:01:25 localhost systemd[1]: d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538.service: Deactivated successfully. 
Dec 6 05:01:26 localhost nova_compute[282193]: 2025-12-06 10:01:26.719 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:01:26 localhost nova_compute[282193]: 2025-12-06 10:01:26.719 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:01:26 localhost nova_compute[282193]: 2025-12-06 10:01:26.720 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 6 05:01:26 localhost nova_compute[282193]: 2025-12-06 10:01:26.720 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 6 05:01:26 localhost nova_compute[282193]: 2025-12-06 10:01:26.837 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 6 05:01:26 localhost nova_compute[282193]: 2025-12-06 10:01:26.838 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquired lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 6 05:01:26 localhost nova_compute[282193]: 2025-12-06 10:01:26.838 282197 DEBUG nova.network.neutron 
[None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 6 05:01:26 localhost nova_compute[282193]: 2025-12-06 10:01:26.839 282197 DEBUG nova.objects.instance [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b7ed0a2e-9350-4933-9334-4e5e08d3e6aa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 6 05:01:27 localhost nova_compute[282193]: 2025-12-06 10:01:27.236 282197 DEBUG nova.network.neutron [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Updating instance_info_cache with network_info: [{"id": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "address": "fa:16:3e:64:77:f3", "network": {"id": "652b6bdc-40ce-45b7-8aa5-3bca79987993", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.162", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "3d603431c0bb4967bafc7a0aa6108bfe", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86fc0b7a-fb", "ovs_interfaceid": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info 
/usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 6 05:01:27 localhost nova_compute[282193]: 2025-12-06 10:01:27.255 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Releasing lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 6 05:01:27 localhost nova_compute[282193]: 2025-12-06 10:01:27.256 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 6 05:01:27 localhost nova_compute[282193]: 2025-12-06 10:01:27.257 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:01:27 localhost nova_compute[282193]: 2025-12-06 10:01:27.258 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:01:27 localhost nova_compute[282193]: 2025-12-06 10:01:27.258 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:01:27 localhost nova_compute[282193]: 2025-12-06 10:01:27.259 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks 
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:01:27 localhost nova_compute[282193]: 2025-12-06 10:01:27.260 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:01:27 localhost nova_compute[282193]: 2025-12-06 10:01:27.260 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:01:27 localhost nova_compute[282193]: 2025-12-06 10:01:27.260 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 6 05:01:27 localhost nova_compute[282193]: 2025-12-06 10:01:27.261 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:01:27 localhost nova_compute[282193]: 2025-12-06 10:01:27.276 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 05:01:27 localhost nova_compute[282193]: 2025-12-06 10:01:27.276 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" acquired by 
"nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 05:01:27 localhost nova_compute[282193]: 2025-12-06 10:01:27.277 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 05:01:27 localhost nova_compute[282193]: 2025-12-06 10:01:27.277 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Auditing locally available compute resources for np0005548789.localdomain (node: np0005548789.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 6 05:01:27 localhost nova_compute[282193]: 2025-12-06 10:01:27.278 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 6 05:01:27 localhost nova_compute[282193]: 2025-12-06 10:01:27.615 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:01:27 localhost nova_compute[282193]: 2025-12-06 10:01:27.741 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 6 05:01:27 localhost nova_compute[282193]: 2025-12-06 10:01:27.957 282197 DEBUG nova.virt.libvirt.driver [None 
req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 6 05:01:27 localhost nova_compute[282193]: 2025-12-06 10:01:27.959 282197 DEBUG nova.virt.libvirt.driver [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 6 05:01:28 localhost nova_compute[282193]: 2025-12-06 10:01:28.147 282197 WARNING nova.virt.libvirt.driver [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 6 05:01:28 localhost nova_compute[282193]: 2025-12-06 10:01:28.148 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Hypervisor/Node resource view: name=np0005548789.localdomain free_ram=12006MB free_disk=41.83699035644531GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 6 05:01:28 localhost nova_compute[282193]: 2025-12-06 10:01:28.148 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 05:01:28 localhost nova_compute[282193]: 2025-12-06 10:01:28.149 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 05:01:28 localhost nova_compute[282193]: 2025-12-06 
10:01:28.228 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Instance b7ed0a2e-9350-4933-9334-4e5e08d3e6aa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 6 05:01:28 localhost nova_compute[282193]: 2025-12-06 10:01:28.228 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 6 05:01:28 localhost nova_compute[282193]: 2025-12-06 10:01:28.228 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Final resource view: name=np0005548789.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 6 05:01:28 localhost nova_compute[282193]: 2025-12-06 10:01:28.303 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 6 05:01:28 localhost nova_compute[282193]: 2025-12-06 10:01:28.709 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.406s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 6 05:01:28 localhost nova_compute[282193]: 2025-12-06 10:01:28.714 282197 DEBUG nova.compute.provider_tree [None 
req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Inventory has not changed in ProviderTree for provider: 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 6 05:01:28 localhost nova_compute[282193]: 2025-12-06 10:01:28.736 282197 DEBUG nova.scheduler.client.report [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Inventory has not changed for provider 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 6 05:01:28 localhost nova_compute[282193]: 2025-12-06 10:01:28.737 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Compute_service record updated for np0005548789.localdomain:np0005548789.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 6 05:01:28 localhost nova_compute[282193]: 2025-12-06 10:01:28.738 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.589s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 05:01:29 localhost nova_compute[282193]: 2025-12-06 10:01:29.641 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:01:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5. Dec 6 05:01:30 localhost systemd[1]: tmp-crun.kNGkvn.mount: Deactivated successfully. Dec 6 05:01:30 localhost podman[284208]: 2025-12-06 10:01:30.928608978 +0000 UTC m=+0.093198730 container health_status 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, container_name=ovn_controller, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:01:30 localhost podman[284208]: 2025-12-06 10:01:30.988385045 +0000 UTC m=+0.152974836 container exec_died 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:01:31 localhost systemd[1]: 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.service: Deactivated successfully. 
Dec 6 05:01:32 localhost nova_compute[282193]: 2025-12-06 10:01:32.661 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:01:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26396 DF PROTO=TCP SPT=38450 DPT=9102 SEQ=4212794354 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52E888B70000000001030307) Dec 6 05:01:34 localhost nova_compute[282193]: 2025-12-06 10:01:34.643 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:01:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26397 DF PROTO=TCP SPT=38450 DPT=9102 SEQ=4212794354 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52E88CAF0000000001030307) Dec 6 05:01:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22838 DF PROTO=TCP SPT=52292 DPT=9102 SEQ=1660943468 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52E88FEF0000000001030307) Dec 6 05:01:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26398 DF PROTO=TCP SPT=38450 DPT=9102 SEQ=4212794354 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52E894AF0000000001030307) Dec 6 05:01:36 localhost systemd[1]: virtsecretd.service: Deactivated successfully. 
Dec 6 05:01:37 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39156 DF PROTO=TCP SPT=53032 DPT=9102 SEQ=1349501459 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52E897EF0000000001030307) Dec 6 05:01:37 localhost nova_compute[282193]: 2025-12-06 10:01:37.703 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:01:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999. Dec 6 05:01:38 localhost podman[284234]: 2025-12-06 10:01:38.893574628 +0000 UTC m=+0.055224518 container health_status 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2) Dec 6 05:01:38 localhost podman[284234]: 2025-12-06 10:01:38.902445581 +0000 UTC m=+0.064095531 container exec_died 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', 
'/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2) Dec 6 05:01:38 localhost systemd[1]: 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.service: Deactivated successfully. Dec 6 05:01:39 localhost nova_compute[282193]: 2025-12-06 10:01:39.646 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:01:40 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26399 DF PROTO=TCP SPT=38450 DPT=9102 SEQ=4212794354 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52E8A46F0000000001030307) Dec 6 05:01:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941. 
Dec 6 05:01:41 localhost podman[284252]: 2025-12-06 10:01:41.913325714 +0000 UTC m=+0.077301319 container health_status b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 6 05:01:41 localhost podman[284252]: 2025-12-06 10:01:41.950262595 +0000 UTC m=+0.114238180 container exec_died b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': 
['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Dec 6 05:01:41 localhost systemd[1]: b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941.service: Deactivated successfully. Dec 6 05:01:42 localhost nova_compute[282193]: 2025-12-06 10:01:42.753 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:01:44 localhost nova_compute[282193]: 2025-12-06 10:01:44.650 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:01:46 localhost openstack_network_exporter[243110]: ERROR 10:01:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:01:46 localhost openstack_network_exporter[243110]: ERROR 10:01:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:01:46 localhost openstack_network_exporter[243110]: ERROR 10:01:46 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 6 05:01:46 localhost openstack_network_exporter[243110]: ERROR 10:01:46 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 6 05:01:46 localhost openstack_network_exporter[243110]: Dec 6 05:01:46 localhost openstack_network_exporter[243110]: ERROR 10:01:46 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 6 05:01:46 localhost openstack_network_exporter[243110]: Dec 6 05:01:47 localhost podman[284350]: Dec 6 05:01:47 localhost podman[284350]: 2025-12-06 10:01:47.134669867 +0000 UTC m=+0.095450060 container create 3ed2c67f4a727043f3c5d7aec159250db8465307db8d8db8ed110a35ac357203 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cranky_rhodes, io.k8s.description=Red Hat Ceph Storage 7, 
GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, vcs-type=git, version=7, GIT_BRANCH=main, release=1763362218, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, name=rhceph, RELEASE=main, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph) Dec 6 05:01:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e. Dec 6 05:01:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094. Dec 6 05:01:47 localhost systemd[1]: Started libpod-conmon-3ed2c67f4a727043f3c5d7aec159250db8465307db8d8db8ed110a35ac357203.scope. Dec 6 05:01:47 localhost systemd[1]: Started libcrun container. 
Dec 6 05:01:47 localhost podman[284350]: 2025-12-06 10:01:47.096337802 +0000 UTC m=+0.057118005 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 6 05:01:47 localhost podman[284350]: 2025-12-06 10:01:47.196905719 +0000 UTC m=+0.157685892 container init 3ed2c67f4a727043f3c5d7aec159250db8465307db8d8db8ed110a35ac357203 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cranky_rhodes, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , RELEASE=main, release=1763362218, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4) Dec 6 05:01:47 localhost podman[284350]: 2025-12-06 10:01:47.207171316 +0000 UTC m=+0.167951469 container start 3ed2c67f4a727043f3c5d7aec159250db8465307db8d8db8ed110a35ac357203 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cranky_rhodes, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, 
GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , GIT_BRANCH=main, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, com.redhat.component=rhceph-container, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers) Dec 6 05:01:47 localhost podman[284350]: 2025-12-06 10:01:47.207375833 +0000 UTC m=+0.168156016 container attach 3ed2c67f4a727043f3c5d7aec159250db8465307db8d8db8ed110a35ac357203 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cranky_rhodes, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, distribution-scope=public, version=7, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=, ceph=True, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, 
GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., vcs-type=git, name=rhceph, io.buildah.version=1.41.4, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Dec 6 05:01:47 localhost cranky_rhodes[284370]: 167 167 Dec 6 05:01:47 localhost systemd[1]: libpod-3ed2c67f4a727043f3c5d7aec159250db8465307db8d8db8ed110a35ac357203.scope: Deactivated successfully. Dec 6 05:01:47 localhost podman[284350]: 2025-12-06 10:01:47.210244281 +0000 UTC m=+0.171024454 container died 3ed2c67f4a727043f3c5d7aec159250db8465307db8d8db8ed110a35ac357203 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cranky_rhodes, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., GIT_BRANCH=main, name=rhceph, version=7, release=1763362218, GIT_CLEAN=True, com.redhat.component=rhceph-container, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph) Dec 6 05:01:47 localhost podman[284365]: 2025-12-06 10:01:47.253725314 +0000 UTC m=+0.078369022 container health_status 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e 
(image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, release=1755695350, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9-minimal, vendor=Red Hat, Inc., vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': 
'/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41) Dec 6 05:01:47 localhost podman[284366]: 2025-12-06 10:01:47.288828719 +0000 UTC m=+0.117250913 container health_status bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, io.buildah.version=1.41.3) Dec 6 05:01:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:01:47.291 160509 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 05:01:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:01:47.291 160509 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 05:01:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:01:47.292 160509 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 05:01:47 localhost podman[284392]: 2025-12-06 10:01:47.344491208 +0000 UTC m=+0.124157397 container remove 3ed2c67f4a727043f3c5d7aec159250db8465307db8d8db8ed110a35ac357203 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cranky_rhodes, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, name=rhceph, vcs-type=git, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, 
maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, architecture=x86_64, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Dec 6 05:01:47 localhost systemd[1]: libpod-conmon-3ed2c67f4a727043f3c5d7aec159250db8465307db8d8db8ed110a35ac357203.scope: Deactivated successfully. Dec 6 05:01:47 localhost podman[284366]: 2025-12-06 10:01:47.377638972 +0000 UTC m=+0.206061166 container exec_died bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': 
['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Dec 6 05:01:47 localhost systemd[1]: bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094.service: Deactivated successfully. Dec 6 05:01:47 localhost podman[284365]: 2025-12-06 10:01:47.394046858 +0000 UTC m=+0.218690546 container exec_died 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, managed_by=edpm_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, io.buildah.version=1.33.7, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, maintainer=Red Hat, Inc., config_id=edpm, vendor=Red Hat, Inc., version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b) Dec 6 05:01:47 localhost systemd[1]: 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e.service: Deactivated successfully. 
Dec 6 05:01:47 localhost podman[284432]: Dec 6 05:01:47 localhost podman[284432]: 2025-12-06 10:01:47.514017074 +0000 UTC m=+0.062442089 container create 96ec1a47279a6e4babe7c1a9375b54f2f4d4bbf7b494b570b422b5bd442aa01d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=strange_noyce, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, distribution-scope=public, name=rhceph, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, ceph=True, RELEASE=main, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, version=7, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, vcs-type=git, description=Red Hat Ceph Storage 7, architecture=x86_64) Dec 6 05:01:47 localhost systemd[1]: Started libpod-conmon-96ec1a47279a6e4babe7c1a9375b54f2f4d4bbf7b494b570b422b5bd442aa01d.scope. Dec 6 05:01:47 localhost systemd[1]: Started libcrun container. 
Dec 6 05:01:47 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/809ccad5b5d736bb19e7356ddf27631debaa53da8e797dc65df204e66d710152/merged/rootfs supports timestamps until 2038 (0x7fffffff) Dec 6 05:01:47 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/809ccad5b5d736bb19e7356ddf27631debaa53da8e797dc65df204e66d710152/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Dec 6 05:01:47 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/809ccad5b5d736bb19e7356ddf27631debaa53da8e797dc65df204e66d710152/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Dec 6 05:01:47 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/809ccad5b5d736bb19e7356ddf27631debaa53da8e797dc65df204e66d710152/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Dec 6 05:01:47 localhost podman[284432]: 2025-12-06 10:01:47.571789589 +0000 UTC m=+0.120214644 container init 96ec1a47279a6e4babe7c1a9375b54f2f4d4bbf7b494b570b422b5bd442aa01d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=strange_noyce, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, GIT_BRANCH=main, version=7, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.expose-services=, ceph=True, CEPH_POINT_RELEASE=, 
build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, name=rhceph, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main) Dec 6 05:01:47 localhost podman[284432]: 2025-12-06 10:01:47.582557032 +0000 UTC m=+0.130982077 container start 96ec1a47279a6e4babe7c1a9375b54f2f4d4bbf7b494b570b422b5bd442aa01d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=strange_noyce, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, name=rhceph, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, version=7, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, release=1763362218, ceph=True, CEPH_POINT_RELEASE=) Dec 6 05:01:47 localhost podman[284432]: 2025-12-06 10:01:47.582879852 +0000 UTC m=+0.131304907 container attach 96ec1a47279a6e4babe7c1a9375b54f2f4d4bbf7b494b570b422b5bd442aa01d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=strange_noyce, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 
9 in a fully featured and supported base image., RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, com.redhat.component=rhceph-container, name=rhceph, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_BRANCH=main, maintainer=Guillaume Abrioux , vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, distribution-scope=public, vcs-type=git, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 6 05:01:47 localhost podman[284432]: 2025-12-06 10:01:47.494110499 +0000 UTC m=+0.042535524 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 6 05:01:47 localhost nova_compute[282193]: 2025-12-06 10:01:47.788 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:01:48 localhost systemd[1]: var-lib-containers-storage-overlay-310621d5af1248ba667dc29e3bd966a624ee0a93c11980cc5e219990815d7a9c-merged.mount: Deactivated successfully. 
Dec 6 05:01:48 localhost strange_noyce[284447]: [ Dec 6 05:01:48 localhost strange_noyce[284447]: { Dec 6 05:01:48 localhost strange_noyce[284447]: "available": false, Dec 6 05:01:48 localhost strange_noyce[284447]: "ceph_device": false, Dec 6 05:01:48 localhost strange_noyce[284447]: "device_id": "QEMU_DVD-ROM_QM00001", Dec 6 05:01:48 localhost strange_noyce[284447]: "lsm_data": {}, Dec 6 05:01:48 localhost strange_noyce[284447]: "lvs": [], Dec 6 05:01:48 localhost strange_noyce[284447]: "path": "/dev/sr0", Dec 6 05:01:48 localhost strange_noyce[284447]: "rejected_reasons": [ Dec 6 05:01:48 localhost strange_noyce[284447]: "Insufficient space (<5GB)", Dec 6 05:01:48 localhost strange_noyce[284447]: "Has a FileSystem" Dec 6 05:01:48 localhost strange_noyce[284447]: ], Dec 6 05:01:48 localhost strange_noyce[284447]: "sys_api": { Dec 6 05:01:48 localhost strange_noyce[284447]: "actuators": null, Dec 6 05:01:48 localhost strange_noyce[284447]: "device_nodes": "sr0", Dec 6 05:01:48 localhost strange_noyce[284447]: "human_readable_size": "482.00 KB", Dec 6 05:01:48 localhost strange_noyce[284447]: "id_bus": "ata", Dec 6 05:01:48 localhost strange_noyce[284447]: "model": "QEMU DVD-ROM", Dec 6 05:01:48 localhost strange_noyce[284447]: "nr_requests": "2", Dec 6 05:01:48 localhost strange_noyce[284447]: "partitions": {}, Dec 6 05:01:48 localhost strange_noyce[284447]: "path": "/dev/sr0", Dec 6 05:01:48 localhost strange_noyce[284447]: "removable": "1", Dec 6 05:01:48 localhost strange_noyce[284447]: "rev": "2.5+", Dec 6 05:01:48 localhost strange_noyce[284447]: "ro": "0", Dec 6 05:01:48 localhost strange_noyce[284447]: "rotational": "1", Dec 6 05:01:48 localhost strange_noyce[284447]: "sas_address": "", Dec 6 05:01:48 localhost strange_noyce[284447]: "sas_device_handle": "", Dec 6 05:01:48 localhost strange_noyce[284447]: "scheduler_mode": "mq-deadline", Dec 6 05:01:48 localhost strange_noyce[284447]: "sectors": 0, Dec 6 05:01:48 localhost strange_noyce[284447]: 
"sectorsize": "2048", Dec 6 05:01:48 localhost strange_noyce[284447]: "size": 493568.0, Dec 6 05:01:48 localhost strange_noyce[284447]: "support_discard": "0", Dec 6 05:01:48 localhost strange_noyce[284447]: "type": "disk", Dec 6 05:01:48 localhost strange_noyce[284447]: "vendor": "QEMU" Dec 6 05:01:48 localhost strange_noyce[284447]: } Dec 6 05:01:48 localhost strange_noyce[284447]: } Dec 6 05:01:48 localhost strange_noyce[284447]: ] Dec 6 05:01:48 localhost systemd[1]: libpod-96ec1a47279a6e4babe7c1a9375b54f2f4d4bbf7b494b570b422b5bd442aa01d.scope: Deactivated successfully. Dec 6 05:01:48 localhost systemd[1]: libpod-96ec1a47279a6e4babe7c1a9375b54f2f4d4bbf7b494b570b422b5bd442aa01d.scope: Consumed 1.127s CPU time. Dec 6 05:01:48 localhost podman[284432]: 2025-12-06 10:01:48.66703335 +0000 UTC m=+1.215458435 container died 96ec1a47279a6e4babe7c1a9375b54f2f4d4bbf7b494b570b422b5bd442aa01d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=strange_noyce, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.openshift.expose-services=, version=7, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, RELEASE=main, distribution-scope=public, ceph=True, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, 
vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph) Dec 6 05:01:48 localhost systemd[1]: var-lib-containers-storage-overlay-809ccad5b5d736bb19e7356ddf27631debaa53da8e797dc65df204e66d710152-merged.mount: Deactivated successfully. Dec 6 05:01:48 localhost podman[286393]: 2025-12-06 10:01:48.768716211 +0000 UTC m=+0.085669807 container remove 96ec1a47279a6e4babe7c1a9375b54f2f4d4bbf7b494b570b422b5bd442aa01d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=strange_noyce, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, vcs-type=git, version=7, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , distribution-scope=public, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, RELEASE=main, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git) Dec 6 05:01:48 localhost systemd[1]: libpod-conmon-96ec1a47279a6e4babe7c1a9375b54f2f4d4bbf7b494b570b422b5bd442aa01d.scope: Deactivated successfully. 
Dec 6 05:01:48 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:66:7f:12 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26400 DF PROTO=TCP SPT=38450 DPT=9102 SEQ=4212794354 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A52E8C3EF0000000001030307) Dec 6 05:01:49 localhost nova_compute[282193]: 2025-12-06 10:01:49.651 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:01:50 localhost sshd[286493]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:01:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6. Dec 6 05:01:51 localhost podman[286513]: 2025-12-06 10:01:51.200036182 +0000 UTC m=+0.093835769 container health_status 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true) Dec 6 05:01:51 localhost podman[286513]: 2025-12-06 10:01:51.213405925 +0000 UTC m=+0.107205513 container exec_died 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', 
'/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=multipathd) Dec 6 05:01:51 localhost systemd[1]: 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6.service: Deactivated successfully. Dec 6 05:01:51 localhost sshd[286532]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:01:51 localhost sshd[286534]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:01:52 localhost sshd[286536]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:01:52 localhost systemd-logind[766]: New session 63 of user tripleo-admin. Dec 6 05:01:52 localhost systemd[1]: Created slice User Slice of UID 1003. Dec 6 05:01:52 localhost systemd[1]: Starting User Runtime Directory /run/user/1003... Dec 6 05:01:52 localhost systemd[1]: Finished User Runtime Directory /run/user/1003. Dec 6 05:01:52 localhost systemd[1]: Starting User Manager for UID 1003... Dec 6 05:01:52 localhost systemd[286540]: Queued start job for default target Main User Target. Dec 6 05:01:52 localhost systemd[286540]: Created slice User Application Slice. Dec 6 05:01:52 localhost systemd[286540]: Started Mark boot as successful after the user session has run 2 minutes. Dec 6 05:01:52 localhost systemd[286540]: Started Daily Cleanup of User's Temporary Directories. Dec 6 05:01:52 localhost systemd[286540]: Reached target Paths. Dec 6 05:01:52 localhost systemd[286540]: Reached target Timers. Dec 6 05:01:52 localhost systemd[286540]: Starting D-Bus User Message Bus Socket... Dec 6 05:01:52 localhost systemd[286540]: Starting Create User's Volatile Files and Directories... 
Dec 6 05:01:52 localhost systemd[286540]: Listening on D-Bus User Message Bus Socket. Dec 6 05:01:52 localhost systemd[286540]: Reached target Sockets. Dec 6 05:01:52 localhost systemd[286540]: Finished Create User's Volatile Files and Directories. Dec 6 05:01:52 localhost systemd[286540]: Reached target Basic System. Dec 6 05:01:52 localhost systemd[1]: Started User Manager for UID 1003. Dec 6 05:01:52 localhost systemd[286540]: Reached target Main User Target. Dec 6 05:01:52 localhost systemd[286540]: Startup finished in 162ms. Dec 6 05:01:52 localhost nova_compute[282193]: 2025-12-06 10:01:52.838 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:01:52 localhost systemd[1]: Started Session 63 of User tripleo-admin. Dec 6 05:01:53 localhost python3[286683]: ansible-ansible.builtin.blockinfile Invoked with marker_begin=BEGIN ceph firewall rules marker_end=END ceph firewall rules path=/etc/nftables/edpm-rules.nft mode=0644 block=# 100 ceph_alertmanager (9093)#012add rule inet filter EDPM_INPUT tcp dport { 9093 } ct state new counter accept comment "100 ceph_alertmanager"#012# 100 ceph_dashboard (8443)#012add rule inet filter EDPM_INPUT tcp dport { 8443 } ct state new counter accept comment "100 ceph_dashboard"#012# 100 ceph_grafana (3100)#012add rule inet filter EDPM_INPUT tcp dport { 3100 } ct state new counter accept comment "100 ceph_grafana"#012# 100 ceph_prometheus (9092)#012add rule inet filter EDPM_INPUT tcp dport { 9092 } ct state new counter accept comment "100 ceph_prometheus"#012# 100 ceph_rgw (8080)#012add rule inet filter EDPM_INPUT tcp dport { 8080 } ct state new counter accept comment "100 ceph_rgw"#012# 110 ceph_mon (6789, 3300, 9100)#012add rule inet filter EDPM_INPUT tcp dport { 6789,3300,9100 } ct state new counter accept comment "110 ceph_mon"#012# 112 ceph_mds (6800-7300, 9100)#012add rule inet filter EDPM_INPUT tcp dport { 6800-7300,9100 } ct state 
new counter accept comment "112 ceph_mds"#012# 113 ceph_mgr (6800-7300, 8444)#012add rule inet filter EDPM_INPUT tcp dport { 6800-7300,8444 } ct state new counter accept comment "113 ceph_mgr"#012# 120 ceph_nfs (2049, 12049)#012add rule inet filter EDPM_INPUT tcp dport { 2049,12049 } ct state new counter accept comment "120 ceph_nfs"#012# 123 ceph_dashboard (9090, 9094, 9283)#012add rule inet filter EDPM_INPUT tcp dport { 9090,9094,9283 } ct state new counter accept comment "123 ceph_dashboard"#012 insertbefore=^# Lock down INPUT chains state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False unsafe_writes=False insertafter=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 05:01:53 localhost podman[241090]: time="2025-12-06T10:01:53Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 6 05:01:53 localhost podman[241090]: @ - - [06/Dec/2025:10:01:53 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 149555 "" "Go-http-client/1.1" Dec 6 05:01:53 localhost podman[241090]: @ - - [06/Dec/2025:10:01:53 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17746 "" "Go-http-client/1.1" Dec 6 05:01:54 localhost python3[286827]: ansible-ansible.builtin.systemd Invoked with name=nftables state=restarted enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Dec 6 05:01:54 localhost systemd[1]: Stopping Netfilter Tables... Dec 6 05:01:54 localhost systemd[1]: nftables.service: Deactivated successfully. Dec 6 05:01:54 localhost systemd[1]: Stopped Netfilter Tables. Dec 6 05:01:54 localhost systemd[1]: Starting Netfilter Tables... Dec 6 05:01:54 localhost systemd[1]: Finished Netfilter Tables. 
Dec 6 05:01:54 localhost nova_compute[282193]: 2025-12-06 10:01:54.653 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:01:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538. Dec 6 05:01:55 localhost podman[286851]: 2025-12-06 10:01:55.933971719 +0000 UTC m=+0.090658151 container health_status d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Dec 6 05:01:55 localhost podman[286851]: 2025-12-06 10:01:55.942942706 +0000 UTC m=+0.099629138 container exec_died 
d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Dec 6 05:01:55 localhost systemd[1]: d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538.service: Deactivated successfully. 
Dec 6 05:01:57 localhost nova_compute[282193]: 2025-12-06 10:01:57.894 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:01:59 localhost nova_compute[282193]: 2025-12-06 10:01:59.657 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:02:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5. Dec 6 05:02:01 localhost podman[286893]: 2025-12-06 10:02:01.925991526 +0000 UTC m=+0.083389627 container health_status 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, 
maintainer=OpenStack Kubernetes Operator team) Dec 6 05:02:01 localhost podman[286893]: 2025-12-06 10:02:01.990178479 +0000 UTC m=+0.147576560 container exec_died 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3) Dec 6 05:02:02 localhost systemd[1]: 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.service: Deactivated successfully. 
Dec 6 05:02:02 localhost nova_compute[282193]: 2025-12-06 10:02:02.934 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:02:04 localhost sshd[286953]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:02:04 localhost nova_compute[282193]: 2025-12-06 10:02:04.659 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:02:07 localhost nova_compute[282193]: 2025-12-06 10:02:07.975 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:02:09 localhost sshd[287009]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:02:09 localhost nova_compute[282193]: 2025-12-06 10:02:09.662 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:02:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999. 
Dec 6 05:02:09 localhost podman[287011]: 2025-12-06 10:02:09.926699135 +0000 UTC m=+0.082165065 container health_status 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent) Dec 6 05:02:09 localhost podman[287011]: 2025-12-06 10:02:09.934971627 +0000 UTC 
m=+0.090437557 container exec_died 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:02:09 localhost systemd[1]: 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.service: Deactivated successfully. 
Dec 6 05:02:11 localhost podman[287107]: Dec 6 05:02:11 localhost podman[287107]: 2025-12-06 10:02:11.266404804 +0000 UTC m=+0.078799048 container create 94c2f988328a57cd55ef492179cdeb25f438a62c1071b04773f768324fe76e28 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hungry_dirac, vcs-type=git, release=1763362218, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, version=7, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, description=Red Hat Ceph Storage 7, name=rhceph, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git) Dec 6 05:02:11 localhost systemd[1]: Started libpod-conmon-94c2f988328a57cd55ef492179cdeb25f438a62c1071b04773f768324fe76e28.scope. Dec 6 05:02:11 localhost podman[287107]: 2025-12-06 10:02:11.232216181 +0000 UTC m=+0.044610455 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 6 05:02:11 localhost systemd[1]: Started libcrun container. 
Dec 6 05:02:11 localhost podman[287107]: 2025-12-06 10:02:11.363653437 +0000 UTC m=+0.176047681 container init 94c2f988328a57cd55ef492179cdeb25f438a62c1071b04773f768324fe76e28 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hungry_dirac, io.openshift.tags=rhceph ceph, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, release=1763362218, maintainer=Guillaume Abrioux , GIT_CLEAN=True, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, vendor=Red Hat, Inc., io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Dec 6 05:02:11 localhost podman[287107]: 2025-12-06 10:02:11.385963844 +0000 UTC m=+0.198358088 container start 94c2f988328a57cd55ef492179cdeb25f438a62c1071b04773f768324fe76e28 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hungry_dirac, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, ceph=True, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, 
url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , io.openshift.expose-services=, distribution-scope=public, release=1763362218) Dec 6 05:02:11 localhost podman[287107]: 2025-12-06 10:02:11.387000816 +0000 UTC m=+0.199395100 container attach 94c2f988328a57cd55ef492179cdeb25f438a62c1071b04773f768324fe76e28 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hungry_dirac, io.buildah.version=1.41.4, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, distribution-scope=public, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, GIT_BRANCH=main, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, release=1763362218, RELEASE=main, vendor=Red Hat, Inc., GIT_CLEAN=True, CEPH_POINT_RELEASE=, 
maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container) Dec 6 05:02:11 localhost hungry_dirac[287122]: 167 167 Dec 6 05:02:11 localhost systemd[1]: libpod-94c2f988328a57cd55ef492179cdeb25f438a62c1071b04773f768324fe76e28.scope: Deactivated successfully. Dec 6 05:02:11 localhost podman[287107]: 2025-12-06 10:02:11.392812221 +0000 UTC m=+0.205206495 container died 94c2f988328a57cd55ef492179cdeb25f438a62c1071b04773f768324fe76e28 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hungry_dirac, architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, RELEASE=main, io.openshift.expose-services=, version=7, io.buildah.version=1.41.4, distribution-scope=public, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, name=rhceph, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , GIT_CLEAN=True, vcs-type=git, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., ceph=True, io.openshift.tags=rhceph ceph) Dec 6 05:02:11 localhost podman[287128]: 2025-12-06 10:02:11.50319405 +0000 UTC m=+0.095708926 container remove 94c2f988328a57cd55ef492179cdeb25f438a62c1071b04773f768324fe76e28 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hungry_dirac, RELEASE=main, version=7, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, 
com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , architecture=x86_64, distribution-scope=public, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, release=1763362218, vendor=Red Hat, Inc., GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 6 05:02:11 localhost systemd[1]: libpod-conmon-94c2f988328a57cd55ef492179cdeb25f438a62c1071b04773f768324fe76e28.scope: Deactivated successfully. Dec 6 05:02:11 localhost systemd[1]: Reloading. Dec 6 05:02:11 localhost systemd-sysv-generator[287172]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 6 05:02:11 localhost systemd-rc-local-generator[287168]: /etc/rc.d/rc.local is not marked executable, skipping. 
Dec 6 05:02:11 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 05:02:11 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 6 05:02:11 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 05:02:11 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 05:02:11 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 6 05:02:11 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 6 05:02:11 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 05:02:11 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 05:02:11 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 05:02:11 localhost systemd[1]: tmp-crun.jo1AJq.mount: Deactivated successfully. Dec 6 05:02:11 localhost systemd[1]: var-lib-containers-storage-overlay-e2a60bd4c290137f9d1be689aa19aba87cbb2a2866b2c9ba1410b595ca8495b9-merged.mount: Deactivated successfully. Dec 6 05:02:11 localhost systemd[1]: Reloading. Dec 6 05:02:12 localhost systemd-rc-local-generator[287208]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 6 05:02:12 localhost systemd-sysv-generator[287211]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. 
Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 6 05:02:12 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 05:02:12 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 6 05:02:12 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 05:02:12 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 05:02:12 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 6 05:02:12 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 6 05:02:12 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 05:02:12 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 05:02:12 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 05:02:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941. Dec 6 05:02:12 localhost systemd[1]: Starting Ceph mds.mds.np0005548789.vxwwsq for 1939e851-b10c-5c3b-9bb7-8e7f380233e8... 
Dec 6 05:02:12 localhost podman[287224]: 2025-12-06 10:02:12.395166549 +0000 UTC m=+0.093443122 container health_status b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Dec 6 05:02:12 localhost podman[287224]: 2025-12-06 10:02:12.40470043 +0000 UTC m=+0.102977043 container exec_died b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 6 05:02:12 localhost systemd[1]: b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941.service: Deactivated successfully. Dec 6 05:02:12 localhost podman[287295]: Dec 6 05:02:12 localhost podman[287295]: 2025-12-06 10:02:12.70820238 +0000 UTC m=+0.078517399 container create 96c037b833c1c97d0b308664988eb916ff49aa4e1f4308925de2d06b5c221fa3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mds-mds-np0005548789-vxwwsq, release=1763362218, distribution-scope=public, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , io.buildah.version=1.41.4, name=rhceph, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., GIT_BRANCH=main, version=7, ceph=True, architecture=x86_64, io.openshift.tags=rhceph ceph, RELEASE=main) Dec 6 05:02:12 localhost systemd[1]: tmp-crun.0mtcSA.mount: Deactivated successfully. 
Dec 6 05:02:12 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d83fca0385576280c6ade8d0d4d93ad0d6e7af09ff9b2873234bbb7d6beeec03/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Dec 6 05:02:12 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d83fca0385576280c6ade8d0d4d93ad0d6e7af09ff9b2873234bbb7d6beeec03/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Dec 6 05:02:12 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d83fca0385576280c6ade8d0d4d93ad0d6e7af09ff9b2873234bbb7d6beeec03/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Dec 6 05:02:12 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d83fca0385576280c6ade8d0d4d93ad0d6e7af09ff9b2873234bbb7d6beeec03/merged/var/lib/ceph/mds/ceph-mds.np0005548789.vxwwsq supports timestamps until 2038 (0x7fffffff) Dec 6 05:02:12 localhost podman[287295]: 2025-12-06 10:02:12.77003029 +0000 UTC m=+0.140345319 container init 96c037b833c1c97d0b308664988eb916ff49aa4e1f4308925de2d06b5c221fa3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mds-mds-np0005548789-vxwwsq, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, RELEASE=main, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, release=1763362218, 
io.openshift.tags=rhceph ceph, vcs-type=git, architecture=x86_64, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, ceph=True, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 6 05:02:12 localhost podman[287295]: 2025-12-06 10:02:12.675049249 +0000 UTC m=+0.045364298 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 6 05:02:12 localhost podman[287295]: 2025-12-06 10:02:12.776977979 +0000 UTC m=+0.147293008 container start 96c037b833c1c97d0b308664988eb916ff49aa4e1f4308925de2d06b5c221fa3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mds-mds-np0005548789-vxwwsq, version=7, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, release=1763362218, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, ceph=True, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) 
Dec 6 05:02:12 localhost bash[287295]: 96c037b833c1c97d0b308664988eb916ff49aa4e1f4308925de2d06b5c221fa3 Dec 6 05:02:12 localhost systemd[1]: Started Ceph mds.mds.np0005548789.vxwwsq for 1939e851-b10c-5c3b-9bb7-8e7f380233e8. Dec 6 05:02:12 localhost ceph-mds[287313]: set uid:gid to 167:167 (ceph:ceph) Dec 6 05:02:12 localhost ceph-mds[287313]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-mds, pid 2 Dec 6 05:02:12 localhost ceph-mds[287313]: main not setting numa affinity Dec 6 05:02:12 localhost ceph-mds[287313]: pidfile_write: ignore empty --pid-file Dec 6 05:02:12 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mds-mds-np0005548789-vxwwsq[287309]: starting mds.mds.np0005548789.vxwwsq at Dec 6 05:02:12 localhost ceph-mds[287313]: mds.mds.np0005548789.vxwwsq Updating MDS map to version 7 from mon.0 Dec 6 05:02:13 localhost nova_compute[282193]: 2025-12-06 10:02:13.026 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:02:13 localhost ceph-mds[287313]: mds.mds.np0005548789.vxwwsq Updating MDS map to version 8 from mon.0 Dec 6 05:02:13 localhost ceph-mds[287313]: mds.mds.np0005548789.vxwwsq Monitors have assigned me to become a standby. Dec 6 05:02:14 localhost nova_compute[282193]: 2025-12-06 10:02:14.664 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:02:14 localhost systemd[1]: session-62.scope: Deactivated successfully. Dec 6 05:02:14 localhost systemd-logind[766]: Session 62 logged out. Waiting for processes to exit. Dec 6 05:02:14 localhost systemd-logind[766]: Removed session 62. 
Dec 6 05:02:16 localhost openstack_network_exporter[243110]: ERROR 10:02:16 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 6 05:02:16 localhost openstack_network_exporter[243110]: ERROR 10:02:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:02:16 localhost openstack_network_exporter[243110]: ERROR 10:02:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:02:16 localhost openstack_network_exporter[243110]: ERROR 10:02:16 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 6 05:02:16 localhost openstack_network_exporter[243110]: Dec 6 05:02:16 localhost openstack_network_exporter[243110]: ERROR 10:02:16 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 6 05:02:16 localhost openstack_network_exporter[243110]: Dec 6 05:02:16 localhost systemd[1]: tmp-crun.hhBGZd.mount: Deactivated successfully. 
Dec 6 05:02:16 localhost podman[287458]: 2025-12-06 10:02:16.859565221 +0000 UTC m=+0.094228428 container exec ecefb99ff092b207f760d60e8fd97152c624df0e1016549024435ae641ee2e06 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548789, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, GIT_BRANCH=main, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, distribution-scope=public, version=7) Dec 6 05:02:16 localhost podman[287458]: 2025-12-06 10:02:16.995313473 +0000 UTC m=+0.229976650 container exec_died ecefb99ff092b207f760d60e8fd97152c624df0e1016549024435ae641ee2e06 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548789, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, distribution-scope=public, vcs-type=git, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, version=7, name=rhceph, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, 
GIT_BRANCH=main, CEPH_POINT_RELEASE=, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64) Dec 6 05:02:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e. Dec 6 05:02:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094. 
Dec 6 05:02:17 localhost podman[287541]: 2025-12-06 10:02:17.734532551 +0000 UTC m=+0.072316553 container health_status bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, config_id=edpm, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:02:17 localhost podman[287541]: 2025-12-06 10:02:17.745117647 +0000 UTC m=+0.082901639 container exec_died 
bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, config_id=edpm, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:02:17 localhost systemd[1]: bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094.service: Deactivated successfully. 
Dec 6 05:02:17 localhost podman[287540]: 2025-12-06 10:02:17.805443418 +0000 UTC m=+0.141494425 container health_status 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, config_id=edpm, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, version=9.6, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, name=ubi9-minimal, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., architecture=x86_64, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, distribution-scope=public) Dec 6 05:02:17 localhost podman[287540]: 2025-12-06 10:02:17.820076633 +0000 UTC m=+0.156127640 container exec_died 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, distribution-scope=public, architecture=x86_64, io.openshift.expose-services=, release=1755695350, version=9.6, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container) Dec 6 05:02:17 localhost systemd[1]: 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e.service: Deactivated successfully. 
Dec 6 05:02:18 localhost nova_compute[282193]: 2025-12-06 10:02:18.063 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:02:19 localhost nova_compute[282193]: 2025-12-06 10:02:19.668 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:02:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6. Dec 6 05:02:21 localhost systemd[1]: tmp-crun.5s7p4i.mount: Deactivated successfully. Dec 6 05:02:21 localhost podman[287616]: 2025-12-06 10:02:21.922740158 +0000 UTC m=+0.086470121 container health_status 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd) Dec 6 05:02:21 localhost podman[287616]: 2025-12-06 10:02:21.937781915 +0000 UTC m=+0.101511878 container exec_died 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator 
team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2) Dec 6 05:02:21 localhost systemd[1]: 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6.service: Deactivated successfully. Dec 6 05:02:23 localhost nova_compute[282193]: 2025-12-06 10:02:23.125 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:02:23 localhost podman[241090]: time="2025-12-06T10:02:23Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 6 05:02:23 localhost podman[241090]: @ - - [06/Dec/2025:10:02:23 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 151703 "" "Go-http-client/1.1" Dec 6 05:02:23 localhost podman[241090]: @ - - [06/Dec/2025:10:02:23 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18230 "" "Go-http-client/1.1" Dec 6 05:02:24 localhost nova_compute[282193]: 2025-12-06 10:02:24.672 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:02:26 localhost nova_compute[282193]: 2025-12-06 10:02:26.196 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:02:26 localhost nova_compute[282193]: 2025-12-06 10:02:26.197 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running 
periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:02:26 localhost nova_compute[282193]: 2025-12-06 10:02:26.216 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:02:26 localhost nova_compute[282193]: 2025-12-06 10:02:26.217 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 6 05:02:26 localhost nova_compute[282193]: 2025-12-06 10:02:26.217 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 6 05:02:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538. 
Dec 6 05:02:26 localhost nova_compute[282193]: 2025-12-06 10:02:26.861 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 6 05:02:26 localhost nova_compute[282193]: 2025-12-06 10:02:26.861 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquired lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 6 05:02:26 localhost nova_compute[282193]: 2025-12-06 10:02:26.861 282197 DEBUG nova.network.neutron [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 6 05:02:26 localhost nova_compute[282193]: 2025-12-06 10:02:26.862 282197 DEBUG nova.objects.instance [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b7ed0a2e-9350-4933-9334-4e5e08d3e6aa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 6 05:02:26 localhost podman[287635]: 2025-12-06 10:02:26.928494306 +0000 UTC m=+0.090453757 container health_status d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': 
['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 6 05:02:26 localhost podman[287635]: 2025-12-06 10:02:26.962798693 +0000 UTC m=+0.124758174 container exec_died d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 
'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 6 05:02:26 localhost systemd[1]: d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538.service: Deactivated successfully. Dec 6 05:02:27 localhost nova_compute[282193]: 2025-12-06 10:02:27.288 282197 DEBUG nova.network.neutron [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Updating instance_info_cache with network_info: [{"id": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "address": "fa:16:3e:64:77:f3", "network": {"id": "652b6bdc-40ce-45b7-8aa5-3bca79987993", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.162", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "3d603431c0bb4967bafc7a0aa6108bfe", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86fc0b7a-fb", "ovs_interfaceid": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 6 
05:02:27 localhost nova_compute[282193]: 2025-12-06 10:02:27.314 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Releasing lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 6 05:02:27 localhost nova_compute[282193]: 2025-12-06 10:02:27.315 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 6 05:02:27 localhost nova_compute[282193]: 2025-12-06 10:02:27.316 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:02:27 localhost nova_compute[282193]: 2025-12-06 10:02:27.316 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:02:27 localhost nova_compute[282193]: 2025-12-06 10:02:27.316 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:02:27 localhost nova_compute[282193]: 2025-12-06 10:02:27.316 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:02:27 localhost 
nova_compute[282193]: 2025-12-06 10:02:27.317 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:02:27 localhost nova_compute[282193]: 2025-12-06 10:02:27.317 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:02:27 localhost nova_compute[282193]: 2025-12-06 10:02:27.317 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 6 05:02:27 localhost nova_compute[282193]: 2025-12-06 10:02:27.317 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:02:27 localhost nova_compute[282193]: 2025-12-06 10:02:27.338 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 05:02:27 localhost nova_compute[282193]: 2025-12-06 10:02:27.338 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 05:02:27 localhost nova_compute[282193]: 2025-12-06 10:02:27.339 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 05:02:27 localhost nova_compute[282193]: 2025-12-06 10:02:27.339 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Auditing locally available compute resources for np0005548789.localdomain (node: np0005548789.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 6 05:02:27 localhost nova_compute[282193]: 2025-12-06 10:02:27.340 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 6 05:02:27 localhost nova_compute[282193]: 2025-12-06 10:02:27.794 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 6 05:02:27 localhost nova_compute[282193]: 2025-12-06 10:02:27.876 282197 DEBUG nova.virt.libvirt.driver [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 6 05:02:27 localhost nova_compute[282193]: 2025-12-06 10:02:27.877 282197 DEBUG nova.virt.libvirt.driver [None 
req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 6 05:02:28 localhost nova_compute[282193]: 2025-12-06 10:02:28.114 282197 WARNING nova.virt.libvirt.driver [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 6 05:02:28 localhost nova_compute[282193]: 2025-12-06 10:02:28.116 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Hypervisor/Node resource view: name=np0005548789.localdomain free_ram=11984MB free_disk=41.83699035644531GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 6 05:02:28 localhost nova_compute[282193]: 2025-12-06 10:02:28.116 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 05:02:28 localhost nova_compute[282193]: 2025-12-06 10:02:28.117 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 05:02:28 localhost nova_compute[282193]: 2025-12-06 10:02:28.169 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:02:28 localhost nova_compute[282193]: 2025-12-06 10:02:28.199 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Instance 
b7ed0a2e-9350-4933-9334-4e5e08d3e6aa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 6 05:02:28 localhost nova_compute[282193]: 2025-12-06 10:02:28.199 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 6 05:02:28 localhost nova_compute[282193]: 2025-12-06 10:02:28.200 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Final resource view: name=np0005548789.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 6 05:02:28 localhost nova_compute[282193]: 2025-12-06 10:02:28.241 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 6 05:02:28 localhost nova_compute[282193]: 2025-12-06 10:02:28.714 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 6 05:02:28 localhost nova_compute[282193]: 2025-12-06 10:02:28.722 282197 DEBUG nova.compute.provider_tree [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Inventory has not changed in ProviderTree for provider: 
0d33e88e-6335-4a94-8f21-32ba5b8bb7ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 6 05:02:28 localhost nova_compute[282193]: 2025-12-06 10:02:28.738 282197 DEBUG nova.scheduler.client.report [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Inventory has not changed for provider 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 6 05:02:28 localhost nova_compute[282193]: 2025-12-06 10:02:28.741 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Compute_service record updated for np0005548789.localdomain:np0005548789.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 6 05:02:28 localhost nova_compute[282193]: 2025-12-06 10:02:28.741 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.624s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 05:02:29 localhost nova_compute[282193]: 2025-12-06 10:02:29.675 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:02:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5. 
Dec 6 05:02:32 localhost systemd[1]: tmp-crun.LzSNNO.mount: Deactivated successfully. Dec 6 05:02:32 localhost podman[287700]: 2025-12-06 10:02:32.934121064 +0000 UTC m=+0.095935652 container health_status 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:02:33 localhost podman[287700]: 2025-12-06 10:02:33.021282517 +0000 UTC m=+0.183097105 container exec_died 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS 
Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2) Dec 6 05:02:33 localhost systemd[1]: 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.service: Deactivated successfully. 
Dec 6 05:02:33 localhost nova_compute[282193]: 2025-12-06 10:02:33.171 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:02:34 localhost nova_compute[282193]: 2025-12-06 10:02:34.678 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:02:38 localhost nova_compute[282193]: 2025-12-06 10:02:38.221 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:02:39 localhost nova_compute[282193]: 2025-12-06 10:02:39.680 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:02:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999. Dec 6 05:02:40 localhost podman[287723]: 2025-12-06 10:02:40.929324707 +0000 UTC m=+0.088226097 container health_status 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true) Dec 6 05:02:40 localhost podman[287723]: 2025-12-06 10:02:40.963341515 +0000 UTC m=+0.122242925 container exec_died 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', 
'/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS) Dec 6 05:02:40 localhost systemd[1]: 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.service: Deactivated successfully. Dec 6 05:02:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941. 
Dec 6 05:02:42 localhost podman[287740]: 2025-12-06 10:02:42.922356713 +0000 UTC m=+0.083840368 container health_status b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Dec 6 05:02:42 localhost podman[287740]: 2025-12-06 10:02:42.93014038 +0000 UTC m=+0.091624075 container exec_died b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Dec 6 05:02:42 localhost systemd[1]: b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941.service: Deactivated successfully. Dec 6 05:02:43 localhost nova_compute[282193]: 2025-12-06 10:02:43.266 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:02:44 localhost nova_compute[282193]: 2025-12-06 10:02:44.683 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:02:46 localhost openstack_network_exporter[243110]: ERROR 10:02:46 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 6 05:02:46 localhost openstack_network_exporter[243110]: ERROR 10:02:46 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 6 05:02:46 localhost openstack_network_exporter[243110]: Dec 6 05:02:46 localhost openstack_network_exporter[243110]: ERROR 10:02:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:02:46 localhost openstack_network_exporter[243110]: ERROR 10:02:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:02:46 localhost openstack_network_exporter[243110]: ERROR 10:02:46 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 6 05:02:46 localhost openstack_network_exporter[243110]: Dec 6 05:02:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:02:47.292 160509 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 
05:02:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:02:47.292 160509 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 05:02:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:02:47.293 160509 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 05:02:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094. Dec 6 05:02:47 localhost podman[287764]: 2025-12-06 10:02:47.910830494 +0000 UTC m=+0.078135438 container health_status bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': 
['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute) Dec 6 05:02:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e. Dec 6 05:02:47 localhost podman[287764]: 2025-12-06 10:02:47.92017674 +0000 UTC m=+0.087481624 container exec_died bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, config_id=edpm, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true) Dec 6 05:02:47 localhost systemd[1]: bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094.service: Deactivated successfully. Dec 6 05:02:48 localhost podman[287783]: 2025-12-06 10:02:48.002897302 +0000 UTC m=+0.077020723 container health_status 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, version=9.6, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, io.buildah.version=1.33.7, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible) Dec 6 05:02:48 localhost podman[287783]: 2025-12-06 10:02:48.042124955 +0000 UTC m=+0.116248386 container exec_died 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, name=ubi9-minimal, config_id=edpm, release=1755695350, 
build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, architecture=x86_64, io.buildah.version=1.33.7, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', 
'/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 6 05:02:48 localhost systemd[1]: 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e.service: Deactivated successfully. Dec 6 05:02:48 localhost nova_compute[282193]: 2025-12-06 10:02:48.306 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:02:49 localhost nova_compute[282193]: 2025-12-06 10:02:49.686 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:02:51 localhost sshd[287803]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:02:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6. Dec 6 05:02:52 localhost systemd[1]: tmp-crun.0tOSuq.mount: Deactivated successfully. 
Dec 6 05:02:52 localhost podman[287805]: 2025-12-06 10:02:52.626991154 +0000 UTC m=+0.095760476 container health_status 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Dec 6 05:02:52 localhost podman[287805]: 2025-12-06 10:02:52.667470587 +0000 UTC m=+0.136239879 container exec_died 
44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125) Dec 6 05:02:52 localhost systemd[1]: 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6.service: Deactivated successfully. 
Dec 6 05:02:53 localhost nova_compute[282193]: 2025-12-06 10:02:53.347 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:02:53 localhost podman[241090]: time="2025-12-06T10:02:53Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 6 05:02:53 localhost podman[241090]: @ - - [06/Dec/2025:10:02:53 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 151703 "" "Go-http-client/1.1" Dec 6 05:02:53 localhost podman[241090]: @ - - [06/Dec/2025:10:02:53 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18235 "" "Go-http-client/1.1" Dec 6 05:02:54 localhost systemd[1]: session-63.scope: Deactivated successfully. Dec 6 05:02:54 localhost systemd[1]: session-63.scope: Consumed 1.198s CPU time. Dec 6 05:02:54 localhost systemd-logind[766]: Session 63 logged out. Waiting for processes to exit. Dec 6 05:02:54 localhost systemd-logind[766]: Removed session 63. Dec 6 05:02:54 localhost nova_compute[282193]: 2025-12-06 10:02:54.688 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:02:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538. Dec 6 05:02:57 localhost systemd[1]: tmp-crun.R1ySP1.mount: Deactivated successfully. 
Dec 6 05:02:57 localhost podman[287824]: 2025-12-06 10:02:57.923532959 +0000 UTC m=+0.084162538 container health_status d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Dec 6 05:02:57 localhost podman[287824]: 2025-12-06 10:02:57.960142399 +0000 UTC m=+0.120771968 container exec_died d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 
'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 6 05:02:57 localhost systemd[1]: d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538.service: Deactivated successfully. Dec 6 05:02:58 localhost nova_compute[282193]: 2025-12-06 10:02:58.350 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:02:59 localhost nova_compute[282193]: 2025-12-06 10:02:59.692 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:03:03 localhost nova_compute[282193]: 2025-12-06 10:03:03.394 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:03:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5. 
Dec 6 05:03:03 localhost systemd[1]: tmp-crun.AKE2yp.mount: Deactivated successfully. Dec 6 05:03:03 localhost podman[287847]: 2025-12-06 10:03:03.936424896 +0000 UTC m=+0.097041865 container health_status 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ovn_controller, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0) Dec 6 05:03:03 localhost podman[287847]: 2025-12-06 10:03:03.981235737 +0000 UTC m=+0.141852716 container exec_died 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': 
'/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:03:03 localhost systemd[1]: 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.service: Deactivated successfully. Dec 6 05:03:04 localhost systemd[286540]: Activating special unit Exit the Session... Dec 6 05:03:04 localhost systemd[286540]: Stopped target Main User Target. Dec 6 05:03:04 localhost systemd[286540]: Stopped target Basic System. Dec 6 05:03:04 localhost systemd[286540]: Stopped target Paths. Dec 6 05:03:04 localhost systemd[286540]: Stopped target Sockets. Dec 6 05:03:04 localhost systemd[286540]: Stopped target Timers. Dec 6 05:03:04 localhost systemd[286540]: Stopped Mark boot as successful after the user session has run 2 minutes. Dec 6 05:03:04 localhost systemd[286540]: Stopped Daily Cleanup of User's Temporary Directories. Dec 6 05:03:04 localhost systemd[286540]: Closed D-Bus User Message Bus Socket. Dec 6 05:03:04 localhost systemd[286540]: Stopped Create User's Volatile Files and Directories. 
Dec 6 05:03:04 localhost systemd[286540]: Removed slice User Application Slice.
Dec 6 05:03:04 localhost systemd[286540]: Reached target Shutdown.
Dec 6 05:03:04 localhost systemd[286540]: Finished Exit the Session.
Dec 6 05:03:04 localhost systemd[286540]: Reached target Exit the Session.
Dec 6 05:03:04 localhost systemd[1]: Stopping User Manager for UID 1003...
Dec 6 05:03:04 localhost systemd[1]: user@1003.service: Deactivated successfully.
Dec 6 05:03:04 localhost systemd[1]: Stopped User Manager for UID 1003.
Dec 6 05:03:04 localhost systemd[1]: Stopping User Runtime Directory /run/user/1003...
Dec 6 05:03:04 localhost systemd[1]: run-user-1003.mount: Deactivated successfully.
Dec 6 05:03:04 localhost systemd[1]: user-runtime-dir@1003.service: Deactivated successfully.
Dec 6 05:03:04 localhost systemd[1]: Stopped User Runtime Directory /run/user/1003.
Dec 6 05:03:04 localhost systemd[1]: Removed slice User Slice of UID 1003.
Dec 6 05:03:04 localhost systemd[1]: user-1003.slice: Consumed 1.603s CPU time.
Dec 6 05:03:04 localhost nova_compute[282193]: 2025-12-06 10:03:04.694 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.913 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'name': 'test', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'np0005548789.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'hostId': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228 Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.914 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.928 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.928 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.930 12 ERROR oslo_messaging.notify.messaging [-] Could 
not send notification to notifications. Payload={'message_id': '13f6fdb2-456d-47ec-9330-a95c0ba65552', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:03:07.914836', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'c3b5e894-d28a-11f0-aaf2-fa163e118844', 'monotonic_time': 11806.16420738, 'message_signature': '599b141497833b5385a273a06061ace21a76d5a54509ace503ca9d03fb5e461c'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:03:07.914836', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'c3b5feec-d28a-11f0-aaf2-fa163e118844', 'monotonic_time': 11806.16420738, 'message_signature': 'e8139afe6989ee3672d1f46d1dc4e53a3c49c115a978398e0d6b5925b3a57147'}]}, 'timestamp': '2025-12-06 10:03:07.929185', '_unique_id': '7439f0d9478b4000aaaf396437129d4c'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.930 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.930 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.930 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.930 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.930 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.930 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.930 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.930 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.930 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:03:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.930 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.930 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.930 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.930 12 ERROR oslo_messaging.notify.messaging Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.930 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.930 12 ERROR oslo_messaging.notify.messaging Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.930 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.930 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.930 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.930 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.930 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.930 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.930 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:03:07.930 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.930 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.930 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.930 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.930 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.930 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.930 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.930 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.930 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.930 12 ERROR oslo_messaging.notify.messaging Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.931 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.962 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.bytes volume: 35560448 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.962 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.bytes volume: 2154496 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.964 12 ERROR oslo_messaging.notify.messaging [-] Could not 
send notification to notifications. Payload={'message_id': '1d3de8fa-e54a-4c6d-becc-584270e7a0e1', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 35560448, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:03:07.932093', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'c3bb1a08-d28a-11f0-aaf2-fa163e118844', 'monotonic_time': 11806.181489178, 'message_signature': '004ac31b0766345c8583d0ab3368d5085e0fd39f4048bca3d16cae5fdb0604fd'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 2154496, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:03:07.932093', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'c3bb2f34-d28a-11f0-aaf2-fa163e118844', 'monotonic_time': 11806.181489178, 'message_signature': 'fddb347c98978c86506e3552fbc830319dec7092d83e62e3403d60fb6e5801af'}]}, 'timestamp': '2025-12-06 10:03:07.963210', '_unique_id': 'f6d1631778c04c6bbc5aecd99a184b94'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.964 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.964 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.964 12 ERROR oslo_messaging.notify.messaging     yield
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.964 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.964 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.964 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.964 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.964 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.964 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.964 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.964 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.964 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.964 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.964 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.964 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.964 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.964 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.964 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.964 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.964 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.964 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.964 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.964 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.964 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.964 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.964 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.964 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.964 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.964 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.964 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.964 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.964 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.964 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.964 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.964 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.964 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.964 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.964 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.964 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.964 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.964 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.964 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.964 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.964 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.964 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.964 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.964 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.964 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.964 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.964 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.964 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.964 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.964 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.964 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.965 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.970 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.971 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '9224698a-29b3-45aa-85e7-b711659659f9', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:03:07.966007', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': 'c3bc576a-d28a-11f0-aaf2-fa163e118844', 'monotonic_time': 11806.215378872, 'message_signature': 'aa436643508d51a8c44e4551ea655fa6c807e094d4208d4d07771bbf10013ef5'}]}, 'timestamp': '2025-12-06 10:03:07.970835', '_unique_id': '88ee8194e2f94aab8fa0968137a9102d'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.971 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.971 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.971 12 ERROR oslo_messaging.notify.messaging     yield
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.971 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.971 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.971 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.971 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.971 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.971 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.971 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.971 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.971 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.971 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.971 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.971 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.971 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.971 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.971 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.971 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.971 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.971 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.971 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.971 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.971 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.971 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.971 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.971 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.971 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.971 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.971 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.971 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.971 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.971 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.971 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.971 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.971 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.971 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.971 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.971 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.971 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.971 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.971 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.971 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.971 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.971 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.971 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.971 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.971 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.971 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.971 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.971 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.971 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.971 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.971 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.973 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.973 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.latency volume: 1252245154 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.973 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.latency volume: 27668224 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.975 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'f081c00a-bea2-40d3-9141-70657da5f795', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1252245154, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:03:07.973404', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'c3bcd0dc-d28a-11f0-aaf2-fa163e118844', 'monotonic_time': 11806.181489178, 'message_signature': '90e213660852df2e2d80c5d8c7d53428cb3cf4838ed84211f0146dc6a41efcd9'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 27668224, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:03:07.973404', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'c3bce5c2-d28a-11f0-aaf2-fa163e118844', 'monotonic_time': 11806.181489178, 'message_signature': 'a1d40775d4352163497852ca3c1a690b722cf170b93fda57123336d14f2b4888'}]}, 'timestamp': '2025-12-06 10:03:07.974404', '_unique_id': '3c729fc6ddab49e9aba8e05e6d553e55'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.975 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.975 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.975 12 ERROR oslo_messaging.notify.messaging     yield
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.975 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.975 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.975 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.975 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.975 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.975 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.975 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.975 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.975 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.975 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.975 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.975 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.975 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.975 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.975 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.975 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.975 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.975 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.975 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.975 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.975 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.975 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.975 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.975 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.975 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.975 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.975 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.975 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.975 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.975 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.975 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.975 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.975 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.975 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.975 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.975 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.975 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.975 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.975 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.975 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.975 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.975 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.975 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.975 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.975 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.975 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.975 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.975 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.975 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.975 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.975 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.976 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.976 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.976 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.977 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.977 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.979 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications.
Payload={'message_id': '7915351a-028c-49d5-8ab3-72a897c081dd', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:03:07.977099', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'c3bd60d8-d28a-11f0-aaf2-fa163e118844', 'monotonic_time': 11806.16420738, 'message_signature': '9f521a7689f47ac55b931e3c3c5e5f939bf79e28ce0aa7698ffbad664832846e'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:03:07.977099', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 
'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'c3bd741a-d28a-11f0-aaf2-fa163e118844', 'monotonic_time': 11806.16420738, 'message_signature': '96073340772ea14d395cebdda50b366584f298d607009d8bb6396c77444887c9'}]}, 'timestamp': '2025-12-06 10:03:07.978049', '_unique_id': '00b50c79648549c89c590c135bc04202'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.979 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.979 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.979 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.979 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.979 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.979 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.979 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.979 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.979 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.979 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.979 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.979 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.979 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.979 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.979 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.979 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:03:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.979 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.979 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.979 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.979 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.979 12 ERROR oslo_messaging.notify.messaging Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.979 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.979 12 ERROR oslo_messaging.notify.messaging Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.979 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.979 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.979 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.979 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.979 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.979 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.979 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.979 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.979 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.979 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.979 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.979 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.979 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.979 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:03:07.979 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.979 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.979 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.979 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.979 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.979 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.979 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.979 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.979 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.979 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.979 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.979 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.979 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.979 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.979 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.979 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.979 12 ERROR oslo_messaging.notify.messaging Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.980 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.980 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.requests volume: 1272 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.980 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.982 12 ERROR oslo_messaging.notify.messaging [-] Could 
not send notification to notifications. Payload={'message_id': 'c32e7842-b93a-47ea-afe2-090c524dd337', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1272, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:03:07.980318', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'c3bdde5a-d28a-11f0-aaf2-fa163e118844', 'monotonic_time': 11806.181489178, 'message_signature': '87d0ddf1ac92e75028e156906933d6e14c258b6dcf360910201ea2b848ac6fad'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 124, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:03:07.980318', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 
'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'c3bdf0a2-d28a-11f0-aaf2-fa163e118844', 'monotonic_time': 11806.181489178, 'message_signature': '22f0a766fed45840276729b3eb6d207dfa24a1aa0619fa3b272147934a36d50c'}]}, 'timestamp': '2025-12-06 10:03:07.981228', '_unique_id': '7920bc547c374c948c490e0741b61a03'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.982 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.982 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.982 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.982 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.982 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.982 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.982 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.982 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.982 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.982 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.982 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.982 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.982 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.982 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.982 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.982 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:03:07 
localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.982 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.982 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.982 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.982 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.982 12 ERROR oslo_messaging.notify.messaging Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.982 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.982 12 ERROR oslo_messaging.notify.messaging Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.982 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.982 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.982 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.982 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 
10:03:07.982 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.982 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.982 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.982 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.982 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.982 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.982 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.982 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.982 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.982 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:03:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.982 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.982 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.982 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.982 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.982 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.982 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.982 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.982 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.982 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.982 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.982 12 
ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.982 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.982 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.982 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.982 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.982 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.982 12 ERROR oslo_messaging.notify.messaging Dec 6 05:03:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:07.983 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.001 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/cpu volume: 12400000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.003 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'c98323f6-dd5b-433e-b4fe-dd0517e3c8e5', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 12400000000, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'timestamp': '2025-12-06T10:03:07.983510', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': 'c3c1184a-d28a-11f0-aaf2-fa163e118844', 'monotonic_time': 11806.250542145, 'message_signature': '48088b4e6e73cedc944ed6e2ec84728dcadd5d3976add3ac35b36ec1f5011b73'}]}, 'timestamp': '2025-12-06 10:03:08.001999', '_unique_id': '0f352e46940842b4923e45d08a5d7d28'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.003 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.003 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in 
_reraise_as_library_errors Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.003 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.003 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.003 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.003 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.003 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.003 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.003 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.003 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.003 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.003 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.003 12 ERROR 
oslo_messaging.notify.messaging conn.connect() Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.003 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.003 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.003 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.003 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.003 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.003 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.003 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.003 12 ERROR oslo_messaging.notify.messaging Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.003 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.003 12 ERROR oslo_messaging.notify.messaging Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.003 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.003 
12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.003 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.003 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.003 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.003 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.003 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.003 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.003 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.003 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.003 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:03:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.003 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.003 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.003 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.003 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.003 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.003 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.003 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.003 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.003 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.003 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 
10:03:08.003 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.003 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.003 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.003 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.003 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.003 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.003 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.003 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.003 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.003 12 ERROR oslo_messaging.notify.messaging Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.004 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 
10:03:08.004 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.incoming.bytes volume: 6809 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.006 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '230bf5b4-b2a1-41db-b0c8-7a4ca15e4970', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 6809, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:03:08.004705', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': 'c3c199a0-d28a-11f0-aaf2-fa163e118844', 'monotonic_time': 11806.215378872, 'message_signature': 
'33fbd59f4055943ceb38102ea5a7174f3a649ac23ccfe28f30f03c6993597296'}]}, 'timestamp': '2025-12-06 10:03:08.005250', '_unique_id': '99564e31fc5d478c86624c0efa551b60'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.006 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.006 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.006 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.006 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.006 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.006 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.006 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.006 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.006 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.006 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.006 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.006 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.006 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.006 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.006 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.006 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.006 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.006 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.006 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.006 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.006 12 ERROR 
oslo_messaging.notify.messaging Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.006 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.006 12 ERROR oslo_messaging.notify.messaging Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.006 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.006 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.006 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.006 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.006 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.006 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.006 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.006 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:03:08.006 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.006 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.006 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.006 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.006 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.006 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.006 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.006 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.006 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.006 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:03:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.006 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.006 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.006 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.006 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.006 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.006 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.006 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.006 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.006 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.006 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.006 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) 
from exc Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.006 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.006 12 ERROR oslo_messaging.notify.messaging Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.007 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.007 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/memory.usage volume: 51.80859375 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.009 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'e423e29b-20fb-4894-9dcd-1d619c9bda96', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 51.80859375, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'timestamp': '2025-12-06T10:03:08.007550', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': 
{'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': 'c3c207d2-d28a-11f0-aaf2-fa163e118844', 'monotonic_time': 11806.250542145, 'message_signature': '431453fd96e57d3bee75b3c7f6d2b82352865861b04c04f925db9f9ded3ba1a6'}]}, 'timestamp': '2025-12-06 10:03:08.008083', '_unique_id': '8a5d455c2ee64345b11792ebf8395264'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.009 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.009 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.009 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.009 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.009 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.009 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.009 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.009 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.009 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.009 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.009 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.009 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.009 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.009 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.009 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.009 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.009 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.009 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:03:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.009 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.009 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.009 12 ERROR oslo_messaging.notify.messaging Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.009 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.009 12 ERROR oslo_messaging.notify.messaging Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.009 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.009 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.009 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.009 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.009 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.009 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:03:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.009 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.009 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.009 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.009 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.009 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.009 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.009 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.009 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.009 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.009 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.009 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.009 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.009 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.009 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.009 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.009 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.009 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.009 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.009 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.009 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.009 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.009 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.009 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.009 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.009 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.011 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.011 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.013 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '55c85834-0b52-47fc-aee9-04bc76dba77f', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:03:08.011491', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': 'c3c2a2f0-d28a-11f0-aaf2-fa163e118844', 'monotonic_time': 11806.215378872, 'message_signature': '2616fb949d1b216a18df7d70faeb180a63cacd9a2a53abd04cfe151748eb1435'}]}, 'timestamp': '2025-12-06 10:03:08.012081', '_unique_id': 'b0c0fdf6f1ad46b7b8973b596ee1701d'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.013 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.013 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.013 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.013 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.013 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.013 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.013 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.013 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.013 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.013 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.013 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.013 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.013 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.013 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.013 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.013 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.013 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.013 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.013 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.013 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.013 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.013 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.013 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.013 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.013 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.013 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.013 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.013 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.013 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.013 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.013 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.014 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.014 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.015 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'dd396d59-e155-49c2-8383-8f79dc236544', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:03:08.014381', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': 'c3c310f0-d28a-11f0-aaf2-fa163e118844', 'monotonic_time': 11806.215378872, 'message_signature': 'ca0fa46a7088b2c1ec36d6f7cb5252f4985e265931f68abfcb65703e1945d36b'}]}, 'timestamp': '2025-12-06 10:03:08.014905', '_unique_id': '9a1beede942649f19c23a739f0ed12de'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.015 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.015 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.015 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.015 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.015 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.015 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.015 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.015 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.015 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.015 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.015 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.015 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.015 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.015 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.015 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.015 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.015 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.015 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.015 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.015 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.015 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.015 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.015 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.015 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.015 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.015 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.015 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.015 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.015 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.015 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.015 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.017 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.017 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.outgoing.packets volume: 114 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.018 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '8ca04f33-507c-4098-a09f-712ee608c7ef', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 114, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:03:08.017288', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': 'c3c3829c-d28a-11f0-aaf2-fa163e118844', 'monotonic_time': 11806.215378872, 'message_signature': 'bbd4e0ae5b49c0c5e3ea19906a1aea6a2cf94ac6229a4fa883e62d9ba944f6cc'}]}, 'timestamp': '2025-12-06 10:03:08.017798', '_unique_id': '2a4b0035372f4a39a2faec4701878d3c'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.018 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.018 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.018 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.018 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.018 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.018 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.018 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.018 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.018 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.018 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.018 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.018 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.018 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.018 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.018 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.018 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.018 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.018 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.018 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.018 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.018 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.018 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.018 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.018 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:03:08 localhost
ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.018 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.018 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.018 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.018 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.018 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.018 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.018 12 ERROR oslo_messaging.notify.messaging Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:03:08.020 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.020 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.incoming.packets volume: 60 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.021 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '141a82c5-a1aa-4e8d-bcc7-a3c55a940f81', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 60, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:03:08.020298', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 
'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': 'c3c3f8e4-d28a-11f0-aaf2-fa163e118844', 'monotonic_time': 11806.215378872, 'message_signature': '3e64785280cc4b922fabffd7c1de21d311b0c674c20a52b5f00cc078648368e4'}]}, 'timestamp': '2025-12-06 10:03:08.020752', '_unique_id': 'a329ef7c029c42409f07268f4885a6c6'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.021 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.021 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.021 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.021 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.021 12 ERROR oslo_messaging.notify.messaging self._connection 
= self._establish_connection() Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.021 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.021 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.021 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.021 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.021 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.021 12 ERROR oslo_messaging.notify.messaging 
ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.021 12 ERROR oslo_messaging.notify.messaging Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.021 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.021 12 ERROR oslo_messaging.notify.messaging Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.021 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.021 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.021 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.021 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.021 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.021 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.021 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.021 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.021 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.021 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.021 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.021 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.021 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.021 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.021 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.021 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:03:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.021 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.021 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.021 12 ERROR oslo_messaging.notify.messaging Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.022 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.022 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.requests volume: 47 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.022 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.023 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'cfbcb9f2-2ed0-4212-8bf0-f08fb2e55f83', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 47, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:03:08.022155', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'c3c43d0e-d28a-11f0-aaf2-fa163e118844', 'monotonic_time': 11806.181489178, 'message_signature': '10680dfb393f1265a7f7987ff4b45872f4b9debddc9d3dec4ae505c193222fe6'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:03:08.022155', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'c3c44858-d28a-11f0-aaf2-fa163e118844', 'monotonic_time': 11806.181489178, 'message_signature': 'a2b675ab7a2863f68740aaba4723caf84c4e201b10baa617f9d6c46ff709027e'}]}, 'timestamp': '2025-12-06 10:03:08.022719', '_unique_id': '2a4e1828e2f6468daf53357bff2514a0'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.023 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.023 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.023 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.023 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.023 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.023 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.023 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.023 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.023 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:03:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.023 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.023 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.023 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.023 12 ERROR oslo_messaging.notify.messaging Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.023 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.023 12 ERROR oslo_messaging.notify.messaging Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.023 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.023 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.023 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.023 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.023 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.023 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.023 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:03:08.023 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.023 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.023 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.023 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.023 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.023 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.023 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.023 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.023 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.023 12 ERROR oslo_messaging.notify.messaging Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.024 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.024 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.024 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.025 12 ERROR oslo_messaging.notify.messaging [-] Could not 
send notification to notifications. Payload={'message_id': '5436739a-0f65-4fdc-9365-8b4e94bae73e', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:03:08.024353', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'c3c493ee-d28a-11f0-aaf2-fa163e118844', 'monotonic_time': 11806.16420738, 'message_signature': '393f2506226437dce79de810726af5d94092df0e16d65f9e40cf174af7abb0b7'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:03:08.024353', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'c3c49f24-d28a-11f0-aaf2-fa163e118844', 'monotonic_time': 11806.16420738, 'message_signature': 'a10b5d59de1fa971380bdd28047d690fb770dcb8d2e9246b309a987a2c0387bc'}]}, 'timestamp': '2025-12-06 10:03:08.024944', '_unique_id': 'fe4dadd7612a4bdc82f970993bce9bc0'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.025 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.025 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.025 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.025 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.025 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.025 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.025 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.025 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.025 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.025 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.025 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.025 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.025 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.025 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.025 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.025 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:03:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.025 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.025 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.025 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.025 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.025 12 ERROR oslo_messaging.notify.messaging Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.025 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.025 12 ERROR oslo_messaging.notify.messaging Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.025 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.025 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.025 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.025 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.025 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.025 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.025 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.025 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.025 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.025 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.025 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.025 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.025 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.025 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:03:08.025 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.025 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.025 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.025 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.025 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.025 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.025 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.025 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.025 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.025 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.025 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.025 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.025 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.025 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.025 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.025 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.025 12 ERROR oslo_messaging.notify.messaging Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.026 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.026 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.latency volume: 1525105336 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.026 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.latency volume: 106716064 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.027 12 ERROR oslo_messaging.notify.messaging [-] 
Could not send notification to notifications. Payload={'message_id': '1155e441-713d-45fe-8163-7fd9e8928681', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1525105336, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:03:08.026334', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'c3c4e0b0-d28a-11f0-aaf2-fa163e118844', 'monotonic_time': 11806.181489178, 'message_signature': 'ffbc55d7d6924b3993b3559d084e7ef0fb935e59901694841d49b70bf1a2ea3b'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 106716064, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:03:08.026334', 'resource_metadata': {'display_name': 'test', 'name': 
'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'c3c4ec04-d28a-11f0-aaf2-fa163e118844', 'monotonic_time': 11806.181489178, 'message_signature': 'cec49a20ad55efe8a90a35129a5e1b0854c16a236914a2affd0f1dca097c1f41'}]}, 'timestamp': '2025-12-06 10:03:08.026911', '_unique_id': 'cfa26353ca6a48c282032cefa35a91a3'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.027 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.027 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.027 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.027 12 
ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.027 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.027 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.027 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.027 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.027 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 
05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.027 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.027 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.027 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.027 12 ERROR oslo_messaging.notify.messaging Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.027 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.027 12 ERROR oslo_messaging.notify.messaging Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.027 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.027 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:03:08.027 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.027 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.027 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.027 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.027 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:03:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.027 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.027 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.027 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.027 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.027 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.027 12 
ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.027 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.027 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.027 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.027 12 ERROR oslo_messaging.notify.messaging Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.028 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.028 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.bytes volume: 389120 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.028 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.029 12 ERROR 
oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'fb3afe6f-4774-4034-af17-a6a59e9074a7', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 389120, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:03:08.028373', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'c3c53060-d28a-11f0-aaf2-fa163e118844', 'monotonic_time': 11806.181489178, 'message_signature': 'a4548b992e6cf06ee32434d6c4265207933658591ff5faca57b5f5680d8139ec'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:03:08.028373', 'resource_metadata': {'display_name': 'test', 
'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'c3c53c04-d28a-11f0-aaf2-fa163e118844', 'monotonic_time': 11806.181489178, 'message_signature': '71cb780ae7bb0c8549f9a8948b6570a25d142b644e53e52085ff17287654dfe0'}]}, 'timestamp': '2025-12-06 10:03:08.028958', '_unique_id': '42b107ddd35d4f0ca675bb8fcce868e9'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.029 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.029 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.029 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 
10:03:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.029 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.029 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.029 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.029 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.029 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in 
connect Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.029 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.029 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.029 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.029 12 ERROR oslo_messaging.notify.messaging Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.029 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.029 12 ERROR oslo_messaging.notify.messaging Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.029 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.029 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:03:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.029 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.029 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.029 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.029 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.029 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get 
Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.029 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.029 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.029 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.029 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.029 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:03:08.029 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.029 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.029 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.029 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.029 12 ERROR oslo_messaging.notify.messaging Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.030 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.030 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.031 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'bf351536-999b-493e-94a5-13a7003370a8', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:03:08.030330', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': 'c3c57cbe-d28a-11f0-aaf2-fa163e118844', 'monotonic_time': 11806.215378872, 'message_signature': '24da6cb5bc9014753a40c211f7ba99ddfbb1d7c54c641db5e15820a4b076b462'}]}, 'timestamp': '2025-12-06 10:03:08.030660', '_unique_id': 'deb04f4357ab4e5cbdcb6ca0f064ef9a'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.031 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:03:08.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.031 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.031 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.031 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.031 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.031 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.031 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.031 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.031 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.031 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.031 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.031 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.031 12 ERROR oslo_messaging.notify.messaging Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.031 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.031 12 ERROR oslo_messaging.notify.messaging Dec 6 05:03:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.031 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.031 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.031 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.031 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.031 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 
05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.031 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.031 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.031 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.031 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.031 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:03:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.031 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.031 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.031 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.031 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.031 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.031 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.031 12 ERROR oslo_messaging.notify.messaging Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:03:08.032 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.032 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.032 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.outgoing.bytes volume: 9770 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.033 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'ac044595-a738-452a-8d31-55734b8513c4', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 9770, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:03:08.032193', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 
'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': 'c3c5c50c-d28a-11f0-aaf2-fa163e118844', 'monotonic_time': 11806.215378872, 'message_signature': '81dcc5ef76a517226a749719117ab46b95c757da7b30057f6a3427e22923ff3e'}]}, 'timestamp': '2025-12-06 10:03:08.032482', '_unique_id': 'cd277826f3ae46c89336627b10d18813'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.033 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.033 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.033 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.033 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.033 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.033 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.033 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.033 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.033 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.033 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.033 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.033 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.033 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.033 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.033 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.033 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.033 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.033 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 
6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.033 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.033 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.033 12 ERROR oslo_messaging.notify.messaging Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.033 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.033 12 ERROR oslo_messaging.notify.messaging Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.033 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.033 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.033 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.033 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.033 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.033 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:03:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.033 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.033 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.033 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.033 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.033 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.033 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.033 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.033 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.033 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.033 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:03:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.033 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.033 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.033 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.033 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.033 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.033 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.033 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.033 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.033 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.033 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.033 12 ERROR oslo_messaging.notify.messaging 
self.gen.throw(typ, value, traceback) Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.033 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.033 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.033 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.033 12 ERROR oslo_messaging.notify.messaging Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.033 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.033 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.034 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.034 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '509229f2-452d-4de3-92c8-aaf7c1b5b192', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:03:08.034008', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': 'c3c60bfc-d28a-11f0-aaf2-fa163e118844', 'monotonic_time': 11806.215378872, 'message_signature': '220da469df5371a0982ddd749b2c7c158d1815131859cc037fdb312722df66b4'}]}, 'timestamp': '2025-12-06 10:03:08.034296', '_unique_id': 'a7125c823c0c41ac9cfd46c275b624a2'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.034 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:03:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.034 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.034 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.034 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.034 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.034 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.034 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.034 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.034 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.034 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.034 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.034 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.034 12 ERROR oslo_messaging.notify.messaging Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.034 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.034 12 ERROR oslo_messaging.notify.messaging Dec 6 05:03:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.034 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.034 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.034 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.034 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.034 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 
05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.034 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.034 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.034 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.034 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.034 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:03:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.034 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.034 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.034 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.034 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.034 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.034 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.034 12 ERROR oslo_messaging.notify.messaging Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:03:08.035 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.035 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.036 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '9124f985-a944-4423-90c7-4bfaf8fcfb28', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:03:08.035646', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': 
{'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': 'c3c64e6e-d28a-11f0-aaf2-fa163e118844', 'monotonic_time': 11806.215378872, 'message_signature': '27a54efb36ef9767c0de17c31f631ff0b971a88e94ecd4a72723112bbe8f046d'}]}, 'timestamp': '2025-12-06 10:03:08.036001', '_unique_id': '154c0a29da6946a98082d0535517dc2f'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.036 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.036 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.036 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.036 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.036 12 ERROR 
oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.036 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.036 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.036 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.036 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.036 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 
10:03:08.036 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.036 12 ERROR oslo_messaging.notify.messaging Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.036 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.036 12 ERROR oslo_messaging.notify.messaging Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.036 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.036 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.036 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.036 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 
10:03:08.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.036 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.036 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.036 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.036 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.036 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:03:08.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.036 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.036 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.036 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.036 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.036 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 
450, in _reraise_as_library_errors Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.036 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.036 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:03:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:03:08.036 12 ERROR oslo_messaging.notify.messaging Dec 6 05:03:08 localhost nova_compute[282193]: 2025-12-06 10:03:08.395 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:03:09 localhost nova_compute[282193]: 2025-12-06 10:03:09.696 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:03:10 localhost sshd[287961]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:03:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999. Dec 6 05:03:11 localhost systemd[1]: tmp-crun.It7lNz.mount: Deactivated successfully. 
Dec 6 05:03:11 localhost podman[287979]: 2025-12-06 10:03:11.556925755 +0000 UTC m=+0.085736979 container health_status 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:03:11 localhost podman[287979]: 2025-12-06 10:03:11.562671887 +0000 UTC 
m=+0.091483181 container exec_died 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible) Dec 6 05:03:11 localhost systemd[1]: 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.service: Deactivated successfully. 
Dec 6 05:03:11 localhost sshd[288001]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:03:12 localhost sshd[288021]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:03:13 localhost nova_compute[282193]: 2025-12-06 10:03:13.435 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:03:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941. Dec 6 05:03:13 localhost podman[288023]: 2025-12-06 10:03:13.834123876 +0000 UTC m=+0.071867599 container health_status b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Dec 6 05:03:13 localhost podman[288023]: 2025-12-06 10:03:13.840352383 +0000 UTC m=+0.078096086 container exec_died b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 
'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Dec 6 05:03:13 localhost systemd[1]: b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941.service: Deactivated successfully. Dec 6 05:03:14 localhost sshd[288047]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:03:14 localhost nova_compute[282193]: 2025-12-06 10:03:14.699 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:03:15 localhost sshd[288049]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:03:16 localhost openstack_network_exporter[243110]: ERROR 10:03:16 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 6 05:03:16 localhost openstack_network_exporter[243110]: ERROR 10:03:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:03:16 localhost openstack_network_exporter[243110]: ERROR 10:03:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:03:16 localhost openstack_network_exporter[243110]: ERROR 10:03:16 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 6 05:03:16 localhost openstack_network_exporter[243110]: Dec 6 05:03:16 localhost openstack_network_exporter[243110]: ERROR 
10:03:16 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 6 05:03:16 localhost openstack_network_exporter[243110]: Dec 6 05:03:18 localhost nova_compute[282193]: 2025-12-06 10:03:18.438 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:03:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e. Dec 6 05:03:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094. Dec 6 05:03:18 localhost podman[288052]: 2025-12-06 10:03:18.944896823 +0000 UTC m=+0.092542194 container health_status 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, distribution-scope=public, vcs-type=git, version=9.6, name=ubi9-minimal, config_id=edpm, io.openshift.expose-services=) Dec 6 05:03:19 localhost podman[288053]: 2025-12-06 10:03:19.000875497 +0000 UTC m=+0.147194856 container health_status bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Dec 6 05:03:19 localhost podman[288052]: 2025-12-06 10:03:19.017513305 +0000 UTC m=+0.165158706 container exec_died 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, distribution-scope=public, name=ubi9-minimal, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, architecture=x86_64, io.openshift.expose-services=, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, io.openshift.tags=minimal rhel9, vcs-type=git) Dec 6 05:03:19 localhost systemd[1]: 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e.service: Deactivated successfully. 
Dec 6 05:03:19 localhost podman[288053]: 2025-12-06 10:03:19.042261149 +0000 UTC m=+0.188580548 container exec_died bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:03:19 localhost systemd[1]: bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094.service: Deactivated successfully. 
Dec 6 05:03:19 localhost nova_compute[282193]: 2025-12-06 10:03:19.181 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:03:19 localhost nova_compute[282193]: 2025-12-06 10:03:19.181 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m Dec 6 05:03:19 localhost nova_compute[282193]: 2025-12-06 10:03:19.203 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m Dec 6 05:03:19 localhost nova_compute[282193]: 2025-12-06 10:03:19.203 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:03:19 localhost nova_compute[282193]: 2025-12-06 10:03:19.203 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m Dec 6 05:03:19 localhost nova_compute[282193]: 2025-12-06 10:03:19.220 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:03:19 localhost nova_compute[282193]: 2025-12-06 10:03:19.703 282197 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:03:20 localhost nova_compute[282193]: 2025-12-06 10:03:20.230 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:03:20 localhost sshd[288092]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:03:22 localhost nova_compute[282193]: 2025-12-06 10:03:22.182 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:03:22 localhost nova_compute[282193]: 2025-12-06 10:03:22.183 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:03:22 localhost nova_compute[282193]: 2025-12-06 10:03:22.213 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 05:03:22 localhost nova_compute[282193]: 2025-12-06 10:03:22.213 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 05:03:22 localhost nova_compute[282193]: 2025-12-06 10:03:22.213 
282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 05:03:22 localhost nova_compute[282193]: 2025-12-06 10:03:22.214 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Auditing locally available compute resources for np0005548789.localdomain (node: np0005548789.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 6 05:03:22 localhost nova_compute[282193]: 2025-12-06 10:03:22.214 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 6 05:03:22 localhost nova_compute[282193]: 2025-12-06 10:03:22.681 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 6 05:03:22 localhost nova_compute[282193]: 2025-12-06 10:03:22.761 282197 DEBUG nova.virt.libvirt.driver [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 6 05:03:22 localhost nova_compute[282193]: 2025-12-06 10:03:22.762 282197 DEBUG nova.virt.libvirt.driver [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] skipping disk for instance-00000002 as it does not have a path 
_get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 6 05:03:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6. Dec 6 05:03:22 localhost podman[288116]: 2025-12-06 10:03:22.945727472 +0000 UTC m=+0.104497363 container health_status 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=multipathd, 
org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125) Dec 6 05:03:22 localhost podman[288116]: 2025-12-06 10:03:22.988316801 +0000 UTC m=+0.147086642 container exec_died 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:03:23 localhost systemd[1]: 
44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6.service: Deactivated successfully. Dec 6 05:03:23 localhost nova_compute[282193]: 2025-12-06 10:03:23.013 282197 WARNING nova.virt.libvirt.driver [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 6 05:03:23 localhost nova_compute[282193]: 2025-12-06 10:03:23.014 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Hypervisor/Node resource view: name=np0005548789.localdomain free_ram=11988MB free_disk=41.83699035644531GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", 
"vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 6 05:03:23 localhost nova_compute[282193]: 2025-12-06 10:03:23.014 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 05:03:23 localhost nova_compute[282193]: 2025-12-06 10:03:23.014 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 05:03:23 localhost nova_compute[282193]: 2025-12-06 10:03:23.209 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Instance b7ed0a2e-9350-4933-9334-4e5e08d3e6aa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 6 05:03:23 localhost nova_compute[282193]: 2025-12-06 10:03:23.210 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 6 05:03:23 localhost nova_compute[282193]: 2025-12-06 10:03:23.211 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Final resource view: name=np0005548789.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 6 05:03:23 localhost nova_compute[282193]: 2025-12-06 10:03:23.310 282197 DEBUG nova.scheduler.client.report [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Refreshing inventories for resource provider 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Dec 6 05:03:23 localhost nova_compute[282193]: 2025-12-06 10:03:23.454 282197 DEBUG nova.scheduler.client.report [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Updating ProviderTree inventory for provider 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Dec 6 05:03:23 
localhost nova_compute[282193]: 2025-12-06 10:03:23.455 282197 DEBUG nova.compute.provider_tree [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Updating inventory in ProviderTree for provider 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Dec 6 05:03:23 localhost nova_compute[282193]: 2025-12-06 10:03:23.472 282197 DEBUG nova.scheduler.client.report [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Refreshing aggregate associations for resource provider 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Dec 6 05:03:23 localhost nova_compute[282193]: 2025-12-06 10:03:23.478 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:03:23 localhost nova_compute[282193]: 2025-12-06 10:03:23.492 282197 DEBUG nova.scheduler.client.report [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Refreshing trait associations for resource provider 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad, traits: 
HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_FDC,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_RESCUE_BFV,HW_CPU_X86_AVX2,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SHA,HW_CPU_X86_BMI2,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_CLMUL,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSSE3,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_AVX,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE4A,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AMD_SVM,HW_CPU_X86_FMA3,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_F16C,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_ABM,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_VIRTIO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Dec 6 05:03:23 localhost nova_compute[282193]: 2025-12-06 10:03:23.534 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 6 05:03:23 localhost podman[241090]: time="2025-12-06T10:03:23Z" level=info msg="List containers: received `last` parameter - 
overwriting `limit`" Dec 6 05:03:23 localhost podman[241090]: @ - - [06/Dec/2025:10:03:23 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 151703 "" "Go-http-client/1.1" Dec 6 05:03:23 localhost podman[241090]: @ - - [06/Dec/2025:10:03:23 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18234 "" "Go-http-client/1.1" Dec 6 05:03:24 localhost nova_compute[282193]: 2025-12-06 10:03:24.032 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 6 05:03:24 localhost nova_compute[282193]: 2025-12-06 10:03:24.039 282197 DEBUG nova.compute.provider_tree [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Inventory has not changed in ProviderTree for provider: 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 6 05:03:24 localhost nova_compute[282193]: 2025-12-06 10:03:24.059 282197 DEBUG nova.scheduler.client.report [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Inventory has not changed for provider 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 6 05:03:24 localhost nova_compute[282193]: 2025-12-06 10:03:24.061 282197 DEBUG nova.compute.resource_tracker [None 
req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Compute_service record updated for np0005548789.localdomain:np0005548789.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 6 05:03:24 localhost nova_compute[282193]: 2025-12-06 10:03:24.062 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.047s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 05:03:24 localhost nova_compute[282193]: 2025-12-06 10:03:24.730 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:03:25 localhost nova_compute[282193]: 2025-12-06 10:03:25.062 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:03:25 localhost nova_compute[282193]: 2025-12-06 10:03:25.063 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:03:25 localhost nova_compute[282193]: 2025-12-06 10:03:25.180 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:03:25 localhost nova_compute[282193]: 2025-12-06 10:03:25.181 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Starting heal 
instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 6 05:03:25 localhost nova_compute[282193]: 2025-12-06 10:03:25.181 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 6 05:03:25 localhost nova_compute[282193]: 2025-12-06 10:03:25.613 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 6 05:03:25 localhost nova_compute[282193]: 2025-12-06 10:03:25.614 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquired lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 6 05:03:25 localhost nova_compute[282193]: 2025-12-06 10:03:25.614 282197 DEBUG nova.network.neutron [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 6 05:03:25 localhost nova_compute[282193]: 2025-12-06 10:03:25.615 282197 DEBUG nova.objects.instance [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b7ed0a2e-9350-4933-9334-4e5e08d3e6aa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 6 05:03:26 localhost nova_compute[282193]: 2025-12-06 10:03:26.054 282197 DEBUG nova.network.neutron [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Updating instance_info_cache with 
network_info: [{"id": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "address": "fa:16:3e:64:77:f3", "network": {"id": "652b6bdc-40ce-45b7-8aa5-3bca79987993", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.162", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "3d603431c0bb4967bafc7a0aa6108bfe", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86fc0b7a-fb", "ovs_interfaceid": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 6 05:03:26 localhost nova_compute[282193]: 2025-12-06 10:03:26.080 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Releasing lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 6 05:03:26 localhost nova_compute[282193]: 2025-12-06 10:03:26.081 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 6 05:03:26 localhost nova_compute[282193]: 2025-12-06 10:03:26.081 282197 DEBUG oslo_service.periodic_task 
[None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:03:26 localhost nova_compute[282193]: 2025-12-06 10:03:26.082 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:03:26 localhost nova_compute[282193]: 2025-12-06 10:03:26.082 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:03:26 localhost nova_compute[282193]: 2025-12-06 10:03:26.083 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 6 05:03:28 localhost nova_compute[282193]: 2025-12-06 10:03:28.510 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:03:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538. 
Dec 6 05:03:28 localhost podman[288156]: 2025-12-06 10:03:28.920849963 +0000 UTC m=+0.077468397 container health_status d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 6 05:03:28 localhost podman[288156]: 2025-12-06 10:03:28.929337681 +0000 UTC m=+0.085956115 container exec_died d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 
'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 6 05:03:28 localhost systemd[1]: d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538.service: Deactivated successfully. Dec 6 05:03:28 localhost sshd[288179]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:03:29 localhost nova_compute[282193]: 2025-12-06 10:03:29.766 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:03:33 localhost nova_compute[282193]: 2025-12-06 10:03:33.563 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:03:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5. 
Dec 6 05:03:34 localhost podman[288256]: 2025-12-06 10:03:34.364568662 +0000 UTC m=+0.095253400 container health_status 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Dec 6 05:03:34 localhost podman[288256]: 2025-12-06 10:03:34.414263967 +0000 UTC m=+0.144948765 container exec_died 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS) Dec 6 05:03:34 localhost systemd[1]: 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.service: Deactivated successfully. 
Dec 6 05:03:34 localhost nova_compute[282193]: 2025-12-06 10:03:34.808 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:03:38 localhost nova_compute[282193]: 2025-12-06 10:03:38.566 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:03:39 localhost nova_compute[282193]: 2025-12-06 10:03:39.811 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:03:40 localhost podman[288395]: Dec 6 05:03:40 localhost podman[288395]: 2025-12-06 10:03:40.336700177 +0000 UTC m=+0.081378350 container create 78ec4e6836e7745e82f90135969cf07095d81ccaa2e8338ecaa65bbec5056ef6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nostalgic_lederberg, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, architecture=x86_64, GIT_CLEAN=True, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, name=rhceph, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , GIT_BRANCH=main, release=1763362218, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat 
Ceph Storage 7) Dec 6 05:03:40 localhost systemd[1]: Started libpod-conmon-78ec4e6836e7745e82f90135969cf07095d81ccaa2e8338ecaa65bbec5056ef6.scope. Dec 6 05:03:40 localhost podman[288395]: 2025-12-06 10:03:40.302276586 +0000 UTC m=+0.046954779 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 6 05:03:40 localhost systemd[1]: Started libcrun container. Dec 6 05:03:40 localhost podman[288395]: 2025-12-06 10:03:40.4355586 +0000 UTC m=+0.180236743 container init 78ec4e6836e7745e82f90135969cf07095d81ccaa2e8338ecaa65bbec5056ef6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nostalgic_lederberg, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., distribution-scope=public, GIT_BRANCH=main, vcs-type=git, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, maintainer=Guillaume Abrioux , name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7) Dec 6 05:03:40 localhost podman[288395]: 2025-12-06 10:03:40.447734257 +0000 UTC m=+0.192412410 container start 78ec4e6836e7745e82f90135969cf07095d81ccaa2e8338ecaa65bbec5056ef6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nostalgic_lederberg, build-date=2025-11-26T19:44:28Z, 
maintainer=Guillaume Abrioux , vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, RELEASE=main, architecture=x86_64, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, release=1763362218, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public) Dec 6 05:03:40 localhost podman[288395]: 2025-12-06 10:03:40.448078347 +0000 UTC m=+0.192756690 container attach 78ec4e6836e7745e82f90135969cf07095d81ccaa2e8338ecaa65bbec5056ef6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nostalgic_lederberg, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, ceph=True, architecture=x86_64, distribution-scope=public, name=rhceph, RELEASE=main, 
org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers) Dec 6 05:03:40 localhost nostalgic_lederberg[288410]: 167 167 Dec 6 05:03:40 localhost systemd[1]: libpod-78ec4e6836e7745e82f90135969cf07095d81ccaa2e8338ecaa65bbec5056ef6.scope: Deactivated successfully. Dec 6 05:03:40 localhost podman[288395]: 2025-12-06 10:03:40.453420767 +0000 UTC m=+0.198098970 container died 78ec4e6836e7745e82f90135969cf07095d81ccaa2e8338ecaa65bbec5056ef6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nostalgic_lederberg, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, architecture=x86_64, name=rhceph, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, com.redhat.component=rhceph-container, GIT_CLEAN=True, RELEASE=main, build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc.) 
Dec 6 05:03:40 localhost podman[288415]: 2025-12-06 10:03:40.566832741 +0000 UTC m=+0.098783092 container remove 78ec4e6836e7745e82f90135969cf07095d81ccaa2e8338ecaa65bbec5056ef6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nostalgic_lederberg, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, io.openshift.expose-services=, com.redhat.component=rhceph-container, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, GIT_BRANCH=main, CEPH_POINT_RELEASE=, RELEASE=main, io.buildah.version=1.41.4, name=rhceph, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, version=7, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 6 05:03:40 localhost systemd[1]: libpod-conmon-78ec4e6836e7745e82f90135969cf07095d81ccaa2e8338ecaa65bbec5056ef6.scope: Deactivated successfully. Dec 6 05:03:40 localhost systemd[1]: Reloading. Dec 6 05:03:40 localhost systemd-rc-local-generator[288461]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 6 05:03:40 localhost systemd-sysv-generator[288464]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Dec 6 05:03:40 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 05:03:40 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 6 05:03:40 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 05:03:40 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 05:03:40 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 6 05:03:40 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 6 05:03:40 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 05:03:40 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 05:03:40 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 05:03:40 localhost systemd[1]: var-lib-containers-storage-overlay-eecc5831ee4c8a69142491310ecb4219d97aece2676b0501a63112feb2a6e873-merged.mount: Deactivated successfully. Dec 6 05:03:41 localhost systemd[1]: Reloading. Dec 6 05:03:41 localhost systemd-rc-local-generator[288499]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 6 05:03:41 localhost systemd-sysv-generator[288503]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Dec 6 05:03:41 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 05:03:41 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 6 05:03:41 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 05:03:41 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 05:03:41 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 6 05:03:41 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 6 05:03:41 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 05:03:41 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 05:03:41 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 05:03:41 localhost systemd[1]: Starting Ceph mgr.np0005548789.mzhmje for 1939e851-b10c-5c3b-9bb7-8e7f380233e8... 
Dec 6 05:03:41 localhost podman[288563]: Dec 6 05:03:41 localhost podman[288563]: 2025-12-06 10:03:41.744537887 +0000 UTC m=+0.078929734 container create a5642346d3b7fe4cd6e5bbe87414b688ab8475bffb00a03636a048ebe7ffdc6d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548789-mzhmje, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, release=1763362218, distribution-scope=public, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, vcs-type=git, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, ceph=True, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, RELEASE=main, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, architecture=x86_64) Dec 6 05:03:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999. 
Dec 6 05:03:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abec372076723da270ee2433cd9138ae521a99d2e3c2e8c0743dbe456147a8f0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Dec 6 05:03:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abec372076723da270ee2433cd9138ae521a99d2e3c2e8c0743dbe456147a8f0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Dec 6 05:03:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abec372076723da270ee2433cd9138ae521a99d2e3c2e8c0743dbe456147a8f0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Dec 6 05:03:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abec372076723da270ee2433cd9138ae521a99d2e3c2e8c0743dbe456147a8f0/merged/var/lib/ceph/mgr/ceph-np0005548789.mzhmje supports timestamps until 2038 (0x7fffffff) Dec 6 05:03:41 localhost podman[288563]: 2025-12-06 10:03:41.711905992 +0000 UTC m=+0.046297869 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 6 05:03:41 localhost podman[288563]: 2025-12-06 10:03:41.815240827 +0000 UTC m=+0.149632684 container init a5642346d3b7fe4cd6e5bbe87414b688ab8475bffb00a03636a048ebe7ffdc6d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548789-mzhmje, GIT_CLEAN=True, RELEASE=main, io.openshift.expose-services=, com.redhat.component=rhceph-container, vcs-type=git, architecture=x86_64, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, summary=Provides the latest Red 
Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7) Dec 6 05:03:41 localhost podman[288563]: 2025-12-06 10:03:41.830672036 +0000 UTC m=+0.165063883 container start a5642346d3b7fe4cd6e5bbe87414b688ab8475bffb00a03636a048ebe7ffdc6d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548789-mzhmje, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, GIT_BRANCH=main, architecture=x86_64, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, ceph=True, GIT_CLEAN=True, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, RELEASE=main, version=7, release=1763362218, io.openshift.expose-services=, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z) Dec 6 05:03:41 localhost bash[288563]: 
a5642346d3b7fe4cd6e5bbe87414b688ab8475bffb00a03636a048ebe7ffdc6d Dec 6 05:03:41 localhost systemd[1]: Started Ceph mgr.np0005548789.mzhmje for 1939e851-b10c-5c3b-9bb7-8e7f380233e8. Dec 6 05:03:41 localhost ceph-mgr[288591]: set uid:gid to 167:167 (ceph:ceph) Dec 6 05:03:41 localhost ceph-mgr[288591]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-mgr, pid 2 Dec 6 05:03:41 localhost ceph-mgr[288591]: pidfile_write: ignore empty --pid-file Dec 6 05:03:41 localhost podman[288577]: 2025-12-06 10:03:41.896956107 +0000 UTC m=+0.112975351 container health_status 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Dec 6 05:03:41 localhost ceph-mgr[288591]: mgr[py] Loading python module 'alerts' Dec 6 05:03:41 localhost podman[288577]: 2025-12-06 10:03:41.926479703 +0000 UTC m=+0.142498957 container exec_died 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true) Dec 6 05:03:41 localhost systemd[1]: 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.service: Deactivated successfully. Dec 6 05:03:41 localhost ceph-mgr[288591]: mgr[py] Module alerts has missing NOTIFY_TYPES member Dec 6 05:03:41 localhost ceph-mgr[288591]: mgr[py] Loading python module 'balancer' Dec 6 05:03:42 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548789-mzhmje[288579]: 2025-12-06T10:03:41.997+0000 7f047f28a140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member Dec 6 05:03:42 localhost ceph-mgr[288591]: mgr[py] Module balancer has missing NOTIFY_TYPES member Dec 6 05:03:42 localhost ceph-mgr[288591]: mgr[py] Loading python module 'cephadm' Dec 6 05:03:42 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548789-mzhmje[288579]: 2025-12-06T10:03:42.065+0000 7f047f28a140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member Dec 6 05:03:42 localhost systemd[1]: tmp-crun.ndY5AH.mount: Deactivated successfully. 
Dec 6 05:03:42 localhost ceph-mgr[288591]: mgr[py] Loading python module 'crash' Dec 6 05:03:42 localhost ceph-mgr[288591]: mgr[py] Module crash has missing NOTIFY_TYPES member Dec 6 05:03:42 localhost ceph-mgr[288591]: mgr[py] Loading python module 'dashboard' Dec 6 05:03:42 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548789-mzhmje[288579]: 2025-12-06T10:03:42.856+0000 7f047f28a140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member Dec 6 05:03:43 localhost ceph-mgr[288591]: mgr[py] Loading python module 'devicehealth' Dec 6 05:03:43 localhost ceph-mgr[288591]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member Dec 6 05:03:43 localhost ceph-mgr[288591]: mgr[py] Loading python module 'diskprediction_local' Dec 6 05:03:43 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548789-mzhmje[288579]: 2025-12-06T10:03:43.421+0000 7f047f28a140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member Dec 6 05:03:43 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548789-mzhmje[288579]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. Dec 6 05:03:43 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548789-mzhmje[288579]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 
Dec 6 05:03:43 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548789-mzhmje[288579]: from numpy import show_config as show_numpy_config Dec 6 05:03:43 localhost ceph-mgr[288591]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member Dec 6 05:03:43 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548789-mzhmje[288579]: 2025-12-06T10:03:43.563+0000 7f047f28a140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member Dec 6 05:03:43 localhost ceph-mgr[288591]: mgr[py] Loading python module 'influx' Dec 6 05:03:43 localhost nova_compute[282193]: 2025-12-06 10:03:43.615 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:03:43 localhost ceph-mgr[288591]: mgr[py] Module influx has missing NOTIFY_TYPES member Dec 6 05:03:43 localhost ceph-mgr[288591]: mgr[py] Loading python module 'insights' Dec 6 05:03:43 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548789-mzhmje[288579]: 2025-12-06T10:03:43.687+0000 7f047f28a140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member Dec 6 05:03:43 localhost ceph-mgr[288591]: mgr[py] Loading python module 'iostat' Dec 6 05:03:43 localhost ceph-mgr[288591]: mgr[py] Module iostat has missing NOTIFY_TYPES member Dec 6 05:03:43 localhost ceph-mgr[288591]: mgr[py] Loading python module 'k8sevents' Dec 6 05:03:43 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548789-mzhmje[288579]: 2025-12-06T10:03:43.810+0000 7f047f28a140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member Dec 6 05:03:44 localhost ceph-mgr[288591]: mgr[py] Loading python module 'localpool' Dec 6 05:03:44 localhost ceph-mgr[288591]: mgr[py] Loading python module 'mds_autoscaler' Dec 6 05:03:44 localhost ceph-mgr[288591]: mgr[py] Loading python module 'mirroring' Dec 6 05:03:44 localhost ceph-mgr[288591]: mgr[py] Loading python module 'nfs' Dec 6 05:03:44 localhost ceph-mgr[288591]: mgr[py] 
Module nfs has missing NOTIFY_TYPES member Dec 6 05:03:44 localhost ceph-mgr[288591]: mgr[py] Loading python module 'orchestrator' Dec 6 05:03:44 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548789-mzhmje[288579]: 2025-12-06T10:03:44.575+0000 7f047f28a140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member Dec 6 05:03:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941. Dec 6 05:03:44 localhost ceph-mgr[288591]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member Dec 6 05:03:44 localhost ceph-mgr[288591]: mgr[py] Loading python module 'osd_perf_query' Dec 6 05:03:44 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548789-mzhmje[288579]: 2025-12-06T10:03:44.729+0000 7f047f28a140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member Dec 6 05:03:44 localhost podman[288647]: 2025-12-06 10:03:44.780150984 +0000 UTC m=+0.080584745 container health_status b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Dec 6 05:03:44 localhost podman[288647]: 2025-12-06 
10:03:44.789096578 +0000 UTC m=+0.089530309 container exec_died b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 6 05:03:44 localhost systemd[1]: b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941.service: Deactivated successfully. 
Dec 6 05:03:44 localhost ceph-mgr[288591]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member Dec 6 05:03:44 localhost ceph-mgr[288591]: mgr[py] Loading python module 'osd_support' Dec 6 05:03:44 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548789-mzhmje[288579]: 2025-12-06T10:03:44.804+0000 7f047f28a140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member Dec 6 05:03:44 localhost nova_compute[282193]: 2025-12-06 10:03:44.815 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:03:44 localhost ceph-mgr[288591]: mgr[py] Module osd_support has missing NOTIFY_TYPES member Dec 6 05:03:44 localhost ceph-mgr[288591]: mgr[py] Loading python module 'pg_autoscaler' Dec 6 05:03:44 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548789-mzhmje[288579]: 2025-12-06T10:03:44.870+0000 7f047f28a140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member Dec 6 05:03:44 localhost ceph-mgr[288591]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member Dec 6 05:03:44 localhost ceph-mgr[288591]: mgr[py] Loading python module 'progress' Dec 6 05:03:44 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548789-mzhmje[288579]: 2025-12-06T10:03:44.939+0000 7f047f28a140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member Dec 6 05:03:45 localhost ceph-mgr[288591]: mgr[py] Module progress has missing NOTIFY_TYPES member Dec 6 05:03:45 localhost ceph-mgr[288591]: mgr[py] Loading python module 'prometheus' Dec 6 05:03:45 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548789-mzhmje[288579]: 2025-12-06T10:03:45.000+0000 7f047f28a140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member Dec 6 05:03:45 localhost ceph-mgr[288591]: mgr[py] Module prometheus has missing NOTIFY_TYPES member Dec 6 05:03:45 localhost ceph-mgr[288591]: mgr[py] Loading python module 'rbd_support' Dec 6 05:03:45 localhost 
ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548789-mzhmje[288579]: 2025-12-06T10:03:45.308+0000 7f047f28a140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member Dec 6 05:03:45 localhost ceph-mgr[288591]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member Dec 6 05:03:45 localhost ceph-mgr[288591]: mgr[py] Loading python module 'restful' Dec 6 05:03:45 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548789-mzhmje[288579]: 2025-12-06T10:03:45.393+0000 7f047f28a140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member Dec 6 05:03:45 localhost ceph-mgr[288591]: mgr[py] Loading python module 'rgw' Dec 6 05:03:45 localhost ceph-mgr[288591]: mgr[py] Module rgw has missing NOTIFY_TYPES member Dec 6 05:03:45 localhost ceph-mgr[288591]: mgr[py] Loading python module 'rook' Dec 6 05:03:45 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548789-mzhmje[288579]: 2025-12-06T10:03:45.735+0000 7f047f28a140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member Dec 6 05:03:45 localhost podman[288778]: 2025-12-06 10:03:45.744992924 +0000 UTC m=+0.101703785 container exec ecefb99ff092b207f760d60e8fd97152c624df0e1016549024435ae641ee2e06 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548789, vendor=Red Hat, Inc., io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, ceph=True, version=7, GIT_CLEAN=True, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, 
GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, name=rhceph, architecture=x86_64, io.openshift.expose-services=) Dec 6 05:03:45 localhost podman[288778]: 2025-12-06 10:03:45.821506449 +0000 UTC m=+0.178217310 container exec_died ecefb99ff092b207f760d60e8fd97152c624df0e1016549024435ae641ee2e06 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548789, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., RELEASE=main, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, CEPH_POINT_RELEASE=, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, release=1763362218, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4) Dec 6 05:03:46 localhost ceph-mgr[288591]: mgr[py] Module rook has missing NOTIFY_TYPES member Dec 6 05:03:46 localhost ceph-mgr[288591]: mgr[py] Loading python module 'selftest' Dec 6 05:03:46 
localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548789-mzhmje[288579]: 2025-12-06T10:03:46.231+0000 7f047f28a140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member Dec 6 05:03:46 localhost ceph-mgr[288591]: mgr[py] Module selftest has missing NOTIFY_TYPES member Dec 6 05:03:46 localhost ceph-mgr[288591]: mgr[py] Loading python module 'snap_schedule' Dec 6 05:03:46 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548789-mzhmje[288579]: 2025-12-06T10:03:46.298+0000 7f047f28a140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member Dec 6 05:03:46 localhost ceph-mgr[288591]: mgr[py] Loading python module 'stats' Dec 6 05:03:46 localhost ceph-mgr[288591]: mgr[py] Loading python module 'status' Dec 6 05:03:46 localhost ceph-mgr[288591]: mgr[py] Module status has missing NOTIFY_TYPES member Dec 6 05:03:46 localhost ceph-mgr[288591]: mgr[py] Loading python module 'telegraf' Dec 6 05:03:46 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548789-mzhmje[288579]: 2025-12-06T10:03:46.520+0000 7f047f28a140 -1 mgr[py] Module status has missing NOTIFY_TYPES member Dec 6 05:03:46 localhost ceph-mgr[288591]: mgr[py] Module telegraf has missing NOTIFY_TYPES member Dec 6 05:03:46 localhost ceph-mgr[288591]: mgr[py] Loading python module 'telemetry' Dec 6 05:03:46 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548789-mzhmje[288579]: 2025-12-06T10:03:46.583+0000 7f047f28a140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member Dec 6 05:03:46 localhost openstack_network_exporter[243110]: ERROR 10:03:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:03:46 localhost openstack_network_exporter[243110]: ERROR 10:03:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:03:46 localhost openstack_network_exporter[243110]: ERROR 10:03:46 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found 
for the ovs db server Dec 6 05:03:46 localhost openstack_network_exporter[243110]: ERROR 10:03:46 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 6 05:03:46 localhost openstack_network_exporter[243110]: Dec 6 05:03:46 localhost openstack_network_exporter[243110]: ERROR 10:03:46 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 6 05:03:46 localhost openstack_network_exporter[243110]: Dec 6 05:03:46 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548789-mzhmje[288579]: 2025-12-06T10:03:46.750+0000 7f047f28a140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member Dec 6 05:03:46 localhost ceph-mgr[288591]: mgr[py] Module telemetry has missing NOTIFY_TYPES member Dec 6 05:03:46 localhost ceph-mgr[288591]: mgr[py] Loading python module 'test_orchestrator' Dec 6 05:03:46 localhost ceph-mgr[288591]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member Dec 6 05:03:46 localhost ceph-mgr[288591]: mgr[py] Loading python module 'volumes' Dec 6 05:03:46 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548789-mzhmje[288579]: 2025-12-06T10:03:46.917+0000 7f047f28a140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member Dec 6 05:03:46 localhost sshd[288897]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:03:47 localhost ceph-mgr[288591]: mgr[py] Module volumes has missing NOTIFY_TYPES member Dec 6 05:03:47 localhost ceph-mgr[288591]: mgr[py] Loading python module 'zabbix' Dec 6 05:03:47 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548789-mzhmje[288579]: 2025-12-06T10:03:47.113+0000 7f047f28a140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member Dec 6 05:03:47 localhost ceph-mgr[288591]: mgr[py] Module zabbix has missing NOTIFY_TYPES member Dec 6 05:03:47 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548789-mzhmje[288579]: 2025-12-06T10:03:47.173+0000 7f047f28a140 -1 mgr[py] Module zabbix has missing 
NOTIFY_TYPES member Dec 6 05:03:47 localhost ceph-mgr[288591]: ms_deliver_dispatch: unhandled message 0x56140ed131e0 mon_map magic: 0 from mon.2 v2:172.18.0.104:3300/0 Dec 6 05:03:47 localhost ceph-mgr[288591]: client.0 ms_handle_reset on v2:172.18.0.103:6800/3108124117 Dec 6 05:03:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:03:47.293 160509 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 05:03:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:03:47.293 160509 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 05:03:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:03:47.294 160509 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 05:03:48 localhost nova_compute[282193]: 2025-12-06 10:03:48.663 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:03:49 localhost nova_compute[282193]: 2025-12-06 10:03:49.819 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:03:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e. Dec 6 05:03:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094. 
Dec 6 05:03:49 localhost podman[288936]: 2025-12-06 10:03:49.930725473 +0000 UTC m=+0.083869069 container health_status bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, managed_by=edpm_ansible, config_id=edpm, io.buildah.version=1.41.3, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0) Dec 6 05:03:49 localhost podman[288936]: 2025-12-06 10:03:49.944555521 +0000 UTC m=+0.097699097 container exec_died 
bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0) Dec 6 05:03:49 localhost systemd[1]: bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094.service: Deactivated successfully. 
Dec 6 05:03:50 localhost podman[288935]: 2025-12-06 10:03:50.03633739 +0000 UTC m=+0.189690493 container health_status 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.openshift.expose-services=, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, version=9.6, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, name=ubi9-minimal, vendor=Red Hat, Inc., distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, com.redhat.component=ubi9-minimal-container, config_id=edpm, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9) Dec 6 05:03:50 localhost podman[288935]: 2025-12-06 10:03:50.057205181 +0000 UTC m=+0.210558304 container exec_died 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_id=edpm, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., release=1755695350, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, managed_by=edpm_ansible, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly.) Dec 6 05:03:50 localhost systemd[1]: 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e.service: Deactivated successfully. Dec 6 05:03:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6. Dec 6 05:03:53 localhost podman[289010]: 2025-12-06 10:03:53.208824746 +0000 UTC m=+0.092346058 container health_status 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Dec 6 05:03:53 localhost podman[289010]: 2025-12-06 10:03:53.252890732 +0000 UTC m=+0.136412034 container exec_died 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', 
'/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Dec 6 05:03:53 localhost systemd[1]: 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6.service: Deactivated successfully. Dec 6 05:03:53 localhost nova_compute[282193]: 2025-12-06 10:03:53.707 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:03:53 localhost podman[241090]: time="2025-12-06T10:03:53Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 6 05:03:53 localhost podman[241090]: @ - - [06/Dec/2025:10:03:53 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 153839 "" "Go-http-client/1.1" Dec 6 05:03:53 localhost podman[241090]: @ - - [06/Dec/2025:10:03:53 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18721 "" "Go-http-client/1.1" Dec 6 05:03:54 localhost nova_compute[282193]: 2025-12-06 10:03:54.821 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:03:58 localhost nova_compute[282193]: 2025-12-06 10:03:58.761 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:03:59 localhost sshd[289667]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:03:59 localhost nova_compute[282193]: 2025-12-06 10:03:59.853 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:03:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538. Dec 6 05:03:59 localhost systemd[1]: tmp-crun.HSgR5V.mount: Deactivated successfully. 
Dec 6 05:03:59 localhost podman[289669]: 2025-12-06 10:03:59.958018188 +0000 UTC m=+0.089931832 container health_status d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 6 05:03:59 localhost podman[289669]: 2025-12-06 10:03:59.969211772 +0000 UTC m=+0.101125446 container exec_died d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 
'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 6 05:03:59 localhost systemd[1]: d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538.service: Deactivated successfully. 
Dec 6 05:04:02 localhost ceph-mgr[288591]: ms_deliver_dispatch: unhandled message 0x56140ed131e0 mon_map magic: 0 from mon.2 v2:172.18.0.104:3300/0 Dec 6 05:04:02 localhost podman[289770]: Dec 6 05:04:02 localhost podman[289770]: 2025-12-06 10:04:02.748072704 +0000 UTC m=+0.066257501 container create c0bc25dc24ef253950e6a7ebf32699ad905657f0b31ad1ed7e7973162177d4f2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=determined_cohen, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, vcs-type=git, description=Red Hat Ceph Storage 7, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, io.openshift.expose-services=) Dec 6 05:04:02 localhost systemd[1]: Started libpod-conmon-c0bc25dc24ef253950e6a7ebf32699ad905657f0b31ad1ed7e7973162177d4f2.scope. Dec 6 05:04:02 localhost systemd[1]: Started libcrun container. 
Dec 6 05:04:02 localhost podman[289770]: 2025-12-06 10:04:02.710785703 +0000 UTC m=+0.028970520 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 6 05:04:02 localhost podman[289770]: 2025-12-06 10:04:02.816940647 +0000 UTC m=+0.135125444 container init c0bc25dc24ef253950e6a7ebf32699ad905657f0b31ad1ed7e7973162177d4f2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=determined_cohen, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_BRANCH=main, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., distribution-scope=public, ceph=True, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux ) Dec 6 05:04:02 localhost systemd[1]: tmp-crun.YgimfA.mount: Deactivated successfully. 
Dec 6 05:04:02 localhost podman[289770]: 2025-12-06 10:04:02.831847409 +0000 UTC m=+0.150032216 container start c0bc25dc24ef253950e6a7ebf32699ad905657f0b31ad1ed7e7973162177d4f2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=determined_cohen, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, io.buildah.version=1.41.4, architecture=x86_64, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public) Dec 6 05:04:02 localhost podman[289770]: 2025-12-06 10:04:02.832103827 +0000 UTC m=+0.150288625 container attach c0bc25dc24ef253950e6a7ebf32699ad905657f0b31ad1ed7e7973162177d4f2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=determined_cohen, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , distribution-scope=public, ceph=True, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, vendor=Red Hat, Inc., 
org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, name=rhceph, com.redhat.component=rhceph-container, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7) Dec 6 05:04:02 localhost determined_cohen[289785]: 167 167 Dec 6 05:04:02 localhost systemd[1]: libpod-c0bc25dc24ef253950e6a7ebf32699ad905657f0b31ad1ed7e7973162177d4f2.scope: Deactivated successfully. Dec 6 05:04:02 localhost podman[289770]: 2025-12-06 10:04:02.837245401 +0000 UTC m=+0.155430198 container died c0bc25dc24ef253950e6a7ebf32699ad905657f0b31ad1ed7e7973162177d4f2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=determined_cohen, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, ceph=True, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, architecture=x86_64, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, CEPH_POINT_RELEASE=, vcs-type=git, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., distribution-scope=public) Dec 6 05:04:02 localhost podman[289790]: 2025-12-06 10:04:02.921753079 +0000 UTC m=+0.077119345 container remove c0bc25dc24ef253950e6a7ebf32699ad905657f0b31ad1ed7e7973162177d4f2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=determined_cohen, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, GIT_BRANCH=main, release=1763362218, io.openshift.expose-services=, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, RELEASE=main, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, ceph=True, GIT_CLEAN=True) Dec 6 05:04:02 localhost systemd[1]: libpod-conmon-c0bc25dc24ef253950e6a7ebf32699ad905657f0b31ad1ed7e7973162177d4f2.scope: Deactivated successfully. 
Dec 6 05:04:03 localhost podman[289807]: Dec 6 05:04:03 localhost podman[289807]: 2025-12-06 10:04:03.051888894 +0000 UTC m=+0.089843420 container create f9763bd91c3dcd6ba1263087541332bac1010aafc65607c6776265c92137da31 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=compassionate_fermi, io.openshift.tags=rhceph ceph, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux , GIT_BRANCH=main, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, name=rhceph, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers) Dec 6 05:04:03 localhost systemd[1]: Started libpod-conmon-f9763bd91c3dcd6ba1263087541332bac1010aafc65607c6776265c92137da31.scope. Dec 6 05:04:03 localhost systemd[1]: Started libcrun container. 
Dec 6 05:04:03 localhost podman[289807]: 2025-12-06 10:04:03.014280341 +0000 UTC m=+0.052234897 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 6 05:04:03 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb20ee5941d595a49a700bf299e37984ecbc14e42076b5a9d79ac3c38c286077/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff) Dec 6 05:04:03 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb20ee5941d595a49a700bf299e37984ecbc14e42076b5a9d79ac3c38c286077/merged/tmp/config supports timestamps until 2038 (0x7fffffff) Dec 6 05:04:03 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb20ee5941d595a49a700bf299e37984ecbc14e42076b5a9d79ac3c38c286077/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Dec 6 05:04:03 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb20ee5941d595a49a700bf299e37984ecbc14e42076b5a9d79ac3c38c286077/merged/var/lib/ceph/mon/ceph-np0005548789 supports timestamps until 2038 (0x7fffffff) Dec 6 05:04:03 localhost podman[289807]: 2025-12-06 10:04:03.125405054 +0000 UTC m=+0.163359580 container init f9763bd91c3dcd6ba1263087541332bac1010aafc65607c6776265c92137da31 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=compassionate_fermi, build-date=2025-11-26T19:44:28Z, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, vcs-type=git, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , RELEASE=main, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, name=rhceph, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on 
RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.buildah.version=1.41.4) Dec 6 05:04:03 localhost podman[289807]: 2025-12-06 10:04:03.138813809 +0000 UTC m=+0.176768335 container start f9763bd91c3dcd6ba1263087541332bac1010aafc65607c6776265c92137da31 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=compassionate_fermi, GIT_BRANCH=main, io.openshift.expose-services=, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, ceph=True, version=7, build-date=2025-11-26T19:44:28Z, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, release=1763362218, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, RELEASE=main, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git) Dec 6 05:04:03 localhost podman[289807]: 2025-12-06 10:04:03.139421738 +0000 UTC m=+0.177376264 container attach f9763bd91c3dcd6ba1263087541332bac1010aafc65607c6776265c92137da31 
(image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=compassionate_fermi, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, distribution-scope=public, CEPH_POINT_RELEASE=, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, GIT_CLEAN=True, release=1763362218, RELEASE=main, com.redhat.component=rhceph-container) Dec 6 05:04:03 localhost systemd[1]: libpod-f9763bd91c3dcd6ba1263087541332bac1010aafc65607c6776265c92137da31.scope: Deactivated successfully. 
Dec 6 05:04:03 localhost podman[289807]: 2025-12-06 10:04:03.229706169 +0000 UTC m=+0.267660715 container died f9763bd91c3dcd6ba1263087541332bac1010aafc65607c6776265c92137da31 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=compassionate_fermi, release=1763362218, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, RELEASE=main, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, vcs-type=git, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., io.openshift.expose-services=) Dec 6 05:04:03 localhost podman[289848]: 2025-12-06 10:04:03.320108444 +0000 UTC m=+0.081036470 container remove f9763bd91c3dcd6ba1263087541332bac1010aafc65607c6776265c92137da31 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=compassionate_fermi, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, 
GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, build-date=2025-11-26T19:44:28Z, ceph=True, vendor=Red Hat, Inc., version=7, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, description=Red Hat Ceph Storage 7) Dec 6 05:04:03 localhost systemd[1]: libpod-conmon-f9763bd91c3dcd6ba1263087541332bac1010aafc65607c6776265c92137da31.scope: Deactivated successfully. Dec 6 05:04:03 localhost systemd[1]: Reloading. Dec 6 05:04:03 localhost systemd-sysv-generator[289891]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 6 05:04:03 localhost systemd-rc-local-generator[289886]: /etc/rc.d/rc.local is not marked executable, skipping. 
Dec 6 05:04:03 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 05:04:03 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 6 05:04:03 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 05:04:03 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 05:04:03 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 6 05:04:03 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 6 05:04:03 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 05:04:03 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 05:04:03 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 05:04:03 localhost systemd[1]: var-lib-containers-storage-overlay-6a38f3cd14cdcfbedfc2d0daf615495e6a4e29fe311517b312a3c102feb8548d-merged.mount: Deactivated successfully. Dec 6 05:04:03 localhost systemd[1]: Reloading. Dec 6 05:04:03 localhost nova_compute[282193]: 2025-12-06 10:04:03.786 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:04:03 localhost systemd-sysv-generator[289935]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. 
Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 6 05:04:03 localhost systemd-rc-local-generator[289932]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 6 05:04:03 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 05:04:03 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 6 05:04:03 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 05:04:03 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 05:04:03 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 6 05:04:03 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 6 05:04:03 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 05:04:03 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 05:04:03 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 05:04:04 localhost systemd[1]: Starting Ceph mon.np0005548789 for 1939e851-b10c-5c3b-9bb7-8e7f380233e8... 
Dec 6 05:04:04 localhost podman[289992]: Dec 6 05:04:04 localhost podman[289992]: 2025-12-06 10:04:04.465845656 +0000 UTC m=+0.077295970 container create 8db79eb988f6401c6530def208a6cc95f5f6889aff146370f866953a0dd24fb0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mon-np0005548789, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, distribution-scope=public, description=Red Hat Ceph Storage 7, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, RELEASE=main, vendor=Red Hat, Inc., GIT_CLEAN=True, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, release=1763362218, architecture=x86_64, CEPH_POINT_RELEASE=, ceph=True, io.buildah.version=1.41.4, vcs-type=git) Dec 6 05:04:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5. Dec 6 05:04:04 localhost systemd[1]: tmp-crun.kOrphq.mount: Deactivated successfully. 
Dec 6 05:04:04 localhost podman[289992]: 2025-12-06 10:04:04.42873075 +0000 UTC m=+0.040181094 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 6 05:04:04 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58b056c2dffd8d19854bf632a4b426ef2949193a3440de3f8d9216685a5e7198/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Dec 6 05:04:04 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58b056c2dffd8d19854bf632a4b426ef2949193a3440de3f8d9216685a5e7198/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Dec 6 05:04:04 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58b056c2dffd8d19854bf632a4b426ef2949193a3440de3f8d9216685a5e7198/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Dec 6 05:04:04 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58b056c2dffd8d19854bf632a4b426ef2949193a3440de3f8d9216685a5e7198/merged/var/lib/ceph/mon/ceph-np0005548789 supports timestamps until 2038 (0x7fffffff) Dec 6 05:04:04 localhost podman[289992]: 2025-12-06 10:04:04.549854498 +0000 UTC m=+0.161304792 container init 8db79eb988f6401c6530def208a6cc95f5f6889aff146370f866953a0dd24fb0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mon-np0005548789, distribution-scope=public, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, 
GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, name=rhceph, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, vcs-type=git, GIT_CLEAN=True, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, RELEASE=main, vendor=Red Hat, Inc., io.openshift.expose-services=) Dec 6 05:04:04 localhost podman[289992]: 2025-12-06 10:04:04.557483191 +0000 UTC m=+0.168933495 container start 8db79eb988f6401c6530def208a6cc95f5f6889aff146370f866953a0dd24fb0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mon-np0005548789, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, com.redhat.component=rhceph-container, distribution-scope=public, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, ceph=True, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, version=7, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, RELEASE=main, GIT_CLEAN=True, CEPH_POINT_RELEASE=) Dec 6 05:04:04 localhost bash[289992]: 
8db79eb988f6401c6530def208a6cc95f5f6889aff146370f866953a0dd24fb0 Dec 6 05:04:04 localhost systemd[1]: Started Ceph mon.np0005548789 for 1939e851-b10c-5c3b-9bb7-8e7f380233e8. Dec 6 05:04:04 localhost podman[290005]: 2025-12-06 10:04:04.603206359 +0000 UTC m=+0.097018635 container health_status 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Dec 6 05:04:04 localhost ceph-mon[290022]: set uid:gid to 167:167 (ceph:ceph) Dec 6 05:04:04 localhost ceph-mon[290022]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-mon, pid 2 Dec 6 05:04:04 localhost ceph-mon[290022]: pidfile_write: ignore empty --pid-file Dec 6 05:04:04 localhost 
ceph-mon[290022]: load: jerasure load: lrc Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: RocksDB version: 7.9.2 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Git sha 0 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Compile date 2025-09-23 00:00:00 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: DB SUMMARY Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: DB Session ID: ETDWGFPM6GCTACWNDM5G Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: CURRENT file: CURRENT Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: IDENTITY file: IDENTITY Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: MANIFEST file: MANIFEST-000005 size: 59 Bytes Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: SST files in /var/lib/ceph/mon/ceph-np0005548789/store.db dir, Total Num: 0, files: Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-np0005548789/store.db: 000004.log size: 761 ; Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.error_if_exists: 0 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.create_if_missing: 0 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.paranoid_checks: 1 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.flush_verify_memtable_count: 1 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.track_and_verify_wals_in_manifest: 0 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.env: 0x55b170c829e0 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.fs: PosixFileSystem Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.info_log: 0x55b173030d20 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.max_file_opening_threads: 16 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.statistics: (nil) Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.use_fsync: 0 Dec 6 05:04:04 localhost ceph-mon[290022]: 
rocksdb: Options.max_log_file_size: 0 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.max_manifest_file_size: 1073741824 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.log_file_time_to_roll: 0 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.keep_log_file_num: 1000 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.recycle_log_file_num: 0 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.allow_fallocate: 1 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.allow_mmap_reads: 0 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.allow_mmap_writes: 0 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.use_direct_reads: 0 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.create_missing_column_families: 0 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.db_log_dir: Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.wal_dir: Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.table_cache_numshardbits: 6 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.WAL_ttl_seconds: 0 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.WAL_size_limit_MB: 0 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.manifest_preallocation_size: 4194304 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.is_fd_close_on_exec: 1 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.advise_random_on_open: 1 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.db_write_buffer_size: 0 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.write_buffer_manager: 0x55b173041540 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.access_hint_on_compaction_start: 1 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: 
Options.random_access_max_buffer_size: 1048576 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.use_adaptive_mutex: 0 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.rate_limiter: (nil) Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.wal_recovery_mode: 2 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.enable_thread_tracking: 0 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.enable_pipelined_write: 0 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.unordered_write: 0 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.allow_concurrent_memtable_write: 1 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.enable_write_thread_adaptive_yield: 1 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.write_thread_max_yield_usec: 100 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.write_thread_slow_yield_usec: 3 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.row_cache: None Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.wal_filter: None Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.avoid_flush_during_recovery: 0 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.allow_ingest_behind: 0 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.two_write_queues: 0 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.manual_wal_flush: 0 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.wal_compression: 0 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.atomic_flush: 0 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.avoid_unnecessary_blocking_io: 0 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.persist_stats_to_disk: 0 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.write_dbid_to_manifest: 0 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: 
Options.log_readahead_size: 0 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.file_checksum_gen_factory: Unknown Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.best_efforts_recovery: 0 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.max_bgerror_resume_count: 2147483647 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.bgerror_resume_retry_interval: 1000000 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.allow_data_in_errors: 0 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.db_host_id: __hostname__ Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.enforce_single_del_contracts: true Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.max_background_jobs: 2 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.max_background_compactions: -1 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.max_subcompactions: 1 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.avoid_flush_during_shutdown: 0 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.writable_file_max_buffer_size: 1048576 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.delayed_write_rate : 16777216 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.max_total_wal_size: 0 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.stats_dump_period_sec: 600 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.stats_persist_period_sec: 600 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.stats_history_buffer_size: 1048576 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.max_open_files: -1 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.bytes_per_sync: 0 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.wal_bytes_per_sync: 0 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.strict_bytes_per_sync: 0 Dec 6 
05:04:04 localhost ceph-mon[290022]: rocksdb: Options.compaction_readahead_size: 0 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.max_background_flushes: -1 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Compression algorithms supported: Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: #011kZSTD supported: 0 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: #011kXpressCompression supported: 0 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: #011kBZip2Compression supported: 0 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: #011kZSTDNotFinalCompression supported: 0 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: #011kLZ4Compression supported: 1 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: #011kZlibCompression supported: 1 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: #011kLZ4HCCompression supported: 1 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: #011kSnappyCompression supported: 1 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Fast CRC32 supported: Supported on x86 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: DMutex implementation: pthread_mutex_t Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-np0005548789/store.db/MANIFEST-000005 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.merge_operator: Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.compaction_filter: None Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.compaction_filter_factory: None Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.sst_partitioner_factory: None Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.memtable_factory: SkipListFactory Dec 6 05:04:04 localhost 
ceph-mon[290022]: rocksdb: Options.table_factory: BlockBasedTable Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b173030980)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55b17302d350#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.write_buffer_size: 33554432 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.max_write_buffer_number: 2 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.compression: NoCompression Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.bottommost_compression: Disabled Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.prefix_extractor: nullptr Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.num_levels: 7 Dec 6 05:04:04 localhost 
ceph-mon[290022]: rocksdb: Options.min_write_buffer_number_to_merge: 1 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.compression_opts.window_bits: -14 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.compression_opts.level: 32767 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.compression_opts.strategy: 0 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.compression_opts.enabled: false Dec 6 
05:04:04 localhost ceph-mon[290022]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.level0_file_num_compaction_trigger: 4 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.target_file_size_base: 67108864 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.target_file_size_multiplier: 1 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.max_bytes_for_level_base: 268435456 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.max_bytes_for_level_multiplier: 10.000000 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.arena_block_size: 1048576 Dec 6 05:04:04 localhost 
ceph-mon[290022]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.disable_auto_compactions: 0 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.table_properties_collectors: Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.inplace_update_support: 0 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.memtable_huge_page_size: 0 Dec 6 05:04:04 localhost ceph-mon[290022]: 
rocksdb: Options.bloom_locality: 0 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.max_successive_merges: 0 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.paranoid_file_checks: 0 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.force_consistency_checks: 1 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.report_bg_io_stats: 0 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.ttl: 2592000 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.enable_blob_files: false Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.min_blob_size: 0 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.blob_file_size: 268435456 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.blob_compression_type: NoCompression Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.enable_blob_garbage_collection: false Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.blob_file_starting_level: 0 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-np0005548789/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence 
is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 8b48a877-4508-4eb4-a052-67f753f228b0 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015444621137, "job": 1, "event": "recovery_started", "wal_files": [4]} Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015444623570, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1887, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 773, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 651, "raw_average_value_size": 130, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765015444, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": 
"8b48a877-4508-4eb4-a052-67f753f228b0", "db_session_id": "ETDWGFPM6GCTACWNDM5G", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}} Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015444623698, "job": 1, "event": "recovery_finished"} Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: [db/version_set.cc:5047] Creating manifest 10 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548789/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55b173054e00 Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: DB pointer 0x55b17314a000 Dec 6 05:04:04 localhost ceph-mon[290022]: mon.np0005548789 does not exist in monmap, will attempt to join an existing cluster Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Dec 6 05:04:04 localhost ceph-mon[290022]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 1/0 1.84 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.8 0.00 0.00 1 0.002 0 0 0.0 0.0#012 Sum 1/0 1.84 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.8 0.00 0.00 1 0.002 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.8 0.00 0.00 1 0.002 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.8 0.00 0.00 1 0.002 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.12 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.12 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55b17302d350#2 capacity: 512.00 MB usage: 1.17 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 
4.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(2,0.95 KB,0.000181794%)#012#012** File Read Latency Histogram By Level [default] ** Dec 6 05:04:04 localhost ceph-mon[290022]: using public_addr v2:172.18.0.107:0/0 -> [v2:172.18.0.107:3300/0,v1:172.18.0.107:6789/0] Dec 6 05:04:04 localhost ceph-mon[290022]: starting mon.np0005548789 rank -1 at public addrs [v2:172.18.0.107:3300/0,v1:172.18.0.107:6789/0] at bind addrs [v2:172.18.0.107:3300/0,v1:172.18.0.107:6789/0] mon_data /var/lib/ceph/mon/ceph-np0005548789 fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8 Dec 6 05:04:04 localhost ceph-mon[290022]: mon.np0005548789@-1(???) e0 preinit fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8 Dec 6 05:04:04 localhost podman[290005]: 2025-12-06 10:04:04.643191747 +0000 UTC m=+0.137004073 container exec_died 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller) Dec 6 05:04:04 localhost ceph-mon[290022]: mon.np0005548789@-1(synchronizing) e4 sync_obtain_latest_monmap Dec 6 05:04:04 localhost ceph-mon[290022]: mon.np0005548789@-1(synchronizing) e4 sync_obtain_latest_monmap obtained monmap e4 Dec 6 05:04:04 localhost systemd[1]: 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.service: Deactivated successfully. Dec 6 05:04:04 localhost nova_compute[282193]: 2025-12-06 10:04:04.887 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:04:04 localhost ceph-mon[290022]: mon.np0005548789@-1(synchronizing).mds e16 new map Dec 6 05:04:04 localhost ceph-mon[290022]: mon.np0005548789@-1(synchronizing).mds e16 print_map#012e16#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#01116#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-12-06T08:18:49.925523+0000#012modified#0112025-12-06T10:03:02.051468+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#01187#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate 
object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=26356}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[6]#012metadata_pool#0117#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 26356 members: 26356#012[mds.mds.np0005548790.vhcezv{0:26356} state up:active seq 16 addr [v2:172.18.0.108:6808/1621657194,v1:172.18.0.108:6809/1621657194] compat {c=[1],r=[1],i=[17ff]}]#012 #012 #012Standby daemons:#012 #012[mds.mds.np0005548789.vxwwsq{-1:16884} state up:standby seq 1 addr [v2:172.18.0.107:6808/3033303281,v1:172.18.0.107:6809/3033303281] compat {c=[1],r=[1],i=[17ff]}]#012[mds.mds.np0005548788.erzujf{-1:16890} state up:standby seq 1 addr [v2:172.18.0.106:6808/309324236,v1:172.18.0.106:6809/309324236] compat {c=[1],r=[1],i=[17ff]}] Dec 6 05:04:04 localhost ceph-mon[290022]: mon.np0005548789@-1(synchronizing).osd e87 crush map has features 3314933000854323200, adjusting msgr requires Dec 6 05:04:04 localhost ceph-mon[290022]: mon.np0005548789@-1(synchronizing).osd e87 crush map has features 432629239337189376, adjusting msgr requires Dec 6 05:04:04 localhost ceph-mon[290022]: mon.np0005548789@-1(synchronizing).osd e87 crush map has features 432629239337189376, adjusting msgr requires Dec 6 05:04:04 localhost ceph-mon[290022]: mon.np0005548789@-1(synchronizing).osd e87 crush map has features 432629239337189376, adjusting msgr requires Dec 6 05:04:04 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' Dec 6 05:04:04 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' Dec 6 05:04:04 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' Dec 6 05:04:04 localhost ceph-mon[290022]: from='mgr.14120 
172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548789.mzhmje", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 6 05:04:04 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.np0005548789.mzhmje", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished Dec 6 05:04:04 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' Dec 6 05:04:04 localhost ceph-mon[290022]: Deploying daemon mgr.np0005548789.mzhmje on np0005548789.localdomain Dec 6 05:04:04 localhost ceph-mon[290022]: Added label mon to host np0005548785.localdomain Dec 6 05:04:04 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' Dec 6 05:04:04 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' Dec 6 05:04:04 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' Dec 6 05:04:04 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' Dec 6 05:04:04 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548790.kvkfyr", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 6 05:04:04 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.np0005548790.kvkfyr", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished Dec 6 05:04:04 localhost ceph-mon[290022]: Added label _admin to host np0005548785.localdomain Dec 6 05:04:04 localhost ceph-mon[290022]: Deploying daemon 
mgr.np0005548790.kvkfyr on np0005548790.localdomain Dec 6 05:04:04 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' Dec 6 05:04:04 localhost ceph-mon[290022]: Added label mon to host np0005548786.localdomain Dec 6 05:04:04 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' Dec 6 05:04:04 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' Dec 6 05:04:04 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' Dec 6 05:04:04 localhost ceph-mon[290022]: Added label _admin to host np0005548786.localdomain Dec 6 05:04:04 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' Dec 6 05:04:04 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' Dec 6 05:04:04 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' Dec 6 05:04:04 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' Dec 6 05:04:04 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' Dec 6 05:04:04 localhost ceph-mon[290022]: Added label mon to host np0005548787.localdomain Dec 6 05:04:04 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' Dec 6 05:04:04 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' Dec 6 05:04:04 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' Dec 6 05:04:04 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' Dec 6 05:04:04 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' 
cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 6 05:04:04 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' Dec 6 05:04:04 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' Dec 6 05:04:04 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 6 05:04:04 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' Dec 6 05:04:04 localhost ceph-mon[290022]: Added label _admin to host np0005548787.localdomain Dec 6 05:04:04 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' Dec 6 05:04:04 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' Dec 6 05:04:04 localhost ceph-mon[290022]: Added label mon to host np0005548788.localdomain Dec 6 05:04:04 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 6 05:04:04 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' Dec 6 05:04:04 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' Dec 6 05:04:04 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 6 05:04:04 localhost ceph-mon[290022]: Added label _admin to host np0005548788.localdomain Dec 6 05:04:04 localhost ceph-mon[290022]: Updating np0005548788.localdomain:/etc/ceph/ceph.conf Dec 6 05:04:04 localhost ceph-mon[290022]: Updating np0005548788.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf Dec 6 05:04:04 localhost 
ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' Dec 6 05:04:04 localhost ceph-mon[290022]: Added label mon to host np0005548789.localdomain Dec 6 05:04:04 localhost ceph-mon[290022]: Updating np0005548788.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 6 05:04:04 localhost ceph-mon[290022]: Updating np0005548788.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring Dec 6 05:04:04 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' Dec 6 05:04:04 localhost ceph-mon[290022]: Added label _admin to host np0005548789.localdomain Dec 6 05:04:04 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' Dec 6 05:04:04 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' Dec 6 05:04:04 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' Dec 6 05:04:04 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 6 05:04:04 localhost ceph-mon[290022]: Updating np0005548789.localdomain:/etc/ceph/ceph.conf Dec 6 05:04:04 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' Dec 6 05:04:04 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' Dec 6 05:04:04 localhost ceph-mon[290022]: Added label mon to host np0005548790.localdomain Dec 6 05:04:04 localhost ceph-mon[290022]: Updating np0005548789.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf Dec 6 05:04:04 localhost ceph-mon[290022]: Updating np0005548789.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 6 05:04:04 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' 
entity='mgr.np0005548785.vhqlsq' Dec 6 05:04:04 localhost ceph-mon[290022]: Added label _admin to host np0005548790.localdomain Dec 6 05:04:04 localhost ceph-mon[290022]: Updating np0005548789.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring Dec 6 05:04:04 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' Dec 6 05:04:04 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' Dec 6 05:04:04 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' Dec 6 05:04:04 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 6 05:04:04 localhost ceph-mon[290022]: Updating np0005548790.localdomain:/etc/ceph/ceph.conf Dec 6 05:04:04 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' Dec 6 05:04:04 localhost ceph-mon[290022]: Saving service mon spec with placement label:mon Dec 6 05:04:04 localhost ceph-mon[290022]: Updating np0005548790.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf Dec 6 05:04:04 localhost ceph-mon[290022]: Updating np0005548790.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 6 05:04:04 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' Dec 6 05:04:04 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' Dec 6 05:04:04 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' Dec 6 05:04:04 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' Dec 6 05:04:04 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' 
cmd={"prefix": "auth get", "entity": "mon."} : dispatch Dec 6 05:04:04 localhost ceph-mon[290022]: Updating np0005548790.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring Dec 6 05:04:04 localhost ceph-mon[290022]: Deploying daemon mon.np0005548790 on np0005548790.localdomain Dec 6 05:04:04 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' Dec 6 05:04:04 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' Dec 6 05:04:04 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' Dec 6 05:04:04 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Dec 6 05:04:04 localhost ceph-mon[290022]: mon.np0005548789@-1(synchronizing).paxosservice(auth 1..34) refresh upgraded, format 0 -> 3 Dec 6 05:04:05 localhost sshd[290074]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:04:06 localhost sshd[290076]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:04:08 localhost nova_compute[282193]: 2025-12-06 10:04:08.834 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:04:08 localhost ceph-mgr[288591]: ms_deliver_dispatch: unhandled message 0x56140ed12f20 mon_map magic: 0 from mon.2 v2:172.18.0.104:3300/0 Dec 6 05:04:09 localhost nova_compute[282193]: 2025-12-06 10:04:09.914 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:04:09 localhost ceph-mon[290022]: mon.np0005548789@-1(probing) e4 adding peer [v2:172.18.0.106:3300/0,v1:172.18.0.106:6789/0] to list of hints Dec 6 05:04:10 localhost ceph-mon[290022]: mon.np0005548789@-1(probing) e4 adding peer 
[v2:172.18.0.106:3300/0,v1:172.18.0.106:6789/0] to list of hints Dec 6 05:04:10 localhost ceph-mon[290022]: mon.np0005548789@-1(probing) e5 my rank is now 4 (was -1) Dec 6 05:04:10 localhost ceph-mon[290022]: log_channel(cluster) log [INF] : mon.np0005548789 calling monitor election Dec 6 05:04:10 localhost ceph-mon[290022]: paxos.4).electionLogic(0) init, first boot, initializing epoch at 1 Dec 6 05:04:10 localhost ceph-mon[290022]: mon.np0005548789@4(electing) e5 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 6 05:04:12 localhost ceph-mon[290022]: mon.np0005548789@4(electing) e5 adding peer [v2:172.18.0.106:3300/0,v1:172.18.0.106:6789/0] to list of hints Dec 6 05:04:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999. Dec 6 05:04:12 localhost podman[290078]: 2025-12-06 10:04:12.928733591 +0000 UTC m=+0.087141682 container health_status 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 
'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0) Dec 6 05:04:12 localhost podman[290078]: 2025-12-06 10:04:12.940242406 +0000 UTC m=+0.098650537 container exec_died 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, tcib_managed=true) Dec 6 05:04:12 localhost systemd[1]: 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.service: Deactivated successfully. Dec 6 05:04:13 localhost nova_compute[282193]: 2025-12-06 10:04:13.883 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:04:13 localhost ceph-mon[290022]: mon.np0005548789@4(electing) e5 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 6 05:04:13 localhost ceph-mon[290022]: mon.np0005548789@4(peon) e5 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code} Dec 6 05:04:13 localhost ceph-mon[290022]: mon.np0005548789@4(peon) e5 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout} Dec 6 05:04:13 localhost ceph-mon[290022]: Deploying daemon mon.np0005548789 on np0005548789.localdomain Dec 6 05:04:13 localhost ceph-mon[290022]: mon.np0005548785 
calling monitor election Dec 6 05:04:13 localhost ceph-mon[290022]: mon.np0005548787 calling monitor election Dec 6 05:04:13 localhost ceph-mon[290022]: mon.np0005548786 calling monitor election Dec 6 05:04:14 localhost ceph-mon[290022]: mon.np0005548790 calling monitor election Dec 6 05:04:14 localhost ceph-mon[290022]: mon.np0005548785 is new leader, mons np0005548785,np0005548787,np0005548786,np0005548790 in quorum (ranks 0,1,2,3) Dec 6 05:04:14 localhost ceph-mon[290022]: overall HEALTH_OK Dec 6 05:04:14 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' Dec 6 05:04:14 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' Dec 6 05:04:14 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' Dec 6 05:04:14 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Dec 6 05:04:14 localhost ceph-mon[290022]: Deploying daemon mon.np0005548788 on np0005548788.localdomain Dec 6 05:04:14 localhost ceph-mon[290022]: mon.np0005548789@4(peon) e5 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 6 05:04:14 localhost ceph-mon[290022]: mgrc update_daemon_metadata mon.np0005548789 metadata {addrs=[v2:172.18.0.107:3300/0,v1:172.18.0.107:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable),ceph_version_short=18.2.1-361.el9cp,compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=np0005548789.localdomain,container_image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest,cpu=AMD EPYC-Rome Processor,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=rhel,distro_description=Red Hat Enterprise Linux 9.7 
(Plow),distro_version=9.7,hostname=np0005548789.localdomain,kernel_description=#1 SMP PREEMPT_DYNAMIC Wed Apr 12 10:45:03 EDT 2023,kernel_version=5.14.0-284.11.1.el9_2.x86_64,mem_swap_kb=1048572,mem_total_kb=16116612,os=Linux} Dec 6 05:04:14 localhost ceph-mon[290022]: mon.np0005548785 calling monitor election Dec 6 05:04:14 localhost ceph-mon[290022]: mon.np0005548787 calling monitor election Dec 6 05:04:14 localhost ceph-mon[290022]: mon.np0005548790 calling monitor election Dec 6 05:04:14 localhost ceph-mon[290022]: mon.np0005548786 calling monitor election Dec 6 05:04:14 localhost ceph-mon[290022]: mon.np0005548789 calling monitor election Dec 6 05:04:14 localhost ceph-mon[290022]: mon.np0005548785 is new leader, mons np0005548785,np0005548787,np0005548786,np0005548790,np0005548789 in quorum (ranks 0,1,2,3,4) Dec 6 05:04:14 localhost ceph-mon[290022]: overall HEALTH_OK Dec 6 05:04:14 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' Dec 6 05:04:14 localhost ceph-mon[290022]: mon.np0005548789@4(peon) e5 adding peer [v2:172.18.0.106:3300/0,v1:172.18.0.106:6789/0] to list of hints Dec 6 05:04:14 localhost ceph-mgr[288591]: ms_deliver_dispatch: unhandled message 0x56140ed131e0 mon_map magic: 0 from mon.2 v2:172.18.0.104:3300/0 Dec 6 05:04:14 localhost ceph-mon[290022]: log_channel(cluster) log [INF] : mon.np0005548789 calling monitor election Dec 6 05:04:14 localhost ceph-mon[290022]: paxos.4).electionLogic(22) init, last seen epoch 22 Dec 6 05:04:14 localhost ceph-mon[290022]: mon.np0005548789@4(electing) e6 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 6 05:04:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941. 
Dec 6 05:04:14 localhost nova_compute[282193]: 2025-12-06 10:04:14.959 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:04:14 localhost podman[290150]: 2025-12-06 10:04:14.963364286 +0000 UTC m=+0.123518116 container health_status b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 6 05:04:14 localhost podman[290150]: 2025-12-06 10:04:14.967625801 +0000 UTC m=+0.127779571 container exec_died b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 
'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Dec 6 05:04:14 localhost systemd[1]: b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941.service: Deactivated successfully. Dec 6 05:04:15 localhost podman[290244]: 2025-12-06 10:04:15.50714427 +0000 UTC m=+0.090473489 container exec ecefb99ff092b207f760d60e8fd97152c624df0e1016549024435ae641ee2e06 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548789, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, GIT_CLEAN=True, maintainer=Guillaume Abrioux , distribution-scope=public, architecture=x86_64, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, GIT_BRANCH=main, com.redhat.component=rhceph-container) Dec 6 05:04:15 localhost podman[290244]: 2025-12-06 10:04:15.580153074 +0000 UTC m=+0.163482273 container exec_died 
ecefb99ff092b207f760d60e8fd97152c624df0e1016549024435ae641ee2e06 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548789, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, release=1763362218, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.buildah.version=1.41.4, vcs-type=git, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main) Dec 6 05:04:16 localhost openstack_network_exporter[243110]: ERROR 10:04:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:04:16 localhost openstack_network_exporter[243110]: ERROR 10:04:16 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 6 05:04:16 localhost openstack_network_exporter[243110]: ERROR 10:04:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:04:16 localhost openstack_network_exporter[243110]: ERROR 10:04:16 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 6 05:04:16 localhost 
openstack_network_exporter[243110]: Dec 6 05:04:16 localhost openstack_network_exporter[243110]: ERROR 10:04:16 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 6 05:04:16 localhost openstack_network_exporter[243110]: Dec 6 05:04:18 localhost nova_compute[282193]: 2025-12-06 10:04:18.919 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:04:19 localhost ceph-mon[290022]: paxos.4).electionLogic(23) init, last seen epoch 23, mid-election, bumping Dec 6 05:04:19 localhost ceph-mon[290022]: mon.np0005548789@4(electing) e6 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 6 05:04:19 localhost ceph-mon[290022]: mon.np0005548789@4(electing) e6 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 6 05:04:19 localhost ceph-mon[290022]: mon.np0005548789@4(peon) e6 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 6 05:04:19 localhost ceph-mon[290022]: mon.np0005548785 calling monitor election Dec 6 05:04:19 localhost ceph-mon[290022]: mon.np0005548786 calling monitor election Dec 6 05:04:19 localhost ceph-mon[290022]: mon.np0005548790 calling monitor election Dec 6 05:04:19 localhost ceph-mon[290022]: mon.np0005548787 calling monitor election Dec 6 05:04:19 localhost ceph-mon[290022]: mon.np0005548789 calling monitor election Dec 6 05:04:19 localhost ceph-mon[290022]: mon.np0005548788 calling monitor election Dec 6 05:04:19 localhost ceph-mon[290022]: mon.np0005548785 is new leader, mons np0005548785,np0005548787,np0005548786,np0005548790,np0005548789,np0005548788 in quorum (ranks 0,1,2,3,4,5) Dec 6 05:04:19 localhost ceph-mon[290022]: overall HEALTH_OK Dec 6 05:04:19 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' Dec 6 05:04:20 localhost 
nova_compute[282193]: 2025-12-06 10:04:19.996 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:04:20 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' Dec 6 05:04:20 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' Dec 6 05:04:20 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' Dec 6 05:04:20 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' Dec 6 05:04:20 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' Dec 6 05:04:20 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' Dec 6 05:04:20 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 6 05:04:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e. Dec 6 05:04:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094. 
Dec 6 05:04:20 localhost podman[290452]: 2025-12-06 10:04:20.57637708 +0000 UTC m=+0.075727271 container health_status bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=edpm)
Dec 6 05:04:20 localhost podman[290452]: 2025-12-06 10:04:20.587455211 +0000 UTC m=+0.086805422 container exec_died bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute)
Dec 6 05:04:20 localhost systemd[1]: bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094.service: Deactivated successfully.
Dec 6 05:04:20 localhost systemd[1]: tmp-crun.vSVj6z.mount: Deactivated successfully.
Dec 6 05:04:20 localhost podman[290451]: 2025-12-06 10:04:20.628399139 +0000 UTC m=+0.128457772 container health_status 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, name=ubi9-minimal, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, vcs-type=git, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, io.buildah.version=1.33.7, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., vendor=Red Hat, Inc.)
Dec 6 05:04:20 localhost podman[290451]: 2025-12-06 10:04:20.642049212 +0000 UTC m=+0.142107795 container exec_died 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, vendor=Red Hat, Inc., version=9.6, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, architecture=x86_64, io.buildah.version=1.33.7, container_name=openstack_network_exporter, io.openshift.expose-services=, io.openshift.tags=minimal rhel9)
Dec 6 05:04:20 localhost systemd[1]: 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e.service: Deactivated successfully.
Dec 6 05:04:21 localhost sshd[290703]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 05:04:22 localhost ceph-mon[290022]: Updating np0005548785.localdomain:/etc/ceph/ceph.conf
Dec 6 05:04:22 localhost ceph-mon[290022]: Updating np0005548786.localdomain:/etc/ceph/ceph.conf
Dec 6 05:04:22 localhost ceph-mon[290022]: Updating np0005548787.localdomain:/etc/ceph/ceph.conf
Dec 6 05:04:22 localhost ceph-mon[290022]: Updating np0005548788.localdomain:/etc/ceph/ceph.conf
Dec 6 05:04:22 localhost ceph-mon[290022]: Updating np0005548789.localdomain:/etc/ceph/ceph.conf
Dec 6 05:04:22 localhost ceph-mon[290022]: Updating np0005548790.localdomain:/etc/ceph/ceph.conf
Dec 6 05:04:22 localhost ceph-mon[290022]: Updating np0005548786.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 6 05:04:22 localhost ceph-mon[290022]: Updating np0005548789.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 6 05:04:22 localhost ceph-mon[290022]: Updating np0005548785.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 6 05:04:22 localhost ceph-mon[290022]: Updating np0005548788.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 6 05:04:22 localhost ceph-mon[290022]: Updating np0005548787.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 6 05:04:22 localhost ceph-mon[290022]: Updating np0005548790.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 6 05:04:22 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq'
Dec 6 05:04:22 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq'
Dec 6 05:04:22 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq'
Dec 6 05:04:22 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq'
Dec 6 05:04:22 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq'
Dec 6 05:04:22 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq'
Dec 6 05:04:22 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq'
Dec 6 05:04:22 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq'
Dec 6 05:04:22 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq'
Dec 6 05:04:22 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq'
Dec 6 05:04:22 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq'
Dec 6 05:04:22 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq'
Dec 6 05:04:22 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq'
Dec 6 05:04:23 localhost nova_compute[282193]: 2025-12-06 10:04:23.078 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 6 05:04:23 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 6 05:04:23 localhost nova_compute[282193]: 2025-12-06 10:04:23.180 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 6 05:04:23 localhost nova_compute[282193]: 2025-12-06 10:04:23.181 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 6 05:04:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6.
Dec 6 05:04:23 localhost podman[290813]: 2025-12-06 10:04:23.515465329 +0000 UTC m=+0.087552696 container health_status 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 6 05:04:23 localhost podman[290813]: 2025-12-06 10:04:23.529275847 +0000 UTC m=+0.101363214 container exec_died 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 6 05:04:23 localhost systemd[1]: 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6.service: Deactivated successfully.
Dec 6 05:04:23 localhost nova_compute[282193]: 2025-12-06 10:04:23.555 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 6 05:04:23 localhost nova_compute[282193]: 2025-12-06 10:04:23.555 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 6 05:04:23 localhost nova_compute[282193]: 2025-12-06 10:04:23.556 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 6 05:04:23 localhost nova_compute[282193]: 2025-12-06 10:04:23.556 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Auditing locally available compute resources for np0005548789.localdomain (node: np0005548789.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 6 05:04:23 localhost nova_compute[282193]: 2025-12-06 10:04:23.557 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 6 05:04:23 localhost sshd[290832]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 05:04:23 localhost podman[241090]: time="2025-12-06T10:04:23Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 6 05:04:23 localhost nova_compute[282193]: 2025-12-06 10:04:23.968 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 6 05:04:23 localhost podman[241090]: @ - - [06/Dec/2025:10:04:23 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156104 "" "Go-http-client/1.1"
Dec 6 05:04:24 localhost nova_compute[282193]: 2025-12-06 10:04:24.014 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 6 05:04:24 localhost podman[241090]: @ - - [06/Dec/2025:10:04:23 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19225 "" "Go-http-client/1.1"
Dec 6 05:04:24 localhost ceph-mon[290022]: Reconfiguring mon.np0005548785 (monmap changed)...
Dec 6 05:04:24 localhost ceph-mon[290022]: Reconfiguring daemon mon.np0005548785 on np0005548785.localdomain
Dec 6 05:04:24 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq'
Dec 6 05:04:24 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq'
Dec 6 05:04:24 localhost ceph-mon[290022]: Reconfiguring mgr.np0005548785.vhqlsq (monmap changed)...
Dec 6 05:04:24 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548785.vhqlsq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 6 05:04:24 localhost ceph-mon[290022]: Reconfiguring daemon mgr.np0005548785.vhqlsq on np0005548785.localdomain
Dec 6 05:04:24 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq'
Dec 6 05:04:24 localhost nova_compute[282193]: 2025-12-06 10:04:24.512 282197 DEBUG nova.virt.libvirt.driver [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 6 05:04:24 localhost nova_compute[282193]: 2025-12-06 10:04:24.513 282197 DEBUG nova.virt.libvirt.driver [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 6 05:04:24 localhost nova_compute[282193]: 2025-12-06 10:04:24.745 282197 WARNING nova.virt.libvirt.driver [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec 6 05:04:24 localhost nova_compute[282193]: 2025-12-06 10:04:24.747 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Hypervisor/Node resource view: name=np0005548789.localdomain free_ram=11515MB free_disk=41.83699035644531GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec 6 05:04:24 localhost nova_compute[282193]: 2025-12-06 10:04:24.747 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 6 05:04:24 localhost nova_compute[282193]: 2025-12-06 10:04:24.747 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 6 05:04:25 localhost nova_compute[282193]: 2025-12-06 10:04:25.021 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 6 05:04:25 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq'
Dec 6 05:04:25 localhost ceph-mon[290022]: Reconfiguring crash.np0005548785 (monmap changed)...
Dec 6 05:04:25 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548785.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 6 05:04:25 localhost ceph-mon[290022]: Reconfiguring daemon crash.np0005548785 on np0005548785.localdomain
Dec 6 05:04:25 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq'
Dec 6 05:04:25 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq'
Dec 6 05:04:25 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq'
Dec 6 05:04:25 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548786.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 6 05:04:25 localhost nova_compute[282193]: 2025-12-06 10:04:25.469 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Instance b7ed0a2e-9350-4933-9334-4e5e08d3e6aa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec 6 05:04:25 localhost nova_compute[282193]: 2025-12-06 10:04:25.470 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec 6 05:04:25 localhost nova_compute[282193]: 2025-12-06 10:04:25.470 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Final resource view: name=np0005548789.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec 6 05:04:25 localhost nova_compute[282193]: 2025-12-06 10:04:25.507 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 6 05:04:25 localhost nova_compute[282193]: 2025-12-06 10:04:25.964 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 6 05:04:25 localhost nova_compute[282193]: 2025-12-06 10:04:25.970 282197 DEBUG nova.compute.provider_tree [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Inventory has not changed in ProviderTree for provider: 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec 6 05:04:26 localhost sshd[290878]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 05:04:26 localhost ceph-mon[290022]: Reconfiguring crash.np0005548786 (monmap changed)...
Dec 6 05:04:26 localhost ceph-mon[290022]: Reconfiguring daemon crash.np0005548786 on np0005548786.localdomain
Dec 6 05:04:26 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq'
Dec 6 05:04:26 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq'
Dec 6 05:04:26 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 6 05:04:26 localhost nova_compute[282193]: 2025-12-06 10:04:26.467 282197 DEBUG nova.scheduler.client.report [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Inventory has not changed for provider 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec 6 05:04:26 localhost nova_compute[282193]: 2025-12-06 10:04:26.470 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Compute_service record updated for np0005548789.localdomain:np0005548789.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec 6 05:04:26 localhost nova_compute[282193]: 2025-12-06 10:04:26.470 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.723s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 6 05:04:27 localhost ceph-mon[290022]: Reconfiguring mon.np0005548786 (monmap changed)...
Dec 6 05:04:27 localhost ceph-mon[290022]: Reconfiguring daemon mon.np0005548786 on np0005548786.localdomain
Dec 6 05:04:27 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq'
Dec 6 05:04:27 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq'
Dec 6 05:04:27 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548786.mczynb", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 6 05:04:27 localhost nova_compute[282193]: 2025-12-06 10:04:27.471 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 6 05:04:27 localhost nova_compute[282193]: 2025-12-06 10:04:27.471 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec 6 05:04:27 localhost nova_compute[282193]: 2025-12-06 10:04:27.472 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec 6 05:04:27 localhost nova_compute[282193]: 2025-12-06 10:04:27.914 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 6 05:04:27 localhost nova_compute[282193]: 2025-12-06 10:04:27.914 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquired lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 6 05:04:27 localhost nova_compute[282193]: 2025-12-06 10:04:27.915 282197 DEBUG nova.network.neutron [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec 6 05:04:27 localhost nova_compute[282193]: 2025-12-06 10:04:27.915 282197 DEBUG nova.objects.instance [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b7ed0a2e-9350-4933-9334-4e5e08d3e6aa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 6 05:04:28 localhost ceph-mon[290022]: mon.np0005548789@4(peon) e6 handle_command mon_command({"prefix": "mgr fail"} v 0)
Dec 6 05:04:28 localhost ceph-mon[290022]: log_channel(audit) log [INF] : from='client.? 172.18.0.103:0/899954398' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch
Dec 6 05:04:28 localhost ceph-mon[290022]: mon.np0005548789@4(peon).osd e87 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Dec 6 05:04:28 localhost ceph-mon[290022]: mon.np0005548789@4(peon).osd e87 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Dec 6 05:04:28 localhost ceph-mon[290022]: mon.np0005548789@4(peon).osd e88 e88: 6 total, 6 up, 6 in
Dec 6 05:04:28 localhost sshd[290880]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 05:04:28 localhost systemd[1]: session-25.scope: Deactivated successfully.
Dec 6 05:04:28 localhost systemd[1]: session-19.scope: Deactivated successfully.
Dec 6 05:04:28 localhost systemd[1]: session-20.scope: Deactivated successfully.
Dec 6 05:04:28 localhost systemd[1]: session-16.scope: Deactivated successfully.
Dec 6 05:04:28 localhost systemd[1]: session-14.scope: Deactivated successfully.
Dec 6 05:04:28 localhost systemd[1]: session-23.scope: Deactivated successfully.
Dec 6 05:04:28 localhost systemd[1]: session-24.scope: Deactivated successfully.
Dec 6 05:04:28 localhost systemd[1]: session-21.scope: Deactivated successfully.
Dec 6 05:04:28 localhost systemd[1]: session-22.scope: Deactivated successfully.
Dec 6 05:04:28 localhost systemd-logind[766]: Session 22 logged out. Waiting for processes to exit.
Dec 6 05:04:28 localhost systemd-logind[766]: Session 21 logged out. Waiting for processes to exit.
Dec 6 05:04:28 localhost systemd-logind[766]: Session 25 logged out. Waiting for processes to exit.
Dec 6 05:04:28 localhost systemd-logind[766]: Session 19 logged out. Waiting for processes to exit.
Dec 6 05:04:28 localhost systemd-logind[766]: Session 23 logged out. Waiting for processes to exit.
Dec 6 05:04:28 localhost systemd-logind[766]: Session 24 logged out. Waiting for processes to exit.
Dec 6 05:04:28 localhost systemd-logind[766]: Session 14 logged out.
Waiting for processes to exit. Dec 6 05:04:28 localhost systemd-logind[766]: Session 16 logged out. Waiting for processes to exit. Dec 6 05:04:28 localhost systemd-logind[766]: Session 20 logged out. Waiting for processes to exit. Dec 6 05:04:28 localhost systemd[1]: session-17.scope: Deactivated successfully. Dec 6 05:04:28 localhost systemd[1]: session-18.scope: Deactivated successfully. Dec 6 05:04:28 localhost systemd-logind[766]: Session 17 logged out. Waiting for processes to exit. Dec 6 05:04:28 localhost systemd-logind[766]: Session 18 logged out. Waiting for processes to exit. Dec 6 05:04:28 localhost ceph-mon[290022]: Reconfiguring mgr.np0005548786.mczynb (monmap changed)... Dec 6 05:04:28 localhost ceph-mon[290022]: Reconfiguring daemon mgr.np0005548786.mczynb on np0005548786.localdomain Dec 6 05:04:28 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' Dec 6 05:04:28 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' Dec 6 05:04:28 localhost ceph-mon[290022]: Reconfiguring mon.np0005548787 (monmap changed)... Dec 6 05:04:28 localhost ceph-mon[290022]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Dec 6 05:04:28 localhost ceph-mon[290022]: Reconfiguring daemon mon.np0005548787 on np0005548787.localdomain Dec 6 05:04:28 localhost ceph-mon[290022]: from='client.? ' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Dec 6 05:04:28 localhost systemd-logind[766]: Removed session 25. Dec 6 05:04:28 localhost ceph-mon[290022]: Activating manager daemon np0005548788.yvwbqq Dec 6 05:04:28 localhost ceph-mon[290022]: from='client.? 172.18.0.103:0/899954398' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Dec 6 05:04:28 localhost systemd[1]: session-26.scope: Deactivated successfully. 
Dec 6 05:04:28 localhost systemd[1]: session-26.scope: Consumed 3min 21.222s CPU time. Dec 6 05:04:28 localhost systemd-logind[766]: Removed session 19. Dec 6 05:04:28 localhost systemd-logind[766]: Session 26 logged out. Waiting for processes to exit. Dec 6 05:04:28 localhost systemd-logind[766]: Removed session 20. Dec 6 05:04:28 localhost systemd-logind[766]: Removed session 16. Dec 6 05:04:28 localhost systemd-logind[766]: Removed session 14. Dec 6 05:04:28 localhost systemd-logind[766]: Removed session 23. Dec 6 05:04:28 localhost systemd-logind[766]: Removed session 24. Dec 6 05:04:28 localhost systemd-logind[766]: Removed session 21. Dec 6 05:04:28 localhost systemd-logind[766]: Removed session 22. Dec 6 05:04:28 localhost systemd-logind[766]: Removed session 17. Dec 6 05:04:28 localhost systemd-logind[766]: Removed session 18. Dec 6 05:04:28 localhost systemd-logind[766]: Removed session 26. Dec 6 05:04:28 localhost sshd[290882]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:04:28 localhost systemd-logind[766]: New session 65 of user ceph-admin. Dec 6 05:04:28 localhost systemd[1]: Started Session 65 of User ceph-admin. Dec 6 05:04:29 localhost nova_compute[282193]: 2025-12-06 10:04:29.003 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:04:29 localhost ceph-mon[290022]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished Dec 6 05:04:29 localhost ceph-mon[290022]: Manager daemon np0005548788.yvwbqq is now available Dec 6 05:04:29 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005548788.yvwbqq/mirror_snapshot_schedule"} : dispatch Dec 6 05:04:29 localhost ceph-mon[290022]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005548788.yvwbqq/mirror_snapshot_schedule"} : dispatch Dec 6 05:04:29 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005548788.yvwbqq/trash_purge_schedule"} : dispatch Dec 6 05:04:29 localhost ceph-mon[290022]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005548788.yvwbqq/trash_purge_schedule"} : dispatch Dec 6 05:04:29 localhost ceph-mon[290022]: mon.np0005548789@4(peon).osd e88 _set_new_cache_sizes cache_size:1019645081 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:04:29 localhost systemd[1]: tmp-crun.XROgqq.mount: Deactivated successfully. 
Dec 6 05:04:29 localhost podman[290993]: 2025-12-06 10:04:29.980224808 +0000 UTC m=+0.114806900 container exec ecefb99ff092b207f760d60e8fd97152c624df0e1016549024435ae641ee2e06 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548789, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, version=7, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, name=rhceph, vcs-type=git, io.openshift.expose-services=, GIT_BRANCH=main, GIT_CLEAN=True, ceph=True, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public) Dec 6 05:04:30 localhost nova_compute[282193]: 2025-12-06 10:04:30.027 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:04:30 localhost podman[290993]: 2025-12-06 10:04:30.111348394 +0000 UTC m=+0.245930486 container exec_died ecefb99ff092b207f760d60e8fd97152c624df0e1016549024435ae641ee2e06 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548789, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, architecture=x86_64, GIT_BRANCH=main, CEPH_POINT_RELEASE=, GIT_CLEAN=True, ceph=True, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, RELEASE=main) Dec 6 05:04:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538. 
Dec 6 05:04:30 localhost podman[291027]: 2025-12-06 10:04:30.281883839 +0000 UTC m=+0.104989589 container health_status d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 6 05:04:30 localhost podman[291027]: 2025-12-06 10:04:30.290902714 +0000 UTC m=+0.114008494 container exec_died d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Dec 6 05:04:30 localhost systemd[1]: d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538.service: Deactivated successfully. 
Dec 6 05:04:30 localhost sshd[291080]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:04:31 localhost nova_compute[282193]: 2025-12-06 10:04:31.011 282197 DEBUG nova.network.neutron [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Updating instance_info_cache with network_info: [{"id": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "address": "fa:16:3e:64:77:f3", "network": {"id": "652b6bdc-40ce-45b7-8aa5-3bca79987993", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.162", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "3d603431c0bb4967bafc7a0aa6108bfe", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86fc0b7a-fb", "ovs_interfaceid": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 6 05:04:31 localhost nova_compute[282193]: 2025-12-06 10:04:31.025 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Releasing lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 6 05:04:31 localhost nova_compute[282193]: 2025-12-06 10:04:31.025 282197 DEBUG nova.compute.manager [None 
req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 6 05:04:31 localhost ceph-mon[290022]: [06/Dec/2025:10:04:30] ENGINE Bus STARTING Dec 6 05:04:31 localhost ceph-mon[290022]: [06/Dec/2025:10:04:30] ENGINE Serving on http://172.18.0.106:8765 Dec 6 05:04:31 localhost ceph-mon[290022]: [06/Dec/2025:10:04:30] ENGINE Serving on https://172.18.0.106:7150 Dec 6 05:04:31 localhost ceph-mon[290022]: [06/Dec/2025:10:04:30] ENGINE Bus STARTED Dec 6 05:04:31 localhost ceph-mon[290022]: [06/Dec/2025:10:04:30] ENGINE Client ('172.18.0.106', 43646) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') Dec 6 05:04:31 localhost nova_compute[282193]: 2025-12-06 10:04:31.026 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:04:31 localhost nova_compute[282193]: 2025-12-06 10:04:31.027 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:04:31 localhost nova_compute[282193]: 2025-12-06 10:04:31.028 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:04:31 localhost nova_compute[282193]: 2025-12-06 10:04:31.029 282197 DEBUG oslo_service.periodic_task [None 
req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:04:31 localhost nova_compute[282193]: 2025-12-06 10:04:31.029 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:04:31 localhost nova_compute[282193]: 2025-12-06 10:04:31.030 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 6 05:04:31 localhost nova_compute[282193]: 2025-12-06 10:04:31.736 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:04:32 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' Dec 6 05:04:32 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' Dec 6 05:04:32 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' Dec 6 05:04:32 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' Dec 6 05:04:32 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' Dec 6 05:04:32 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' Dec 6 05:04:32 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' Dec 6 05:04:32 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' Dec 6 05:04:32 localhost ceph-mon[290022]: from='mgr.17055 ' 
entity='mgr.np0005548788.yvwbqq' Dec 6 05:04:32 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' Dec 6 05:04:32 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' Dec 6 05:04:32 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' Dec 6 05:04:33 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' Dec 6 05:04:33 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' Dec 6 05:04:33 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' Dec 6 05:04:33 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' Dec 6 05:04:33 localhost ceph-mon[290022]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Dec 6 05:04:33 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' Dec 6 05:04:33 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Dec 6 05:04:33 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' Dec 6 05:04:33 localhost ceph-mon[290022]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config rm", "who": "osd/host:np0005548786", "name": "osd_memory_target"} : dispatch Dec 6 05:04:33 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' Dec 6 05:04:33 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config rm", "who": "osd/host:np0005548786", "name": "osd_memory_target"} : dispatch Dec 6 05:04:33 localhost ceph-mon[290022]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Dec 6 05:04:33 localhost 
ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' Dec 6 05:04:33 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Dec 6 05:04:33 localhost ceph-mon[290022]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Dec 6 05:04:33 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' Dec 6 05:04:33 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Dec 6 05:04:33 localhost ceph-mon[290022]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config rm", "who": "osd/host:np0005548785", "name": "osd_memory_target"} : dispatch Dec 6 05:04:33 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' Dec 6 05:04:33 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config rm", "who": "osd/host:np0005548785", "name": "osd_memory_target"} : dispatch Dec 6 05:04:33 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Dec 6 05:04:33 localhost ceph-mon[290022]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Dec 6 05:04:33 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' Dec 6 05:04:33 localhost ceph-mon[290022]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Dec 6 05:04:33 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' Dec 6 
05:04:33 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Dec 6 05:04:33 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config rm", "who": "osd/host:np0005548787", "name": "osd_memory_target"} : dispatch Dec 6 05:04:33 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Dec 6 05:04:33 localhost ceph-mon[290022]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config rm", "who": "osd/host:np0005548787", "name": "osd_memory_target"} : dispatch Dec 6 05:04:33 localhost ceph-mon[290022]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Dec 6 05:04:33 localhost ceph-mon[290022]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 6 05:04:34 localhost nova_compute[282193]: 2025-12-06 10:04:34.040 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:04:34 localhost ceph-mon[290022]: Adjusting osd_memory_target on np0005548790.localdomain to 836.6M Dec 6 05:04:34 localhost ceph-mon[290022]: Adjusting osd_memory_target on np0005548789.localdomain to 836.6M Dec 6 05:04:34 localhost ceph-mon[290022]: Unable to set osd_memory_target on np0005548790.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Dec 6 05:04:34 localhost ceph-mon[290022]: Unable to set osd_memory_target on np0005548789.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Dec 6 05:04:34 localhost ceph-mon[290022]: Adjusting 
osd_memory_target on np0005548788.localdomain to 836.6M Dec 6 05:04:34 localhost ceph-mon[290022]: Unable to set osd_memory_target on np0005548788.localdomain to 877243801: error parsing value: Value '877243801' is below minimum 939524096 Dec 6 05:04:34 localhost ceph-mon[290022]: Updating np0005548785.localdomain:/etc/ceph/ceph.conf Dec 6 05:04:34 localhost ceph-mon[290022]: Updating np0005548786.localdomain:/etc/ceph/ceph.conf Dec 6 05:04:34 localhost ceph-mon[290022]: Updating np0005548787.localdomain:/etc/ceph/ceph.conf Dec 6 05:04:34 localhost ceph-mon[290022]: Updating np0005548788.localdomain:/etc/ceph/ceph.conf Dec 6 05:04:34 localhost ceph-mon[290022]: Updating np0005548789.localdomain:/etc/ceph/ceph.conf Dec 6 05:04:34 localhost ceph-mon[290022]: Updating np0005548790.localdomain:/etc/ceph/ceph.conf Dec 6 05:04:34 localhost ceph-mon[290022]: mon.np0005548789@4(peon).osd e88 _set_new_cache_sizes cache_size:1020044797 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:04:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5. Dec 6 05:04:34 localhost systemd[1]: tmp-crun.IqDGuR.mount: Deactivated successfully. 
Dec 6 05:04:34 localhost podman[291706]: 2025-12-06 10:04:34.86937747 +0000 UTC m=+0.116102500 container health_status 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true) Dec 6 05:04:34 localhost podman[291706]: 2025-12-06 10:04:34.942106415 +0000 UTC m=+0.188831425 container exec_died 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': 
'/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 6 05:04:34 localhost systemd[1]: 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.service: Deactivated successfully. 
Dec 6 05:04:35 localhost nova_compute[282193]: 2025-12-06 10:04:35.028 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:04:35 localhost ceph-mon[290022]: Updating np0005548786.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf Dec 6 05:04:35 localhost ceph-mon[290022]: Updating np0005548790.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf Dec 6 05:04:35 localhost ceph-mon[290022]: Updating np0005548785.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf Dec 6 05:04:35 localhost ceph-mon[290022]: Updating np0005548789.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf Dec 6 05:04:35 localhost ceph-mon[290022]: Updating np0005548788.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf Dec 6 05:04:35 localhost ceph-mon[290022]: Updating np0005548787.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf Dec 6 05:04:35 localhost ceph-mon[290022]: Updating np0005548786.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 6 05:04:35 localhost ceph-mon[290022]: Updating np0005548785.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 6 05:04:35 localhost ceph-mon[290022]: Updating np0005548789.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 6 05:04:35 localhost ceph-mon[290022]: Updating np0005548790.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 6 05:04:35 localhost ceph-mon[290022]: Updating np0005548788.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 6 05:04:35 localhost ceph-mon[290022]: Updating np0005548787.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 6 05:04:36 localhost ceph-mon[290022]: Updating np0005548786.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring Dec 6 05:04:36 localhost ceph-mon[290022]: Updating 
np0005548790.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring Dec 6 05:04:36 localhost ceph-mon[290022]: Updating np0005548785.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring Dec 6 05:04:36 localhost ceph-mon[290022]: Updating np0005548789.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring Dec 6 05:04:36 localhost ceph-mon[290022]: Updating np0005548788.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring Dec 6 05:04:36 localhost ceph-mon[290022]: Updating np0005548787.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring Dec 6 05:04:36 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' Dec 6 05:04:36 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' Dec 6 05:04:36 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' Dec 6 05:04:36 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' Dec 6 05:04:36 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' Dec 6 05:04:36 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' Dec 6 05:04:36 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' Dec 6 05:04:36 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' Dec 6 05:04:36 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' Dec 6 05:04:36 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' Dec 6 05:04:36 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' Dec 6 05:04:36 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' Dec 6 05:04:36 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' Dec 6 05:04:37 localhost 
ceph-mon[290022]: Reconfiguring mon.np0005548787 (monmap changed)... Dec 6 05:04:37 localhost ceph-mon[290022]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Dec 6 05:04:37 localhost ceph-mon[290022]: Reconfiguring daemon mon.np0005548787 on np0005548787.localdomain Dec 6 05:04:37 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' Dec 6 05:04:37 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' Dec 6 05:04:37 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548787.umwsra", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 6 05:04:37 localhost ceph-mon[290022]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548787.umwsra", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 6 05:04:38 localhost sshd[291944]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:04:38 localhost ceph-mon[290022]: Reconfiguring mgr.np0005548787.umwsra (monmap changed)... 
Dec 6 05:04:38 localhost ceph-mon[290022]: Reconfiguring daemon mgr.np0005548787.umwsra on np0005548787.localdomain Dec 6 05:04:38 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' Dec 6 05:04:38 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' Dec 6 05:04:38 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548787.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 6 05:04:38 localhost ceph-mon[290022]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548787.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 6 05:04:39 localhost nova_compute[282193]: 2025-12-06 10:04:39.078 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:04:39 localhost ceph-mon[290022]: Reconfiguring crash.np0005548787 (monmap changed)... 
Dec 6 05:04:39 localhost ceph-mon[290022]: Reconfiguring daemon crash.np0005548787 on np0005548787.localdomain Dec 6 05:04:39 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' Dec 6 05:04:39 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' Dec 6 05:04:39 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' Dec 6 05:04:39 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548788.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 6 05:04:39 localhost ceph-mon[290022]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548788.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 6 05:04:39 localhost ceph-mon[290022]: mon.np0005548789@4(peon).osd e88 _set_new_cache_sizes cache_size:1020054486 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:04:40 localhost nova_compute[282193]: 2025-12-06 10:04:40.029 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:04:40 localhost ceph-mon[290022]: Reconfiguring crash.np0005548788 (monmap changed)... Dec 6 05:04:40 localhost ceph-mon[290022]: Reconfiguring daemon crash.np0005548788 on np0005548788.localdomain Dec 6 05:04:40 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' Dec 6 05:04:40 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' Dec 6 05:04:40 localhost ceph-mon[290022]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch Dec 6 05:04:41 localhost ceph-mon[290022]: Reconfiguring osd.2 (monmap changed)... 
Dec 6 05:04:41 localhost ceph-mon[290022]: Reconfiguring daemon osd.2 on np0005548788.localdomain Dec 6 05:04:41 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' Dec 6 05:04:41 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' Dec 6 05:04:41 localhost ceph-mon[290022]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch Dec 6 05:04:42 localhost ceph-mon[290022]: Reconfiguring osd.5 (monmap changed)... Dec 6 05:04:42 localhost ceph-mon[290022]: Reconfiguring daemon osd.5 on np0005548788.localdomain Dec 6 05:04:42 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' Dec 6 05:04:42 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' Dec 6 05:04:42 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005548788.erzujf", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Dec 6 05:04:42 localhost ceph-mon[290022]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005548788.erzujf", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Dec 6 05:04:43 localhost ceph-mon[290022]: Reconfiguring mds.mds.np0005548788.erzujf (monmap changed)... 
Dec 6 05:04:43 localhost ceph-mon[290022]: Reconfiguring daemon mds.mds.np0005548788.erzujf on np0005548788.localdomain Dec 6 05:04:43 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' Dec 6 05:04:43 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' Dec 6 05:04:43 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548788.yvwbqq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 6 05:04:43 localhost ceph-mon[290022]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548788.yvwbqq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 6 05:04:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999. Dec 6 05:04:43 localhost podman[291946]: 2025-12-06 10:04:43.929947818 +0000 UTC m=+0.089767846 container health_status 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': 
'/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Dec 6 05:04:43 localhost podman[291946]: 2025-12-06 10:04:43.934667567 +0000 UTC m=+0.094487585 container exec_died 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible) Dec 6 05:04:43 localhost systemd[1]: 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.service: Deactivated successfully. 
Dec 6 05:04:44 localhost nova_compute[282193]: 2025-12-06 10:04:44.115 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:04:44 localhost ceph-mgr[288591]: ms_deliver_dispatch: unhandled message 0x56140ed131e0 mon_map magic: 0 from mon.2 v2:172.18.0.104:3300/0 Dec 6 05:04:44 localhost ceph-mon[290022]: mon.np0005548789@4(peon) e7 my rank is now 3 (was 4) Dec 6 05:04:44 localhost ceph-mgr[288591]: client.0 ms_handle_reset on v2:172.18.0.104:3300/0 Dec 6 05:04:44 localhost ceph-mgr[288591]: client.0 ms_handle_reset on v2:172.18.0.104:3300/0 Dec 6 05:04:44 localhost ceph-mgr[288591]: ms_deliver_dispatch: unhandled message 0x56140ed13080 mon_map magic: 0 from mon.3 v2:172.18.0.107:3300/0 Dec 6 05:04:44 localhost ceph-mon[290022]: log_channel(cluster) log [INF] : mon.np0005548789 calling monitor election Dec 6 05:04:44 localhost ceph-mon[290022]: paxos.3).electionLogic(26) init, last seen epoch 26 Dec 6 05:04:44 localhost ceph-mon[290022]: mon.np0005548789@3(electing) e7 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 6 05:04:44 localhost ceph-mon[290022]: mon.np0005548789@3(electing) e7 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 6 05:04:44 localhost ceph-mon[290022]: mon.np0005548789@3(electing) e7 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 6 05:04:45 localhost nova_compute[282193]: 2025-12-06 10:04:45.034 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:04:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941. 
Dec 6 05:04:45 localhost podman[291965]: 2025-12-06 10:04:45.924010125 +0000 UTC m=+0.086349637 container health_status b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 6 05:04:45 localhost podman[291965]: 2025-12-06 10:04:45.961248426 +0000 UTC m=+0.123587938 container exec_died b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': 
['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Dec 6 05:04:45 localhost systemd[1]: b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941.service: Deactivated successfully. Dec 6 05:04:46 localhost openstack_network_exporter[243110]: ERROR 10:04:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:04:46 localhost openstack_network_exporter[243110]: ERROR 10:04:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:04:46 localhost openstack_network_exporter[243110]: ERROR 10:04:46 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 6 05:04:46 localhost openstack_network_exporter[243110]: ERROR 10:04:46 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 6 05:04:46 localhost openstack_network_exporter[243110]: Dec 6 05:04:46 localhost openstack_network_exporter[243110]: ERROR 10:04:46 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 6 05:04:46 localhost openstack_network_exporter[243110]: Dec 6 05:04:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:04:47.294 160509 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 05:04:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:04:47.294 160509 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 05:04:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:04:47.295 160509 DEBUG 
oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 05:04:48 localhost ceph-mds[287313]: mds.beacon.mds.np0005548789.vxwwsq missed beacon ack from the monitors Dec 6 05:04:49 localhost nova_compute[282193]: 2025-12-06 10:04:49.167 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:04:49 localhost ceph-mon[290022]: mon.np0005548789@3(peon) e7 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 6 05:04:49 localhost ceph-mon[290022]: Reconfiguring mgr.np0005548788.yvwbqq (monmap changed)... Dec 6 05:04:49 localhost ceph-mon[290022]: Reconfiguring daemon mgr.np0005548788.yvwbqq on np0005548788.localdomain Dec 6 05:04:49 localhost ceph-mon[290022]: Remove daemons mon.np0005548785 Dec 6 05:04:49 localhost ceph-mon[290022]: Safe to remove mon.np0005548785: new quorum should be ['np0005548787', 'np0005548786', 'np0005548790', 'np0005548789', 'np0005548788'] (from ['np0005548787', 'np0005548786', 'np0005548790', 'np0005548789', 'np0005548788']) Dec 6 05:04:49 localhost ceph-mon[290022]: Removing monitor np0005548785 from monmap... Dec 6 05:04:49 localhost ceph-mon[290022]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "mon rm", "name": "np0005548785"} : dispatch Dec 6 05:04:49 localhost ceph-mon[290022]: Reconfiguring mon.np0005548788 (monmap changed)... 
Dec 6 05:04:49 localhost ceph-mon[290022]: Removing daemon mon.np0005548785 from np0005548785.localdomain -- ports [] Dec 6 05:04:49 localhost ceph-mon[290022]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Dec 6 05:04:49 localhost ceph-mon[290022]: mon.np0005548786 calling monitor election Dec 6 05:04:49 localhost ceph-mon[290022]: mon.np0005548790 calling monitor election Dec 6 05:04:49 localhost ceph-mon[290022]: mon.np0005548787 calling monitor election Dec 6 05:04:49 localhost ceph-mon[290022]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Dec 6 05:04:49 localhost ceph-mon[290022]: mon.np0005548789 calling monitor election Dec 6 05:04:49 localhost ceph-mon[290022]: mon.np0005548787 is new leader, mons np0005548787,np0005548786,np0005548790,np0005548789 in quorum (ranks 0,1,2,3) Dec 6 05:04:49 localhost ceph-mon[290022]: Health check failed: 1/5 mons down, quorum np0005548787,np0005548786,np0005548790,np0005548789 (MON_DOWN) Dec 6 05:04:49 localhost ceph-mon[290022]: Health detail: HEALTH_WARN 1/5 mons down, quorum np0005548787,np0005548786,np0005548790,np0005548789 Dec 6 05:04:49 localhost ceph-mon[290022]: [WRN] MON_DOWN: 1/5 mons down, quorum np0005548787,np0005548786,np0005548790,np0005548789 Dec 6 05:04:49 localhost ceph-mon[290022]: mon.np0005548788 (rank 4) addr [v2:172.18.0.106:3300/0,v1:172.18.0.106:6789/0] is down (out of quorum) Dec 6 05:04:49 localhost ceph-mon[290022]: mon.np0005548789@3(peon).osd e88 _set_new_cache_sizes cache_size:1020054726 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:04:50 localhost nova_compute[282193]: 2025-12-06 10:04:50.035 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:04:50 localhost sshd[291989]: main: sshd: ssh-rsa algorithm is 
disabled Dec 6 05:04:50 localhost ceph-mon[290022]: log_channel(cluster) log [INF] : mon.np0005548789 calling monitor election Dec 6 05:04:50 localhost ceph-mon[290022]: paxos.3).electionLogic(29) init, last seen epoch 29, mid-election, bumping Dec 6 05:04:50 localhost ceph-mon[290022]: mon.np0005548789@3(electing) e7 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 6 05:04:50 localhost ceph-mon[290022]: mon.np0005548789@3(electing) e7 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 6 05:04:50 localhost ceph-mon[290022]: mon.np0005548789@3(peon) e7 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 6 05:04:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e. Dec 6 05:04:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094. Dec 6 05:04:50 localhost podman[292042]: 2025-12-06 10:04:50.814347366 +0000 UTC m=+0.085372706 container health_status 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, maintainer=Red Hat, Inc., release=1755695350, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.33.7, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, container_name=openstack_network_exporter, config_id=edpm, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}) Dec 6 05:04:50 localhost podman[292042]: 2025-12-06 10:04:50.830194179 +0000 UTC m=+0.101219509 container exec_died 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e 
(image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, container_name=openstack_network_exporter, distribution-scope=public, vendor=Red Hat, Inc., version=9.6, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, architecture=x86_64, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-type=git, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal) Dec 6 05:04:50 localhost podman[292056]: Dec 6 05:04:50 localhost podman[292056]: 2025-12-06 10:04:50.842059645 +0000 UTC m=+0.081953779 container create b6383cc55f58a79b8aebd067c80c1d30f31c6a97a7c2ed9460b58c98269d7229 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sharp_gauss, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, GIT_BRANCH=main, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 
on RHEL 9 in a fully featured and supported base image., release=1763362218, ceph=True, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, RELEASE=main, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers) Dec 6 05:04:50 localhost systemd[1]: 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e.service: Deactivated successfully. Dec 6 05:04:50 localhost systemd[1]: Started libpod-conmon-b6383cc55f58a79b8aebd067c80c1d30f31c6a97a7c2ed9460b58c98269d7229.scope. Dec 6 05:04:50 localhost systemd[1]: Started libcrun container. Dec 6 05:04:50 localhost podman[292056]: 2025-12-06 10:04:50.812626541 +0000 UTC m=+0.052520755 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 6 05:04:50 localhost podman[292043]: 2025-12-06 10:04:50.932694767 +0000 UTC m=+0.203092397 container health_status bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 
'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:04:50 localhost podman[292056]: 2025-12-06 10:04:50.947626441 +0000 UTC m=+0.187520615 container init b6383cc55f58a79b8aebd067c80c1d30f31c6a97a7c2ed9460b58c98269d7229 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sharp_gauss, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, vcs-type=git, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, RELEASE=main, build-date=2025-11-26T19:44:28Z, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, io.openshift.expose-services=, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, 
GIT_CLEAN=True, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 6 05:04:50 localhost podman[292056]: 2025-12-06 10:04:50.958747943 +0000 UTC m=+0.198642107 container start b6383cc55f58a79b8aebd067c80c1d30f31c6a97a7c2ed9460b58c98269d7229 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sharp_gauss, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, ceph=True, vcs-type=git, maintainer=Guillaume Abrioux , architecture=x86_64, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, version=7, io.openshift.expose-services=, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers) Dec 6 05:04:50 localhost podman[292056]: 2025-12-06 10:04:50.959032762 +0000 UTC m=+0.198926926 container attach b6383cc55f58a79b8aebd067c80c1d30f31c6a97a7c2ed9460b58c98269d7229 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sharp_gauss, CEPH_POINT_RELEASE=, ceph=True, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, 
org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., com.redhat.component=rhceph-container, release=1763362218, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, GIT_BRANCH=main, version=7, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7) Dec 6 05:04:50 localhost systemd[1]: libpod-b6383cc55f58a79b8aebd067c80c1d30f31c6a97a7c2ed9460b58c98269d7229.scope: Deactivated successfully. 
Dec 6 05:04:50 localhost sharp_gauss[292090]: 167 167 Dec 6 05:04:50 localhost podman[292056]: 2025-12-06 10:04:50.96655198 +0000 UTC m=+0.206446144 container died b6383cc55f58a79b8aebd067c80c1d30f31c6a97a7c2ed9460b58c98269d7229 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sharp_gauss, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, version=7, io.buildah.version=1.41.4, ceph=True, maintainer=Guillaume Abrioux , release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, distribution-scope=public, RELEASE=main, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, io.openshift.expose-services=) Dec 6 05:04:51 localhost podman[292043]: 2025-12-06 10:04:51.021407519 +0000 UTC m=+0.291805139 container exec_died bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 
'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=edpm, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:04:51 localhost systemd[1]: bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094.service: Deactivated successfully. 
Dec 6 05:04:51 localhost podman[292104]: 2025-12-06 10:04:51.05425386 +0000 UTC m=+0.078183979 container remove b6383cc55f58a79b8aebd067c80c1d30f31c6a97a7c2ed9460b58c98269d7229 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sharp_gauss, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, io.openshift.tags=rhceph ceph, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, io.openshift.expose-services=, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, build-date=2025-11-26T19:44:28Z, ceph=True, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 6 05:04:51 localhost systemd[1]: libpod-conmon-b6383cc55f58a79b8aebd067c80c1d30f31c6a97a7c2ed9460b58c98269d7229.scope: Deactivated successfully. Dec 6 05:04:51 localhost ceph-mon[290022]: mon.np0005548788 calling monitor election Dec 6 05:04:51 localhost ceph-mon[290022]: Removed label mon from host np0005548785.localdomain Dec 6 05:04:51 localhost ceph-mon[290022]: Reconfiguring crash.np0005548789 (monmap changed)... 
Dec 6 05:04:51 localhost ceph-mon[290022]: Reconfiguring daemon crash.np0005548789 on np0005548789.localdomain Dec 6 05:04:51 localhost ceph-mon[290022]: mon.np0005548786 calling monitor election Dec 6 05:04:51 localhost ceph-mon[290022]: mon.np0005548789 calling monitor election Dec 6 05:04:51 localhost ceph-mon[290022]: mon.np0005548790 calling monitor election Dec 6 05:04:51 localhost ceph-mon[290022]: mon.np0005548787 calling monitor election Dec 6 05:04:51 localhost ceph-mon[290022]: mon.np0005548787 is new leader, mons np0005548787,np0005548786,np0005548790,np0005548789,np0005548788 in quorum (ranks 0,1,2,3,4) Dec 6 05:04:51 localhost ceph-mon[290022]: Health check cleared: MON_DOWN (was: 1/5 mons down, quorum np0005548787,np0005548786,np0005548790,np0005548789) Dec 6 05:04:51 localhost ceph-mon[290022]: Cluster is now healthy Dec 6 05:04:51 localhost ceph-mon[290022]: overall HEALTH_OK Dec 6 05:04:51 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' Dec 6 05:04:51 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' Dec 6 05:04:51 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' Dec 6 05:04:51 localhost ceph-mon[290022]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Dec 6 05:04:51 localhost podman[292174]: Dec 6 05:04:51 localhost podman[292174]: 2025-12-06 10:04:51.73485313 +0000 UTC m=+0.077867959 container create aab6d0e942556f090ce342b6a7c6f60155a525c26da8e5e430a68d0eaff10eac (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=priceless_lederberg, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, io.buildah.version=1.41.4, version=7, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, 
Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, GIT_BRANCH=main, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git) Dec 6 05:04:51 localhost systemd[1]: Started libpod-conmon-aab6d0e942556f090ce342b6a7c6f60155a525c26da8e5e430a68d0eaff10eac.scope. Dec 6 05:04:51 localhost systemd[1]: Started libcrun container. Dec 6 05:04:51 localhost podman[292174]: 2025-12-06 10:04:51.796544185 +0000 UTC m=+0.139559034 container init aab6d0e942556f090ce342b6a7c6f60155a525c26da8e5e430a68d0eaff10eac (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=priceless_lederberg, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, GIT_CLEAN=True, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, version=7, ceph=True, io.openshift.tags=rhceph ceph, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported 
base image., build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, io.openshift.expose-services=, distribution-scope=public, architecture=x86_64, vendor=Red Hat, Inc.) Dec 6 05:04:51 localhost podman[292174]: 2025-12-06 10:04:51.703848408 +0000 UTC m=+0.046863287 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 6 05:04:51 localhost podman[292174]: 2025-12-06 10:04:51.806715887 +0000 UTC m=+0.149730736 container start aab6d0e942556f090ce342b6a7c6f60155a525c26da8e5e430a68d0eaff10eac (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=priceless_lederberg, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, ceph=True, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, release=1763362218, RELEASE=main, maintainer=Guillaume Abrioux , vcs-type=git, GIT_CLEAN=True, CEPH_POINT_RELEASE=, version=7, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, architecture=x86_64, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 6 05:04:51 localhost podman[292174]: 2025-12-06 10:04:51.80711182 +0000 UTC m=+0.150126669 container attach aab6d0e942556f090ce342b6a7c6f60155a525c26da8e5e430a68d0eaff10eac 
(image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=priceless_lederberg, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, vendor=Red Hat, Inc., RELEASE=main, ceph=True, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, build-date=2025-11-26T19:44:28Z, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, vcs-type=git, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Dec 6 05:04:51 localhost priceless_lederberg[292189]: 167 167 Dec 6 05:04:51 localhost systemd[1]: libpod-aab6d0e942556f090ce342b6a7c6f60155a525c26da8e5e430a68d0eaff10eac.scope: Deactivated successfully. 
Dec 6 05:04:51 localhost podman[292174]: 2025-12-06 10:04:51.809444114 +0000 UTC m=+0.152458963 container died aab6d0e942556f090ce342b6a7c6f60155a525c26da8e5e430a68d0eaff10eac (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=priceless_lederberg, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, maintainer=Guillaume Abrioux , distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, io.openshift.expose-services=, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., version=7, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True) Dec 6 05:04:51 localhost systemd[1]: var-lib-containers-storage-overlay-804bc0cf7b72699221b747076d5b7b86d5e8c5022904974dc1238458c891a736-merged.mount: Deactivated successfully. Dec 6 05:04:51 localhost systemd[1]: var-lib-containers-storage-overlay-5bd68a17d9d521561017dc0a056d4164fb9192d99c40fc3c6ad21859d8ccb73d-merged.mount: Deactivated successfully. 
Dec 6 05:04:51 localhost podman[292194]: 2025-12-06 10:04:51.909596788 +0000 UTC m=+0.087718721 container remove aab6d0e942556f090ce342b6a7c6f60155a525c26da8e5e430a68d0eaff10eac (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=priceless_lederberg, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, vendor=Red Hat, Inc., GIT_CLEAN=True, com.redhat.component=rhceph-container, name=rhceph, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, version=7) Dec 6 05:04:51 localhost systemd[1]: libpod-conmon-aab6d0e942556f090ce342b6a7c6f60155a525c26da8e5e430a68d0eaff10eac.scope: Deactivated successfully. Dec 6 05:04:52 localhost ceph-mon[290022]: Removed label mgr from host np0005548785.localdomain Dec 6 05:04:52 localhost ceph-mon[290022]: Reconfiguring osd.1 (monmap changed)... 
Dec 6 05:04:52 localhost ceph-mon[290022]: Reconfiguring daemon osd.1 on np0005548789.localdomain Dec 6 05:04:52 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' Dec 6 05:04:52 localhost ceph-mon[290022]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Dec 6 05:04:52 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' Dec 6 05:04:52 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' Dec 6 05:04:52 localhost podman[292271]: Dec 6 05:04:52 localhost podman[292271]: 2025-12-06 10:04:52.78585355 +0000 UTC m=+0.082975971 container create 8082bc65610e3e53fac9a91e09859ce502e342a3711b26d36ed35ec9d1b8ebe2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=youthful_elion, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, RELEASE=main, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, ceph=True, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, io.buildah.version=1.41.4, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main) Dec 6 05:04:52 localhost systemd[1]: Started 
libpod-conmon-8082bc65610e3e53fac9a91e09859ce502e342a3711b26d36ed35ec9d1b8ebe2.scope. Dec 6 05:04:52 localhost systemd[1]: Started libcrun container. Dec 6 05:04:52 localhost podman[292271]: 2025-12-06 10:04:52.749833188 +0000 UTC m=+0.046955679 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 6 05:04:52 localhost podman[292271]: 2025-12-06 10:04:52.858575925 +0000 UTC m=+0.155698346 container init 8082bc65610e3e53fac9a91e09859ce502e342a3711b26d36ed35ec9d1b8ebe2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=youthful_elion, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, architecture=x86_64, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, distribution-scope=public, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., vcs-type=git, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, RELEASE=main, version=7, io.openshift.expose-services=, ceph=True, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, com.redhat.component=rhceph-container) Dec 6 05:04:52 localhost podman[292271]: 2025-12-06 10:04:52.868128377 +0000 UTC m=+0.165250798 container start 8082bc65610e3e53fac9a91e09859ce502e342a3711b26d36ed35ec9d1b8ebe2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=youthful_elion, vcs-type=git, architecture=x86_64, ceph=True, io.openshift.tags=rhceph ceph, 
build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=, RELEASE=main, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, name=rhceph, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) 
Dec 6 05:04:52 localhost podman[292271]: 2025-12-06 10:04:52.868458297 +0000 UTC m=+0.165580728 container attach 8082bc65610e3e53fac9a91e09859ce502e342a3711b26d36ed35ec9d1b8ebe2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=youthful_elion, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, release=1763362218, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., GIT_BRANCH=main, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, version=7, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , RELEASE=main, ceph=True, distribution-scope=public, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Dec 6 05:04:52 localhost youthful_elion[292286]: 167 167 Dec 6 05:04:52 localhost systemd[1]: libpod-8082bc65610e3e53fac9a91e09859ce502e342a3711b26d36ed35ec9d1b8ebe2.scope: Deactivated successfully. 
Dec 6 05:04:52 localhost podman[292271]: 2025-12-06 10:04:52.874549 +0000 UTC m=+0.171671481 container died 8082bc65610e3e53fac9a91e09859ce502e342a3711b26d36ed35ec9d1b8ebe2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=youthful_elion, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , RELEASE=main, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, GIT_CLEAN=True, io.openshift.expose-services=, distribution-scope=public, name=rhceph, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, GIT_BRANCH=main, ceph=True, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Dec 6 05:04:52 localhost systemd[1]: var-lib-containers-storage-overlay-cec35d441eb029aee8c5824847ad289891d3696db27359c32a86aaa970b767e9-merged.mount: Deactivated successfully. 
Dec 6 05:04:52 localhost podman[292291]: 2025-12-06 10:04:52.989297887 +0000 UTC m=+0.097309114 container remove 8082bc65610e3e53fac9a91e09859ce502e342a3711b26d36ed35ec9d1b8ebe2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=youthful_elion, io.openshift.expose-services=, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux , GIT_CLEAN=True, name=rhceph, GIT_BRANCH=main, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, RELEASE=main, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, version=7) Dec 6 05:04:52 localhost systemd[1]: libpod-conmon-8082bc65610e3e53fac9a91e09859ce502e342a3711b26d36ed35ec9d1b8ebe2.scope: Deactivated successfully. Dec 6 05:04:53 localhost ceph-mon[290022]: Reconfiguring osd.4 (monmap changed)... 
Dec 6 05:04:53 localhost ceph-mon[290022]: Reconfiguring daemon osd.4 on np0005548789.localdomain Dec 6 05:04:53 localhost ceph-mon[290022]: Removed label _admin from host np0005548785.localdomain Dec 6 05:04:53 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' Dec 6 05:04:53 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' Dec 6 05:04:53 localhost ceph-mon[290022]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005548789.vxwwsq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Dec 6 05:04:53 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005548789.vxwwsq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Dec 6 05:04:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6. 
Dec 6 05:04:53 localhost podman[292367]: 2025-12-06 10:04:53.856025947 +0000 UTC m=+0.096859211 container health_status 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Dec 6 05:04:53 localhost podman[292367]: 2025-12-06 10:04:53.898319117 +0000 UTC m=+0.139152351 container exec_died 
44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd) Dec 6 05:04:53 localhost podman[292375]: Dec 6 05:04:53 localhost systemd[1]: 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6.service: Deactivated successfully. 
Dec 6 05:04:53 localhost podman[292375]: 2025-12-06 10:04:53.914032106 +0000 UTC m=+0.131285902 container create ebf42fe8296d28a77b1ca2d2ca4de8448ebce0959a1b1cdd730e7979adbcfd49 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=optimistic_tharp, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, io.openshift.expose-services=, CEPH_POINT_RELEASE=, distribution-scope=public, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, vcs-type=git, version=7, GIT_CLEAN=True, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64) Dec 6 05:04:53 localhost podman[241090]: time="2025-12-06T10:04:53Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 6 05:04:53 localhost podman[292375]: 2025-12-06 10:04:53.881240456 +0000 UTC m=+0.098494262 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 6 05:04:54 localhost systemd[1]: Started libpod-conmon-ebf42fe8296d28a77b1ca2d2ca4de8448ebce0959a1b1cdd730e7979adbcfd49.scope. Dec 6 05:04:54 localhost systemd[1]: Started libcrun container. 
Dec 6 05:04:54 localhost podman[292375]: 2025-12-06 10:04:54.048787176 +0000 UTC m=+0.266041042 container init ebf42fe8296d28a77b1ca2d2ca4de8448ebce0959a1b1cdd730e7979adbcfd49 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=optimistic_tharp, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, GIT_CLEAN=True, io.openshift.expose-services=, vendor=Red Hat, Inc., RELEASE=main, release=1763362218, maintainer=Guillaume Abrioux , vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, build-date=2025-11-26T19:44:28Z, distribution-scope=public, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Dec 6 05:04:54 localhost optimistic_tharp[292403]: 167 167 Dec 6 05:04:54 localhost podman[292375]: 2025-12-06 10:04:54.057072369 +0000 UTC m=+0.274326235 container start ebf42fe8296d28a77b1ca2d2ca4de8448ebce0959a1b1cdd730e7979adbcfd49 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=optimistic_tharp, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, distribution-scope=public, architecture=x86_64, description=Red Hat Ceph Storage 7, 
build-date=2025-11-26T19:44:28Z, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, ceph=True, io.openshift.expose-services=, RELEASE=main, CEPH_POINT_RELEASE=) Dec 6 05:04:54 localhost podman[292375]: 2025-12-06 10:04:54.057386969 +0000 UTC m=+0.274640755 container attach ebf42fe8296d28a77b1ca2d2ca4de8448ebce0959a1b1cdd730e7979adbcfd49 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=optimistic_tharp, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, maintainer=Guillaume Abrioux , GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, architecture=x86_64, GIT_BRANCH=main, com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, RELEASE=main, io.openshift.tags=rhceph ceph, release=1763362218, ceph=True, version=7, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, vendor=Red Hat, 
Inc., distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 6 05:04:54 localhost systemd[1]: libpod-ebf42fe8296d28a77b1ca2d2ca4de8448ebce0959a1b1cdd730e7979adbcfd49.scope: Deactivated successfully. Dec 6 05:04:54 localhost podman[292375]: 2025-12-06 10:04:54.059187475 +0000 UTC m=+0.276441301 container died ebf42fe8296d28a77b1ca2d2ca4de8448ebce0959a1b1cdd730e7979adbcfd49 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=optimistic_tharp, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , GIT_CLEAN=True, io.openshift.expose-services=, com.redhat.component=rhceph-container, version=7, build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, vcs-type=git, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, vendor=Red Hat, Inc., release=1763362218, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, distribution-scope=public, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 6 05:04:54 localhost podman[241090]: @ - - [06/Dec/2025:10:04:53 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 157891 "" "Go-http-client/1.1" Dec 6 05:04:54 localhost podman[241090]: @ - - [06/Dec/2025:10:04:54 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19519 "" 
"Go-http-client/1.1" Dec 6 05:04:54 localhost podman[292408]: 2025-12-06 10:04:54.163533043 +0000 UTC m=+0.093290378 container remove ebf42fe8296d28a77b1ca2d2ca4de8448ebce0959a1b1cdd730e7979adbcfd49 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=optimistic_tharp, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, ceph=True, RELEASE=main, name=rhceph, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.buildah.version=1.41.4, release=1763362218, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc.) Dec 6 05:04:54 localhost systemd[1]: libpod-conmon-ebf42fe8296d28a77b1ca2d2ca4de8448ebce0959a1b1cdd730e7979adbcfd49.scope: Deactivated successfully. Dec 6 05:04:54 localhost nova_compute[282193]: 2025-12-06 10:04:54.205 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:04:54 localhost ceph-mon[290022]: Reconfiguring mds.mds.np0005548789.vxwwsq (monmap changed)... 
Dec 6 05:04:54 localhost ceph-mon[290022]: Reconfiguring daemon mds.mds.np0005548789.vxwwsq on np0005548789.localdomain Dec 6 05:04:54 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' Dec 6 05:04:54 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' Dec 6 05:04:54 localhost ceph-mon[290022]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548789.mzhmje", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 6 05:04:54 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548789.mzhmje", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 6 05:04:54 localhost ceph-mon[290022]: mon.np0005548789@3(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:04:54 localhost systemd[1]: var-lib-containers-storage-overlay-0945c963a2bef7b73a93aff65337751ac4be9b4623b6d9c58d9832793c21ad25-merged.mount: Deactivated successfully. 
Dec 6 05:04:54 localhost podman[292478]: Dec 6 05:04:54 localhost podman[292478]: 2025-12-06 10:04:54.919489431 +0000 UTC m=+0.063157942 container create ba921e336fee6edc024f52d8fdea366612287a7527acdb52d196de1f80200cf2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=great_kirch, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , architecture=x86_64, RELEASE=main, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, distribution-scope=public, name=rhceph, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, version=7, io.openshift.expose-services=, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, GIT_BRANCH=main) Dec 6 05:04:54 localhost systemd[1]: Started libpod-conmon-ba921e336fee6edc024f52d8fdea366612287a7527acdb52d196de1f80200cf2.scope. Dec 6 05:04:54 localhost systemd[1]: Started libcrun container. 
Dec 6 05:04:54 localhost podman[292478]: 2025-12-06 10:04:54.985992429 +0000 UTC m=+0.129660940 container init ba921e336fee6edc024f52d8fdea366612287a7527acdb52d196de1f80200cf2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=great_kirch, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , name=rhceph, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, ceph=True, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, release=1763362218, GIT_BRANCH=main, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, RELEASE=main, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Dec 6 05:04:54 localhost podman[292478]: 2025-12-06 10:04:54.889213102 +0000 UTC m=+0.032881603 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 6 05:04:54 localhost podman[292478]: 2025-12-06 10:04:54.996491152 +0000 UTC m=+0.140159643 container start ba921e336fee6edc024f52d8fdea366612287a7527acdb52d196de1f80200cf2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=great_kirch, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, 
GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, ceph=True, vcs-type=git, architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers) Dec 6 05:04:54 localhost podman[292478]: 2025-12-06 10:04:54.996708529 +0000 UTC m=+0.140377050 container attach ba921e336fee6edc024f52d8fdea366612287a7527acdb52d196de1f80200cf2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=great_kirch, io.openshift.expose-services=, version=7, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, ceph=True, CEPH_POINT_RELEASE=, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured 
and supported base image., com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux ) Dec 6 05:04:54 localhost great_kirch[292494]: 167 167 Dec 6 05:04:54 localhost systemd[1]: libpod-ba921e336fee6edc024f52d8fdea366612287a7527acdb52d196de1f80200cf2.scope: Deactivated successfully. Dec 6 05:04:54 localhost podman[292478]: 2025-12-06 10:04:54.999438305 +0000 UTC m=+0.143106836 container died ba921e336fee6edc024f52d8fdea366612287a7527acdb52d196de1f80200cf2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=great_kirch, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, GIT_BRANCH=main, RELEASE=main, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, distribution-scope=public, name=rhceph, architecture=x86_64, vcs-type=git, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, version=7, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Dec 6 05:04:55 localhost nova_compute[282193]: 2025-12-06 10:04:55.041 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:04:55 localhost podman[292499]: 2025-12-06 
10:04:55.085061999 +0000 UTC m=+0.071521108 container remove ba921e336fee6edc024f52d8fdea366612287a7527acdb52d196de1f80200cf2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=great_kirch, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, release=1763362218, vcs-type=git, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, io.openshift.expose-services=, GIT_CLEAN=True, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, ceph=True, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , io.buildah.version=1.41.4) Dec 6 05:04:55 localhost systemd[1]: libpod-conmon-ba921e336fee6edc024f52d8fdea366612287a7527acdb52d196de1f80200cf2.scope: Deactivated successfully. Dec 6 05:04:55 localhost ceph-mon[290022]: Reconfiguring mgr.np0005548789.mzhmje (monmap changed)... 
Dec 6 05:04:55 localhost ceph-mon[290022]: Reconfiguring daemon mgr.np0005548789.mzhmje on np0005548789.localdomain Dec 6 05:04:55 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' Dec 6 05:04:55 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' Dec 6 05:04:55 localhost ceph-mon[290022]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Dec 6 05:04:55 localhost sshd[292549]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:04:55 localhost podman[292568]: Dec 6 05:04:55 localhost podman[292568]: 2025-12-06 10:04:55.809988224 +0000 UTC m=+0.077787186 container create 89998159898d054faf7ea20cf916cf5a8b4a1b67da02054bfbc0bb583f09d5b4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=exciting_germain, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, vcs-type=git, maintainer=Guillaume Abrioux , release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, RELEASE=main, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, name=rhceph, version=7) Dec 6 05:04:55 localhost systemd[1]: Started 
libpod-conmon-89998159898d054faf7ea20cf916cf5a8b4a1b67da02054bfbc0bb583f09d5b4.scope. Dec 6 05:04:55 localhost systemd[1]: var-lib-containers-storage-overlay-a99cc2b8add5af5710e00540b9ec2e0bf298bd394ee07e52eee333272053d4ca-merged.mount: Deactivated successfully. Dec 6 05:04:55 localhost systemd[1]: Started libcrun container. Dec 6 05:04:55 localhost podman[292568]: 2025-12-06 10:04:55.874410806 +0000 UTC m=+0.142209768 container init 89998159898d054faf7ea20cf916cf5a8b4a1b67da02054bfbc0bb583f09d5b4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=exciting_germain, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, io.buildah.version=1.41.4, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, GIT_BRANCH=main, architecture=x86_64, ceph=True, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, distribution-scope=public, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7) Dec 6 05:04:55 localhost podman[292568]: 2025-12-06 10:04:55.778206487 +0000 UTC m=+0.046005509 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 6 05:04:55 localhost podman[292568]: 2025-12-06 10:04:55.887327385 +0000 UTC m=+0.155126347 container start 89998159898d054faf7ea20cf916cf5a8b4a1b67da02054bfbc0bb583f09d5b4 
(image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=exciting_germain, io.openshift.tags=rhceph ceph, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, maintainer=Guillaume Abrioux , architecture=x86_64, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, distribution-scope=public, GIT_CLEAN=True, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, io.openshift.expose-services=, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218) Dec 6 05:04:55 localhost podman[292568]: 2025-12-06 10:04:55.887554072 +0000 UTC m=+0.155353034 container attach 89998159898d054faf7ea20cf916cf5a8b4a1b67da02054bfbc0bb583f09d5b4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=exciting_germain, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, version=7, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, ceph=True, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, 
GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, name=rhceph, RELEASE=main, GIT_CLEAN=True, maintainer=Guillaume Abrioux , io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Dec 6 05:04:55 localhost exciting_germain[292583]: 167 167 Dec 6 05:04:55 localhost systemd[1]: libpod-89998159898d054faf7ea20cf916cf5a8b4a1b67da02054bfbc0bb583f09d5b4.scope: Deactivated successfully. Dec 6 05:04:55 localhost podman[292568]: 2025-12-06 10:04:55.889526385 +0000 UTC m=+0.157325377 container died 89998159898d054faf7ea20cf916cf5a8b4a1b67da02054bfbc0bb583f09d5b4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=exciting_germain, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., name=rhceph, vcs-type=git, GIT_CLEAN=True, GIT_BRANCH=main, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, 
GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, RELEASE=main) Dec 6 05:04:55 localhost podman[292590]: 2025-12-06 10:04:55.993039616 +0000 UTC m=+0.091697178 container remove 89998159898d054faf7ea20cf916cf5a8b4a1b67da02054bfbc0bb583f09d5b4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=exciting_germain, io.buildah.version=1.41.4, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, version=7, release=1763362218, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, architecture=x86_64, RELEASE=main, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, name=rhceph, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Dec 6 05:04:55 localhost systemd[1]: libpod-conmon-89998159898d054faf7ea20cf916cf5a8b4a1b67da02054bfbc0bb583f09d5b4.scope: Deactivated successfully. Dec 6 05:04:56 localhost ceph-mon[290022]: Reconfiguring mon.np0005548789 (monmap changed)... 
Dec 6 05:04:56 localhost ceph-mon[290022]: Reconfiguring daemon mon.np0005548789 on np0005548789.localdomain Dec 6 05:04:56 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' Dec 6 05:04:56 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' Dec 6 05:04:56 localhost ceph-mon[290022]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548790.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 6 05:04:56 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548790.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 6 05:04:56 localhost systemd[1]: var-lib-containers-storage-overlay-4d7ea6e363a1e8d567b6eaeefcf1f0c98c4dfd00868b981e4f80fa4a2f4c63b2-merged.mount: Deactivated successfully. Dec 6 05:04:57 localhost ceph-mon[290022]: Reconfiguring crash.np0005548790 (monmap changed)... Dec 6 05:04:57 localhost ceph-mon[290022]: Reconfiguring daemon crash.np0005548790 on np0005548790.localdomain Dec 6 05:04:57 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' Dec 6 05:04:57 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' Dec 6 05:04:57 localhost ceph-mon[290022]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch Dec 6 05:04:57 localhost ceph-mon[290022]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #13. Immutable memtables: 0. 
Dec 6 05:04:57 localhost ceph-mon[290022]: rocksdb: (Original Log Time 2025/12/06-10:04:57.496864) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 6 05:04:57 localhost ceph-mon[290022]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 13
Dec 6 05:04:57 localhost ceph-mon[290022]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015497496973, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 11446, "num_deletes": 523, "total_data_size": 16360845, "memory_usage": 16981888, "flush_reason": "Manual Compaction"}
Dec 6 05:04:57 localhost ceph-mon[290022]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #14: started
Dec 6 05:04:57 localhost ceph-mon[290022]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015497576012, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 14, "file_size": 11186426, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 6, "largest_seqno": 11451, "table_properties": {"data_size": 11133322, "index_size": 27526, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24517, "raw_key_size": 257404, "raw_average_key_size": 26, "raw_value_size": 10968763, "raw_average_value_size": 1120, "num_data_blocks": 1034, "num_entries": 9790, "num_filter_entries": 9790, "num_deletions": 522, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765015444, "oldest_key_time": 1765015444, "file_creation_time": 1765015497, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8b48a877-4508-4eb4-a052-67f753f228b0", "db_session_id": "ETDWGFPM6GCTACWNDM5G", "orig_file_number": 14, "seqno_to_time_mapping": "N/A"}}
Dec 6 05:04:57 localhost ceph-mon[290022]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 79223 microseconds, and 26042 cpu microseconds.
Dec 6 05:04:57 localhost ceph-mon[290022]: rocksdb: (Original Log Time 2025/12/06-10:04:57.576090) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #14: 11186426 bytes OK
Dec 6 05:04:57 localhost ceph-mon[290022]: rocksdb: (Original Log Time 2025/12/06-10:04:57.576118) [db/memtable_list.cc:519] [default] Level-0 commit table #14 started
Dec 6 05:04:57 localhost ceph-mon[290022]: rocksdb: (Original Log Time 2025/12/06-10:04:57.578088) [db/memtable_list.cc:722] [default] Level-0 commit table #14: memtable #1 done
Dec 6 05:04:57 localhost ceph-mon[290022]: rocksdb: (Original Log Time 2025/12/06-10:04:57.578111) EVENT_LOG_v1 {"time_micros": 1765015497578105, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [2, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Dec 6 05:04:57 localhost ceph-mon[290022]: rocksdb: (Original Log Time 2025/12/06-10:04:57.578130) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[2 0 0 0 0 0 0] max score 0.50
Dec 6 05:04:57 localhost ceph-mon[290022]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 16284988, prev total WAL file size 16284988, number of live WAL files 2.
Dec 6 05:04:57 localhost ceph-mon[290022]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548789/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 6 05:04:57 localhost ceph-mon[290022]: rocksdb: (Original Log Time 2025/12/06-10:04:57.580708) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003130303430' seq:72057594037927935, type:22 .. '7061786F73003130323932' seq:0, type:0; will stop at (end)
Dec 6 05:04:57 localhost ceph-mon[290022]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 2@0 files to L6, score -1.00
Dec 6 05:04:57 localhost ceph-mon[290022]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [14(10MB) 8(1887B)]
Dec 6 05:04:57 localhost ceph-mon[290022]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015497580856, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [14, 8], "score": -1, "input_data_size": 11188313, "oldest_snapshot_seqno": -1}
Dec 6 05:04:57 localhost ceph-mon[290022]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #15: 9271 keys, 11178507 bytes, temperature: kUnknown
Dec 6 05:04:57 localhost ceph-mon[290022]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015497667266, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 15, "file_size": 11178507, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11126708, "index_size": 27506, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23237, "raw_key_size": 248828, "raw_average_key_size": 26, "raw_value_size": 10968821, "raw_average_value_size": 1183, "num_data_blocks": 1033, "num_entries": 9271, "num_filter_entries": 9271, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765015444, "oldest_key_time": 0, "file_creation_time": 1765015497, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8b48a877-4508-4eb4-a052-67f753f228b0", "db_session_id": "ETDWGFPM6GCTACWNDM5G", "orig_file_number": 15, "seqno_to_time_mapping": "N/A"}}
Dec 6 05:04:57 localhost ceph-mon[290022]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 6 05:04:57 localhost ceph-mon[290022]: rocksdb: (Original Log Time 2025/12/06-10:04:57.667606) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 2@0 files to L6 => 11178507 bytes
Dec 6 05:04:57 localhost ceph-mon[290022]: rocksdb: (Original Log Time 2025/12/06-10:04:57.669367) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 129.3 rd, 129.2 wr, level 6, files in(2, 0) out(1 +0 blob) MB in(10.7, 0.0 +0.0 blob) out(10.7 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 9795, records dropped: 524 output_compression: NoCompression
Dec 6 05:04:57 localhost ceph-mon[290022]: rocksdb: (Original Log Time 2025/12/06-10:04:57.669397) EVENT_LOG_v1 {"time_micros": 1765015497669385, "job": 4, "event": "compaction_finished", "compaction_time_micros": 86504, "compaction_time_cpu_micros": 35981, "output_level": 6, "num_output_files": 1, "total_output_size": 11178507, "num_input_records": 9795, "num_output_records": 9271, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 6 05:04:57 localhost ceph-mon[290022]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548789/store.db/000014.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 6 05:04:57 localhost ceph-mon[290022]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015497670933, "job": 4, "event": "table_file_deletion", "file_number": 14}
Dec 6 05:04:57 localhost ceph-mon[290022]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548789/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 6 05:04:57 localhost ceph-mon[290022]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015497670985, "job": 4, "event": "table_file_deletion", "file_number": 8}
Dec 6 05:04:57 localhost ceph-mon[290022]: rocksdb: (Original Log Time 2025/12/06-10:04:57.580451) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 6 05:04:58 localhost ceph-mon[290022]: Reconfiguring osd.0 (monmap changed)...
Dec 6 05:04:58 localhost ceph-mon[290022]: Reconfiguring daemon osd.0 on np0005548790.localdomain
Dec 6 05:04:58 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:04:58 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:04:58 localhost ceph-mon[290022]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch
Dec 6 05:04:59 localhost nova_compute[282193]: 2025-12-06 10:04:59.231 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 6 05:04:59 localhost ceph-mon[290022]: Reconfiguring osd.3 (monmap changed)...
Dec 6 05:04:59 localhost ceph-mon[290022]: Reconfiguring daemon osd.3 on np0005548790.localdomain
Dec 6 05:04:59 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:04:59 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:04:59 localhost ceph-mon[290022]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005548790.vhcezv", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 6 05:04:59 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005548790.vhcezv", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 6 05:04:59 localhost ceph-mon[290022]: mon.np0005548789@3(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 6 05:05:00 localhost nova_compute[282193]: 2025-12-06 10:05:00.045 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 6 05:05:00 localhost ceph-mon[290022]: Reconfiguring mds.mds.np0005548790.vhcezv (monmap changed)...
Dec 6 05:05:00 localhost ceph-mon[290022]: Reconfiguring daemon mds.mds.np0005548790.vhcezv on np0005548790.localdomain
Dec 6 05:05:00 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:05:00 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:05:00 localhost ceph-mon[290022]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548790.kvkfyr", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 6 05:05:00 localhost ceph-mon[290022]: Reconfiguring mgr.np0005548790.kvkfyr (monmap changed)...
Dec 6 05:05:00 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548790.kvkfyr", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 6 05:05:00 localhost ceph-mon[290022]: Reconfiguring daemon mgr.np0005548790.kvkfyr on np0005548790.localdomain
Dec 6 05:05:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538.
Dec 6 05:05:00 localhost podman[292607]: 2025-12-06 10:05:00.94416045 +0000 UTC m=+0.099308748 container health_status d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 6 05:05:00 localhost podman[292607]: 2025-12-06 10:05:00.955489319 +0000 UTC m=+0.110637637 container exec_died d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 6 05:05:00 localhost systemd[1]: d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538.service: Deactivated successfully.
Dec 6 05:05:01 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:05:01 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:05:01 localhost ceph-mon[290022]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 6 05:05:01 localhost ceph-mon[290022]: Reconfiguring mon.np0005548790 (monmap changed)...
Dec 6 05:05:01 localhost ceph-mon[290022]: Reconfiguring daemon mon.np0005548790 on np0005548790.localdomain
Dec 6 05:05:01 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:05:01 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:05:04 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:05:04 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:05:04 localhost ceph-mon[290022]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 6 05:05:04 localhost ceph-mon[290022]: Removing np0005548785.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 6 05:05:04 localhost ceph-mon[290022]: Updating np0005548786.localdomain:/etc/ceph/ceph.conf
Dec 6 05:05:04 localhost ceph-mon[290022]: Updating np0005548787.localdomain:/etc/ceph/ceph.conf
Dec 6 05:05:04 localhost ceph-mon[290022]: Updating np0005548788.localdomain:/etc/ceph/ceph.conf
Dec 6 05:05:04 localhost ceph-mon[290022]: Updating np0005548789.localdomain:/etc/ceph/ceph.conf
Dec 6 05:05:04 localhost ceph-mon[290022]: Updating np0005548790.localdomain:/etc/ceph/ceph.conf
Dec 6 05:05:04 localhost ceph-mon[290022]: Removing np0005548785.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 6 05:05:04 localhost ceph-mon[290022]: Removing np0005548785.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring
Dec 6 05:05:04 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:05:04 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:05:04 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:05:04 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:05:04 localhost nova_compute[282193]: 2025-12-06 10:05:04.270 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 6 05:05:04 localhost ceph-mon[290022]: mon.np0005548789@3(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 6 05:05:05 localhost nova_compute[282193]: 2025-12-06 10:05:05.048 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 6 05:05:05 localhost sshd[292950]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 05:05:05 localhost ceph-mon[290022]: Added label _no_schedule to host np0005548785.localdomain
Dec 6 05:05:05 localhost ceph-mon[290022]: Added label SpecialHostLabels.DRAIN_CONF_KEYRING to host np0005548785.localdomain
Dec 6 05:05:05 localhost ceph-mon[290022]: Updating np0005548787.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 6 05:05:05 localhost ceph-mon[290022]: Updating np0005548789.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 6 05:05:05 localhost ceph-mon[290022]: Updating np0005548786.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 6 05:05:05 localhost ceph-mon[290022]: Updating np0005548790.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 6 05:05:05 localhost ceph-mon[290022]: Updating np0005548788.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 6 05:05:05 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:05:05 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:05:05 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:05:05 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:05:05 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:05:05 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:05:05 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:05:05 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:05:05 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:05:05 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:05:05 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:05:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.
Dec 6 05:05:05 localhost podman[292952]: 2025-12-06 10:05:05.775190802 +0000 UTC m=+0.093760473 container health_status 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 6 05:05:05 localhost podman[292952]: 2025-12-06 10:05:05.823249605 +0000 UTC m=+0.141819306 container exec_died 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 6 05:05:05 localhost systemd[1]: 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.service: Deactivated successfully.
Dec 6 05:05:06 localhost ceph-mon[290022]: Removing daemon crash.np0005548785 from np0005548785.localdomain -- ports []
Dec 6 05:05:07 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:05:07 localhost ceph-mon[290022]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005548785.localdomain"} : dispatch
Dec 6 05:05:07 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005548785.localdomain"} : dispatch
Dec 6 05:05:07 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005548785.localdomain"}]': finished
Dec 6 05:05:07 localhost ceph-mon[290022]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth rm", "entity": "client.crash.np0005548785.localdomain"} : dispatch
Dec 6 05:05:07 localhost ceph-mon[290022]: Removed host np0005548785.localdomain
Dec 6 05:05:07 localhost ceph-mon[290022]: Removing key for client.crash.np0005548785.localdomain
Dec 6 05:05:07 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth rm", "entity": "client.crash.np0005548785.localdomain"} : dispatch
Dec 6 05:05:07 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' cmd='[{"prefix": "auth rm", "entity": "client.crash.np0005548785.localdomain"}]': finished
Dec 6 05:05:07 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:05:07 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:05:07 localhost ceph-mon[290022]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 6 05:05:07 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:05:07 localhost ceph-mon[290022]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548786.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 6 05:05:07 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548786.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.913 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'name': 'test', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'np0005548789.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'hostId': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.914 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.934 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/memory.usage volume: 51.80859375 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.938 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'b60881d7-0300-408d-8bcb-dc23d4010ae3', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 51.80859375, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'timestamp': '2025-12-06T10:05:07.914909', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': '0b3d72c2-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 11926.183417577, 'message_signature': '7a9c1a8ec781467465a8d39e26c7bed89f9a2be2d28e0567c896376f5ea850cd'}]}, 'timestamp': '2025-12-06 10:05:07.935302', '_unique_id': '94a26eab537246ed9683ea4c9b47d41c'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.938 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.938 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.938 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.938 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.938 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.938 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.938 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.938 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.938 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.938 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.938 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.938 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.938 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.938 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.938 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.938 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.938 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.938 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.938 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.938 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.938 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.938 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.938 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.938 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.938 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.938 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.938 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.938 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.938 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.938 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.938 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.939 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec 6
05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.950 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.951 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.953 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '49b31586-5b2c-49b8-a284-ff31daf04a9e', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:05:07.939846', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 
'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '0b3ff150-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 11926.189234415, 'message_signature': '7eab78081cab732a57eb01036fde85ecbfae4cc061c9d94c9cf1423a47f223b9'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:05:07.939846', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '0b400938-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 11926.189234415, 'message_signature': 'a84a0a5f764fe817fd5e22570c6af1eec4dba9ba27d39b0f12997426b8e48537'}]}, 'timestamp': '2025-12-06 10:05:07.952117', '_unique_id': 'bb8b931dca4546f886c915802439de08'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.953 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.953 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.953 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.953 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.953 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.953 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.953 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in 
establish_connection Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.953 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.953 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.953 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.953 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.953 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.953 12 ERROR oslo_messaging.notify.messaging Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.953 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.953 12 ERROR oslo_messaging.notify.messaging Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.953 12 ERROR oslo_messaging.notify.messaging 
Traceback (most recent call last): Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.953 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.953 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.953 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.953 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.953 12 ERROR 
oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.953 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.953 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.953 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.953 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.953 12 ERROR 
oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.953 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.953 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.953 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.953 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.953 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.953 12 ERROR oslo_messaging.notify.messaging Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.954 12 INFO ceilometer.polling.manager [-] Polling 
pollster disk.device.usage in the context of pollsters Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.954 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.955 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.956 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'ff5c7c6f-15a9-4734-91b8-feaf5bedc52b', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:05:07.954939', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 
'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '0b408a98-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 11926.189234415, 'message_signature': 'dc933fb32434638c6d672f6736b46bf77b3f443f416be950ada2345dfb262dcf'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:05:07.954939', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '0b409ab0-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 11926.189234415, 'message_signature': 'd3bf84551b41cc7459ea055b091c883a411e475cf328fc867105742db4ef7212'}]}, 'timestamp': '2025-12-06 10:05:07.955831', '_unique_id': '4fdf674ab7614789b237db8569d98baf'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.956 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 
10:05:07.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.956 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.956 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.956 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.956 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.956 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 
201, in establish_connection Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.956 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.956 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.956 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.956 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.956 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.956 12 ERROR oslo_messaging.notify.messaging Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.956 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.956 12 ERROR oslo_messaging.notify.messaging Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.956 12 ERROR 
oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.956 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.956 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.956 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.956 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 
10:05:07.956 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.956 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.956 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.956 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.956 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.956 12 ERROR 
oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.956 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.956 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.956 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.956 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.956 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.956 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.958 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.961 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.incoming.packets volume: 60 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.963 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '810fb076-e1a0-408d-a2e0-33f766343f01', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 60, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:05:07.958213', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '0b4198b6-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 11926.207643981, 'message_signature': '7769664b1f732f5c999df54b0d526e7c4dc0c61c84de482e77f9efc1d930956d'}]}, 'timestamp': '2025-12-06 10:05:07.962375', '_unique_id': 'bcc1cd9acd184aa8a851143ed4c8bf32'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.963 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.963 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.963 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.963 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.963 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.963 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.963 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.963 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.963 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.963 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.963 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.963 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.963 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.963 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.963 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.963 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.963 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.963 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.963 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.963 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.963 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.963 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.963 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.963 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.963 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.963 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.963 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.963 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.963 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.963 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.963 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.964 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.964 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.965 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.966 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '8a73fa79-20d5-434c-b161-027b4d3cb523', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:05:07.965035', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '0b421868-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 11926.207643981, 'message_signature': 'bae77179e7032572752434b17cb5452058bc4c7baee3112c7c2e9ab908f8e1d6'}]}, 'timestamp': '2025-12-06 10:05:07.965749', '_unique_id': 'd3244f2115fa45df921a3ee34f935b23'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.966 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.966 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.966 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.966 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.966 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.966 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.966 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.966 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.966 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.966 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.966 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.966 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.966 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.966 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.966 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.966 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.966 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.966 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.966 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.966 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.966 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.966 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.966 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.966 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.966 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.966 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.966 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.966 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.966 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.966 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.966 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.966 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.966 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.966 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.966 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.966 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.966 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.966 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.966 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.966 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.966 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.966 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.966 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.966 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.966 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.966 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.966 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.966 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.966 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.966 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.966 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.966 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.966 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.966 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.967 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.995 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.latency volume: 1252245154 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.995 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.latency volume: 27668224 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.997 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '8809fc10-309f-43f3-9003-670ce523ab23', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1252245154, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:05:07.968092', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '0b46b3c8-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 11926.217481403, 'message_signature': 'e1a4c26d7199c90913aa8548cbd457ebc67e1f80adf37c5676ec7c839af301a9'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 27668224, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:05:07.968092', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '0b46cb4c-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 11926.217481403, 'message_signature': '3157a57e9a7ccaccd45e366fbc31958b5ad028e367df8648abd91fbbfae9f8b3'}]}, 'timestamp': '2025-12-06 10:05:07.996383', '_unique_id': '9f5f581b39984f8f955e27bfed665188'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.997 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.997 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:05:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.997 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.997 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.997 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.997 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.997 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.997 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.997 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.997 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.997 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.997 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.997 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.997 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.997 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.997 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.997 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.997 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.997 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.997 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.997 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.997 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.997 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.997 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.997 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.997 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.997 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.997 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.997 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.997 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:05:08 localhost
ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.997 12 ERROR oslo_messaging.notify.messaging Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.999 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:07.999 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.001 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '25d8d9cb-5080-4f25-87df-4c2720253048', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:05:07.999394', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 
'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '0b4757a6-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 11926.207643981, 'message_signature': 'b78c6eeb25c5b931e3e887500883258c533434c0fa20087fd407cd3b382b0126'}]}, 'timestamp': '2025-12-06 10:05:08.000132', '_unique_id': '06d9000b047142aea97dc18a307a5dab'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.001 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.001 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.001 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.001 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in 
_connection_factory Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.001 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.001 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.001 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.001 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.001 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.001 12 ERROR 
oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.001 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.001 12 ERROR oslo_messaging.notify.messaging Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.001 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.001 12 ERROR oslo_messaging.notify.messaging Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.001 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.001 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.001 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.001 12 ERROR 
oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.001 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.001 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.001 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.001 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.001 12 ERROR 
oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.001 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.001 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.001 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.001 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.001 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:05:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.001 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.001 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.001 12 ERROR oslo_messaging.notify.messaging Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.003 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.003 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.004 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.006 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '3c611ed0-e7a5-410c-aa59-b6e0059bb150', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:05:08.003375', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '0b47f31e-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 11926.189234415, 'message_signature': '87ef39184a5ccac35941c61dc17a2787f5a2f7a8a2a197851efaa1d827efdb4d'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:05:08.003375', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 
'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '0b480c50-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 11926.189234415, 'message_signature': 'f1536ff62b93301f5b4a1359ad5ff080a8bafa936f9c2c1faad017fa34588300'}]}, 'timestamp': '2025-12-06 10:05:08.004671', '_unique_id': 'a5de3d7cea544380844d6c11e5aa2840'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.006 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.006 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.006 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.006 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.006 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.006 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.006 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.006 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.006 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.006 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.006 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.006 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.006 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.006 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.006 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.006 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:05:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.006 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.006 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.006 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.006 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.006 12 ERROR oslo_messaging.notify.messaging Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.006 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.006 12 ERROR oslo_messaging.notify.messaging Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.006 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.006 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.006 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.006 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.006 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.006 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.006 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.006 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.006 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.006 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.006 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.006 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.006 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.006 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:05:08.006 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.006 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.006 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.006 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.006 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.006 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.006 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.006 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.006 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.006 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.006 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.006 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.006 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.006 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.006 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.006 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.006 12 ERROR oslo_messaging.notify.messaging Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.007 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.007 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.requests volume: 47 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.008 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.010 12 ERROR oslo_messaging.notify.messaging [-] Could not 
send notification to notifications. Payload={'message_id': '6004663b-4437-4dbd-9ac3-e6e8a39f6726', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 47, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:05:08.007903', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '0b48a430-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 11926.217481403, 'message_signature': '7b19b2cfd7289590c073132387b4dee23c12b26cbab007c5dd50a937b4584bed'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:05:08.007903', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 
'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '0b48bd30-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 11926.217481403, 'message_signature': '9bd90fc4d5a75fbbe8248e91506184155da7404a10932da89a7208378757e55e'}]}, 'timestamp': '2025-12-06 10:05:08.009204', '_unique_id': 'f0c3453de41d4508a0c1c8c998700408'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.010 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.010 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.010 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.010 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.010 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.010 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.010 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.010 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.010 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:05:08 
localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.010 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.010 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.010 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.010 12 ERROR oslo_messaging.notify.messaging Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.010 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.010 12 ERROR oslo_messaging.notify.messaging Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.010 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.010 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 
10:05:08.010 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.010 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.010 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.010 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.010 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:05:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.010 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.010 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.010 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.010 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.010 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.010 12 
ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.010 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.010 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.010 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.010 12 ERROR oslo_messaging.notify.messaging Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.012 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.012 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.012 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.incoming.bytes volume: 6809 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.014 12 ERROR oslo_messaging.notify.messaging [-] 
Could not send notification to notifications. Payload={'message_id': 'd8c313c2-6c4b-4d16-9a31-b0eeb09f96ee', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 6809, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:05:08.012649', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '0b495f56-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 11926.207643981, 'message_signature': '40347d1e6c5c67c6d80ce41e32cceb809ba3eb058faa484f9343465ac50f27f7'}]}, 'timestamp': '2025-12-06 10:05:08.013388', '_unique_id': '28d8745f240a422694393668954276e4'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.014 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 
05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.014 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.014 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.014 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.014 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.014 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.014 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.014 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.014 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.014 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.014 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.014 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.014 12 ERROR oslo_messaging.notify.messaging Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.014 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.014 12 ERROR oslo_messaging.notify.messaging Dec 6 05:05:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.014 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.014 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.014 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.014 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.014 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 
05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.014 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.014 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.014 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.014 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.014 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:05:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.014 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.014 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.014 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.014 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.014 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.014 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.014 12 ERROR oslo_messaging.notify.messaging Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:05:08.015 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.016 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.outgoing.bytes volume: 9770 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.017 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'fbd774ac-6968-4222-8327-5f062e8e8698', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 9770, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:05:08.015968', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': 
None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '0b49db02-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 11926.207643981, 'message_signature': 'c0655b7d126770ce66153d0c830ee764aaddbc171c7977d5011f4b16c28a4f05'}]}, 'timestamp': '2025-12-06 10:05:08.016451', '_unique_id': 'a4294e5151d847eebde3dc35a239733a'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.017 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.017 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.017 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.017 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.017 12 ERROR oslo_messaging.notify.messaging self._connection = 
self._establish_connection() Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.017 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.017 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.017 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.017 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.017 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.017 12 ERROR oslo_messaging.notify.messaging 
ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.017 12 ERROR oslo_messaging.notify.messaging Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.017 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.017 12 ERROR oslo_messaging.notify.messaging Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.017 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.017 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.017 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.017 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.017 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.017 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.017 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.017 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.017 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.017 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.017 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.017 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.017 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.017 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.017 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.017 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:05:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.017 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.017 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.017 12 ERROR oslo_messaging.notify.messaging Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.018 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.018 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.requests volume: 1272 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.019 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.020 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '03823540-9df5-4049-8cc7-70812847024c', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1272, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:05:08.018627', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '0b4a4362-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 11926.217481403, 'message_signature': '21ede74668d9454950e34f55621221b8ecbcc0cd205471e585f608c577c0a725'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 124, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:05:08.018627', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '0b4a5352-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 11926.217481403, 'message_signature': 'c00accf9da8ba208cce0d71f9cb04639f28331eae11ac01602262148045eb0df'}]}, 'timestamp': '2025-12-06 10:05:08.019498', '_unique_id': '8794eb034624482db942b105537cd2a2'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.020 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.020 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.020 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.020 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.020 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.020 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.020 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.020 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.020 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:05:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.020 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.020 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.020 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.020 12 ERROR oslo_messaging.notify.messaging Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.020 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.020 12 ERROR oslo_messaging.notify.messaging Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.020 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.020 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.020 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.020 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.020 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.020 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.020 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:05:08.020 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.020 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.020 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.020 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.020 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.020 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.020 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.020 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.020 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.020 12 ERROR oslo_messaging.notify.messaging Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.021 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.021 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.latency volume: 1525105336 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.022 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.latency volume: 106716064 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.023 12 ERROR oslo_messaging.notify.messaging [-] 
Could not send notification to notifications. Payload={'message_id': '4ce9acdd-17b1-4644-86a7-74407e7cfb4a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1525105336, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:05:08.021629', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '0b4ab82e-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 11926.217481403, 'message_signature': 'c1483d2004f5b3878319882d22f2cd1a5ab86b5f7a1e5a26b2f9a327200c81b3'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 106716064, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:05:08.021629', 'resource_metadata': {'display_name': 'test', 'name': 
'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '0b4ac8c8-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 11926.217481403, 'message_signature': 'eb83c71ee6eab426491a62d1c903ef3bc999d309c32ea3b6b9586db83638450e'}]}, 'timestamp': '2025-12-06 10:05:08.022499', '_unique_id': '43f937ab9f7a4dafa707ceb5e97e2cd1'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.023 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.023 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.023 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.023 12 
ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.023 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.023 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.023 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.023 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.023 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 
05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.023 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.023 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.023 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.023 12 ERROR oslo_messaging.notify.messaging Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.023 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.023 12 ERROR oslo_messaging.notify.messaging Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.023 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.023 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:05:08.023 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.023 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.023 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.023 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.023 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:05:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.023 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.023 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.023 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.023 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.023 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.023 12 
ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.023 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.023 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.023 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.023 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.024 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.024 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.bytes volume: 35560448 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.025 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.bytes volume: 2154496 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.026 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '6ed3742c-979c-4c5b-97fb-431175bbcf81', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 35560448, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:05:08.024611', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '0b4b2cb4-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 11926.217481403, 'message_signature': '4a29779e35d71d91c644e7738f12f54a0b5cc5f53c60702b4ca481a497ced313'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 2154496, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:05:08.024611', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '0b4b3fb0-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 11926.217481403, 'message_signature': '9475d14c395db178de827117646c21b0e3d144c198c13dfb05541563fe186f3c'}]}, 'timestamp': '2025-12-06 10:05:08.025548', '_unique_id': '11396ae3468c40d3bcf74d267a60a510'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.026 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.026 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.026 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.026 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.026 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.026 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.026 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.026 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.026 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.026 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.026 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.026 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.026 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.026 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.026 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.026 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.026 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.026 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.026 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.026 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.026 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.026 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.026 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.026 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.026 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.026 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.026 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.026 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.026 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.026 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.026 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.028 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.028 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.028 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/cpu volume: 13030000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.029 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '4ca591d5-4395-4451-b4d7-8232d4ec6278', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 13030000000, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'timestamp': '2025-12-06T10:05:08.028399', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': '0b4bc12e-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 11926.183417577, 'message_signature': 'ec13f51ed2510fe8c208785477cec605c2465dafa2ff9742e32c7283a5f1e119'}]}, 'timestamp': '2025-12-06 10:05:08.028908', '_unique_id': 'e751c279813c472489598bb3e142fb78'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.029 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.029 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.029 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.029 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.029 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.029 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.029 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.029 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.029 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.029 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.029 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.029 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.029 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.029 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.029 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.029 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.029 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.029 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.029 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.029 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.029 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.029 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.029 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.029 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.029 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.029 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.029 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.029 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.029 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.029 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.029 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.030 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.031 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.outgoing.packets volume: 114 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.031 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '454963c2-e32b-4470-b64b-4684c8d7896e', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 114, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:05:08.031020', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '0b4c23b2-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 11926.207643981, 'message_signature': 'c25a104bc593e2b4cd85fecd1b37ef05cbd1ade4de6ce7540569d7cb9654598e'}]}, 'timestamp': '2025-12-06 10:05:08.031331', '_unique_id': '0d407d273c3749c49b1205caa123af4d'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.031 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.031 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.031 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.031 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.031 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.031 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.031 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.031 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.031 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.031 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.031 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.031 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.031 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.031 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.031 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.031 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.031 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.031 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.031 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.031 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.031 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.031 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.031 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.031 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.031 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.031 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.031 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.031 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.031 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.031 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.031 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.032 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.032 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.033 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications.
Payload={'message_id': 'e50b25ec-8048-4c4a-94d6-e833f34fd044', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:05:08.032728', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '0b4c675a-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 11926.207643981, 'message_signature': 'b3573de2c6c54d557855d72ed6c9133ff5fdd6030918bfbd980be2883bea9f81'}]}, 'timestamp': '2025-12-06 10:05:08.033060', '_unique_id': '09b5e65864584de5ab688b56d27cf6c5'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.033 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:05:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.033 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.033 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.033 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.033 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.033 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.033 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.033 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.033 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.033 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.033 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.033 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.033 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.033 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.033 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.033 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.033 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.033 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.033 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.033 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.033 12 ERROR oslo_messaging.notify.messaging Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.033 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.033 12 ERROR oslo_messaging.notify.messaging Dec 6 05:05:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.033 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.033 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.033 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.033 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.033 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.033 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.033 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.033 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.033 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.033 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 
05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.033 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.033 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.033 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.033 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.033 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.033 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.033 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.033 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.033 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.033 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:05:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.033 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.033 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.033 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.033 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.033 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.033 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.033 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.033 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.033 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.033 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.033 12 ERROR oslo_messaging.notify.messaging Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:05:08.034 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.034 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.035 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '19784a1e-f5d7-42c9-95b8-58f61e6b66da', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:05:08.034462', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': 
{'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '0b4ca9fe-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 11926.207643981, 'message_signature': 'f0332ada12b99bd9b6a41d25cb54644a5ba71f209546e833c51097c9c39330e7'}]}, 'timestamp': '2025-12-06 10:05:08.034787', '_unique_id': '2a380c8ed5d44d8f8cedf000919987bf'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.035 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.035 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.035 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.035 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.035 12 ERROR 
oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.035 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.035 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.035 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.035 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.035 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 
10:05:08.035 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.035 12 ERROR oslo_messaging.notify.messaging Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.035 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.035 12 ERROR oslo_messaging.notify.messaging Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.035 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.035 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.035 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.035 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 
10:05:08.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.035 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.035 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.035 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.035 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.035 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:05:08.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.035 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.035 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.035 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.035 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.035 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 
450, in _reraise_as_library_errors Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.035 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.035 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.035 12 ERROR oslo_messaging.notify.messaging Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.036 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.036 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.bytes volume: 389120 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.036 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.037 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '7b7ced3a-c355-43cc-8ae5-c6f42d1b9454', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 389120, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:05:08.036159', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '0b4cec34-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 11926.217481403, 'message_signature': 'd99452fda81c5d7604401e87a1569b1ce2703bc82ef62cfa0a6a2ed238c52b3b'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:05:08.036159', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '0b4cf6ac-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 11926.217481403, 'message_signature': '4d2648c5d1351f8a329d91dab860c376df31824bd3a29214b76f2491ab480785'}]}, 'timestamp': '2025-12-06 10:05:08.036708', '_unique_id': '24c8df0ceef147a3ae368cff8597fea6'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.037 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.037 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.037 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.037 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.037 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.037 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.037 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.037 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.037 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:05:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.037 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.037 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.037 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.037 12 ERROR oslo_messaging.notify.messaging Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.037 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.037 12 ERROR oslo_messaging.notify.messaging Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.037 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.037 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.037 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.037 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.037 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.037 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.037 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:05:08.037 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.037 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.037 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.037 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.037 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.037 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.037 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.037 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.037 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.037 12 ERROR oslo_messaging.notify.messaging Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.038 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.038 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.039 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'b317da36-c575-4e90-88d0-92d76049a8b3', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:05:08.038202', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '0b4d3cfc-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 11926.207643981, 'message_signature': '7266d786bd9468884c39d9f9118f5f909bb6359817444787866ddb7d5942c53f'}]}, 'timestamp': '2025-12-06 10:05:08.038526', '_unique_id': '45d463d54a824ad3abe9ed33307f9e09'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.039 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:05:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.039 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.039 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.039 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.039 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.039 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.039 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.039 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.039 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.039 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.039 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.039 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.039 12 ERROR oslo_messaging.notify.messaging Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.039 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.039 12 ERROR oslo_messaging.notify.messaging Dec 6 05:05:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.039 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.039 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.039 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.039 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.039 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 
05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.039 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.039 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.039 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.039 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.039 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:05:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.039 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.039 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.039 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.039 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.039 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.039 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.039 12 ERROR oslo_messaging.notify.messaging Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:05:08.039 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.039 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.040 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'd324bf9a-a9fc-4e0b-bfc9-4f2d78dc7a34', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:05:08.039911', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 
'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '0b4d7ec4-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 11926.207643981, 'message_signature': 'a2affc2359bc3e6a79d7f42917fa96b141292f53368226fb878894dc43cd5298'}]}, 'timestamp': '2025-12-06 10:05:08.040213', '_unique_id': 'fbc365b884d34f58b178ed7524b869e4'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.040 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.040 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.040 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.040 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.040 12 ERROR oslo_messaging.notify.messaging self._connection 
= self._establish_connection() Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.040 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.040 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.040 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.040 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.040 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.040 12 ERROR oslo_messaging.notify.messaging 
ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.040 12 ERROR oslo_messaging.notify.messaging Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.040 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.040 12 ERROR oslo_messaging.notify.messaging Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.040 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.040 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.040 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.040 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.040 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.040 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.040 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.040 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.040 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.040 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.040 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.040 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.040 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.040 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.040 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.040 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.040 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.040 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:05:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:05:08.041 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 6 05:05:08 localhost ceph-mon[290022]: Reconfiguring crash.np0005548786 (monmap changed)...
Dec 6 05:05:08 localhost ceph-mon[290022]: Reconfiguring daemon crash.np0005548786 on np0005548786.localdomain
Dec 6 05:05:08 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:05:08 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:05:08 localhost ceph-mon[290022]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 6 05:05:09 localhost nova_compute[282193]: 2025-12-06 10:05:09.310 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 6 05:05:09 localhost ceph-mon[290022]: Reconfiguring mon.np0005548786 (monmap changed)...
Dec 6 05:05:09 localhost ceph-mon[290022]: Reconfiguring daemon mon.np0005548786 on np0005548786.localdomain
Dec 6 05:05:09 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:05:09 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:05:09 localhost ceph-mon[290022]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548786.mczynb", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 6 05:05:09 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:05:09 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548786.mczynb", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 6 05:05:09 localhost ceph-mon[290022]: mon.np0005548789@3(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 6 05:05:09 localhost sshd[293015]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 05:05:10 localhost nova_compute[282193]: 2025-12-06 10:05:10.051 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 6 05:05:10 localhost ceph-mon[290022]: Reconfiguring mgr.np0005548786.mczynb (monmap changed)...
Dec 6 05:05:10 localhost ceph-mon[290022]: Reconfiguring daemon mgr.np0005548786.mczynb on np0005548786.localdomain
Dec 6 05:05:10 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:05:10 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:05:10 localhost ceph-mon[290022]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 6 05:05:10 localhost ceph-mon[290022]: Reconfiguring mon.np0005548787 (monmap changed)...
Dec 6 05:05:10 localhost ceph-mon[290022]: Reconfiguring daemon mon.np0005548787 on np0005548787.localdomain
Dec 6 05:05:12 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:05:12 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:05:12 localhost ceph-mon[290022]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548787.umwsra", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 6 05:05:12 localhost ceph-mon[290022]: Reconfiguring mgr.np0005548787.umwsra (monmap changed)...
Dec 6 05:05:12 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548787.umwsra", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 6 05:05:12 localhost ceph-mon[290022]: Reconfiguring daemon mgr.np0005548787.umwsra on np0005548787.localdomain
Dec 6 05:05:12 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:05:12 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:05:12 localhost ceph-mon[290022]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548787.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 6 05:05:12 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548787.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 6 05:05:13 localhost ceph-mon[290022]: Reconfiguring crash.np0005548787 (monmap changed)...
Dec 6 05:05:13 localhost ceph-mon[290022]: Reconfiguring daemon crash.np0005548787 on np0005548787.localdomain
Dec 6 05:05:13 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:05:13 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:05:13 localhost ceph-mon[290022]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548788.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 6 05:05:13 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548788.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 6 05:05:14 localhost ceph-mon[290022]: Reconfiguring crash.np0005548788 (monmap changed)...
Dec 6 05:05:14 localhost ceph-mon[290022]: Reconfiguring daemon crash.np0005548788 on np0005548788.localdomain
Dec 6 05:05:14 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:05:14 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:05:14 localhost ceph-mon[290022]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Dec 6 05:05:14 localhost nova_compute[282193]: 2025-12-06 10:05:14.314 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 6 05:05:14 localhost ceph-mon[290022]: mon.np0005548789@3(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 6 05:05:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.
Dec 6 05:05:14 localhost podman[293017]: 2025-12-06 10:05:14.9335594 +0000 UTC m=+0.092333338 container health_status 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Dec 6 05:05:14 localhost podman[293017]: 2025-12-06 10:05:14.96448723 +0000 UTC m=+0.123261158 container exec_died 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 6 05:05:14 localhost systemd[1]: 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.service: Deactivated successfully.
Dec 6 05:05:15 localhost nova_compute[282193]: 2025-12-06 10:05:15.052 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 6 05:05:15 localhost ceph-mon[290022]: Reconfiguring osd.2 (monmap changed)...
Dec 6 05:05:15 localhost ceph-mon[290022]: Reconfiguring daemon osd.2 on np0005548788.localdomain
Dec 6 05:05:15 localhost ceph-mon[290022]: Saving service mon spec with placement label:mon
Dec 6 05:05:15 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:05:15 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:05:15 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:05:15 localhost ceph-mon[290022]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Dec 6 05:05:16 localhost ceph-mon[290022]: Reconfiguring osd.5 (monmap changed)...
Dec 6 05:05:16 localhost ceph-mon[290022]: Reconfiguring daemon osd.5 on np0005548788.localdomain
Dec 6 05:05:16 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:05:16 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:05:16 localhost ceph-mon[290022]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005548788.erzujf", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 6 05:05:16 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005548788.erzujf", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 6 05:05:16 localhost openstack_network_exporter[243110]: ERROR 10:05:16 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 6 05:05:16 localhost openstack_network_exporter[243110]: ERROR 10:05:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 6 05:05:16 localhost openstack_network_exporter[243110]: ERROR 10:05:16 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 6 05:05:16 localhost openstack_network_exporter[243110]:
Dec 6 05:05:16 localhost openstack_network_exporter[243110]: ERROR 10:05:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 6 05:05:16 localhost openstack_network_exporter[243110]: ERROR 10:05:16 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 6 05:05:16 localhost openstack_network_exporter[243110]:
Dec 6 05:05:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941.
Dec 6 05:05:16 localhost podman[293035]: 2025-12-06 10:05:16.90662905 +0000 UTC m=+0.071085405 container health_status b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 6 05:05:16 localhost podman[293035]: 2025-12-06 10:05:16.937502918 +0000 UTC m=+0.101959263 container exec_died b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 6 05:05:16 localhost systemd[1]: b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941.service: Deactivated successfully.
Dec 6 05:05:17 localhost ceph-mgr[288591]: ms_deliver_dispatch: unhandled message 0x56140ed13080 mon_map magic: 0 from mon.3 v2:172.18.0.107:3300/0
Dec 6 05:05:17 localhost ceph-mon[290022]: log_channel(cluster) log [INF] : mon.np0005548789 calling monitor election
Dec 6 05:05:17 localhost ceph-mon[290022]: paxos.3).electionLogic(32) init, last seen epoch 32
Dec 6 05:05:17 localhost ceph-mon[290022]: mon.np0005548789@3(electing) e8 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Dec 6 05:05:17 localhost ceph-mon[290022]: mon.np0005548789@3(electing) e8 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Dec 6 05:05:17 localhost ceph-mon[290022]: mon.np0005548789@3(electing) e8 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Dec 6 05:05:17 localhost sshd[293058]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 05:05:19 localhost nova_compute[282193]: 2025-12-06 10:05:19.347 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 6 05:05:20 localhost nova_compute[282193]: 2025-12-06 10:05:20.056 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 6 05:05:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e.
Dec 6 05:05:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094.
Dec 6 05:05:21 localhost systemd[1]: tmp-crun.LVWoyc.mount: Deactivated successfully.
Dec 6 05:05:21 localhost podman[293060]: 2025-12-06 10:05:21.925848456 +0000 UTC m=+0.078924146 container health_status 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, config_id=edpm, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, version=9.6, release=1755695350)
Dec 6 05:05:21 localhost podman[293060]: 2025-12-06 10:05:21.934150711 +0000 UTC m=+0.087226371 container exec_died 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.buildah.version=1.33.7, release=1755695350, vcs-type=git, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, build-date=2025-08-20T13:12:41, version=9.6, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 6 05:05:21 localhost systemd[1]: 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e.service: Deactivated successfully.
Dec 6 05:05:21 localhost podman[293061]: 2025-12-06 10:05:21.987577983 +0000 UTC m=+0.135840836 container health_status bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 6 05:05:21 localhost podman[293061]: 2025-12-06 10:05:21.995790355 +0000 UTC m=+0.144053148 container exec_died bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 6 05:05:22 localhost systemd[1]: bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094.service: Deactivated successfully.
Dec 6 05:05:22 localhost ceph-mon[290022]: mon.np0005548789@3(electing) e8 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Dec 6 05:05:22 localhost ceph-mon[290022]: mon.np0005548789@3(electing) e8 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Dec 6 05:05:22 localhost ceph-mon[290022]: mon.np0005548789@3(peon) e8 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Dec 6 05:05:22 localhost ceph-mon[290022]: Reconfiguring mgr.np0005548788.yvwbqq (monmap changed)...
Dec 6 05:05:22 localhost ceph-mon[290022]: Reconfiguring daemon mgr.np0005548788.yvwbqq on np0005548788.localdomain
Dec 6 05:05:22 localhost ceph-mon[290022]: Remove daemons mon.np0005548788
Dec 6 05:05:22 localhost ceph-mon[290022]: Safe to remove mon.np0005548788: new quorum should be ['np0005548787', 'np0005548786', 'np0005548790', 'np0005548789'] (from ['np0005548787', 'np0005548786', 'np0005548790', 'np0005548789'])
Dec 6 05:05:22 localhost ceph-mon[290022]: Removing monitor np0005548788 from monmap...
Dec 6 05:05:22 localhost ceph-mon[290022]: Removing daemon mon.np0005548788 from np0005548788.localdomain -- ports []
Dec 6 05:05:22 localhost ceph-mon[290022]: mon.np0005548786 calling monitor election
Dec 6 05:05:22 localhost ceph-mon[290022]: mon.np0005548789 calling monitor election
Dec 6 05:05:22 localhost ceph-mon[290022]: mon.np0005548790 calling monitor election
Dec 6 05:05:22 localhost ceph-mon[290022]: mon.np0005548787 calling monitor election
Dec 6 05:05:22 localhost ceph-mon[290022]: mon.np0005548787 is new leader, mons np0005548787,np0005548790,np0005548789 in quorum (ranks 0,2,3)
Dec 6 05:05:22 localhost ceph-mon[290022]: overall HEALTH_OK
Dec 6 05:05:22 localhost ceph-mon[290022]: mon.np0005548787 calling monitor election
Dec 6 05:05:22 localhost ceph-mon[290022]: mon.np0005548787 is new leader, mons np0005548787,np0005548786,np0005548790,np0005548789 in quorum (ranks 0,1,2,3)
Dec 6 05:05:22 localhost ceph-mon[290022]: overall HEALTH_OK
Dec 6 05:05:22 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:05:22 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:05:22 localhost podman[293151]:
Dec 6 05:05:22 localhost podman[293151]: 2025-12-06 10:05:22.962514581 +0000 UTC m=+0.091520743 container create 5918be0bbd711da785c6cf0858733beb40f7be7a66c90ddd07235e6563123205 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=priceless_tesla, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, vendor=Red Hat, Inc., io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, architecture=x86_64, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, GIT_CLEAN=True, maintainer=Guillaume Abrioux , RELEASE=main, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph)
Dec 6 05:05:23 localhost podman[293151]: 2025-12-06 10:05:22.92047548 +0000 UTC m=+0.049481692 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 6 05:05:23 localhost systemd[1]: Started libpod-conmon-5918be0bbd711da785c6cf0858733beb40f7be7a66c90ddd07235e6563123205.scope.
Dec 6 05:05:23 localhost systemd[1]: Started libcrun container.
Dec 6 05:05:23 localhost podman[293151]: 2025-12-06 10:05:23.055956372 +0000 UTC m=+0.184962564 container init 5918be0bbd711da785c6cf0858733beb40f7be7a66c90ddd07235e6563123205 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=priceless_tesla, io.openshift.tags=rhceph ceph, RELEASE=main, distribution-scope=public, GIT_CLEAN=True, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., release=1763362218, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, version=7, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, architecture=x86_64, maintainer=Guillaume Abrioux )
Dec 6 05:05:23 localhost podman[293151]: 2025-12-06 10:05:23.067995872 +0000 UTC m=+0.197002034 container start 5918be0bbd711da785c6cf0858733beb40f7be7a66c90ddd07235e6563123205 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=priceless_tesla, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, GIT_BRANCH=main, architecture=x86_64, io.openshift.expose-services=, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, io.buildah.version=1.41.4, ceph=True, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, maintainer=Guillaume Abrioux , distribution-scope=public, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Dec 6 05:05:23 localhost podman[293151]: 2025-12-06 10:05:23.068327823 +0000 UTC m=+0.197334025 container attach 5918be0bbd711da785c6cf0858733beb40f7be7a66c90ddd07235e6563123205 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=priceless_tesla, version=7, distribution-scope=public, build-date=2025-11-26T19:44:28Z, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, architecture=x86_64, vendor=Red Hat, Inc., GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux ) Dec 6 05:05:23 localhost priceless_tesla[293166]: 167 167 Dec 6 05:05:23 localhost systemd[1]: libpod-5918be0bbd711da785c6cf0858733beb40f7be7a66c90ddd07235e6563123205.scope: Deactivated successfully. 
Dec 6 05:05:23 localhost podman[293151]: 2025-12-06 10:05:23.0763826 +0000 UTC m=+0.205388762 container died 5918be0bbd711da785c6cf0858733beb40f7be7a66c90ddd07235e6563123205 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=priceless_tesla, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., release=1763362218, io.openshift.tags=rhceph ceph, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, name=rhceph, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, ceph=True, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main) Dec 6 05:05:23 localhost podman[293171]: 2025-12-06 10:05:23.156900254 +0000 UTC m=+0.074324324 container remove 5918be0bbd711da785c6cf0858733beb40f7be7a66c90ddd07235e6563123205 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=priceless_tesla, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, GIT_CLEAN=True, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, name=rhceph, vcs-type=git, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, GIT_BRANCH=main, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , 
cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, release=1763362218, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, version=7, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Dec 6 05:05:23 localhost systemd[1]: libpod-conmon-5918be0bbd711da785c6cf0858733beb40f7be7a66c90ddd07235e6563123205.scope: Deactivated successfully. Dec 6 05:05:23 localhost nova_compute[282193]: 2025-12-06 10:05:23.180 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:05:23 localhost ceph-mon[290022]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548789.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 6 05:05:23 localhost ceph-mon[290022]: Reconfiguring crash.np0005548789 (monmap changed)... 
Dec 6 05:05:23 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548789.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 6 05:05:23 localhost ceph-mon[290022]: Reconfiguring daemon crash.np0005548789 on np0005548789.localdomain Dec 6 05:05:23 localhost podman[241090]: time="2025-12-06T10:05:23Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 6 05:05:23 localhost podman[241090]: @ - - [06/Dec/2025:10:05:23 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156104 "" "Go-http-client/1.1" Dec 6 05:05:23 localhost systemd[1]: var-lib-containers-storage-overlay-0a69c25864226b0bea7e77227e53807758885aa61e10a3dd69cff582964d9ab4-merged.mount: Deactivated successfully. Dec 6 05:05:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6. 
Dec 6 05:05:23 localhost podman[293239]: Dec 6 05:05:24 localhost podman[293239]: 2025-12-06 10:05:24.007058749 +0000 UTC m=+0.136267028 container create c095304d7f0d99f6728e18e9209c902b868a7fd533b6ff12eaf950f298e99e65 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_lewin, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , ceph=True, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=rhceph-container) Dec 6 05:05:24 localhost podman[293239]: 2025-12-06 10:05:23.921608143 +0000 UTC m=+0.050816482 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 6 05:05:24 localhost systemd[1]: Started libpod-conmon-c095304d7f0d99f6728e18e9209c902b868a7fd533b6ff12eaf950f298e99e65.scope. Dec 6 05:05:24 localhost podman[241090]: @ - - [06/Dec/2025:10:05:23 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19215 "" "Go-http-client/1.1" Dec 6 05:05:24 localhost systemd[1]: Started libcrun container. 
Dec 6 05:05:24 localhost podman[293239]: 2025-12-06 10:05:24.077118702 +0000 UTC m=+0.206326971 container init c095304d7f0d99f6728e18e9209c902b868a7fd533b6ff12eaf950f298e99e65 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_lewin, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., GIT_CLEAN=True, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, maintainer=Guillaume Abrioux , name=rhceph, release=1763362218, io.buildah.version=1.41.4, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, distribution-scope=public) Dec 6 05:05:24 localhost busy_lewin[293266]: 167 167 Dec 6 05:05:24 localhost podman[293239]: 2025-12-06 10:05:24.090033068 +0000 UTC m=+0.219241337 container start c095304d7f0d99f6728e18e9209c902b868a7fd533b6ff12eaf950f298e99e65 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_lewin, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, RELEASE=main, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., vcs-type=git, io.openshift.tags=rhceph ceph, release=1763362218, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, distribution-scope=public, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, version=7, ceph=True, io.k8s.description=Red Hat Ceph Storage 7) Dec 6 05:05:24 localhost systemd[1]: libpod-c095304d7f0d99f6728e18e9209c902b868a7fd533b6ff12eaf950f298e99e65.scope: Deactivated successfully. Dec 6 05:05:24 localhost podman[293239]: 2025-12-06 10:05:24.090696179 +0000 UTC m=+0.219904538 container attach c095304d7f0d99f6728e18e9209c902b868a7fd533b6ff12eaf950f298e99e65 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_lewin, com.redhat.component=rhceph-container, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, io.buildah.version=1.41.4, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, ceph=True, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, 
io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, RELEASE=main) Dec 6 05:05:24 localhost podman[293239]: 2025-12-06 10:05:24.093906808 +0000 UTC m=+0.223115167 container died c095304d7f0d99f6728e18e9209c902b868a7fd533b6ff12eaf950f298e99e65 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_lewin, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, GIT_BRANCH=main, version=7, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, vcs-type=git, vendor=Red Hat, Inc., ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, io.openshift.expose-services=) Dec 6 05:05:24 localhost podman[293253]: 2025-12-06 10:05:24.098163168 +0000 UTC m=+0.101831590 container health_status 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack 
Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:05:24 localhost podman[293253]: 2025-12-06 10:05:24.117313077 +0000 UTC m=+0.120981539 container exec_died 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:05:24 localhost systemd[1]: 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6.service: Deactivated successfully. 
Dec 6 05:05:24 localhost nova_compute[282193]: 2025-12-06 10:05:24.178 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:05:24 localhost nova_compute[282193]: 2025-12-06 10:05:24.181 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:05:24 localhost podman[293275]: 2025-12-06 10:05:24.206779916 +0000 UTC m=+0.104934695 container remove c095304d7f0d99f6728e18e9209c902b868a7fd533b6ff12eaf950f298e99e65 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_lewin, GIT_BRANCH=main, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, ceph=True, RELEASE=main, io.openshift.tags=rhceph ceph, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., name=rhceph, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) 
Dec 6 05:05:24 localhost systemd[1]: libpod-conmon-c095304d7f0d99f6728e18e9209c902b868a7fd533b6ff12eaf950f298e99e65.scope: Deactivated successfully. Dec 6 05:05:24 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' Dec 6 05:05:24 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' Dec 6 05:05:24 localhost ceph-mon[290022]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Dec 6 05:05:24 localhost ceph-mon[290022]: Reconfiguring osd.1 (monmap changed)... Dec 6 05:05:24 localhost ceph-mon[290022]: Reconfiguring daemon osd.1 on np0005548789.localdomain Dec 6 05:05:24 localhost nova_compute[282193]: 2025-12-06 10:05:24.349 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:05:24 localhost ceph-mon[290022]: mon.np0005548789@3(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:05:24 localhost systemd[1]: var-lib-containers-storage-overlay-f78b92e4bb25f2cead10674e485ed40d7fb806b8e639da61ee5d91234c7cff2c-merged.mount: Deactivated successfully. 
Dec 6 05:05:25 localhost nova_compute[282193]: 2025-12-06 10:05:25.058 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:05:25 localhost nova_compute[282193]: 2025-12-06 10:05:25.181 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:05:25 localhost nova_compute[282193]: 2025-12-06 10:05:25.181 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:05:25 localhost podman[293355]: Dec 6 05:05:25 localhost nova_compute[282193]: 2025-12-06 10:05:25.208 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 05:05:25 localhost nova_compute[282193]: 2025-12-06 10:05:25.209 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 05:05:25 localhost nova_compute[282193]: 2025-12-06 10:05:25.210 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 05:05:25 localhost nova_compute[282193]: 2025-12-06 10:05:25.210 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Auditing locally available compute resources for np0005548789.localdomain (node: np0005548789.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 6 05:05:25 localhost nova_compute[282193]: 2025-12-06 10:05:25.211 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 6 05:05:25 localhost podman[293355]: 2025-12-06 10:05:25.222051334 +0000 UTC m=+0.083881969 container create 8d07218d62a798c0e9ec89c9e6582d688371a72d8667dbdabcb3628a99cfb2e0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=blissful_panini, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, version=7, CEPH_POINT_RELEASE=, io.openshift.expose-services=, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, com.redhat.component=rhceph-container, vendor=Red Hat, 
Inc., io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, maintainer=Guillaume Abrioux , RELEASE=main) Dec 6 05:05:25 localhost systemd[1]: Started libpod-conmon-8d07218d62a798c0e9ec89c9e6582d688371a72d8667dbdabcb3628a99cfb2e0.scope. Dec 6 05:05:25 localhost podman[293355]: 2025-12-06 10:05:25.187264245 +0000 UTC m=+0.049094920 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 6 05:05:25 localhost systemd[1]: Started libcrun container. Dec 6 05:05:25 localhost podman[293355]: 2025-12-06 10:05:25.31855859 +0000 UTC m=+0.180389215 container init 8d07218d62a798c0e9ec89c9e6582d688371a72d8667dbdabcb3628a99cfb2e0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=blissful_panini, architecture=x86_64, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, io.openshift.expose-services=, RELEASE=main, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, name=rhceph, version=7, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, distribution-scope=public, ceph=True, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc.) 
Dec 6 05:05:25 localhost podman[293355]: 2025-12-06 10:05:25.341970069 +0000 UTC m=+0.203800714 container start 8d07218d62a798c0e9ec89c9e6582d688371a72d8667dbdabcb3628a99cfb2e0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=blissful_panini, maintainer=Guillaume Abrioux , vcs-type=git, RELEASE=main, GIT_CLEAN=True, architecture=x86_64, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., distribution-scope=public, CEPH_POINT_RELEASE=, GIT_BRANCH=main, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, version=7, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 6 05:05:25 localhost podman[293355]: 2025-12-06 10:05:25.342390082 +0000 UTC m=+0.204220757 container attach 8d07218d62a798c0e9ec89c9e6582d688371a72d8667dbdabcb3628a99cfb2e0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=blissful_panini, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, architecture=x86_64, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, GIT_CLEAN=True, 
com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, vcs-type=git, io.openshift.expose-services=) Dec 6 05:05:25 localhost blissful_panini[293371]: 167 167 Dec 6 05:05:25 localhost systemd[1]: libpod-8d07218d62a798c0e9ec89c9e6582d688371a72d8667dbdabcb3628a99cfb2e0.scope: Deactivated successfully. Dec 6 05:05:25 localhost podman[293355]: 2025-12-06 10:05:25.345566329 +0000 UTC m=+0.207396954 container died 8d07218d62a798c0e9ec89c9e6582d688371a72d8667dbdabcb3628a99cfb2e0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=blissful_panini, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, RELEASE=main, GIT_BRANCH=main, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, version=7, vcs-type=git, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, 
cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Dec 6 05:05:25 localhost podman[293378]: 2025-12-06 10:05:25.455779617 +0000 UTC m=+0.094622369 container remove 8d07218d62a798c0e9ec89c9e6582d688371a72d8667dbdabcb3628a99cfb2e0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=blissful_panini, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_BRANCH=main, vendor=Red Hat, Inc., vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, ceph=True, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, architecture=x86_64) Dec 6 05:05:25 localhost systemd[1]: libpod-conmon-8d07218d62a798c0e9ec89c9e6582d688371a72d8667dbdabcb3628a99cfb2e0.scope: Deactivated successfully. 
Dec 6 05:05:25 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' Dec 6 05:05:25 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' Dec 6 05:05:25 localhost ceph-mon[290022]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Dec 6 05:05:25 localhost ceph-mon[290022]: Reconfiguring osd.4 (monmap changed)... Dec 6 05:05:25 localhost ceph-mon[290022]: Reconfiguring daemon osd.4 on np0005548789.localdomain Dec 6 05:05:25 localhost ceph-mon[290022]: mon.np0005548789@3(peon) e8 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 6 05:05:25 localhost ceph-mon[290022]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/3699539753' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 6 05:05:25 localhost nova_compute[282193]: 2025-12-06 10:05:25.708 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 6 05:05:25 localhost nova_compute[282193]: 2025-12-06 10:05:25.793 282197 DEBUG nova.virt.libvirt.driver [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 6 05:05:25 localhost nova_compute[282193]: 2025-12-06 10:05:25.794 282197 DEBUG nova.virt.libvirt.driver [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 6 05:05:25 localhost systemd[1]: tmp-crun.O1FR7V.mount: Deactivated successfully. 
Dec 6 05:05:25 localhost systemd[1]: var-lib-containers-storage-overlay-61141d3f6fd5c2203c20226adbd423a0da21902efe75cfd4383c42cfb7f8103a-merged.mount: Deactivated successfully. Dec 6 05:05:25 localhost nova_compute[282193]: 2025-12-06 10:05:25.989 282197 WARNING nova.virt.libvirt.driver [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 6 05:05:25 localhost nova_compute[282193]: 2025-12-06 10:05:25.991 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Hypervisor/Node resource view: name=np0005548789.localdomain free_ram=11559MB free_disk=41.83699035644531GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": 
"pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 6 05:05:25 localhost nova_compute[282193]: 2025-12-06 10:05:25.992 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 05:05:25 localhost nova_compute[282193]: 2025-12-06 10:05:25.992 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 05:05:26 localhost nova_compute[282193]: 2025-12-06 10:05:26.075 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Instance b7ed0a2e-9350-4933-9334-4e5e08d3e6aa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 6 05:05:26 localhost nova_compute[282193]: 2025-12-06 10:05:26.076 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 6 05:05:26 localhost nova_compute[282193]: 2025-12-06 10:05:26.076 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Final resource view: name=np0005548789.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 6 05:05:26 localhost nova_compute[282193]: 2025-12-06 10:05:26.170 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 6 05:05:26 localhost podman[293475]: Dec 6 05:05:26 localhost podman[293475]: 2025-12-06 10:05:26.312263136 +0000 UTC m=+0.072743617 container create 03150583466d44ebe09e2a3504a1ab2a2d349dd181c41c3ca8b6f33964ff39f9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cool_roentgen, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, name=rhceph, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., GIT_BRANCH=main, RELEASE=main, com.redhat.component=rhceph-container, architecture=x86_64, 
GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , version=7, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Dec 6 05:05:26 localhost systemd[1]: Started libpod-conmon-03150583466d44ebe09e2a3504a1ab2a2d349dd181c41c3ca8b6f33964ff39f9.scope. Dec 6 05:05:26 localhost systemd[1]: Started libcrun container. Dec 6 05:05:26 localhost podman[293475]: 2025-12-06 10:05:26.273891696 +0000 UTC m=+0.034372247 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 6 05:05:26 localhost podman[293475]: 2025-12-06 10:05:26.377074227 +0000 UTC m=+0.137554688 container init 03150583466d44ebe09e2a3504a1ab2a2d349dd181c41c3ca8b6f33964ff39f9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cool_roentgen, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., GIT_BRANCH=main, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, io.openshift.expose-services=, 
url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, release=1763362218, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, description=Red Hat Ceph Storage 7) Dec 6 05:05:26 localhost podman[293475]: 2025-12-06 10:05:26.385711562 +0000 UTC m=+0.146192023 container start 03150583466d44ebe09e2a3504a1ab2a2d349dd181c41c3ca8b6f33964ff39f9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cool_roentgen, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., name=rhceph, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, RELEASE=main, vcs-type=git, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, architecture=x86_64, distribution-scope=public, io.openshift.tags=rhceph ceph, version=7) Dec 6 05:05:26 localhost podman[293475]: 2025-12-06 10:05:26.386095784 +0000 UTC m=+0.146576235 container attach 03150583466d44ebe09e2a3504a1ab2a2d349dd181c41c3ca8b6f33964ff39f9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cool_roentgen, ceph=True, release=1763362218, 
vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., name=rhceph, version=7, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, architecture=x86_64, io.buildah.version=1.41.4, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 6 05:05:26 localhost cool_roentgen[293509]: 167 167 Dec 6 05:05:26 localhost systemd[1]: libpod-03150583466d44ebe09e2a3504a1ab2a2d349dd181c41c3ca8b6f33964ff39f9.scope: Deactivated successfully. 
Dec 6 05:05:26 localhost podman[293475]: 2025-12-06 10:05:26.394586785 +0000 UTC m=+0.155067276 container died 03150583466d44ebe09e2a3504a1ab2a2d349dd181c41c3ca8b6f33964ff39f9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cool_roentgen, vendor=Red Hat, Inc., architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, RELEASE=main, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, distribution-scope=public, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, name=rhceph, vcs-type=git, io.openshift.expose-services=, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Dec 6 05:05:26 localhost podman[293515]: 2025-12-06 10:05:26.48196141 +0000 UTC m=+0.081565268 container remove 03150583466d44ebe09e2a3504a1ab2a2d349dd181c41c3ca8b6f33964ff39f9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cool_roentgen, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, 
org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, architecture=x86_64, vcs-type=git, GIT_CLEAN=True, name=rhceph, RELEASE=main, vendor=Red Hat, Inc., ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z) Dec 6 05:05:26 localhost systemd[1]: libpod-conmon-03150583466d44ebe09e2a3504a1ab2a2d349dd181c41c3ca8b6f33964ff39f9.scope: Deactivated successfully. Dec 6 05:05:26 localhost nova_compute[282193]: 2025-12-06 10:05:26.672 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 6 05:05:26 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' Dec 6 05:05:26 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' Dec 6 05:05:26 localhost ceph-mon[290022]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005548789.vxwwsq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Dec 6 05:05:26 localhost ceph-mon[290022]: Reconfiguring mds.mds.np0005548789.vxwwsq (monmap changed)... 
Dec 6 05:05:26 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005548789.vxwwsq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Dec 6 05:05:26 localhost ceph-mon[290022]: Reconfiguring daemon mds.mds.np0005548789.vxwwsq on np0005548789.localdomain Dec 6 05:05:26 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' Dec 6 05:05:26 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' Dec 6 05:05:26 localhost ceph-mon[290022]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548789.mzhmje", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 6 05:05:26 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548789.mzhmje", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 6 05:05:26 localhost nova_compute[282193]: 2025-12-06 10:05:26.683 282197 DEBUG nova.compute.provider_tree [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Inventory has not changed in ProviderTree for provider: 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 6 05:05:26 localhost nova_compute[282193]: 2025-12-06 10:05:26.698 282197 DEBUG nova.scheduler.client.report [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Inventory has not changed for provider 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 
'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 6 05:05:26 localhost nova_compute[282193]: 2025-12-06 10:05:26.700 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Compute_service record updated for np0005548789.localdomain:np0005548789.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 6 05:05:26 localhost nova_compute[282193]: 2025-12-06 10:05:26.701 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.708s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 05:05:26 localhost systemd[1]: var-lib-containers-storage-overlay-f735fd3a4ab614b24b165107e92fd20284281e4316d6ae58435ca9bed9328c06-merged.mount: Deactivated successfully. 
Dec 6 05:05:27 localhost podman[293585]: Dec 6 05:05:27 localhost podman[293585]: 2025-12-06 10:05:27.287483173 +0000 UTC m=+0.085318183 container create e3942d85554a3517be780e1c43256bebb32749f7f5dd285453312c6ce63af0d7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_perlman, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, GIT_CLEAN=True, io.openshift.expose-services=, io.buildah.version=1.41.4, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, com.redhat.component=rhceph-container, GIT_BRANCH=main, maintainer=Guillaume Abrioux ) Dec 6 05:05:27 localhost systemd[1]: Started libpod-conmon-e3942d85554a3517be780e1c43256bebb32749f7f5dd285453312c6ce63af0d7.scope. Dec 6 05:05:27 localhost systemd[1]: Started libcrun container. 
Dec 6 05:05:27 localhost podman[293585]: 2025-12-06 10:05:27.255278203 +0000 UTC m=+0.053113213 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 6 05:05:27 localhost podman[293585]: 2025-12-06 10:05:27.359782474 +0000 UTC m=+0.157617484 container init e3942d85554a3517be780e1c43256bebb32749f7f5dd285453312c6ce63af0d7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_perlman, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, CEPH_POINT_RELEASE=, distribution-scope=public, vcs-type=git, GIT_CLEAN=True, version=7, io.openshift.tags=rhceph ceph, ceph=True, name=rhceph, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., RELEASE=main) Dec 6 05:05:27 localhost podman[293585]: 2025-12-06 10:05:27.373390692 +0000 UTC m=+0.171225712 container start e3942d85554a3517be780e1c43256bebb32749f7f5dd285453312c6ce63af0d7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_perlman, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, vcs-type=git, distribution-scope=public, RELEASE=main, 
GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, GIT_CLEAN=True, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7) Dec 6 05:05:27 localhost podman[293585]: 2025-12-06 10:05:27.373824405 +0000 UTC m=+0.171659455 container attach e3942d85554a3517be780e1c43256bebb32749f7f5dd285453312c6ce63af0d7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_perlman, build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, architecture=x86_64, distribution-scope=public, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on 
RHEL 9 in a fully featured and supported base image., ceph=True, com.redhat.component=rhceph-container, version=7, io.openshift.tags=rhceph ceph, name=rhceph, io.openshift.expose-services=) Dec 6 05:05:27 localhost objective_perlman[293600]: 167 167 Dec 6 05:05:27 localhost systemd[1]: libpod-e3942d85554a3517be780e1c43256bebb32749f7f5dd285453312c6ce63af0d7.scope: Deactivated successfully. Dec 6 05:05:27 localhost podman[293585]: 2025-12-06 10:05:27.378551031 +0000 UTC m=+0.176386091 container died e3942d85554a3517be780e1c43256bebb32749f7f5dd285453312c6ce63af0d7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_perlman, name=rhceph, GIT_CLEAN=True, vendor=Red Hat, Inc., version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, maintainer=Guillaume Abrioux , io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, GIT_BRANCH=main, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7) Dec 6 05:05:27 localhost podman[293605]: 2025-12-06 10:05:27.487941172 +0000 UTC m=+0.100590342 container remove e3942d85554a3517be780e1c43256bebb32749f7f5dd285453312c6ce63af0d7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_perlman, 
GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, release=1763362218, GIT_CLEAN=True, com.redhat.component=rhceph-container, vcs-type=git, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, ceph=True, RELEASE=main, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Dec 6 05:05:27 localhost systemd[1]: libpod-conmon-e3942d85554a3517be780e1c43256bebb32749f7f5dd285453312c6ce63af0d7.scope: Deactivated successfully. 
Dec 6 05:05:27 localhost nova_compute[282193]: 2025-12-06 10:05:27.702 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:05:27 localhost nova_compute[282193]: 2025-12-06 10:05:27.703 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 6 05:05:27 localhost nova_compute[282193]: 2025-12-06 10:05:27.704 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 6 05:05:27 localhost nova_compute[282193]: 2025-12-06 10:05:27.923 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 6 05:05:27 localhost nova_compute[282193]: 2025-12-06 10:05:27.924 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquired lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 6 05:05:27 localhost nova_compute[282193]: 2025-12-06 10:05:27.924 282197 DEBUG nova.network.neutron [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 6 05:05:27 localhost nova_compute[282193]: 2025-12-06 10:05:27.925 282197 
DEBUG nova.objects.instance [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b7ed0a2e-9350-4933-9334-4e5e08d3e6aa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec 6 05:05:27 localhost systemd[1]: var-lib-containers-storage-overlay-5feace6bb6912bc716bafe85ee0d4dae479d7611357a1c27e18a2dcbabd1eaf4-merged.mount: Deactivated successfully.
Dec 6 05:05:28 localhost ceph-mon[290022]: Reconfiguring mgr.np0005548789.mzhmje (monmap changed)...
Dec 6 05:05:28 localhost ceph-mon[290022]: Reconfiguring daemon mgr.np0005548789.mzhmje on np0005548789.localdomain
Dec 6 05:05:28 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:05:28 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:05:28 localhost ceph-mon[290022]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548790.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 6 05:05:28 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548790.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 6 05:05:28 localhost nova_compute[282193]: 2025-12-06 10:05:28.295 282197 DEBUG nova.network.neutron [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Updating instance_info_cache with network_info: [{"id": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "address": "fa:16:3e:64:77:f3", "network": {"id": "652b6bdc-40ce-45b7-8aa5-3bca79987993", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.162", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "3d603431c0bb4967bafc7a0aa6108bfe", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86fc0b7a-fb", "ovs_interfaceid": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 6 05:05:28 localhost nova_compute[282193]: 2025-12-06 10:05:28.435 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Releasing lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 6 05:05:28 localhost nova_compute[282193]: 2025-12-06 10:05:28.436 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec 6 05:05:28 localhost nova_compute[282193]: 2025-12-06 10:05:28.437 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 6 05:05:28 localhost nova_compute[282193]: 2025-12-06 10:05:28.438 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 6 05:05:28 localhost nova_compute[282193]: 2025-12-06 10:05:28.438 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 6 05:05:28 localhost nova_compute[282193]: 2025-12-06 10:05:28.438 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec 6 05:05:28 localhost ceph-mon[290022]: mon.np0005548789@3(peon) e8 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 6 05:05:28 localhost ceph-mon[290022]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2669262318' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 6 05:05:29 localhost ceph-mon[290022]: Reconfiguring crash.np0005548790 (monmap changed)...
Dec 6 05:05:29 localhost ceph-mon[290022]: Reconfiguring daemon crash.np0005548790 on np0005548790.localdomain
Dec 6 05:05:29 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:05:29 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:05:29 localhost ceph-mon[290022]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Dec 6 05:05:29 localhost nova_compute[282193]: 2025-12-06 10:05:29.394 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 6 05:05:29 localhost ceph-mon[290022]: mon.np0005548789@3(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 6 05:05:30 localhost ceph-mon[290022]: Reconfiguring osd.0 (monmap changed)...
Dec 6 05:05:30 localhost ceph-mon[290022]: Reconfiguring daemon osd.0 on np0005548790.localdomain
Dec 6 05:05:30 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:05:30 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:05:30 localhost ceph-mon[290022]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch
Dec 6 05:05:30 localhost nova_compute[282193]: 2025-12-06 10:05:30.061 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 6 05:05:31 localhost ceph-mon[290022]: Reconfiguring osd.3 (monmap changed)...
Dec 6 05:05:31 localhost ceph-mon[290022]: Reconfiguring daemon osd.3 on np0005548790.localdomain
Dec 6 05:05:31 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:05:31 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:05:31 localhost ceph-mon[290022]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005548790.vhcezv", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 6 05:05:31 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005548790.vhcezv", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 6 05:05:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538.
Dec 6 05:05:31 localhost podman[293622]: 2025-12-06 10:05:31.931377283 +0000 UTC m=+0.089073807 container health_status d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 6 05:05:31 localhost podman[293622]: 2025-12-06 10:05:31.971406244 +0000 UTC m=+0.129102778 container exec_died d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible)
Dec 6 05:05:31 localhost systemd[1]: d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538.service: Deactivated successfully.
Dec 6 05:05:32 localhost ceph-mon[290022]: Reconfiguring mds.mds.np0005548790.vhcezv (monmap changed)...
Dec 6 05:05:32 localhost ceph-mon[290022]: Reconfiguring daemon mds.mds.np0005548790.vhcezv on np0005548790.localdomain
Dec 6 05:05:32 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:05:32 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:05:32 localhost ceph-mon[290022]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548790.kvkfyr", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 6 05:05:32 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548790.kvkfyr", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 6 05:05:33 localhost ceph-mon[290022]: Reconfiguring mgr.np0005548790.kvkfyr (monmap changed)...
Dec 6 05:05:33 localhost ceph-mon[290022]: Reconfiguring daemon mgr.np0005548790.kvkfyr on np0005548790.localdomain
Dec 6 05:05:33 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:05:33 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:05:33 localhost sshd[293715]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 05:05:34 localhost nova_compute[282193]: 2025-12-06 10:05:34.395 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 6 05:05:34 localhost ceph-mon[290022]: mon.np0005548789@3(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 6 05:05:34 localhost ceph-mon[290022]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 6 05:05:34 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:05:34 localhost ceph-mon[290022]: Deploying daemon mon.np0005548788 on np0005548788.localdomain
Dec 6 05:05:34 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:05:34 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:05:35 localhost nova_compute[282193]: 2025-12-06 10:05:35.065 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 6 05:05:35 localhost ceph-mon[290022]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 6 05:05:35 localhost ceph-mon[290022]: Updating np0005548786.localdomain:/etc/ceph/ceph.conf
Dec 6 05:05:35 localhost ceph-mon[290022]: Updating np0005548787.localdomain:/etc/ceph/ceph.conf
Dec 6 05:05:35 localhost ceph-mon[290022]: Updating np0005548788.localdomain:/etc/ceph/ceph.conf
Dec 6 05:05:35 localhost ceph-mon[290022]: Updating np0005548789.localdomain:/etc/ceph/ceph.conf
Dec 6 05:05:35 localhost ceph-mon[290022]: Updating np0005548790.localdomain:/etc/ceph/ceph.conf
Dec 6 05:05:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.
Dec 6 05:05:35 localhost podman[293877]: 2025-12-06 10:05:35.962052571 +0000 UTC m=+0.080274668 container health_status 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 6 05:05:36 localhost podman[293877]: 2025-12-06 10:05:36.03006004 +0000 UTC m=+0.148282117 container exec_died 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_id=ovn_controller, managed_by=edpm_ansible)
Dec 6 05:05:36 localhost systemd[1]: 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.service: Deactivated successfully.
Dec 6 05:05:36 localhost ceph-mon[290022]: mon.np0005548789@3(peon) e8 adding peer [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] to list of hints
Dec 6 05:05:36 localhost ceph-mon[290022]: mon.np0005548789@3(peon) e8 adding peer [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] to list of hints
Dec 6 05:05:37 localhost ceph-mon[290022]: mon.np0005548789@3(peon) e8 adding peer [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] to list of hints
Dec 6 05:05:37 localhost ceph-mgr[288591]: ms_deliver_dispatch: unhandled message 0x561418816000 mon_map magic: 0 from mon.3 v2:172.18.0.107:3300/0
Dec 6 05:05:37 localhost ceph-mon[290022]: log_channel(cluster) log [INF] : mon.np0005548789 calling monitor election
Dec 6 05:05:37 localhost ceph-mon[290022]: paxos.3).electionLogic(38) init, last seen epoch 38
Dec 6 05:05:37 localhost ceph-mon[290022]: mon.np0005548789@3(electing) e9 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Dec 6 05:05:37 localhost ceph-mon[290022]: mon.np0005548789@3(electing) e9 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Dec 6 05:05:39 localhost nova_compute[282193]: 2025-12-06 10:05:39.433 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 6 05:05:40 localhost nova_compute[282193]: 2025-12-06 10:05:40.069 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 6 05:05:42 localhost ceph-mon[290022]: mon.np0005548789@3(electing) e9 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Dec 6 05:05:42 localhost ceph-mon[290022]: mon.np0005548789@3(electing) e9 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Dec 6 05:05:42 localhost ceph-mon[290022]: mon.np0005548789@3(peon) e9 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Dec 6 05:05:42 localhost ceph-mon[290022]: mon.np0005548786 calling monitor election
Dec 6 05:05:42 localhost ceph-mon[290022]: mon.np0005548789 calling monitor election
Dec 6 05:05:42 localhost ceph-mon[290022]: mon.np0005548787 calling monitor election
Dec 6 05:05:42 localhost ceph-mon[290022]: mon.np0005548790 calling monitor election
Dec 6 05:05:42 localhost ceph-mon[290022]: mon.np0005548788 calling monitor election
Dec 6 05:05:42 localhost ceph-mon[290022]: mon.np0005548787 is new leader, mons np0005548787,np0005548790,np0005548789 in quorum (ranks 0,2,3)
Dec 6 05:05:42 localhost ceph-mon[290022]: overall HEALTH_OK
Dec 6 05:05:42 localhost ceph-mon[290022]: mon.np0005548787 calling monitor election
Dec 6 05:05:42 localhost ceph-mon[290022]: mon.np0005548787 is new leader, mons np0005548787,np0005548786,np0005548790,np0005548789,np0005548788 in quorum (ranks 0,1,2,3,4)
Dec 6 05:05:42 localhost ceph-mon[290022]: overall HEALTH_OK
Dec 6 05:05:42 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:05:43 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:05:43 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:05:43 localhost ceph-mon[290022]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548786.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 6 05:05:43 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548786.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 6 05:05:44 localhost nova_compute[282193]: 2025-12-06 10:05:44.445 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 6 05:05:44 localhost ceph-mon[290022]: Reconfiguring crash.np0005548786 (monmap changed)...
Dec 6 05:05:44 localhost ceph-mon[290022]: Reconfiguring daemon crash.np0005548786 on np0005548786.localdomain
Dec 6 05:05:44 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:05:44 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:05:44 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:05:44 localhost ceph-mon[290022]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548786.mczynb", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 6 05:05:44 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548786.mczynb", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 6 05:05:44 localhost ceph-mon[290022]: mon.np0005548789@3(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 6 05:05:45 localhost nova_compute[282193]: 2025-12-06 10:05:45.072 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 6 05:05:45 localhost ceph-mon[290022]: Reconfiguring mgr.np0005548786.mczynb (monmap changed)...
Dec 6 05:05:45 localhost ceph-mon[290022]: Reconfiguring daemon mgr.np0005548786.mczynb on np0005548786.localdomain
Dec 6 05:05:45 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:05:45 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:05:45 localhost ceph-mon[290022]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548787.umwsra", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 6 05:05:45 localhost ceph-mon[290022]: Reconfiguring mgr.np0005548787.umwsra (monmap changed)...
Dec 6 05:05:45 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548787.umwsra", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 6 05:05:45 localhost ceph-mon[290022]: Reconfiguring daemon mgr.np0005548787.umwsra on np0005548787.localdomain
Dec 6 05:05:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.
Dec 6 05:05:45 localhost podman[294080]: 2025-12-06 10:05:45.925998311 +0000 UTC m=+0.085479958 container health_status 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Dec 6 05:05:45 localhost podman[294080]: 2025-12-06 10:05:45.956504479 +0000 UTC m=+0.115986096 container exec_died 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 6 05:05:45 localhost systemd[1]: 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.service: Deactivated successfully.
Dec 6 05:05:46 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:05:46 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:05:46 localhost ceph-mon[290022]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548787.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 6 05:05:46 localhost ceph-mon[290022]: Reconfiguring crash.np0005548787 (monmap changed)...
Dec 6 05:05:46 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548787.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 6 05:05:46 localhost ceph-mon[290022]: Reconfiguring daemon crash.np0005548787 on np0005548787.localdomain
Dec 6 05:05:46 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:05:46 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:05:46 localhost ceph-mon[290022]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548788.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 6 05:05:46 localhost ceph-mon[290022]: Reconfiguring crash.np0005548788 (monmap changed)...
Dec 6 05:05:46 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548788.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 6 05:05:46 localhost ceph-mon[290022]: Reconfiguring daemon crash.np0005548788 on np0005548788.localdomain
Dec 6 05:05:46 localhost openstack_network_exporter[243110]: ERROR 10:05:46 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 6 05:05:46 localhost openstack_network_exporter[243110]: ERROR 10:05:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 6 05:05:46 localhost openstack_network_exporter[243110]: ERROR 10:05:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 6 05:05:46 localhost openstack_network_exporter[243110]: ERROR 10:05:46 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 6 05:05:46 localhost openstack_network_exporter[243110]:
Dec 6 05:05:46 localhost openstack_network_exporter[243110]: ERROR 10:05:46 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 6 05:05:46 localhost openstack_network_exporter[243110]:
Dec 6 05:05:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:05:47.295 160509 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 6 05:05:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:05:47.295 160509 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 6 05:05:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:05:47.296 160509 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 6 05:05:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941.
Dec 6 05:05:47 localhost podman[294098]: 2025-12-06 10:05:47.725584179 +0000 UTC m=+0.068402572 container health_status b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible)
Dec 6 05:05:47 localhost podman[294098]: 2025-12-06 10:05:47.738240109 +0000 UTC m=+0.081058502 container exec_died b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible)
Dec 6 05:05:47 localhost systemd[1]: b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941.service: Deactivated successfully.
Dec 6 05:05:48 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:05:48 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:05:48 localhost ceph-mon[290022]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Dec 6 05:05:48 localhost ceph-mon[290022]: Reconfiguring osd.2 (monmap changed)...
Dec 6 05:05:48 localhost ceph-mon[290022]: Reconfiguring daemon osd.2 on np0005548788.localdomain
Dec 6 05:05:49 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:05:49 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:05:49 localhost ceph-mon[290022]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Dec 6 05:05:49 localhost ceph-mon[290022]: Reconfiguring osd.5 (monmap changed)...
Dec 6 05:05:49 localhost ceph-mon[290022]: Reconfiguring daemon osd.5 on np0005548788.localdomain
Dec 6 05:05:49 localhost nova_compute[282193]: 2025-12-06 10:05:49.481 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 6 05:05:49 localhost ceph-mon[290022]: mon.np0005548789@3(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 6 05:05:50 localhost nova_compute[282193]: 2025-12-06 10:05:50.075 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 6 05:05:50 localhost ceph-mon[290022]: Reconfig service osd.default_drive_group
Dec 6 05:05:50 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:05:50 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:05:50 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:05:50 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:05:50 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:05:50 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:05:50 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:05:50 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:05:50 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:05:50 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:05:50 localhost ceph-mon[290022]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005548788.erzujf", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 6 05:05:50 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:05:50 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005548788.erzujf", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 6 05:05:50 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:05:50 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:05:50 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:05:50 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:05:50 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:05:51 localhost ceph-mon[290022]: mon.np0005548789@3(peon).osd e89 e89: 6 total, 6 up, 6 in
Dec 6 05:05:51 localhost systemd[1]: session-65.scope: Deactivated successfully.
Dec 6 05:05:51 localhost systemd[1]: session-65.scope: Consumed 18.331s CPU time.
Dec 6 05:05:51 localhost systemd-logind[766]: Session 65 logged out. Waiting for processes to exit.
Dec 6 05:05:51 localhost systemd-logind[766]: Removed session 65.
Dec 6 05:05:51 localhost ceph-mon[290022]: Reconfiguring mds.mds.np0005548788.erzujf (monmap changed)...
Dec 6 05:05:51 localhost ceph-mon[290022]: Reconfiguring daemon mds.mds.np0005548788.erzujf on np0005548788.localdomain Dec 6 05:05:51 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' Dec 6 05:05:51 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' Dec 6 05:05:51 localhost ceph-mon[290022]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548788.yvwbqq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 6 05:05:51 localhost ceph-mon[290022]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548788.yvwbqq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 6 05:05:51 localhost ceph-mon[290022]: from='client.? 172.18.0.200:0/3205170338' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Dec 6 05:05:51 localhost ceph-mon[290022]: from='client.? ' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Dec 6 05:05:51 localhost ceph-mon[290022]: Activating manager daemon np0005548787.umwsra Dec 6 05:05:51 localhost ceph-mon[290022]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished Dec 6 05:05:51 localhost ceph-mon[290022]: Manager daemon np0005548787.umwsra is now available Dec 6 05:05:51 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005548785.localdomain.devices.0"} : dispatch Dec 6 05:05:51 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005548785.localdomain.devices.0"}]': finished Dec 6 05:05:51 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005548785.localdomain.devices.0"} : dispatch Dec 6 05:05:51 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005548785.localdomain.devices.0"}]': finished Dec 6 05:05:51 localhost sshd[294121]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:05:51 localhost systemd-logind[766]: New session 66 of user ceph-admin. Dec 6 05:05:51 localhost systemd[1]: Started Session 66 of User ceph-admin. Dec 6 05:05:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e. Dec 6 05:05:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094. 
Dec 6 05:05:52 localhost podman[294162]: 2025-12-06 10:05:52.128159765 +0000 UTC m=+0.099521830 container health_status bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute) Dec 6 05:05:52 localhost systemd[1]: tmp-crun.7Z0KSX.mount: Deactivated successfully. 
Dec 6 05:05:52 localhost podman[294161]: 2025-12-06 10:05:52.179894954 +0000 UTC m=+0.154379915 container health_status 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, container_name=openstack_network_exporter, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, name=ubi9-minimal, architecture=x86_64, vendor=Red Hat, Inc., release=1755695350, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers) Dec 6 05:05:52 localhost podman[294162]: 2025-12-06 10:05:52.197693102 +0000 UTC m=+0.169055077 container exec_died bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', 
'/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, managed_by=edpm_ansible) Dec 6 05:05:52 localhost systemd[1]: bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094.service: Deactivated successfully. Dec 6 05:05:52 localhost podman[294161]: 2025-12-06 10:05:52.218102098 +0000 UTC m=+0.192587039 container exec_died 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', 
'/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, version=9.6, container_name=openstack_network_exporter, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, config_id=edpm, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.) Dec 6 05:05:52 localhost systemd[1]: 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e.service: Deactivated successfully. 
Dec 6 05:05:52 localhost ceph-mon[290022]: removing stray HostCache host record np0005548785.localdomain.devices.0 Dec 6 05:05:52 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005548787.umwsra/mirror_snapshot_schedule"} : dispatch Dec 6 05:05:52 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005548787.umwsra/trash_purge_schedule"} : dispatch Dec 6 05:05:52 localhost podman[294271]: 2025-12-06 10:05:52.789861788 +0000 UTC m=+0.097146137 container exec ecefb99ff092b207f760d60e8fd97152c624df0e1016549024435ae641ee2e06 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548789, maintainer=Guillaume Abrioux , io.openshift.expose-services=, version=7, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, release=1763362218, vendor=Red Hat, Inc., GIT_CLEAN=True, architecture=x86_64, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, vcs-type=git, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main) Dec 6 05:05:52 localhost 
podman[294271]: 2025-12-06 10:05:52.899872138 +0000 UTC m=+0.207156517 container exec_died ecefb99ff092b207f760d60e8fd97152c624df0e1016549024435ae641ee2e06 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548789, RELEASE=main, version=7, vcs-type=git, distribution-scope=public, release=1763362218, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, architecture=x86_64, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7) Dec 6 05:05:53 localhost systemd[1]: tmp-crun.wIj4F5.mount: Deactivated successfully. 
Dec 6 05:05:53 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:05:53 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:05:53 localhost podman[241090]: time="2025-12-06T10:05:53Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 6 05:05:53 localhost podman[241090]: @ - - [06/Dec/2025:10:05:53 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156104 "" "Go-http-client/1.1" Dec 6 05:05:53 localhost podman[241090]: @ - - [06/Dec/2025:10:05:53 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19227 "" "Go-http-client/1.1" Dec 6 05:05:54 localhost nova_compute[282193]: 2025-12-06 10:05:54.494 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:05:54 localhost ceph-mon[290022]: [06/Dec/2025:10:05:52] ENGINE Bus STARTING Dec 6 05:05:54 localhost ceph-mon[290022]: [06/Dec/2025:10:05:52] ENGINE Serving on https://172.18.0.105:7150 Dec 6 05:05:54 localhost ceph-mon[290022]: [06/Dec/2025:10:05:52] ENGINE Client ('172.18.0.105', 55368) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') Dec 6 05:05:54 localhost ceph-mon[290022]: [06/Dec/2025:10:05:53] ENGINE Serving on http://172.18.0.105:8765 Dec 6 05:05:54 localhost ceph-mon[290022]: [06/Dec/2025:10:05:53] ENGINE Bus STARTED Dec 6 05:05:54 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:05:54 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:05:54 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 
6 05:05:54 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:05:54 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:05:54 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:05:54 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:05:54 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:05:54 localhost ceph-mon[290022]: mon.np0005548789@3(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:05:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6. Dec 6 05:05:54 localhost podman[294477]: 2025-12-06 10:05:54.811903393 +0000 UTC m=+0.103610095 container health_status 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true) Dec 6 05:05:54 localhost podman[294477]: 2025-12-06 10:05:54.828427621 +0000 UTC m=+0.120134323 container exec_died 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', 
'/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:05:54 localhost systemd[1]: 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6.service: Deactivated successfully. Dec 6 05:05:55 localhost nova_compute[282193]: 2025-12-06 10:05:55.081 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:05:56 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:05:56 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:05:56 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:05:56 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "config rm", "who": "osd/host:np0005548787", "name": "osd_memory_target"} : dispatch Dec 6 05:05:56 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:05:56 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Dec 6 05:05:56 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "config rm", 
"who": "osd.3", "name": "osd_memory_target"} : dispatch Dec 6 05:05:56 localhost ceph-mon[290022]: Adjusting osd_memory_target on np0005548790.localdomain to 836.6M Dec 6 05:05:56 localhost ceph-mon[290022]: Unable to set osd_memory_target on np0005548790.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Dec 6 05:05:56 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:05:56 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:05:56 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:05:56 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Dec 6 05:05:56 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:05:56 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Dec 6 05:05:56 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Dec 6 05:05:56 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Dec 6 05:05:56 localhost ceph-mon[290022]: Adjusting osd_memory_target on np0005548789.localdomain to 836.6M Dec 6 05:05:56 localhost ceph-mon[290022]: Adjusting osd_memory_target on np0005548788.localdomain to 836.6M Dec 6 05:05:56 localhost ceph-mon[290022]: Unable to set osd_memory_target on np0005548789.localdomain to 877246668: error parsing value: Value 
'877246668' is below minimum 939524096 Dec 6 05:05:56 localhost ceph-mon[290022]: Unable to set osd_memory_target on np0005548788.localdomain to 877243801: error parsing value: Value '877243801' is below minimum 939524096 Dec 6 05:05:56 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:05:56 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:05:56 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "config rm", "who": "osd/host:np0005548786", "name": "osd_memory_target"} : dispatch Dec 6 05:05:56 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 6 05:05:56 localhost ceph-mon[290022]: Updating np0005548786.localdomain:/etc/ceph/ceph.conf Dec 6 05:05:56 localhost ceph-mon[290022]: Updating np0005548787.localdomain:/etc/ceph/ceph.conf Dec 6 05:05:56 localhost ceph-mon[290022]: Updating np0005548788.localdomain:/etc/ceph/ceph.conf Dec 6 05:05:56 localhost ceph-mon[290022]: Updating np0005548789.localdomain:/etc/ceph/ceph.conf Dec 6 05:05:56 localhost ceph-mon[290022]: Updating np0005548790.localdomain:/etc/ceph/ceph.conf Dec 6 05:05:57 localhost ceph-mon[290022]: Updating np0005548789.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf Dec 6 05:05:57 localhost ceph-mon[290022]: Updating np0005548787.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf Dec 6 05:05:57 localhost ceph-mon[290022]: Updating np0005548788.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf Dec 6 05:05:57 localhost ceph-mon[290022]: Updating np0005548790.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf Dec 6 05:05:57 localhost ceph-mon[290022]: Updating 
np0005548786.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf Dec 6 05:05:58 localhost ceph-mon[290022]: Updating np0005548787.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 6 05:05:58 localhost ceph-mon[290022]: Updating np0005548788.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 6 05:05:58 localhost ceph-mon[290022]: Updating np0005548790.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 6 05:05:58 localhost ceph-mon[290022]: Updating np0005548789.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 6 05:05:58 localhost ceph-mon[290022]: Updating np0005548786.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 6 05:05:58 localhost ceph-mon[290022]: Updating np0005548788.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring Dec 6 05:05:58 localhost ceph-mon[290022]: Updating np0005548787.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring Dec 6 05:05:58 localhost ceph-mon[290022]: Updating np0005548790.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring Dec 6 05:05:58 localhost ceph-mon[290022]: Updating np0005548786.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring Dec 6 05:05:58 localhost ceph-mon[290022]: Updating np0005548789.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring Dec 6 05:05:58 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:05:58 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:05:58 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:05:58 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:05:58 localhost ceph-mon[290022]: from='mgr.24103 
172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:05:58 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:05:58 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:05:58 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:05:58 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:05:58 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:05:59 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:05:59 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch Dec 6 05:05:59 localhost nova_compute[282193]: 2025-12-06 10:05:59.509 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:05:59 localhost ceph-mon[290022]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #16. Immutable memtables: 0. 
Dec 6 05:05:59 localhost ceph-mon[290022]: rocksdb: (Original Log Time 2025/12/06-10:05:59.583406) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Dec 6 05:05:59 localhost ceph-mon[290022]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 16 Dec 6 05:05:59 localhost ceph-mon[290022]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015559583457, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 2849, "num_deletes": 255, "total_data_size": 9200145, "memory_usage": 9797712, "flush_reason": "Manual Compaction"} Dec 6 05:05:59 localhost ceph-mon[290022]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #17: started Dec 6 05:05:59 localhost ceph-mon[290022]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015559627207, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 17, "file_size": 5563548, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 11456, "largest_seqno": 14300, "table_properties": {"data_size": 5551678, "index_size": 7415, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3397, "raw_key_size": 30447, "raw_average_key_size": 22, "raw_value_size": 5525769, "raw_average_value_size": 4129, "num_data_blocks": 320, "num_entries": 1338, "num_filter_entries": 1338, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; 
max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765015498, "oldest_key_time": 1765015498, "file_creation_time": 1765015559, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8b48a877-4508-4eb4-a052-67f753f228b0", "db_session_id": "ETDWGFPM6GCTACWNDM5G", "orig_file_number": 17, "seqno_to_time_mapping": "N/A"}} Dec 6 05:05:59 localhost ceph-mon[290022]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 43896 microseconds, and 11919 cpu microseconds. Dec 6 05:05:59 localhost ceph-mon[290022]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Dec 6 05:05:59 localhost ceph-mon[290022]: rocksdb: (Original Log Time 2025/12/06-10:05:59.627291) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #17: 5563548 bytes OK Dec 6 05:05:59 localhost ceph-mon[290022]: rocksdb: (Original Log Time 2025/12/06-10:05:59.627329) [db/memtable_list.cc:519] [default] Level-0 commit table #17 started Dec 6 05:05:59 localhost ceph-mon[290022]: rocksdb: (Original Log Time 2025/12/06-10:05:59.629110) [db/memtable_list.cc:722] [default] Level-0 commit table #17: memtable #1 done Dec 6 05:05:59 localhost ceph-mon[290022]: rocksdb: (Original Log Time 2025/12/06-10:05:59.629143) EVENT_LOG_v1 {"time_micros": 1765015559629134, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Dec 6 05:05:59 localhost ceph-mon[290022]: rocksdb: (Original Log Time 2025/12/06-10:05:59.629170) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Dec 6 05:05:59 localhost ceph-mon[290022]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 9186391, prev total WAL file size 
9202615, number of live WAL files 2. Dec 6 05:05:59 localhost ceph-mon[290022]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548789/store.db/000013.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 6 05:05:59 localhost ceph-mon[290022]: rocksdb: (Original Log Time 2025/12/06-10:05:59.631151) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003130323931' seq:72057594037927935, type:22 .. '7061786F73003130353433' seq:0, type:0; will stop at (end) Dec 6 05:05:59 localhost ceph-mon[290022]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00 Dec 6 05:05:59 localhost ceph-mon[290022]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [17(5433KB)], [15(10MB)] Dec 6 05:05:59 localhost ceph-mon[290022]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015559631211, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [17], "files_L6": [15], "score": -1, "input_data_size": 16742055, "oldest_snapshot_seqno": -1} Dec 6 05:05:59 localhost ceph-mon[290022]: mon.np0005548789@3(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:05:59 localhost ceph-mon[290022]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #18: 10057 keys, 15396843 bytes, temperature: kUnknown Dec 6 05:05:59 localhost ceph-mon[290022]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015559753281, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 18, "file_size": 15396843, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15338725, "index_size": 31905, "index_partitions": 0, 
"top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25157, "raw_key_size": 267726, "raw_average_key_size": 26, "raw_value_size": 15166053, "raw_average_value_size": 1508, "num_data_blocks": 1223, "num_entries": 10057, "num_filter_entries": 10057, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765015444, "oldest_key_time": 0, "file_creation_time": 1765015559, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8b48a877-4508-4eb4-a052-67f753f228b0", "db_session_id": "ETDWGFPM6GCTACWNDM5G", "orig_file_number": 18, "seqno_to_time_mapping": "N/A"}} Dec 6 05:05:59 localhost ceph-mon[290022]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Dec 6 05:05:59 localhost ceph-mon[290022]: rocksdb: (Original Log Time 2025/12/06-10:05:59.753874) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 15396843 bytes Dec 6 05:05:59 localhost ceph-mon[290022]: rocksdb: (Original Log Time 2025/12/06-10:05:59.756000) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 136.8 rd, 125.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(5.3, 10.7 +0.0 blob) out(14.7 +0.0 blob), read-write-amplify(5.8) write-amplify(2.8) OK, records in: 10609, records dropped: 552 output_compression: NoCompression Dec 6 05:05:59 localhost ceph-mon[290022]: rocksdb: (Original Log Time 2025/12/06-10:05:59.756035) EVENT_LOG_v1 {"time_micros": 1765015559756018, "job": 6, "event": "compaction_finished", "compaction_time_micros": 122377, "compaction_time_cpu_micros": 30183, "output_level": 6, "num_output_files": 1, "total_output_size": 15396843, "num_input_records": 10609, "num_output_records": 10057, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Dec 6 05:05:59 localhost ceph-mon[290022]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548789/store.db/000017.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 6 05:05:59 localhost ceph-mon[290022]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015559757520, "job": 6, "event": "table_file_deletion", "file_number": 17} Dec 6 05:05:59 localhost ceph-mon[290022]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548789/store.db/000015.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 6 05:05:59 localhost ceph-mon[290022]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015559759648, "job": 6, 
"event": "table_file_deletion", "file_number": 15} Dec 6 05:05:59 localhost ceph-mon[290022]: rocksdb: (Original Log Time 2025/12/06-10:05:59.630912) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:05:59 localhost ceph-mon[290022]: rocksdb: (Original Log Time 2025/12/06-10:05:59.759815) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:05:59 localhost ceph-mon[290022]: rocksdb: (Original Log Time 2025/12/06-10:05:59.759821) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:05:59 localhost ceph-mon[290022]: rocksdb: (Original Log Time 2025/12/06-10:05:59.759824) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:05:59 localhost ceph-mon[290022]: rocksdb: (Original Log Time 2025/12/06-10:05:59.759827) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:05:59 localhost ceph-mon[290022]: rocksdb: (Original Log Time 2025/12/06-10:05:59.759830) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:06:00 localhost nova_compute[282193]: 2025-12-06 10:06:00.330 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:06:00 localhost ceph-mon[290022]: Reconfiguring daemon osd.2 on np0005548788.localdomain Dec 6 05:06:00 localhost ceph-mon[290022]: Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON) Dec 6 05:06:00 localhost ceph-mon[290022]: Health check failed: 1 stray host(s) with 1 daemon(s) not managed by cephadm (CEPHADM_STRAY_HOST) Dec 6 05:06:00 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:00 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:00 localhost 
ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:00 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:00 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548788.yvwbqq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 6 05:06:01 localhost podman[295245]: Dec 6 05:06:01 localhost podman[295245]: 2025-12-06 10:06:01.305467213 +0000 UTC m=+0.062206682 container create d2b7436b69f18c1ac5dddf9cc8daada30eb1dc07b9c72be25c6812486e98de2b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_greider, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , GIT_BRANCH=main, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, version=7, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=) Dec 6 05:06:01 localhost systemd[1]: Started libpod-conmon-d2b7436b69f18c1ac5dddf9cc8daada30eb1dc07b9c72be25c6812486e98de2b.scope. 
Dec 6 05:06:01 localhost ceph-mon[290022]: Reconfiguring mgr.np0005548788.yvwbqq (monmap changed)... Dec 6 05:06:01 localhost ceph-mon[290022]: Reconfiguring daemon mgr.np0005548788.yvwbqq on np0005548788.localdomain Dec 6 05:06:01 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:01 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:01 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548789.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 6 05:06:01 localhost systemd[1]: Started libcrun container. Dec 6 05:06:01 localhost podman[295245]: 2025-12-06 10:06:01.369918503 +0000 UTC m=+0.126658012 container init d2b7436b69f18c1ac5dddf9cc8daada30eb1dc07b9c72be25c6812486e98de2b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_greider, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, RELEASE=main, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, 
build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True) Dec 6 05:06:01 localhost podman[295245]: 2025-12-06 10:06:01.380982593 +0000 UTC m=+0.137722142 container start d2b7436b69f18c1ac5dddf9cc8daada30eb1dc07b9c72be25c6812486e98de2b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_greider, distribution-scope=public, version=7, ceph=True, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, vcs-type=git, CEPH_POINT_RELEASE=, release=1763362218, io.buildah.version=1.41.4, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, RELEASE=main, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, name=rhceph, GIT_BRANCH=main, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7) Dec 6 05:06:01 localhost podman[295245]: 2025-12-06 10:06:01.381364935 +0000 UTC m=+0.138104444 container attach d2b7436b69f18c1ac5dddf9cc8daada30eb1dc07b9c72be25c6812486e98de2b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_greider, release=1763362218, io.openshift.expose-services=, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., 
vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, name=rhceph, maintainer=Guillaume Abrioux , GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, io.buildah.version=1.41.4, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, architecture=x86_64) Dec 6 05:06:01 localhost podman[295245]: 2025-12-06 10:06:01.285362135 +0000 UTC m=+0.042101694 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 6 05:06:01 localhost sleepy_greider[295260]: 167 167 Dec 6 05:06:01 localhost systemd[1]: libpod-d2b7436b69f18c1ac5dddf9cc8daada30eb1dc07b9c72be25c6812486e98de2b.scope: Deactivated successfully. 
Dec 6 05:06:01 localhost podman[295245]: 2025-12-06 10:06:01.386528964 +0000 UTC m=+0.143268513 container died d2b7436b69f18c1ac5dddf9cc8daada30eb1dc07b9c72be25c6812486e98de2b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_greider, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, description=Red Hat Ceph Storage 7, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, release=1763362218, ceph=True, maintainer=Guillaume Abrioux , io.openshift.expose-services=, GIT_BRANCH=main, CEPH_POINT_RELEASE=, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z) Dec 6 05:06:01 localhost podman[295265]: 2025-12-06 10:06:01.47428667 +0000 UTC m=+0.080492994 container remove d2b7436b69f18c1ac5dddf9cc8daada30eb1dc07b9c72be25c6812486e98de2b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_greider, maintainer=Guillaume Abrioux , version=7, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, 
cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., GIT_CLEAN=True, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, GIT_BRANCH=main, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, RELEASE=main, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Dec 6 05:06:01 localhost systemd[1]: libpod-conmon-d2b7436b69f18c1ac5dddf9cc8daada30eb1dc07b9c72be25c6812486e98de2b.scope: Deactivated successfully. Dec 6 05:06:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538. Dec 6 05:06:02 localhost podman[295332]: 2025-12-06 10:06:02.147828178 +0000 UTC m=+0.082336842 container health_status d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', 
'--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 6 05:06:02 localhost podman[295332]: 2025-12-06 10:06:02.161988313 +0000 UTC m=+0.096496987 container exec_died d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', 
'/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Dec 6 05:06:02 localhost podman[295340]: Dec 6 05:06:02 localhost systemd[1]: d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538.service: Deactivated successfully. Dec 6 05:06:02 localhost podman[295340]: 2025-12-06 10:06:02.176280892 +0000 UTC m=+0.091845673 container create 27728b71d424cec212ea7f06f18a262351af4d75463fd3c5baa0c399e67e5fd8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=trusting_cerf, io.openshift.expose-services=, release=1763362218, maintainer=Guillaume Abrioux , version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, GIT_CLEAN=True, vcs-type=git, build-date=2025-11-26T19:44:28Z, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 6 05:06:02 localhost systemd[1]: Started libpod-conmon-27728b71d424cec212ea7f06f18a262351af4d75463fd3c5baa0c399e67e5fd8.scope. Dec 6 05:06:02 localhost systemd[1]: Started libcrun container. 
Dec 6 05:06:02 localhost podman[295340]: 2025-12-06 10:06:02.236062279 +0000 UTC m=+0.151627060 container init 27728b71d424cec212ea7f06f18a262351af4d75463fd3c5baa0c399e67e5fd8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=trusting_cerf, version=7, ceph=True, vcs-type=git, name=rhceph, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, RELEASE=main, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7) Dec 6 05:06:02 localhost podman[295340]: 2025-12-06 10:06:02.144561517 +0000 UTC m=+0.060126338 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 6 05:06:02 localhost podman[295340]: 2025-12-06 10:06:02.243948621 +0000 UTC m=+0.159513382 container start 27728b71d424cec212ea7f06f18a262351af4d75463fd3c5baa0c399e67e5fd8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=trusting_cerf, io.openshift.expose-services=, vcs-type=git, GIT_CLEAN=True, version=7, architecture=x86_64, ceph=True, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, 
cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, distribution-scope=public, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers) Dec 6 05:06:02 localhost podman[295340]: 2025-12-06 10:06:02.244146957 +0000 UTC m=+0.159711728 container attach 27728b71d424cec212ea7f06f18a262351af4d75463fd3c5baa0c399e67e5fd8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=trusting_cerf, description=Red Hat Ceph Storage 7, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, ceph=True, release=1763362218, maintainer=Guillaume Abrioux , io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, vcs-type=git, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, CEPH_POINT_RELEASE=, distribution-scope=public, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, 
GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Dec 6 05:06:02 localhost trusting_cerf[295372]: 167 167 Dec 6 05:06:02 localhost systemd[1]: libpod-27728b71d424cec212ea7f06f18a262351af4d75463fd3c5baa0c399e67e5fd8.scope: Deactivated successfully. Dec 6 05:06:02 localhost podman[295340]: 2025-12-06 10:06:02.247212131 +0000 UTC m=+0.162776952 container died 27728b71d424cec212ea7f06f18a262351af4d75463fd3c5baa0c399e67e5fd8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=trusting_cerf, name=rhceph, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , GIT_CLEAN=True, release=1763362218, vcs-type=git, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Dec 6 05:06:02 localhost systemd[1]: var-lib-containers-storage-overlay-35e8464fa50cf9a0c483b6f77be42c86b5d1684205d337dab60ed9d5f56d512a-merged.mount: Deactivated successfully. 
Dec 6 05:06:02 localhost systemd[1]: var-lib-containers-storage-overlay-b495a4f05e0dc37ec094e7c8bf98335362978e7cce241a47835b5dfb87962f36-merged.mount: Deactivated successfully.
Dec 6 05:06:02 localhost podman[295377]: 2025-12-06 10:06:02.341716466 +0000 UTC m=+0.082146276 container remove 27728b71d424cec212ea7f06f18a262351af4d75463fd3c5baa0c399e67e5fd8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=trusting_cerf, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, RELEASE=main, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, com.redhat.component=rhceph-container, release=1763362218, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, GIT_BRANCH=main, GIT_CLEAN=True)
Dec 6 05:06:02 localhost systemd[1]: libpod-conmon-27728b71d424cec212ea7f06f18a262351af4d75463fd3c5baa0c399e67e5fd8.scope: Deactivated successfully.
Dec 6 05:06:02 localhost ceph-mon[290022]: Reconfiguring crash.np0005548789 (monmap changed)...
Dec 6 05:06:02 localhost ceph-mon[290022]: Reconfiguring daemon crash.np0005548789 on np0005548789.localdomain
Dec 6 05:06:02 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra'
Dec 6 05:06:02 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra'
Dec 6 05:06:02 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra'
Dec 6 05:06:02 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Dec 6 05:06:03 localhost podman[295451]:
Dec 6 05:06:03 localhost podman[295451]: 2025-12-06 10:06:03.182029728 +0000 UTC m=+0.082378393 container create 424b5e056fec469cf17d237fee5d6f0e6b98151fe7205ceb8269f649a2060307 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=reverent_einstein, maintainer=Guillaume Abrioux , GIT_CLEAN=True, ceph=True, io.openshift.expose-services=, com.redhat.component=rhceph-container, GIT_BRANCH=main, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, RELEASE=main, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, version=7, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 6 05:06:03 localhost systemd[1]: Started libpod-conmon-424b5e056fec469cf17d237fee5d6f0e6b98151fe7205ceb8269f649a2060307.scope.
Dec 6 05:06:03 localhost systemd[1]: Started libcrun container.
Dec 6 05:06:03 localhost podman[295451]: 2025-12-06 10:06:03.148687763 +0000 UTC m=+0.049036478 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 6 05:06:03 localhost podman[295451]: 2025-12-06 10:06:03.262995626 +0000 UTC m=+0.163344261 container init 424b5e056fec469cf17d237fee5d6f0e6b98151fe7205ceb8269f649a2060307 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=reverent_einstein, architecture=x86_64, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, version=7, GIT_CLEAN=True, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container)
Dec 6 05:06:03 localhost podman[295451]: 2025-12-06 10:06:03.272668493 +0000 UTC m=+0.173017198 container start 424b5e056fec469cf17d237fee5d6f0e6b98151fe7205ceb8269f649a2060307 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=reverent_einstein, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, vendor=Red Hat, Inc., vcs-type=git, release=1763362218, io.openshift.expose-services=, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, RELEASE=main, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, ceph=True, name=rhceph, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Dec 6 05:06:03 localhost reverent_einstein[295466]: 167 167
Dec 6 05:06:03 localhost podman[295451]: 2025-12-06 10:06:03.274871661 +0000 UTC m=+0.175220286 container attach 424b5e056fec469cf17d237fee5d6f0e6b98151fe7205ceb8269f649a2060307 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=reverent_einstein, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, distribution-scope=public, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, name=rhceph, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, ceph=True, release=1763362218, maintainer=Guillaume Abrioux , io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Dec 6 05:06:03 localhost systemd[1]: libpod-424b5e056fec469cf17d237fee5d6f0e6b98151fe7205ceb8269f649a2060307.scope: Deactivated successfully.
Dec 6 05:06:03 localhost podman[295451]: 2025-12-06 10:06:03.279259695 +0000 UTC m=+0.179608400 container died 424b5e056fec469cf17d237fee5d6f0e6b98151fe7205ceb8269f649a2060307 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=reverent_einstein, GIT_BRANCH=main, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, release=1763362218, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, distribution-scope=public, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, CEPH_POINT_RELEASE=)
Dec 6 05:06:03 localhost systemd[1]: var-lib-containers-storage-overlay-8f520eb1d6688811ab96f45b6932462450546e4d717702c788f390db174d8ebf-merged.mount: Deactivated successfully.
Dec 6 05:06:03 localhost podman[295471]: 2025-12-06 10:06:03.376002068 +0000 UTC m=+0.085798288 container remove 424b5e056fec469cf17d237fee5d6f0e6b98151fe7205ceb8269f649a2060307 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=reverent_einstein, architecture=x86_64, CEPH_POINT_RELEASE=, GIT_CLEAN=True, name=rhceph, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, version=7, RELEASE=main, io.openshift.expose-services=, distribution-scope=public, GIT_BRANCH=main, vendor=Red Hat, Inc., ceph=True, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git)
Dec 6 05:06:03 localhost systemd[1]: libpod-conmon-424b5e056fec469cf17d237fee5d6f0e6b98151fe7205ceb8269f649a2060307.scope: Deactivated successfully.
Dec 6 05:06:03 localhost ceph-mon[290022]: Reconfiguring osd.1 (monmap changed)...
Dec 6 05:06:03 localhost ceph-mon[290022]: Reconfiguring daemon osd.1 on np0005548789.localdomain
Dec 6 05:06:03 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra'
Dec 6 05:06:03 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra'
Dec 6 05:06:03 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra'
Dec 6 05:06:03 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra'
Dec 6 05:06:03 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch
Dec 6 05:06:04 localhost podman[295546]:
Dec 6 05:06:04 localhost podman[295546]: 2025-12-06 10:06:04.245244869 +0000 UTC m=+0.083295080 container create 39fa33ad0c258ccb4f1f30dee83495adefc8cb07665c59cd57bd8ec282ac60da (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elegant_matsumoto, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , ceph=True, description=Red Hat Ceph Storage 7, RELEASE=main, distribution-scope=public, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, version=7, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218)
Dec 6 05:06:04 localhost systemd[1]: Started libpod-conmon-39fa33ad0c258ccb4f1f30dee83495adefc8cb07665c59cd57bd8ec282ac60da.scope.
Dec 6 05:06:04 localhost systemd[1]: Started libcrun container.
Dec 6 05:06:04 localhost podman[295546]: 2025-12-06 10:06:04.212626848 +0000 UTC m=+0.050677109 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 6 05:06:04 localhost podman[295546]: 2025-12-06 10:06:04.314120806 +0000 UTC m=+0.152171027 container init 39fa33ad0c258ccb4f1f30dee83495adefc8cb07665c59cd57bd8ec282ac60da (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elegant_matsumoto, version=7, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, name=rhceph, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, release=1763362218, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, vcs-type=git, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc.)
Dec 6 05:06:04 localhost podman[295546]: 2025-12-06 10:06:04.324198795 +0000 UTC m=+0.162249016 container start 39fa33ad0c258ccb4f1f30dee83495adefc8cb07665c59cd57bd8ec282ac60da (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elegant_matsumoto, GIT_CLEAN=True, io.openshift.expose-services=, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, version=7, architecture=x86_64, io.buildah.version=1.41.4, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, ceph=True, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, vcs-type=git, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Dec 6 05:06:04 localhost podman[295546]: 2025-12-06 10:06:04.324497715 +0000 UTC m=+0.162547926 container attach 39fa33ad0c258ccb4f1f30dee83495adefc8cb07665c59cd57bd8ec282ac60da (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elegant_matsumoto, distribution-scope=public, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., version=7, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7)
Dec 6 05:06:04 localhost elegant_matsumoto[295561]: 167 167
Dec 6 05:06:04 localhost systemd[1]: libpod-39fa33ad0c258ccb4f1f30dee83495adefc8cb07665c59cd57bd8ec282ac60da.scope: Deactivated successfully.
Dec 6 05:06:04 localhost podman[295546]: 2025-12-06 10:06:04.328332593 +0000 UTC m=+0.166382874 container died 39fa33ad0c258ccb4f1f30dee83495adefc8cb07665c59cd57bd8ec282ac60da (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elegant_matsumoto, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, vcs-type=git, version=7, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, name=rhceph, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, vendor=Red Hat, Inc., architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, RELEASE=main)
Dec 6 05:06:04 localhost podman[295566]: 2025-12-06 10:06:04.429179652 +0000 UTC m=+0.088023146 container remove 39fa33ad0c258ccb4f1f30dee83495adefc8cb07665c59cd57bd8ec282ac60da (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elegant_matsumoto, io.openshift.tags=rhceph ceph, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., ceph=True, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux , architecture=x86_64, GIT_BRANCH=main, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218)
Dec 6 05:06:04 localhost systemd[1]: libpod-conmon-39fa33ad0c258ccb4f1f30dee83495adefc8cb07665c59cd57bd8ec282ac60da.scope: Deactivated successfully.
Dec 6 05:06:04 localhost nova_compute[282193]: 2025-12-06 10:06:04.532 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 6 05:06:04 localhost ceph-mon[290022]: Reconfiguring osd.4 (monmap changed)...
Dec 6 05:06:04 localhost ceph-mon[290022]: Reconfiguring daemon osd.4 on np0005548789.localdomain
Dec 6 05:06:04 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra'
Dec 6 05:06:04 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra'
Dec 6 05:06:04 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra'
Dec 6 05:06:04 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra'
Dec 6 05:06:04 localhost ceph-mon[290022]: Reconfiguring mds.mds.np0005548789.vxwwsq (monmap changed)...
Dec 6 05:06:04 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005548789.vxwwsq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 6 05:06:04 localhost ceph-mon[290022]: Reconfiguring daemon mds.mds.np0005548789.vxwwsq on np0005548789.localdomain
Dec 6 05:06:04 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra'
Dec 6 05:06:04 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra'
Dec 6 05:06:04 localhost ceph-mon[290022]: Reconfiguring mgr.np0005548789.mzhmje (monmap changed)...
Dec 6 05:06:04 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548789.mzhmje", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 6 05:06:04 localhost ceph-mon[290022]: Reconfiguring daemon mgr.np0005548789.mzhmje on np0005548789.localdomain
Dec 6 05:06:04 localhost ceph-mon[290022]: mon.np0005548789@3(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 6 05:06:05 localhost podman[295637]:
Dec 6 05:06:05 localhost podman[295637]: 2025-12-06 10:06:05.023896066 +0000 UTC m=+0.052614358 container create 2afb96b3c5e2b359b811b8ec902c3934fae9f86e8f85c96316b51b589a22ae5c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hungry_carver, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, ceph=True, release=1763362218, distribution-scope=public, vcs-type=git, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, architecture=x86_64, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , version=7, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7)
Dec 6 05:06:05 localhost systemd[1]: Started libpod-conmon-2afb96b3c5e2b359b811b8ec902c3934fae9f86e8f85c96316b51b589a22ae5c.scope.
Dec 6 05:06:05 localhost systemd[1]: Started libcrun container.
Dec 6 05:06:05 localhost nova_compute[282193]: 2025-12-06 10:06:05.085 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 6 05:06:05 localhost podman[295637]: 2025-12-06 10:06:05.087321895 +0000 UTC m=+0.116040227 container init 2afb96b3c5e2b359b811b8ec902c3934fae9f86e8f85c96316b51b589a22ae5c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hungry_carver, distribution-scope=public, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, name=rhceph, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, GIT_CLEAN=True, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main)
Dec 6 05:06:05 localhost podman[295637]: 2025-12-06 10:06:04.99699106 +0000 UTC m=+0.025709452 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 6 05:06:05 localhost podman[295637]: 2025-12-06 10:06:05.097357833 +0000 UTC m=+0.126076165 container start 2afb96b3c5e2b359b811b8ec902c3934fae9f86e8f85c96316b51b589a22ae5c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hungry_carver, GIT_BRANCH=main, maintainer=Guillaume Abrioux , build-date=2025-11-26T19:44:28Z, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, io.buildah.version=1.41.4, release=1763362218, GIT_CLEAN=True, ceph=True, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, name=rhceph, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, com.redhat.component=rhceph-container)
Dec 6 05:06:05 localhost podman[295637]: 2025-12-06 10:06:05.097705295 +0000 UTC m=+0.126423677 container attach 2afb96b3c5e2b359b811b8ec902c3934fae9f86e8f85c96316b51b589a22ae5c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hungry_carver, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, architecture=x86_64, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, distribution-scope=public, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, name=rhceph, CEPH_POINT_RELEASE=, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , release=1763362218, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 6 05:06:05 localhost hungry_carver[295652]: 167 167
Dec 6 05:06:05 localhost systemd[1]: libpod-2afb96b3c5e2b359b811b8ec902c3934fae9f86e8f85c96316b51b589a22ae5c.scope: Deactivated successfully.
Dec 6 05:06:05 localhost podman[295637]: 2025-12-06 10:06:05.100121998 +0000 UTC m=+0.128840350 container died 2afb96b3c5e2b359b811b8ec902c3934fae9f86e8f85c96316b51b589a22ae5c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hungry_carver, io.openshift.tags=rhceph ceph, ceph=True, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, maintainer=Guillaume Abrioux , name=rhceph, version=7, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, vcs-type=git, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, CEPH_POINT_RELEASE=)
Dec 6 05:06:05 localhost podman[295657]: 2025-12-06 10:06:05.194325203 +0000 UTC m=+0.084703913 container remove 2afb96b3c5e2b359b811b8ec902c3934fae9f86e8f85c96316b51b589a22ae5c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hungry_carver, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., ceph=True, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, release=1763362218, io.buildah.version=1.41.4, GIT_CLEAN=True, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, architecture=x86_64, vcs-type=git)
Dec 6 05:06:05 localhost systemd[1]: libpod-conmon-2afb96b3c5e2b359b811b8ec902c3934fae9f86e8f85c96316b51b589a22ae5c.scope: Deactivated successfully.
Dec 6 05:06:05 localhost systemd[1]: var-lib-containers-storage-overlay-59db1e8215f0fe5400974701a833d93b40b3de2c44c8e0932413596395e39162-merged.mount: Deactivated successfully.
Dec 6 05:06:06 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:06 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:06 localhost ceph-mon[290022]: Reconfiguring crash.np0005548790 (monmap changed)... Dec 6 05:06:06 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548790.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 6 05:06:06 localhost ceph-mon[290022]: Reconfiguring daemon crash.np0005548790 on np0005548790.localdomain Dec 6 05:06:06 localhost ceph-mon[290022]: Saving service mon spec with placement label:mon Dec 6 05:06:06 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:06 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:06 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:06 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch Dec 6 05:06:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5. Dec 6 05:06:06 localhost systemd[1]: tmp-crun.FX5wOE.mount: Deactivated successfully. 
Dec 6 05:06:06 localhost podman[295673]: 2025-12-06 10:06:06.910919262 +0000 UTC m=+0.072450857 container health_status 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_managed=true) Dec 6 05:06:07 localhost podman[295673]: 2025-12-06 10:06:07.01824654 +0000 UTC m=+0.179778105 container exec_died 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 
Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible) Dec 6 05:06:07 localhost systemd[1]: 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.service: Deactivated successfully. Dec 6 05:06:07 localhost ceph-mon[290022]: Reconfiguring osd.0 (monmap changed)... Dec 6 05:06:07 localhost ceph-mon[290022]: Reconfiguring daemon osd.0 on np0005548790.localdomain Dec 6 05:06:07 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:07 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:07 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:07 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:07 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch Dec 6 05:06:08 localhost ceph-mon[290022]: Reconfiguring osd.3 (monmap changed)... 
Dec 6 05:06:08 localhost ceph-mon[290022]: Reconfiguring daemon osd.3 on np0005548790.localdomain Dec 6 05:06:08 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:08 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:08 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:08 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:08 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005548790.vhcezv", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Dec 6 05:06:09 localhost ceph-mon[290022]: Reconfiguring mds.mds.np0005548790.vhcezv (monmap changed)... Dec 6 05:06:09 localhost ceph-mon[290022]: Reconfiguring daemon mds.mds.np0005548790.vhcezv on np0005548790.localdomain Dec 6 05:06:09 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:09 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:09 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548790.kvkfyr", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 6 05:06:09 localhost nova_compute[282193]: 2025-12-06 10:06:09.585 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:06:09 localhost ceph-mon[290022]: mon.np0005548789@3(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 
05:06:10 localhost nova_compute[282193]: 2025-12-06 10:06:10.087 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:06:10 localhost ceph-mon[290022]: Reconfiguring mgr.np0005548790.kvkfyr (monmap changed)... Dec 6 05:06:10 localhost ceph-mon[290022]: Reconfiguring daemon mgr.np0005548790.kvkfyr on np0005548790.localdomain Dec 6 05:06:10 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:10 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:10 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Dec 6 05:06:11 localhost ceph-mon[290022]: Reconfiguring mon.np0005548790 (monmap changed)... Dec 6 05:06:11 localhost ceph-mon[290022]: Reconfiguring daemon mon.np0005548790 on np0005548790.localdomain Dec 6 05:06:11 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:11 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:11 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 6 05:06:11 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:11 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:12 localhost ceph-mon[290022]: Reconfiguring mon.np0005548786 (monmap changed)... 
Dec 6 05:06:12 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Dec 6 05:06:12 localhost ceph-mon[290022]: Reconfiguring daemon mon.np0005548786 on np0005548786.localdomain Dec 6 05:06:12 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:12 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:12 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:12 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Dec 6 05:06:13 localhost ceph-mon[290022]: Reconfiguring mon.np0005548787 (monmap changed)... Dec 6 05:06:13 localhost ceph-mon[290022]: Reconfiguring daemon mon.np0005548787 on np0005548787.localdomain Dec 6 05:06:13 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:13 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:13 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Dec 6 05:06:14 localhost ceph-mon[290022]: Reconfiguring mon.np0005548788 (monmap changed)... 
Dec 6 05:06:14 localhost ceph-mon[290022]: Reconfiguring daemon mon.np0005548788 on np0005548788.localdomain Dec 6 05:06:14 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:14 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:14 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Dec 6 05:06:14 localhost nova_compute[282193]: 2025-12-06 10:06:14.586 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:06:14 localhost ceph-mon[290022]: mon.np0005548789@3(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:06:14 localhost podman[295768]: Dec 6 05:06:14 localhost podman[295768]: 2025-12-06 10:06:14.756930201 +0000 UTC m=+0.079988930 container create e790297f1d12ae2c0d0be942b8e0d538a7152314fb9c74e0b299aed536ed7e60 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=clever_blackwell, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, maintainer=Guillaume Abrioux , name=rhceph, vcs-type=git, 
url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, distribution-scope=public, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, version=7) Dec 6 05:06:14 localhost systemd[1]: Started libpod-conmon-e790297f1d12ae2c0d0be942b8e0d538a7152314fb9c74e0b299aed536ed7e60.scope. Dec 6 05:06:14 localhost systemd[1]: Started libcrun container. Dec 6 05:06:14 localhost podman[295768]: 2025-12-06 10:06:14.725328409 +0000 UTC m=+0.048387178 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 6 05:06:14 localhost podman[295768]: 2025-12-06 10:06:14.83732259 +0000 UTC m=+0.160381289 container init e790297f1d12ae2c0d0be942b8e0d538a7152314fb9c74e0b299aed536ed7e60 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=clever_blackwell, GIT_BRANCH=main, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , ceph=True, CEPH_POINT_RELEASE=, version=7, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, name=rhceph, vcs-type=git, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, architecture=x86_64, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 6 05:06:14 localhost podman[295768]: 2025-12-06 10:06:14.848573196 +0000 UTC m=+0.171631935 container start e790297f1d12ae2c0d0be942b8e0d538a7152314fb9c74e0b299aed536ed7e60 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=clever_blackwell, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, architecture=x86_64, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , name=rhceph, vcs-type=git, GIT_BRANCH=main, io.buildah.version=1.41.4, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, distribution-scope=public) Dec 6 05:06:14 localhost podman[295768]: 2025-12-06 10:06:14.848912016 +0000 UTC m=+0.171970735 container attach e790297f1d12ae2c0d0be942b8e0d538a7152314fb9c74e0b299aed536ed7e60 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=clever_blackwell, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, 
io.openshift.tags=rhceph ceph, io.openshift.expose-services=, ceph=True, architecture=x86_64, vcs-type=git, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, GIT_CLEAN=True) Dec 6 05:06:14 localhost clever_blackwell[295783]: 167 167 Dec 6 05:06:14 localhost systemd[1]: libpod-e790297f1d12ae2c0d0be942b8e0d538a7152314fb9c74e0b299aed536ed7e60.scope: Deactivated successfully. Dec 6 05:06:14 localhost podman[295768]: 2025-12-06 10:06:14.854104066 +0000 UTC m=+0.177162805 container died e790297f1d12ae2c0d0be942b8e0d538a7152314fb9c74e0b299aed536ed7e60 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=clever_blackwell, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, io.openshift.expose-services=, release=1763362218, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, distribution-scope=public, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, 
version=7, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git) Dec 6 05:06:14 localhost podman[295788]: 2025-12-06 10:06:14.964719876 +0000 UTC m=+0.100334495 container remove e790297f1d12ae2c0d0be942b8e0d538a7152314fb9c74e0b299aed536ed7e60 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=clever_blackwell, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, com.redhat.component=rhceph-container, GIT_CLEAN=True, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, ceph=True, RELEASE=main, release=1763362218, build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=) Dec 6 05:06:14 localhost systemd[1]: libpod-conmon-e790297f1d12ae2c0d0be942b8e0d538a7152314fb9c74e0b299aed536ed7e60.scope: Deactivated successfully. 
Dec 6 05:06:15 localhost nova_compute[282193]: 2025-12-06 10:06:15.092 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:06:15 localhost sshd[295805]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:06:15 localhost ceph-mon[290022]: Reconfiguring mon.np0005548789 (monmap changed)... Dec 6 05:06:15 localhost ceph-mon[290022]: Reconfiguring daemon mon.np0005548789 on np0005548789.localdomain Dec 6 05:06:15 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:15 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:15 localhost systemd[1]: tmp-crun.Wd2UIq.mount: Deactivated successfully. Dec 6 05:06:15 localhost systemd[1]: var-lib-containers-storage-overlay-1f909b356b073b3bb180e8c5d856c28d9d626a91a2582c8acd167fdd174544bc-merged.mount: Deactivated successfully. Dec 6 05:06:16 localhost ceph-mgr[288591]: ms_deliver_dispatch: unhandled message 0x56140ed13600 mon_map magic: 0 from mon.3 v2:172.18.0.107:3300/0 Dec 6 05:06:16 localhost ceph-mon[290022]: mon.np0005548789@3(peon) e10 my rank is now 2 (was 3) Dec 6 05:06:16 localhost ceph-mgr[288591]: client.0 ms_handle_reset on v2:172.18.0.107:3300/0 Dec 6 05:06:16 localhost ceph-mgr[288591]: client.0 ms_handle_reset on v2:172.18.0.107:3300/0 Dec 6 05:06:16 localhost ceph-mgr[288591]: ms_deliver_dispatch: unhandled message 0x56140ed131e0 mon_map magic: 0 from mon.2 v2:172.18.0.107:3300/0 Dec 6 05:06:16 localhost ceph-mon[290022]: log_channel(cluster) log [INF] : mon.np0005548789 calling monitor election Dec 6 05:06:16 localhost ceph-mon[290022]: paxos.2).electionLogic(44) init, last seen epoch 44 Dec 6 05:06:16 localhost ceph-mon[290022]: mon.np0005548789@2(electing) e10 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 6 05:06:16 localhost ceph-mon[290022]: 
mon.np0005548789@2(electing) e10 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 6 05:06:16 localhost openstack_network_exporter[243110]: ERROR 10:06:16 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 6 05:06:16 localhost openstack_network_exporter[243110]: ERROR 10:06:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:06:16 localhost openstack_network_exporter[243110]: ERROR 10:06:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:06:16 localhost openstack_network_exporter[243110]: ERROR 10:06:16 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 6 05:06:16 localhost openstack_network_exporter[243110]: Dec 6 05:06:16 localhost openstack_network_exporter[243110]: ERROR 10:06:16 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 6 05:06:16 localhost openstack_network_exporter[243110]: Dec 6 05:06:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999. 
Dec 6 05:06:16 localhost podman[295807]: 2025-12-06 10:06:16.935343059 +0000 UTC m=+0.089308915 container health_status 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0) Dec 6 05:06:16 localhost podman[295807]: 2025-12-06 10:06:16.969312683 +0000 UTC 
m=+0.123278529 container exec_died 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:06:16 localhost systemd[1]: 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.service: Deactivated successfully. 
Dec 6 05:06:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941. Dec 6 05:06:17 localhost podman[295825]: 2025-12-06 10:06:17.917077988 +0000 UTC m=+0.079856596 container health_status b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Dec 6 05:06:17 localhost podman[295825]: 2025-12-06 10:06:17.925808056 +0000 UTC m=+0.088586704 container exec_died b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck 
podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Dec 6 05:06:17 localhost systemd[1]: b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941.service: Deactivated successfully. Dec 6 05:06:18 localhost ceph-mon[290022]: mon.np0005548789@2(electing) e10 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 6 05:06:18 localhost ceph-mon[290022]: mon.np0005548789@2(peon) e10 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 6 05:06:18 localhost ceph-mon[290022]: Remove daemons mon.np0005548786 Dec 6 05:06:18 localhost ceph-mon[290022]: Safe to remove mon.np0005548786: new quorum should be ['np0005548787', 'np0005548790', 'np0005548789', 'np0005548788'] (from ['np0005548787', 'np0005548790', 'np0005548789', 'np0005548788']) Dec 6 05:06:18 localhost ceph-mon[290022]: Removing monitor np0005548786 from monmap... 
Dec 6 05:06:18 localhost ceph-mon[290022]: Removing daemon mon.np0005548786 from np0005548786.localdomain -- ports [] Dec 6 05:06:18 localhost ceph-mon[290022]: mon.np0005548789 calling monitor election Dec 6 05:06:18 localhost ceph-mon[290022]: mon.np0005548790 calling monitor election Dec 6 05:06:18 localhost ceph-mon[290022]: mon.np0005548788 calling monitor election Dec 6 05:06:18 localhost ceph-mon[290022]: mon.np0005548787 calling monitor election Dec 6 05:06:18 localhost ceph-mon[290022]: mon.np0005548787 is new leader, mons np0005548787,np0005548790,np0005548789,np0005548788 in quorum (ranks 0,1,2,3) Dec 6 05:06:18 localhost ceph-mon[290022]: Health detail: HEALTH_WARN 1 stray daemon(s) not managed by cephadm; 1 stray host(s) with 1 daemon(s) not managed by cephadm Dec 6 05:06:18 localhost ceph-mon[290022]: [WRN] CEPHADM_STRAY_DAEMON: 1 stray daemon(s) not managed by cephadm Dec 6 05:06:18 localhost ceph-mon[290022]: stray daemon mgr.np0005548785.vhqlsq on host np0005548785.localdomain not managed by cephadm Dec 6 05:06:18 localhost ceph-mon[290022]: [WRN] CEPHADM_STRAY_HOST: 1 stray host(s) with 1 daemon(s) not managed by cephadm Dec 6 05:06:18 localhost ceph-mon[290022]: stray host np0005548785.localdomain has 1 stray daemons: ['mgr.np0005548785.vhqlsq'] Dec 6 05:06:19 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:19 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:19 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 6 05:06:19 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:19 localhost nova_compute[282193]: 2025-12-06 10:06:19.624 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:06:19 localhost ceph-mon[290022]: mon.np0005548789@2(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:06:20 localhost nova_compute[282193]: 2025-12-06 10:06:20.098 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:06:20 localhost ceph-mon[290022]: Updating np0005548786.localdomain:/etc/ceph/ceph.conf Dec 6 05:06:20 localhost ceph-mon[290022]: Updating np0005548787.localdomain:/etc/ceph/ceph.conf Dec 6 05:06:20 localhost ceph-mon[290022]: Updating np0005548788.localdomain:/etc/ceph/ceph.conf Dec 6 05:06:20 localhost ceph-mon[290022]: Updating np0005548789.localdomain:/etc/ceph/ceph.conf Dec 6 05:06:20 localhost ceph-mon[290022]: Updating np0005548790.localdomain:/etc/ceph/ceph.conf Dec 6 05:06:20 localhost ceph-mon[290022]: Removed label mon from host np0005548786.localdomain Dec 6 05:06:20 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:20 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:21 localhost ceph-mon[290022]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #19. Immutable memtables: 0. 
Dec 6 05:06:21 localhost ceph-mon[290022]: rocksdb: (Original Log Time 2025/12/06-10:06:21.041340) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Dec 6 05:06:21 localhost ceph-mon[290022]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 19 Dec 6 05:06:21 localhost ceph-mon[290022]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015581041435, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 1243, "num_deletes": 256, "total_data_size": 2170248, "memory_usage": 2212160, "flush_reason": "Manual Compaction"} Dec 6 05:06:21 localhost ceph-mon[290022]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #20: started Dec 6 05:06:21 localhost ceph-mon[290022]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015581059393, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 20, "file_size": 1158430, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14305, "largest_seqno": 15543, "table_properties": {"data_size": 1152840, "index_size": 2869, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 13681, "raw_average_key_size": 20, "raw_value_size": 1140793, "raw_average_value_size": 1731, "num_data_blocks": 120, "num_entries": 659, "num_filter_entries": 659, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; 
max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765015559, "oldest_key_time": 1765015559, "file_creation_time": 1765015581, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8b48a877-4508-4eb4-a052-67f753f228b0", "db_session_id": "ETDWGFPM6GCTACWNDM5G", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}} Dec 6 05:06:21 localhost ceph-mon[290022]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 18090 microseconds, and 4560 cpu microseconds. Dec 6 05:06:21 localhost ceph-mon[290022]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Dec 6 05:06:21 localhost ceph-mon[290022]: rocksdb: (Original Log Time 2025/12/06-10:06:21.059446) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #20: 1158430 bytes OK Dec 6 05:06:21 localhost ceph-mon[290022]: rocksdb: (Original Log Time 2025/12/06-10:06:21.059469) [db/memtable_list.cc:519] [default] Level-0 commit table #20 started Dec 6 05:06:21 localhost ceph-mon[290022]: rocksdb: (Original Log Time 2025/12/06-10:06:21.061182) [db/memtable_list.cc:722] [default] Level-0 commit table #20: memtable #1 done Dec 6 05:06:21 localhost ceph-mon[290022]: rocksdb: (Original Log Time 2025/12/06-10:06:21.061203) EVENT_LOG_v1 {"time_micros": 1765015581061197, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Dec 6 05:06:21 localhost ceph-mon[290022]: rocksdb: (Original Log Time 2025/12/06-10:06:21.061226) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Dec 6 05:06:21 localhost ceph-mon[290022]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 2163775, prev total WAL file size 2163775, 
number of live WAL files 2. Dec 6 05:06:21 localhost ceph-mon[290022]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548789/store.db/000016.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 6 05:06:21 localhost ceph-mon[290022]: rocksdb: (Original Log Time 2025/12/06-10:06:21.061975) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760031303330' seq:72057594037927935, type:22 .. '6B760031323837' seq:0, type:0; will stop at (end) Dec 6 05:06:21 localhost ceph-mon[290022]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00 Dec 6 05:06:21 localhost ceph-mon[290022]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [20(1131KB)], [18(14MB)] Dec 6 05:06:21 localhost ceph-mon[290022]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015581062055, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [20], "files_L6": [18], "score": -1, "input_data_size": 16555273, "oldest_snapshot_seqno": -1} Dec 6 05:06:21 localhost ceph-mon[290022]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #21: 10182 keys, 15590708 bytes, temperature: kUnknown Dec 6 05:06:21 localhost ceph-mon[290022]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015581144671, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 21, "file_size": 15590708, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15531949, "index_size": 32226, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25477, "raw_key_size": 272518, "raw_average_key_size": 26, "raw_value_size": 15357091, "raw_average_value_size": 1508, 
"num_data_blocks": 1220, "num_entries": 10182, "num_filter_entries": 10182, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765015444, "oldest_key_time": 0, "file_creation_time": 1765015581, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8b48a877-4508-4eb4-a052-67f753f228b0", "db_session_id": "ETDWGFPM6GCTACWNDM5G", "orig_file_number": 21, "seqno_to_time_mapping": "N/A"}} Dec 6 05:06:21 localhost ceph-mon[290022]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Dec 6 05:06:21 localhost ceph-mon[290022]: rocksdb: (Original Log Time 2025/12/06-10:06:21.145057) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 15590708 bytes Dec 6 05:06:21 localhost ceph-mon[290022]: rocksdb: (Original Log Time 2025/12/06-10:06:21.147232) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 200.0 rd, 188.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 14.7 +0.0 blob) out(14.9 +0.0 blob), read-write-amplify(27.7) write-amplify(13.5) OK, records in: 10716, records dropped: 534 output_compression: NoCompression Dec 6 05:06:21 localhost ceph-mon[290022]: rocksdb: (Original Log Time 2025/12/06-10:06:21.147264) EVENT_LOG_v1 {"time_micros": 1765015581147250, "job": 8, "event": "compaction_finished", "compaction_time_micros": 82759, "compaction_time_cpu_micros": 44371, "output_level": 6, "num_output_files": 1, "total_output_size": 15590708, "num_input_records": 10716, "num_output_records": 10182, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Dec 6 05:06:21 localhost ceph-mon[290022]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548789/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 6 05:06:21 localhost ceph-mon[290022]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015581147548, "job": 8, "event": "table_file_deletion", "file_number": 20} Dec 6 05:06:21 localhost ceph-mon[290022]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548789/store.db/000018.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 6 05:06:21 localhost ceph-mon[290022]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015581149955, "job": 8, 
"event": "table_file_deletion", "file_number": 18} Dec 6 05:06:21 localhost ceph-mon[290022]: rocksdb: (Original Log Time 2025/12/06-10:06:21.061862) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:06:21 localhost ceph-mon[290022]: rocksdb: (Original Log Time 2025/12/06-10:06:21.149990) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:06:21 localhost ceph-mon[290022]: rocksdb: (Original Log Time 2025/12/06-10:06:21.149996) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:06:21 localhost ceph-mon[290022]: rocksdb: (Original Log Time 2025/12/06-10:06:21.149998) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:06:21 localhost ceph-mon[290022]: rocksdb: (Original Log Time 2025/12/06-10:06:21.150001) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:06:21 localhost ceph-mon[290022]: rocksdb: (Original Log Time 2025/12/06-10:06:21.150004) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:06:21 localhost ceph-mon[290022]: Updating np0005548786.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf Dec 6 05:06:21 localhost ceph-mon[290022]: Updating np0005548787.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf Dec 6 05:06:21 localhost ceph-mon[290022]: Updating np0005548790.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf Dec 6 05:06:21 localhost ceph-mon[290022]: Updating np0005548789.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf Dec 6 05:06:21 localhost ceph-mon[290022]: Updating np0005548788.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf Dec 6 05:06:21 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 
05:06:21 localhost ceph-mon[290022]: Removed label mgr from host np0005548786.localdomain Dec 6 05:06:21 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:21 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:21 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:21 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:21 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:21 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:21 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:21 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:21 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:22 localhost ceph-mon[290022]: Removing daemon mgr.np0005548786.mczynb from np0005548786.localdomain -- ports [8765] Dec 6 05:06:22 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e. Dec 6 05:06:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094. Dec 6 05:06:22 localhost systemd[1]: tmp-crun.Nv1yZ4.mount: Deactivated successfully. 
Dec 6 05:06:22 localhost podman[296170]: 2025-12-06 10:06:22.949849648 +0000 UTC m=+0.099534659 container health_status bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Dec 6 05:06:22 localhost podman[296170]: 2025-12-06 10:06:22.963376403 +0000 UTC m=+0.113061394 container exec_died 
bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125) Dec 6 05:06:22 localhost systemd[1]: bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094.service: Deactivated successfully. 
Dec 6 05:06:23 localhost podman[296169]: 2025-12-06 10:06:23.044723813 +0000 UTC m=+0.196159489 container health_status 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, container_name=openstack_network_exporter, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, maintainer=Red Hat, Inc., version=9.6, io.buildah.version=1.33.7) Dec 6 05:06:23 localhost podman[296169]: 2025-12-06 10:06:23.061664884 +0000 UTC m=+0.213100530 container exec_died 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, container_name=openstack_network_exporter, summary=Provides the latest release of the 
minimal Red Hat Universal Base Image 9., vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, architecture=x86_64, version=9.6) Dec 6 05:06:23 localhost systemd[1]: 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e.service: Deactivated successfully. Dec 6 05:06:23 localhost ceph-mon[290022]: Removed label _admin from host np0005548786.localdomain Dec 6 05:06:23 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth rm", "entity": "mgr.np0005548786.mczynb"} : dispatch Dec 6 05:06:23 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd='[{"prefix": "auth rm", "entity": "mgr.np0005548786.mczynb"}]': finished Dec 6 05:06:23 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:23 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:23 localhost podman[241090]: time="2025-12-06T10:06:23Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 6 05:06:23 localhost podman[241090]: @ - - [06/Dec/2025:10:06:23 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156104 "" "Go-http-client/1.1" Dec 6 05:06:23 localhost podman[241090]: @ - - [06/Dec/2025:10:06:23 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19219 "" "Go-http-client/1.1" Dec 6 05:06:24 localhost nova_compute[282193]: 2025-12-06 10:06:24.182 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:06:24 
localhost nova_compute[282193]: 2025-12-06 10:06:24.183 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:06:24 localhost ceph-mon[290022]: Removing key for mgr.np0005548786.mczynb Dec 6 05:06:24 localhost nova_compute[282193]: 2025-12-06 10:06:24.658 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:06:24 localhost ceph-mon[290022]: mon.np0005548789@2(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:06:25 localhost nova_compute[282193]: 2025-12-06 10:06:25.097 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:06:25 localhost nova_compute[282193]: 2025-12-06 10:06:25.181 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:06:25 localhost nova_compute[282193]: 2025-12-06 10:06:25.202 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 05:06:25 localhost nova_compute[282193]: 2025-12-06 10:06:25.202 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 05:06:25 localhost nova_compute[282193]: 2025-12-06 10:06:25.202 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 05:06:25 localhost nova_compute[282193]: 2025-12-06 10:06:25.203 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Auditing locally available compute resources for np0005548789.localdomain (node: np0005548789.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 6 05:06:25 localhost nova_compute[282193]: 2025-12-06 10:06:25.203 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 6 05:06:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6. 
Dec 6 05:06:25 localhost podman[296243]: 2025-12-06 10:06:25.351983558 +0000 UTC m=+0.085739836 container health_status 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:06:25 localhost podman[296243]: 2025-12-06 10:06:25.363407013 +0000 UTC m=+0.097163271 container exec_died 
44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0) Dec 6 05:06:25 localhost systemd[1]: 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6.service: Deactivated successfully. 
Dec 6 05:06:25 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:25 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:25 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 6 05:06:25 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:25 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:25 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:25 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548786.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 6 05:06:25 localhost nova_compute[282193]: 2025-12-06 10:06:25.692 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 6 05:06:25 localhost nova_compute[282193]: 2025-12-06 10:06:25.786 282197 DEBUG nova.virt.libvirt.driver [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 6 05:06:25 localhost nova_compute[282193]: 2025-12-06 10:06:25.786 282197 DEBUG nova.virt.libvirt.driver [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] skipping disk for instance-00000002 as it does not have a path 
_get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 6 05:06:25 localhost nova_compute[282193]: 2025-12-06 10:06:25.989 282197 WARNING nova.virt.libvirt.driver [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 6 05:06:25 localhost nova_compute[282193]: 2025-12-06 10:06:25.991 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Hypervisor/Node resource view: name=np0005548789.localdomain free_ram=11546MB free_disk=41.83699035644531GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": 
"1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 6 05:06:25 localhost nova_compute[282193]: 2025-12-06 10:06:25.991 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 05:06:25 localhost nova_compute[282193]: 2025-12-06 10:06:25.992 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 05:06:26 localhost nova_compute[282193]: 2025-12-06 10:06:26.106 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Instance b7ed0a2e-9350-4933-9334-4e5e08d3e6aa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 6 05:06:26 localhost nova_compute[282193]: 2025-12-06 10:06:26.107 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 6 05:06:26 localhost nova_compute[282193]: 2025-12-06 10:06:26.107 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Final resource view: name=np0005548789.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 6 05:06:26 localhost nova_compute[282193]: 2025-12-06 10:06:26.159 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 6 05:06:26 localhost ceph-mon[290022]: Removing np0005548786.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf Dec 6 05:06:26 localhost ceph-mon[290022]: Removing np0005548786.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 6 05:06:26 localhost ceph-mon[290022]: Removing np0005548786.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring Dec 6 05:06:26 localhost ceph-mon[290022]: Reconfiguring crash.np0005548786 (monmap changed)... 
Dec 6 05:06:26 localhost ceph-mon[290022]: Reconfiguring daemon crash.np0005548786 on np0005548786.localdomain Dec 6 05:06:26 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:26 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:26 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Dec 6 05:06:26 localhost ceph-mon[290022]: mon.np0005548789@2(peon) e10 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 6 05:06:26 localhost ceph-mon[290022]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/1605363813' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 6 05:06:26 localhost nova_compute[282193]: 2025-12-06 10:06:26.617 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 6 05:06:26 localhost nova_compute[282193]: 2025-12-06 10:06:26.625 282197 DEBUG nova.compute.provider_tree [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Inventory has not changed in ProviderTree for provider: 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 6 05:06:26 localhost nova_compute[282193]: 2025-12-06 10:06:26.644 282197 DEBUG nova.scheduler.client.report [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Inventory has not changed for provider 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 
'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 6 05:06:26 localhost nova_compute[282193]: 2025-12-06 10:06:26.647 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Compute_service record updated for np0005548789.localdomain:np0005548789.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 6 05:06:26 localhost nova_compute[282193]: 2025-12-06 10:06:26.647 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.655s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 05:06:27 localhost ceph-mon[290022]: Reconfiguring mon.np0005548787 (monmap changed)... 
Dec 6 05:06:27 localhost ceph-mon[290022]: Reconfiguring daemon mon.np0005548787 on np0005548787.localdomain Dec 6 05:06:27 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:27 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:27 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:27 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548787.umwsra", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 6 05:06:27 localhost nova_compute[282193]: 2025-12-06 10:06:27.643 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:06:27 localhost nova_compute[282193]: 2025-12-06 10:06:27.644 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:06:27 localhost nova_compute[282193]: 2025-12-06 10:06:27.644 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 6 05:06:27 localhost nova_compute[282193]: 2025-12-06 10:06:27.645 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 6 
05:06:27 localhost nova_compute[282193]: 2025-12-06 10:06:27.947 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 6 05:06:27 localhost nova_compute[282193]: 2025-12-06 10:06:27.948 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquired lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 6 05:06:27 localhost nova_compute[282193]: 2025-12-06 10:06:27.948 282197 DEBUG nova.network.neutron [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 6 05:06:27 localhost nova_compute[282193]: 2025-12-06 10:06:27.948 282197 DEBUG nova.objects.instance [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b7ed0a2e-9350-4933-9334-4e5e08d3e6aa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 6 05:06:28 localhost nova_compute[282193]: 2025-12-06 10:06:28.292 282197 DEBUG nova.network.neutron [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Updating instance_info_cache with network_info: [{"id": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "address": "fa:16:3e:64:77:f3", "network": {"id": "652b6bdc-40ce-45b7-8aa5-3bca79987993", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.162", "type": "fixed", "version": 4, "meta": {}, "floating_ips": 
[{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "3d603431c0bb4967bafc7a0aa6108bfe", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86fc0b7a-fb", "ovs_interfaceid": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 6 05:06:28 localhost nova_compute[282193]: 2025-12-06 10:06:28.315 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Releasing lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 6 05:06:28 localhost nova_compute[282193]: 2025-12-06 10:06:28.315 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 6 05:06:28 localhost nova_compute[282193]: 2025-12-06 10:06:28.316 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:06:28 localhost nova_compute[282193]: 2025-12-06 10:06:28.317 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task 
ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:06:28 localhost nova_compute[282193]: 2025-12-06 10:06:28.317 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:06:28 localhost nova_compute[282193]: 2025-12-06 10:06:28.317 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 6 05:06:28 localhost ceph-mon[290022]: Reconfiguring mgr.np0005548787.umwsra (monmap changed)... Dec 6 05:06:28 localhost ceph-mon[290022]: Reconfiguring daemon mgr.np0005548787.umwsra on np0005548787.localdomain Dec 6 05:06:28 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:28 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:28 localhost ceph-mon[290022]: Reconfiguring crash.np0005548787 (monmap changed)... 
Dec 6 05:06:28 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548787.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 6 05:06:28 localhost ceph-mon[290022]: Reconfiguring daemon crash.np0005548787 on np0005548787.localdomain Dec 6 05:06:29 localhost nova_compute[282193]: 2025-12-06 10:06:29.183 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:06:29 localhost ceph-mon[290022]: mon.np0005548789@2(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:06:29 localhost nova_compute[282193]: 2025-12-06 10:06:29.689 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:06:29 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:29 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:29 localhost ceph-mon[290022]: Reconfiguring crash.np0005548788 (monmap changed)... 
Dec 6 05:06:29 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548788.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 6 05:06:29 localhost ceph-mon[290022]: Reconfiguring daemon crash.np0005548788 on np0005548788.localdomain Dec 6 05:06:29 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:29 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:29 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch Dec 6 05:06:30 localhost nova_compute[282193]: 2025-12-06 10:06:30.099 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:06:30 localhost ceph-mon[290022]: Reconfiguring osd.2 (monmap changed)... Dec 6 05:06:30 localhost ceph-mon[290022]: Reconfiguring daemon osd.2 on np0005548788.localdomain Dec 6 05:06:30 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:30 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:30 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch Dec 6 05:06:31 localhost sshd[296303]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:06:32 localhost ceph-mon[290022]: Reconfiguring osd.5 (monmap changed)... 
Dec 6 05:06:32 localhost ceph-mon[290022]: Reconfiguring daemon osd.5 on np0005548788.localdomain Dec 6 05:06:32 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:32 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:32 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005548788.erzujf", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Dec 6 05:06:32 localhost nova_compute[282193]: 2025-12-06 10:06:32.177 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:06:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538. 
Dec 6 05:06:32 localhost podman[296305]: 2025-12-06 10:06:32.90160308 +0000 UTC m=+0.067962851 container health_status d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Dec 6 05:06:32 localhost podman[296305]: 2025-12-06 10:06:32.910360944 +0000 UTC m=+0.076720725 container exec_died d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 6 05:06:32 localhost systemd[1]: d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538.service: Deactivated successfully. Dec 6 05:06:33 localhost ceph-mon[290022]: Reconfiguring mds.mds.np0005548788.erzujf (monmap changed)... Dec 6 05:06:33 localhost ceph-mon[290022]: Reconfiguring daemon mds.mds.np0005548788.erzujf on np0005548788.localdomain Dec 6 05:06:34 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:34 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:34 localhost ceph-mon[290022]: Reconfiguring mgr.np0005548788.yvwbqq (monmap changed)... 
Dec 6 05:06:34 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548788.yvwbqq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 6 05:06:34 localhost ceph-mon[290022]: Reconfiguring daemon mgr.np0005548788.yvwbqq on np0005548788.localdomain Dec 6 05:06:34 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:34 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:34 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:34 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:34 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Dec 6 05:06:34 localhost ceph-mon[290022]: mon.np0005548789@2(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:06:34 localhost nova_compute[282193]: 2025-12-06 10:06:34.735 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:06:35 localhost nova_compute[282193]: 2025-12-06 10:06:35.104 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:06:35 localhost ceph-mon[290022]: Added label _no_schedule to host np0005548786.localdomain Dec 6 05:06:35 localhost ceph-mon[290022]: Added label SpecialHostLabels.DRAIN_CONF_KEYRING to host np0005548786.localdomain Dec 6 05:06:35 localhost ceph-mon[290022]: Reconfiguring mon.np0005548788 (monmap changed)... 
Dec 6 05:06:35 localhost ceph-mon[290022]: Reconfiguring daemon mon.np0005548788 on np0005548788.localdomain Dec 6 05:06:35 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:35 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:35 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548789.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 6 05:06:35 localhost podman[296380]: Dec 6 05:06:35 localhost podman[296380]: 2025-12-06 10:06:35.571489249 +0000 UTC m=+0.079783016 container create 7acbf198423f655343a8463600bc90307233376140e2f24c9876a660ff4d6c9e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=beautiful_cannon, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, RELEASE=main, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, GIT_CLEAN=True, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, description=Red Hat Ceph Storage 7, name=rhceph, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, version=7, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., 
build-date=2025-11-26T19:44:28Z) Dec 6 05:06:35 localhost systemd[1]: Started libpod-conmon-7acbf198423f655343a8463600bc90307233376140e2f24c9876a660ff4d6c9e.scope. Dec 6 05:06:35 localhost systemd[1]: Started libcrun container. Dec 6 05:06:35 localhost podman[296380]: 2025-12-06 10:06:35.632340703 +0000 UTC m=+0.140634500 container init 7acbf198423f655343a8463600bc90307233376140e2f24c9876a660ff4d6c9e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=beautiful_cannon, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, distribution-scope=public, vcs-type=git, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, release=1763362218, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, GIT_BRANCH=main, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, name=rhceph, maintainer=Guillaume Abrioux , version=7) Dec 6 05:06:35 localhost podman[296380]: 2025-12-06 10:06:35.541028291 +0000 UTC m=+0.049322118 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 6 05:06:35 localhost systemd[1]: tmp-crun.kfwi4w.mount: Deactivated successfully. 
Dec 6 05:06:35 localhost podman[296380]: 2025-12-06 10:06:35.644515711 +0000 UTC m=+0.152809478 container start 7acbf198423f655343a8463600bc90307233376140e2f24c9876a660ff4d6c9e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=beautiful_cannon, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., ceph=True, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, vcs-type=git, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, version=7, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, name=rhceph, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) 
Dec 6 05:06:35 localhost podman[296380]: 2025-12-06 10:06:35.644741537 +0000 UTC m=+0.153035394 container attach 7acbf198423f655343a8463600bc90307233376140e2f24c9876a660ff4d6c9e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=beautiful_cannon, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, RELEASE=main, release=1763362218, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, architecture=x86_64, version=7, io.openshift.expose-services=) Dec 6 05:06:35 localhost beautiful_cannon[296395]: 167 167 Dec 6 05:06:35 localhost systemd[1]: libpod-7acbf198423f655343a8463600bc90307233376140e2f24c9876a660ff4d6c9e.scope: Deactivated successfully. 
Dec 6 05:06:35 localhost podman[296380]: 2025-12-06 10:06:35.64815002 +0000 UTC m=+0.156443847 container died 7acbf198423f655343a8463600bc90307233376140e2f24c9876a660ff4d6c9e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=beautiful_cannon, io.openshift.tags=rhceph ceph, RELEASE=main, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, GIT_CLEAN=True, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, release=1763362218, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, name=rhceph, io.buildah.version=1.41.4, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public) Dec 6 05:06:35 localhost podman[296400]: 2025-12-06 10:06:35.750047891 +0000 UTC m=+0.092302763 container remove 7acbf198423f655343a8463600bc90307233376140e2f24c9876a660ff4d6c9e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=beautiful_cannon, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., GIT_CLEAN=True, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, 
distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, version=7, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, vcs-type=git, ceph=True, description=Red Hat Ceph Storage 7) Dec 6 05:06:35 localhost systemd[1]: libpod-conmon-7acbf198423f655343a8463600bc90307233376140e2f24c9876a660ff4d6c9e.scope: Deactivated successfully. Dec 6 05:06:36 localhost ceph-mon[290022]: Reconfiguring crash.np0005548789 (monmap changed)... Dec 6 05:06:36 localhost ceph-mon[290022]: Reconfiguring daemon crash.np0005548789 on np0005548789.localdomain Dec 6 05:06:36 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:36 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:36 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Dec 6 05:06:36 localhost podman[296470]: Dec 6 05:06:36 localhost podman[296470]: 2025-12-06 10:06:36.48350542 +0000 UTC m=+0.076506997 container create 9c2ed5345c63f305eb307d7337e26b651042829d16ea7dd849ab319eae129ee8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=laughing_hertz, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, architecture=x86_64, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, summary=Provides the latest Red 
Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, vcs-type=git, release=1763362218, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, name=rhceph, ceph=True, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 6 05:06:36 localhost systemd[1]: Started libpod-conmon-9c2ed5345c63f305eb307d7337e26b651042829d16ea7dd849ab319eae129ee8.scope. Dec 6 05:06:36 localhost systemd[1]: Started libcrun container. 
Dec 6 05:06:36 localhost podman[296470]: 2025-12-06 10:06:36.456139745 +0000 UTC m=+0.049141352 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 6 05:06:36 localhost podman[296470]: 2025-12-06 10:06:36.557669166 +0000 UTC m=+0.150670743 container init 9c2ed5345c63f305eb307d7337e26b651042829d16ea7dd849ab319eae129ee8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=laughing_hertz, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, RELEASE=main, architecture=x86_64, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, ceph=True, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, name=rhceph, vendor=Red Hat, Inc., GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Dec 6 05:06:36 localhost podman[296470]: 2025-12-06 10:06:36.568945716 +0000 UTC m=+0.161947263 container start 9c2ed5345c63f305eb307d7337e26b651042829d16ea7dd849ab319eae129ee8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=laughing_hertz, maintainer=Guillaume Abrioux , build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, io.buildah.version=1.41.4, 
vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, GIT_BRANCH=main, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, RELEASE=main, release=1763362218, com.redhat.component=rhceph-container) Dec 6 05:06:36 localhost podman[296470]: 2025-12-06 10:06:36.569294327 +0000 UTC m=+0.162295904 container attach 9c2ed5345c63f305eb307d7337e26b651042829d16ea7dd849ab319eae129ee8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=laughing_hertz, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , RELEASE=main, GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, architecture=x86_64, 
com.redhat.component=rhceph-container, name=rhceph, ceph=True, version=7, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4) Dec 6 05:06:36 localhost laughing_hertz[296485]: 167 167 Dec 6 05:06:36 localhost systemd[1]: libpod-9c2ed5345c63f305eb307d7337e26b651042829d16ea7dd849ab319eae129ee8.scope: Deactivated successfully. Dec 6 05:06:36 localhost podman[296470]: 2025-12-06 10:06:36.57273028 +0000 UTC m=+0.165732027 container died 9c2ed5345c63f305eb307d7337e26b651042829d16ea7dd849ab319eae129ee8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=laughing_hertz, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, name=rhceph, architecture=x86_64, build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, version=7, release=1763362218, vendor=Red Hat, Inc., ceph=True, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Dec 6 05:06:36 localhost systemd[1]: var-lib-containers-storage-overlay-252214fefd9f3b08647c332f2c01ba82bae23c6bd44d8548be826b87712bb741-merged.mount: Deactivated successfully. 
Dec 6 05:06:36 localhost systemd[1]: var-lib-containers-storage-overlay-3f50b70123f50032802f0b24776f76228a9f8438e9eaf8f09694e1551019184e-merged.mount: Deactivated successfully. Dec 6 05:06:36 localhost podman[296490]: 2025-12-06 10:06:36.662732133 +0000 UTC m=+0.082677253 container remove 9c2ed5345c63f305eb307d7337e26b651042829d16ea7dd849ab319eae129ee8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=laughing_hertz, vendor=Red Hat, Inc., distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, RELEASE=main, ceph=True, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, name=rhceph, CEPH_POINT_RELEASE=, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, vcs-type=git, com.redhat.component=rhceph-container, io.buildah.version=1.41.4) Dec 6 05:06:36 localhost systemd[1]: libpod-conmon-9c2ed5345c63f305eb307d7337e26b651042829d16ea7dd849ab319eae129ee8.scope: Deactivated successfully. Dec 6 05:06:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5. 
Dec 6 05:06:37 localhost podman[296549]: 2025-12-06 10:06:37.157658282 +0000 UTC m=+0.090051866 container health_status 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3) Dec 6 05:06:37 localhost ceph-mon[290022]: Reconfiguring osd.1 (monmap changed)... 
Dec 6 05:06:37 localhost ceph-mon[290022]: Reconfiguring daemon osd.1 on np0005548789.localdomain Dec 6 05:06:37 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:37 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005548786.localdomain"} : dispatch Dec 6 05:06:37 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005548786.localdomain"}]': finished Dec 6 05:06:37 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:37 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:37 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Dec 6 05:06:37 localhost podman[296549]: 2025-12-06 10:06:37.260286995 +0000 UTC m=+0.192680589 container exec_died 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', 
'/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:06:37 localhost systemd[1]: 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.service: Deactivated successfully. Dec 6 05:06:37 localhost podman[296590]: Dec 6 05:06:37 localhost podman[296590]: 2025-12-06 10:06:37.505005082 +0000 UTC m=+0.055545266 container create 75c31ed3aeea2d333dd05ffdc15716aead2ea96a7458de6b34153b81816aceae (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=loving_aryabhata, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, release=1763362218, com.redhat.component=rhceph-container, distribution-scope=public, vendor=Red Hat, Inc., GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, ceph=True, RELEASE=main, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, architecture=x86_64, version=7, vcs-type=git, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 
7, url=https://catalog.redhat.com/en/search?searchType=containers) Dec 6 05:06:37 localhost systemd[1]: Started libpod-conmon-75c31ed3aeea2d333dd05ffdc15716aead2ea96a7458de6b34153b81816aceae.scope. Dec 6 05:06:37 localhost systemd[1]: Started libcrun container. Dec 6 05:06:37 localhost podman[296590]: 2025-12-06 10:06:37.572392363 +0000 UTC m=+0.122932507 container init 75c31ed3aeea2d333dd05ffdc15716aead2ea96a7458de6b34153b81816aceae (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=loving_aryabhata, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, GIT_CLEAN=True, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, description=Red Hat Ceph Storage 7, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7) Dec 6 05:06:37 localhost podman[296590]: 2025-12-06 10:06:37.476534634 +0000 UTC m=+0.027074778 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 6 05:06:37 localhost podman[296590]: 2025-12-06 10:06:37.583760175 +0000 UTC m=+0.134300279 container start 75c31ed3aeea2d333dd05ffdc15716aead2ea96a7458de6b34153b81816aceae (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, 
name=loving_aryabhata, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, vcs-type=git, version=7, GIT_BRANCH=main, RELEASE=main, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, CEPH_POINT_RELEASE=, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , distribution-scope=public) Dec 6 05:06:37 localhost podman[296590]: 2025-12-06 10:06:37.58392323 +0000 UTC m=+0.134463414 container attach 75c31ed3aeea2d333dd05ffdc15716aead2ea96a7458de6b34153b81816aceae (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=loving_aryabhata, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, io.openshift.expose-services=, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, 
RELEASE=main, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , ceph=True, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, io.openshift.tags=rhceph ceph) Dec 6 05:06:37 localhost loving_aryabhata[296605]: 167 167 Dec 6 05:06:37 localhost systemd[1]: libpod-75c31ed3aeea2d333dd05ffdc15716aead2ea96a7458de6b34153b81816aceae.scope: Deactivated successfully. Dec 6 05:06:37 localhost podman[296590]: 2025-12-06 10:06:37.587071065 +0000 UTC m=+0.137611279 container died 75c31ed3aeea2d333dd05ffdc15716aead2ea96a7458de6b34153b81816aceae (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=loving_aryabhata, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, release=1763362218, maintainer=Guillaume Abrioux , name=rhceph, ceph=True, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True) Dec 
6 05:06:37 localhost systemd[1]: var-lib-containers-storage-overlay-bc37b8670e214a7c485435781f98089c5d147ee00813179213a5c932f625f18c-merged.mount: Deactivated successfully. Dec 6 05:06:37 localhost podman[296610]: 2025-12-06 10:06:37.687246395 +0000 UTC m=+0.083362524 container remove 75c31ed3aeea2d333dd05ffdc15716aead2ea96a7458de6b34153b81816aceae (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=loving_aryabhata, io.buildah.version=1.41.4, vcs-type=git, maintainer=Guillaume Abrioux , io.openshift.expose-services=, CEPH_POINT_RELEASE=, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, release=1763362218, architecture=x86_64, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Dec 6 05:06:37 localhost systemd[1]: libpod-conmon-75c31ed3aeea2d333dd05ffdc15716aead2ea96a7458de6b34153b81816aceae.scope: Deactivated successfully. Dec 6 05:06:38 localhost ceph-mon[290022]: Removed host np0005548786.localdomain Dec 6 05:06:38 localhost ceph-mon[290022]: Reconfiguring osd.4 (monmap changed)... 
Dec 6 05:06:38 localhost ceph-mon[290022]: Reconfiguring daemon osd.4 on np0005548789.localdomain Dec 6 05:06:38 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:38 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:38 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005548789.vxwwsq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Dec 6 05:06:38 localhost podman[296685]: Dec 6 05:06:38 localhost podman[296685]: 2025-12-06 10:06:38.469993389 +0000 UTC m=+0.081085525 container create 0859935a88f3f5b52c014ec0400d08fd34e06d89cbcce056ce2b8d384b8a8f9c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_antonelli, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, com.redhat.component=rhceph-container, architecture=x86_64, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, io.openshift.expose-services=, 
vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 6 05:06:38 localhost systemd[1]: Started libpod-conmon-0859935a88f3f5b52c014ec0400d08fd34e06d89cbcce056ce2b8d384b8a8f9c.scope. Dec 6 05:06:38 localhost systemd[1]: Started libcrun container. Dec 6 05:06:38 localhost podman[296685]: 2025-12-06 10:06:38.526512694 +0000 UTC m=+0.137604840 container init 0859935a88f3f5b52c014ec0400d08fd34e06d89cbcce056ce2b8d384b8a8f9c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_antonelli, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , ceph=True, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, version=7, build-date=2025-11-26T19:44:28Z, distribution-scope=public, name=rhceph, vcs-type=git, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers) Dec 6 05:06:38 localhost podman[296685]: 2025-12-06 10:06:38.536181395 +0000 UTC m=+0.147273541 container start 0859935a88f3f5b52c014ec0400d08fd34e06d89cbcce056ce2b8d384b8a8f9c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_antonelli, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, 
GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , version=7, vendor=Red Hat, Inc., GIT_CLEAN=True, name=rhceph, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=, GIT_BRANCH=main, ceph=True, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Dec 6 05:06:38 localhost podman[296685]: 2025-12-06 10:06:38.536458343 +0000 UTC m=+0.147550499 container attach 0859935a88f3f5b52c014ec0400d08fd34e06d89cbcce056ce2b8d384b8a8f9c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_antonelli, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, GIT_BRANCH=main, maintainer=Guillaume Abrioux , ceph=True, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, distribution-scope=public, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, vcs-type=git, io.openshift.tags=rhceph ceph, RELEASE=main, 
url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Dec 6 05:06:38 localhost sleepy_antonelli[296699]: 167 167 Dec 6 05:06:38 localhost systemd[1]: libpod-0859935a88f3f5b52c014ec0400d08fd34e06d89cbcce056ce2b8d384b8a8f9c.scope: Deactivated successfully. Dec 6 05:06:38 localhost podman[296685]: 2025-12-06 10:06:38.439637075 +0000 UTC m=+0.050729261 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 6 05:06:38 localhost podman[296685]: 2025-12-06 10:06:38.54033897 +0000 UTC m=+0.151431156 container died 0859935a88f3f5b52c014ec0400d08fd34e06d89cbcce056ce2b8d384b8a8f9c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_antonelli, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, distribution-scope=public, GIT_BRANCH=main, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, name=rhceph, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, io.openshift.expose-services=, vendor=Red Hat, Inc., GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , RELEASE=main, architecture=x86_64, 
io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Dec 6 05:06:38 localhost systemd[1]: var-lib-containers-storage-overlay-9ee284d2c4681523b33fd0049a275ef040599e2e7234f36d380e2e79a5c11755-merged.mount: Deactivated successfully. Dec 6 05:06:38 localhost podman[296705]: 2025-12-06 10:06:38.651530942 +0000 UTC m=+0.098031186 container remove 0859935a88f3f5b52c014ec0400d08fd34e06d89cbcce056ce2b8d384b8a8f9c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_antonelli, com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., distribution-scope=public, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, release=1763362218, architecture=x86_64, RELEASE=main, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 6 05:06:38 localhost systemd[1]: libpod-conmon-0859935a88f3f5b52c014ec0400d08fd34e06d89cbcce056ce2b8d384b8a8f9c.scope: Deactivated successfully. Dec 6 05:06:38 localhost sshd[296719]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:06:38 localhost systemd-logind[766]: New session 67 of user tripleo-admin. Dec 6 05:06:38 localhost systemd[1]: Created slice User Slice of UID 1003. 
Dec 6 05:06:38 localhost systemd[1]: Starting User Runtime Directory /run/user/1003... Dec 6 05:06:38 localhost systemd[1]: Finished User Runtime Directory /run/user/1003. Dec 6 05:06:38 localhost systemd[1]: Starting User Manager for UID 1003... Dec 6 05:06:39 localhost systemd[296743]: Queued start job for default target Main User Target. Dec 6 05:06:39 localhost systemd[296743]: Created slice User Application Slice. Dec 6 05:06:39 localhost systemd[296743]: Started Mark boot as successful after the user session has run 2 minutes. Dec 6 05:06:39 localhost systemd[296743]: Started Daily Cleanup of User's Temporary Directories. Dec 6 05:06:39 localhost systemd[296743]: Reached target Paths. Dec 6 05:06:39 localhost systemd[296743]: Reached target Timers. Dec 6 05:06:39 localhost systemd[296743]: Starting D-Bus User Message Bus Socket... Dec 6 05:06:39 localhost systemd[296743]: Starting Create User's Volatile Files and Directories... Dec 6 05:06:39 localhost systemd[296743]: Listening on D-Bus User Message Bus Socket. Dec 6 05:06:39 localhost systemd[296743]: Reached target Sockets. Dec 6 05:06:39 localhost systemd[296743]: Finished Create User's Volatile Files and Directories. Dec 6 05:06:39 localhost systemd[296743]: Reached target Basic System. Dec 6 05:06:39 localhost systemd[1]: Started User Manager for UID 1003. Dec 6 05:06:39 localhost systemd[296743]: Reached target Main User Target. Dec 6 05:06:39 localhost systemd[296743]: Startup finished in 164ms. Dec 6 05:06:39 localhost systemd[1]: Started Session 67 of User tripleo-admin. Dec 6 05:06:39 localhost ceph-mon[290022]: Reconfiguring mds.mds.np0005548789.vxwwsq (monmap changed)... 
Dec 6 05:06:39 localhost ceph-mon[290022]: Reconfiguring daemon mds.mds.np0005548789.vxwwsq on np0005548789.localdomain Dec 6 05:06:39 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:39 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:39 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548789.mzhmje", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 6 05:06:39 localhost podman[296868]: Dec 6 05:06:39 localhost podman[296868]: 2025-12-06 10:06:39.564277205 +0000 UTC m=+0.086235180 container create d196ebc78888e7951ba278087dd8889e53ed189cfebf386fb454bde91eb13be0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=amazing_cori, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , io.buildah.version=1.41.4, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, distribution-scope=public, vcs-type=git, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, description=Red Hat Ceph Storage 7, RELEASE=main, 
version=7) Dec 6 05:06:39 localhost systemd[1]: Started libpod-conmon-d196ebc78888e7951ba278087dd8889e53ed189cfebf386fb454bde91eb13be0.scope. Dec 6 05:06:39 localhost podman[296868]: 2025-12-06 10:06:39.528309191 +0000 UTC m=+0.050267186 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 6 05:06:39 localhost systemd[1]: Started libcrun container. Dec 6 05:06:39 localhost podman[296868]: 2025-12-06 10:06:39.650178455 +0000 UTC m=+0.172136440 container init d196ebc78888e7951ba278087dd8889e53ed189cfebf386fb454bde91eb13be0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=amazing_cori, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, vcs-type=git, io.openshift.tags=rhceph ceph, name=rhceph, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, architecture=x86_64, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7) Dec 6 05:06:39 localhost podman[296868]: 2025-12-06 10:06:39.663742263 +0000 UTC m=+0.185700248 container start d196ebc78888e7951ba278087dd8889e53ed189cfebf386fb454bde91eb13be0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=amazing_cori, GIT_BRANCH=main, RELEASE=main, description=Red Hat 
Ceph Storage 7, maintainer=Guillaume Abrioux , build-date=2025-11-26T19:44:28Z, ceph=True, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, name=rhceph, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public) Dec 6 05:06:39 localhost podman[296868]: 2025-12-06 10:06:39.664115755 +0000 UTC m=+0.186073740 container attach d196ebc78888e7951ba278087dd8889e53ed189cfebf386fb454bde91eb13be0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=amazing_cori, release=1763362218, CEPH_POINT_RELEASE=, GIT_CLEAN=True, architecture=x86_64, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, io.openshift.expose-services=, RELEASE=main, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , vcs-type=git, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, 
summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, ceph=True, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Dec 6 05:06:39 localhost amazing_cori[296920]: 167 167 Dec 6 05:06:39 localhost systemd[1]: libpod-d196ebc78888e7951ba278087dd8889e53ed189cfebf386fb454bde91eb13be0.scope: Deactivated successfully. Dec 6 05:06:39 localhost podman[296868]: 2025-12-06 10:06:39.669259369 +0000 UTC m=+0.191217374 container died d196ebc78888e7951ba278087dd8889e53ed189cfebf386fb454bde91eb13be0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=amazing_cori, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, architecture=x86_64, io.buildah.version=1.41.4, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, RELEASE=main, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, name=rhceph) Dec 6 05:06:39 localhost ceph-mon[290022]: mon.np0005548789@2(peon).osd e89 
_set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:06:39 localhost nova_compute[282193]: 2025-12-06 10:06:39.771 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:06:39 localhost podman[296940]: 2025-12-06 10:06:39.804342052 +0000 UTC m=+0.121833104 container remove d196ebc78888e7951ba278087dd8889e53ed189cfebf386fb454bde91eb13be0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=amazing_cori, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, release=1763362218, RELEASE=main, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., name=rhceph) Dec 6 05:06:39 localhost systemd[1]: libpod-conmon-d196ebc78888e7951ba278087dd8889e53ed189cfebf386fb454bde91eb13be0.scope: Deactivated successfully. 
Dec 6 05:06:39 localhost python3[296949]: ansible-ansible.builtin.lineinfile Invoked with dest=/etc/os-net-config/tripleo_config.yaml insertafter=172.18.0 line= - ip_netmask: 172.18.0.104/24 backup=True path=/etc/os-net-config/tripleo_config.yaml state=present backrefs=False create=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Dec 6 05:06:40 localhost nova_compute[282193]: 2025-12-06 10:06:40.109 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:06:40 localhost ceph-mon[290022]: Reconfiguring mgr.np0005548789.mzhmje (monmap changed)... Dec 6 05:06:40 localhost ceph-mon[290022]: Reconfiguring daemon mgr.np0005548789.mzhmje on np0005548789.localdomain Dec 6 05:06:40 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:40 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:40 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Dec 6 05:06:40 localhost podman[297124]: Dec 6 05:06:40 localhost podman[297124]: 2025-12-06 10:06:40.584065075 +0000 UTC m=+0.077146527 container create 63c9e658dd6d40cb9cf151b4bf28dd978ea740f5e8ebe5c01b412e86c1f74540 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=clever_hodgkin, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, GIT_CLEAN=True, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , name=rhceph, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, 
description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, com.redhat.component=rhceph-container, version=7, release=1763362218, RELEASE=main, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 6 05:06:40 localhost systemd[1]: Started libpod-conmon-63c9e658dd6d40cb9cf151b4bf28dd978ea740f5e8ebe5c01b412e86c1f74540.scope. Dec 6 05:06:40 localhost systemd[1]: tmp-crun.6bMIOE.mount: Deactivated successfully. Dec 6 05:06:40 localhost systemd[1]: var-lib-containers-storage-overlay-07655610074ce032d6afad17c811578a09539b32e1d7c3f1db53f383da4f24d5-merged.mount: Deactivated successfully. Dec 6 05:06:40 localhost systemd[1]: Started libcrun container. 
Dec 6 05:06:40 localhost podman[297124]: 2025-12-06 10:06:40.546687449 +0000 UTC m=+0.039768891 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 6 05:06:40 localhost podman[297124]: 2025-12-06 10:06:40.657074186 +0000 UTC m=+0.150155638 container init 63c9e658dd6d40cb9cf151b4bf28dd978ea740f5e8ebe5c01b412e86c1f74540 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=clever_hodgkin, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, architecture=x86_64, com.redhat.component=rhceph-container, version=7, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, io.buildah.version=1.41.4, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, vcs-type=git, maintainer=Guillaume Abrioux ) Dec 6 05:06:40 localhost systemd[1]: tmp-crun.0lWVCy.mount: Deactivated successfully. 
Dec 6 05:06:40 localhost podman[297124]: 2025-12-06 10:06:40.669018646 +0000 UTC m=+0.162100068 container start 63c9e658dd6d40cb9cf151b4bf28dd978ea740f5e8ebe5c01b412e86c1f74540 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=clever_hodgkin, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, maintainer=Guillaume Abrioux , GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, io.buildah.version=1.41.4, io.openshift.expose-services=, com.redhat.component=rhceph-container, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc.) 
Dec 6 05:06:40 localhost podman[297124]: 2025-12-06 10:06:40.669833241 +0000 UTC m=+0.162914693 container attach 63c9e658dd6d40cb9cf151b4bf28dd978ea740f5e8ebe5c01b412e86c1f74540 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=clever_hodgkin, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, RELEASE=main, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, architecture=x86_64, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, ceph=True, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, distribution-scope=public) Dec 6 05:06:40 localhost clever_hodgkin[297172]: 167 167 Dec 6 05:06:40 localhost systemd[1]: libpod-63c9e658dd6d40cb9cf151b4bf28dd978ea740f5e8ebe5c01b412e86c1f74540.scope: Deactivated successfully. 
Dec 6 05:06:40 localhost podman[297124]: 2025-12-06 10:06:40.675675967 +0000 UTC m=+0.168757449 container died 63c9e658dd6d40cb9cf151b4bf28dd978ea740f5e8ebe5c01b412e86c1f74540 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=clever_hodgkin, ceph=True, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , version=7, release=1763362218, vcs-type=git) Dec 6 05:06:40 localhost python3[297175]: ansible-ansible.legacy.command Invoked with _raw_params=ip a add 172.18.0.104/24 dev vlan21 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 05:06:40 localhost podman[297178]: 2025-12-06 10:06:40.784437846 +0000 UTC m=+0.094906002 container remove 63c9e658dd6d40cb9cf151b4bf28dd978ea740f5e8ebe5c01b412e86c1f74540 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=clever_hodgkin, maintainer=Guillaume Abrioux , GIT_CLEAN=True, ceph=True, distribution-scope=public, CEPH_POINT_RELEASE=, GIT_BRANCH=main, 
GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, build-date=2025-11-26T19:44:28Z, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 6 05:06:40 localhost systemd[1]: libpod-conmon-63c9e658dd6d40cb9cf151b4bf28dd978ea740f5e8ebe5c01b412e86c1f74540.scope: Deactivated successfully. Dec 6 05:06:41 localhost ceph-mon[290022]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #22. Immutable memtables: 0. 
Dec 6 05:06:41 localhost ceph-mon[290022]: rocksdb: (Original Log Time 2025/12/06-10:06:41.048354) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Dec 6 05:06:41 localhost ceph-mon[290022]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 22 Dec 6 05:06:41 localhost ceph-mon[290022]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015601048426, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 1047, "num_deletes": 258, "total_data_size": 1528014, "memory_usage": 1554672, "flush_reason": "Manual Compaction"} Dec 6 05:06:41 localhost ceph-mon[290022]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #23: started Dec 6 05:06:41 localhost ceph-mon[290022]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015601057877, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 23, "file_size": 889561, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 15549, "largest_seqno": 16590, "table_properties": {"data_size": 884640, "index_size": 2328, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 12862, "raw_average_key_size": 21, "raw_value_size": 873892, "raw_average_value_size": 1442, "num_data_blocks": 100, "num_entries": 606, "num_filter_entries": 606, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; 
zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765015581, "oldest_key_time": 1765015581, "file_creation_time": 1765015601, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8b48a877-4508-4eb4-a052-67f753f228b0", "db_session_id": "ETDWGFPM6GCTACWNDM5G", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}} Dec 6 05:06:41 localhost ceph-mon[290022]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 9563 microseconds, and 3484 cpu microseconds. Dec 6 05:06:41 localhost ceph-mon[290022]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Dec 6 05:06:41 localhost ceph-mon[290022]: rocksdb: (Original Log Time 2025/12/06-10:06:41.057926) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #23: 889561 bytes OK Dec 6 05:06:41 localhost ceph-mon[290022]: rocksdb: (Original Log Time 2025/12/06-10:06:41.057947) [db/memtable_list.cc:519] [default] Level-0 commit table #23 started Dec 6 05:06:41 localhost ceph-mon[290022]: rocksdb: (Original Log Time 2025/12/06-10:06:41.059741) [db/memtable_list.cc:722] [default] Level-0 commit table #23: memtable #1 done Dec 6 05:06:41 localhost ceph-mon[290022]: rocksdb: (Original Log Time 2025/12/06-10:06:41.059782) EVENT_LOG_v1 {"time_micros": 1765015601059776, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Dec 6 05:06:41 localhost ceph-mon[290022]: rocksdb: (Original Log Time 2025/12/06-10:06:41.059809) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Dec 6 05:06:41 localhost ceph-mon[290022]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 1522395, prev total WAL file size 1522719, number of live WAL 
files 2. Dec 6 05:06:41 localhost ceph-mon[290022]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548789/store.db/000019.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 6 05:06:41 localhost ceph-mon[290022]: rocksdb: (Original Log Time 2025/12/06-10:06:41.060441) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033353135' seq:72057594037927935, type:22 .. '6C6F676D0033373638' seq:0, type:0; will stop at (end) Dec 6 05:06:41 localhost ceph-mon[290022]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00 Dec 6 05:06:41 localhost ceph-mon[290022]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [23(868KB)], [21(14MB)] Dec 6 05:06:41 localhost ceph-mon[290022]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015601060501, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [23], "files_L6": [21], "score": -1, "input_data_size": 16480269, "oldest_snapshot_seqno": -1} Dec 6 05:06:41 localhost ceph-mon[290022]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #24: 10244 keys, 16342357 bytes, temperature: kUnknown Dec 6 05:06:41 localhost ceph-mon[290022]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015601165159, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 24, "file_size": 16342357, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16281772, "index_size": 33860, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25669, "raw_key_size": 275366, "raw_average_key_size": 26, "raw_value_size": 16104502, "raw_average_value_size": 1572, 
"num_data_blocks": 1290, "num_entries": 10244, "num_filter_entries": 10244, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765015444, "oldest_key_time": 0, "file_creation_time": 1765015601, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8b48a877-4508-4eb4-a052-67f753f228b0", "db_session_id": "ETDWGFPM6GCTACWNDM5G", "orig_file_number": 24, "seqno_to_time_mapping": "N/A"}} Dec 6 05:06:41 localhost ceph-mon[290022]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Dec 6 05:06:41 localhost ceph-mon[290022]: rocksdb: (Original Log Time 2025/12/06-10:06:41.165536) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 16342357 bytes Dec 6 05:06:41 localhost ceph-mon[290022]: rocksdb: (Original Log Time 2025/12/06-10:06:41.167908) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 157.3 rd, 156.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 14.9 +0.0 blob) out(15.6 +0.0 blob), read-write-amplify(36.9) write-amplify(18.4) OK, records in: 10788, records dropped: 544 output_compression: NoCompression Dec 6 05:06:41 localhost ceph-mon[290022]: rocksdb: (Original Log Time 2025/12/06-10:06:41.167943) EVENT_LOG_v1 {"time_micros": 1765015601167928, "job": 10, "event": "compaction_finished", "compaction_time_micros": 104778, "compaction_time_cpu_micros": 46973, "output_level": 6, "num_output_files": 1, "total_output_size": 16342357, "num_input_records": 10788, "num_output_records": 10244, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Dec 6 05:06:41 localhost ceph-mon[290022]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548789/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 6 05:06:41 localhost ceph-mon[290022]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015601168204, "job": 10, "event": "table_file_deletion", "file_number": 23} Dec 6 05:06:41 localhost ceph-mon[290022]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548789/store.db/000021.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 6 05:06:41 localhost ceph-mon[290022]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015601170519, "job": 
10, "event": "table_file_deletion", "file_number": 21} Dec 6 05:06:41 localhost ceph-mon[290022]: rocksdb: (Original Log Time 2025/12/06-10:06:41.060371) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:06:41 localhost ceph-mon[290022]: rocksdb: (Original Log Time 2025/12/06-10:06:41.170618) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:06:41 localhost ceph-mon[290022]: rocksdb: (Original Log Time 2025/12/06-10:06:41.170626) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:06:41 localhost ceph-mon[290022]: rocksdb: (Original Log Time 2025/12/06-10:06:41.170630) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:06:41 localhost ceph-mon[290022]: rocksdb: (Original Log Time 2025/12/06-10:06:41.170634) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:06:41 localhost ceph-mon[290022]: rocksdb: (Original Log Time 2025/12/06-10:06:41.170638) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:06:41 localhost ceph-mon[290022]: Reconfiguring mon.np0005548789 (monmap changed)... 
Dec 6 05:06:41 localhost ceph-mon[290022]: Reconfiguring daemon mon.np0005548789 on np0005548789.localdomain Dec 6 05:06:41 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:41 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:41 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548790.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 6 05:06:41 localhost python3[297336]: ansible-ansible.legacy.command Invoked with _raw_params=ping -W1 -c 3 172.18.0.104 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 05:06:41 localhost systemd[1]: var-lib-containers-storage-overlay-c76e2a4d7f790ac67491a32c3668c14c81f35f2fe10b54013ab303d6cc47425d-merged.mount: Deactivated successfully. Dec 6 05:06:42 localhost ceph-mon[290022]: Reconfiguring crash.np0005548790 (monmap changed)... Dec 6 05:06:42 localhost ceph-mon[290022]: Reconfiguring daemon crash.np0005548790 on np0005548790.localdomain Dec 6 05:06:42 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:42 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:42 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch Dec 6 05:06:43 localhost ceph-mon[290022]: Reconfiguring osd.0 (monmap changed)... 
Dec 6 05:06:43 localhost ceph-mon[290022]: Reconfiguring daemon osd.0 on np0005548790.localdomain Dec 6 05:06:43 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:43 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:43 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch Dec 6 05:06:44 localhost ceph-mon[290022]: Reconfiguring osd.3 (monmap changed)... Dec 6 05:06:44 localhost ceph-mon[290022]: Reconfiguring daemon osd.3 on np0005548790.localdomain Dec 6 05:06:44 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:44 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:44 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005548790.vhcezv", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Dec 6 05:06:44 localhost ceph-mon[290022]: mon.np0005548789@2(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:06:44 localhost nova_compute[282193]: 2025-12-06 10:06:44.805 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:06:45 localhost nova_compute[282193]: 2025-12-06 10:06:45.112 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:06:45 localhost ceph-mon[290022]: Reconfiguring mds.mds.np0005548790.vhcezv (monmap changed)... 
Dec 6 05:06:45 localhost ceph-mon[290022]: Reconfiguring daemon mds.mds.np0005548790.vhcezv on np0005548790.localdomain Dec 6 05:06:45 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:45 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:45 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:45 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548790.kvkfyr", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 6 05:06:46 localhost ceph-mon[290022]: Saving service mon spec with placement label:mon Dec 6 05:06:46 localhost ceph-mon[290022]: Reconfiguring mgr.np0005548790.kvkfyr (monmap changed)... Dec 6 05:06:46 localhost ceph-mon[290022]: Reconfiguring daemon mgr.np0005548790.kvkfyr on np0005548790.localdomain Dec 6 05:06:46 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:46 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:46 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 6 05:06:46 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:46 localhost openstack_network_exporter[243110]: ERROR 10:06:46 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 6 05:06:46 localhost openstack_network_exporter[243110]: ERROR 10:06:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:06:46 localhost 
openstack_network_exporter[243110]: ERROR 10:06:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:06:46 localhost openstack_network_exporter[243110]: ERROR 10:06:46 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 6 05:06:46 localhost openstack_network_exporter[243110]: Dec 6 05:06:46 localhost openstack_network_exporter[243110]: ERROR 10:06:46 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 6 05:06:46 localhost openstack_network_exporter[243110]: Dec 6 05:06:47 localhost ceph-mon[290022]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:06:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:06:47.296 160509 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 05:06:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:06:47.296 160509 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 05:06:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:06:47.298 160509 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 05:06:47 localhost ceph-mgr[288591]: ms_deliver_dispatch: unhandled message 0x56140ed13080 mon_map magic: 0 from mon.2 v2:172.18.0.107:3300/0 Dec 6 05:06:47 localhost ceph-mon[290022]: mon.np0005548789@2(peon) e11 removed from monmap, suicide. 
Dec 6 05:06:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999. Dec 6 05:06:47 localhost podman[297384]: 2025-12-06 10:06:47.633404436 +0000 UTC m=+0.068828176 container died 8db79eb988f6401c6530def208a6cc95f5f6889aff146370f866953a0dd24fb0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mon-np0005548789, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, io.openshift.expose-services=, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, RELEASE=main, GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, release=1763362218, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, version=7) Dec 6 05:06:47 localhost systemd[1]: var-lib-containers-storage-overlay-58b056c2dffd8d19854bf632a4b426ef2949193a3440de3f8d9216685a5e7198-merged.mount: Deactivated successfully. 
Dec 6 05:06:47 localhost podman[297384]: 2025-12-06 10:06:47.678547986 +0000 UTC m=+0.113971656 container remove 8db79eb988f6401c6530def208a6cc95f5f6889aff146370f866953a0dd24fb0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mon-np0005548789, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.openshift.expose-services=, name=rhceph, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, ceph=True, RELEASE=main, distribution-scope=public, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, architecture=x86_64, maintainer=Guillaume Abrioux ) Dec 6 05:06:47 localhost systemd[1]: tmp-crun.DbiYf8.mount: Deactivated successfully. 
Dec 6 05:06:47 localhost podman[297400]: 2025-12-06 10:06:47.728214454 +0000 UTC m=+0.101144150 container health_status 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true) Dec 6 05:06:47 localhost podman[297400]: 2025-12-06 10:06:47.738095572 +0000 UTC 
m=+0.111025288 container exec_died 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent) Dec 6 05:06:47 localhost ceph-mgr[288591]: --2- 172.18.0.107:0/2196335751 >> [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] conn(0x56140edb3000 0x56140ecb9600 unknown :-1 s=AUTH_CONNECTING 
pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request get_initial_auth_request returned -2 Dec 6 05:06:47 localhost ceph-mgr[288591]: client.0 ms_handle_reset on v2:172.18.0.103:3300/0 Dec 6 05:06:47 localhost ceph-mgr[288591]: client.0 ms_handle_reset on v2:172.18.0.103:3300/0 Dec 6 05:06:47 localhost ceph-mgr[288591]: ms_deliver_dispatch: unhandled message 0x56140ed12f20 mon_map magic: 0 from mon.0 v2:172.18.0.105:3300/0 Dec 6 05:06:47 localhost systemd[1]: 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.service: Deactivated successfully. Dec 6 05:06:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941. Dec 6 05:06:48 localhost podman[297490]: 2025-12-06 10:06:48.196691565 +0000 UTC m=+0.089327723 container health_status b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Dec 6 05:06:48 localhost podman[297490]: 2025-12-06 10:06:48.203843441 +0000 UTC m=+0.096479609 container exec_died b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 
(image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 6 05:06:48 localhost systemd[1]: b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941.service: Deactivated successfully. Dec 6 05:06:48 localhost systemd[1]: ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8@mon.np0005548789.service: Deactivated successfully. Dec 6 05:06:48 localhost systemd[1]: Stopped Ceph mon.np0005548789 for 1939e851-b10c-5c3b-9bb7-8e7f380233e8. Dec 6 05:06:48 localhost systemd[1]: ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8@mon.np0005548789.service: Consumed 7.232s CPU time. Dec 6 05:06:48 localhost systemd[1]: Reloading. Dec 6 05:06:48 localhost systemd-rc-local-generator[297585]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 6 05:06:48 localhost systemd-sysv-generator[297588]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Dec 6 05:06:48 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 05:06:48 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 6 05:06:48 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 05:06:48 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 05:06:48 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 6 05:06:48 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 6 05:06:48 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 05:06:48 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 05:06:48 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 05:06:49 localhost sshd[297666]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:06:49 localhost nova_compute[282193]: 2025-12-06 10:06:49.833 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:06:49 localhost podman[297700]: 2025-12-06 10:06:49.930514169 +0000 UTC m=+0.144031372 container exec ecefb99ff092b207f760d60e8fd97152c624df0e1016549024435ae641ee2e06 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548789, vcs-type=git, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, 
url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, release=1763362218, RELEASE=main, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, architecture=x86_64, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 6 05:06:50 localhost podman[297700]: 2025-12-06 10:06:50.034241095 +0000 UTC m=+0.247758278 container exec_died ecefb99ff092b207f760d60e8fd97152c624df0e1016549024435ae641ee2e06 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548789, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.openshift.tags=rhceph ceph, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, architecture=x86_64, release=1763362218, com.redhat.component=rhceph-container, version=7, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, 
cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, distribution-scope=public, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 6 05:06:50 localhost nova_compute[282193]: 2025-12-06 10:06:50.117 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:06:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094. Dec 6 05:06:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e. Dec 6 05:06:53 localhost podman[297874]: 2025-12-06 10:06:53.126917569 +0000 UTC m=+0.099379616 container health_status bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', 
'/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:06:53 localhost podman[297874]: 2025-12-06 10:06:53.164286596 +0000 UTC m=+0.136748643 container exec_died bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': 
['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm) Dec 6 05:06:53 localhost systemd[1]: bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094.service: Deactivated successfully. Dec 6 05:06:53 localhost podman[297907]: 2025-12-06 10:06:53.21950869 +0000 UTC m=+0.095169710 container health_status 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, managed_by=edpm_ansible, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': 
'/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, version=9.6, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., architecture=x86_64, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, build-date=2025-08-20T13:12:41) Dec 6 05:06:53 localhost podman[297907]: 2025-12-06 10:06:53.235215444 +0000 UTC m=+0.110876454 container exec_died 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, architecture=x86_64, config_id=edpm, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.buildah.version=1.33.7, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9) Dec 6 05:06:53 localhost systemd[1]: 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e.service: Deactivated successfully. 
Dec 6 05:06:53 localhost podman[241090]: time="2025-12-06T10:06:53Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 6 05:06:53 localhost podman[241090]: @ - - [06/Dec/2025:10:06:53 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 153839 "" "Go-http-client/1.1" Dec 6 05:06:53 localhost podman[241090]: @ - - [06/Dec/2025:10:06:53 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18728 "" "Go-http-client/1.1" Dec 6 05:06:54 localhost nova_compute[282193]: 2025-12-06 10:06:54.864 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:06:55 localhost nova_compute[282193]: 2025-12-06 10:06:55.117 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:06:55 localhost sshd[298181]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:06:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6. 
Dec 6 05:06:55 localhost podman[298183]: 2025-12-06 10:06:55.932110088 +0000 UTC m=+0.091183240 container health_status 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3) Dec 6 05:06:55 localhost podman[298183]: 2025-12-06 10:06:55.970387811 +0000 UTC m=+0.129460943 container exec_died 
44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Dec 6 05:06:55 localhost systemd[1]: 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6.service: Deactivated successfully. 
Dec 6 05:06:59 localhost nova_compute[282193]: 2025-12-06 10:06:59.888 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:07:00 localhost nova_compute[282193]: 2025-12-06 10:07:00.120 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:07:01 localhost podman[298280]: Dec 6 05:07:01 localhost podman[298280]: 2025-12-06 10:07:01.167789021 +0000 UTC m=+0.064172516 container create 468c4f70a3ff9fbd27850c17b3e27676a0d21c5a99b901d4860f38945e28a73e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_mccarthy, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.openshift.expose-services=, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, RELEASE=main, name=rhceph, maintainer=Guillaume Abrioux , distribution-scope=public, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True) Dec 6 05:07:01 localhost systemd[1]: Started libpod-conmon-468c4f70a3ff9fbd27850c17b3e27676a0d21c5a99b901d4860f38945e28a73e.scope. 
Dec 6 05:07:01 localhost podman[298280]: 2025-12-06 10:07:01.132427574 +0000 UTC m=+0.028811099 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 6 05:07:01 localhost systemd[1]: Started libcrun container. Dec 6 05:07:01 localhost podman[298280]: 2025-12-06 10:07:01.250511084 +0000 UTC m=+0.146894579 container init 468c4f70a3ff9fbd27850c17b3e27676a0d21c5a99b901d4860f38945e28a73e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_mccarthy, architecture=x86_64, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, vcs-type=git, name=rhceph, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, io.buildah.version=1.41.4, RELEASE=main, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7) Dec 6 05:07:01 localhost podman[298280]: 2025-12-06 10:07:01.265989271 +0000 UTC m=+0.162372756 container start 468c4f70a3ff9fbd27850c17b3e27676a0d21c5a99b901d4860f38945e28a73e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_mccarthy, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, ceph=True, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, 
release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, architecture=x86_64, vcs-type=git, GIT_CLEAN=True, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, distribution-scope=public, build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, description=Red Hat Ceph Storage 7, RELEASE=main, name=rhceph, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Dec 6 05:07:01 localhost podman[298280]: 2025-12-06 10:07:01.266266209 +0000 UTC m=+0.162649764 container attach 468c4f70a3ff9fbd27850c17b3e27676a0d21c5a99b901d4860f38945e28a73e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_mccarthy, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, RELEASE=main, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, GIT_CLEAN=True, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, 
url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, name=rhceph, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, CEPH_POINT_RELEASE=) Dec 6 05:07:01 localhost friendly_mccarthy[298312]: 167 167 Dec 6 05:07:01 localhost systemd[1]: libpod-468c4f70a3ff9fbd27850c17b3e27676a0d21c5a99b901d4860f38945e28a73e.scope: Deactivated successfully. Dec 6 05:07:01 localhost podman[298280]: 2025-12-06 10:07:01.270090614 +0000 UTC m=+0.166474169 container died 468c4f70a3ff9fbd27850c17b3e27676a0d21c5a99b901d4860f38945e28a73e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_mccarthy, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, name=rhceph, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, GIT_BRANCH=main, architecture=x86_64, distribution-scope=public, ceph=True, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Dec 6 05:07:01 localhost podman[298334]: 2025-12-06 10:07:01.387959767 +0000 UTC m=+0.105497011 container remove 
468c4f70a3ff9fbd27850c17b3e27676a0d21c5a99b901d4860f38945e28a73e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_mccarthy, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.expose-services=, RELEASE=main, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, release=1763362218, ceph=True, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, vcs-type=git, distribution-scope=public, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Dec 6 05:07:01 localhost systemd[1]: libpod-conmon-468c4f70a3ff9fbd27850c17b3e27676a0d21c5a99b901d4860f38945e28a73e.scope: Deactivated successfully. 
Dec 6 05:07:01 localhost podman[298352]: Dec 6 05:07:01 localhost podman[298352]: 2025-12-06 10:07:01.478040332 +0000 UTC m=+0.063873646 container create e7f2a5d8f5039835ad07dc73e30ae6a9c876fbf00071afec023234b27507e663 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=serene_panini, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, ceph=True, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, distribution-scope=public, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, name=rhceph, CEPH_POINT_RELEASE=, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux ) Dec 6 05:07:01 localhost systemd[1]: Started libpod-conmon-e7f2a5d8f5039835ad07dc73e30ae6a9c876fbf00071afec023234b27507e663.scope. Dec 6 05:07:01 localhost systemd[1]: Started libcrun container. 
Dec 6 05:07:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74214f1039b49a14641ac13d0ecf4dc532c465ec1c075256312d635b09f82e6b/merged/tmp/config supports timestamps until 2038 (0x7fffffff) Dec 6 05:07:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74214f1039b49a14641ac13d0ecf4dc532c465ec1c075256312d635b09f82e6b/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff) Dec 6 05:07:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74214f1039b49a14641ac13d0ecf4dc532c465ec1c075256312d635b09f82e6b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Dec 6 05:07:01 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74214f1039b49a14641ac13d0ecf4dc532c465ec1c075256312d635b09f82e6b/merged/var/lib/ceph/mon/ceph-np0005548789 supports timestamps until 2038 (0x7fffffff) Dec 6 05:07:01 localhost podman[298352]: 2025-12-06 10:07:01.540798104 +0000 UTC m=+0.126631388 container init e7f2a5d8f5039835ad07dc73e30ae6a9c876fbf00071afec023234b27507e663 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=serene_panini, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, RELEASE=main, build-date=2025-11-26T19:44:28Z, name=rhceph, 
org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 6 05:07:01 localhost podman[298352]: 2025-12-06 10:07:01.443538532 +0000 UTC m=+0.029371856 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 6 05:07:01 localhost podman[298352]: 2025-12-06 10:07:01.548101244 +0000 UTC m=+0.133934498 container start e7f2a5d8f5039835ad07dc73e30ae6a9c876fbf00071afec023234b27507e663 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=serene_panini, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, name=rhceph, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, GIT_CLEAN=True, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, ceph=True) Dec 6 05:07:01 localhost podman[298352]: 2025-12-06 10:07:01.548442225 +0000 UTC m=+0.134275559 container attach 
e7f2a5d8f5039835ad07dc73e30ae6a9c876fbf00071afec023234b27507e663 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=serene_panini, build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=, version=7, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, GIT_BRANCH=main, distribution-scope=public, release=1763362218, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, name=rhceph, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 6 05:07:01 localhost systemd[1]: libpod-e7f2a5d8f5039835ad07dc73e30ae6a9c876fbf00071afec023234b27507e663.scope: Deactivated successfully. 
Dec 6 05:07:01 localhost podman[298352]: 2025-12-06 10:07:01.654553903 +0000 UTC m=+0.240387207 container died e7f2a5d8f5039835ad07dc73e30ae6a9c876fbf00071afec023234b27507e663 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=serene_panini, io.k8s.description=Red Hat Ceph Storage 7, version=7, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, name=rhceph, distribution-scope=public, vcs-type=git, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, com.redhat.component=rhceph-container) Dec 6 05:07:01 localhost podman[298406]: 2025-12-06 10:07:01.756813486 +0000 UTC m=+0.087960842 container remove e7f2a5d8f5039835ad07dc73e30ae6a9c876fbf00071afec023234b27507e663 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=serene_panini, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, ceph=True, vcs-type=git, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, 
org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, RELEASE=main, com.redhat.component=rhceph-container, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, release=1763362218, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 6 05:07:01 localhost systemd[1]: libpod-conmon-e7f2a5d8f5039835ad07dc73e30ae6a9c876fbf00071afec023234b27507e663.scope: Deactivated successfully. Dec 6 05:07:01 localhost systemd[1]: Reloading. Dec 6 05:07:01 localhost systemd-sysv-generator[298451]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 6 05:07:01 localhost systemd-rc-local-generator[298448]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 6 05:07:02 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 05:07:02 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 6 05:07:02 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 05:07:02 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 05:07:02 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. 
Support for MemoryLimit= will be removed soon. Dec 6 05:07:02 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 6 05:07:02 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 05:07:02 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 05:07:02 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 05:07:02 localhost systemd[1]: var-lib-containers-storage-overlay-3def008430c718dad15f8d9856f3d5283bd470351db279e370c4b8357a48142c-merged.mount: Deactivated successfully. Dec 6 05:07:02 localhost systemd[1]: Reloading. Dec 6 05:07:02 localhost systemd-sysv-generator[298493]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Dec 6 05:07:02 localhost systemd-rc-local-generator[298487]: /etc/rc.d/rc.local is not marked executable, skipping. Dec 6 05:07:02 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 05:07:02 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Dec 6 05:07:02 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 05:07:02 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 05:07:02 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Dec 6 05:07:02 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Dec 6 05:07:02 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 05:07:02 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 05:07:02 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Dec 6 05:07:02 localhost systemd[1]: Starting Ceph mon.np0005548789 for 1939e851-b10c-5c3b-9bb7-8e7f380233e8... Dec 6 05:07:02 localhost podman[298552]: Dec 6 05:07:02 localhost podman[298552]: 2025-12-06 10:07:02.991216765 +0000 UTC m=+0.073943420 container create fc31a9b04a3a29a488005d8205bdc8f4100d6f62af8b9293fa7a15910a34b090 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mon-np0005548789, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, RELEASE=main, distribution-scope=public, vcs-type=git, io.openshift.tags=rhceph ceph, version=7, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, GIT_CLEAN=True, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, 
com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218) Dec 6 05:07:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538. Dec 6 05:07:03 localhost systemd[1]: tmp-crun.LQ7hUr.mount: Deactivated successfully. Dec 6 05:07:03 localhost podman[298552]: 2025-12-06 10:07:02.966414588 +0000 UTC m=+0.049141303 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 6 05:07:03 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14bfadbe6baaced7cb4e86831519444fa904b127541fff2989a7149ffd9fa7d8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Dec 6 05:07:03 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14bfadbe6baaced7cb4e86831519444fa904b127541fff2989a7149ffd9fa7d8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Dec 6 05:07:03 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14bfadbe6baaced7cb4e86831519444fa904b127541fff2989a7149ffd9fa7d8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Dec 6 05:07:03 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14bfadbe6baaced7cb4e86831519444fa904b127541fff2989a7149ffd9fa7d8/merged/var/lib/ceph/mon/ceph-np0005548789 supports timestamps until 2038 (0x7fffffff) Dec 6 05:07:03 localhost podman[298552]: 2025-12-06 10:07:03.07732159 +0000 UTC m=+0.160048275 container init fc31a9b04a3a29a488005d8205bdc8f4100d6f62af8b9293fa7a15910a34b090 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mon-np0005548789, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, 
io.openshift.expose-services=, ceph=True, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, release=1763362218, name=rhceph, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, maintainer=Guillaume Abrioux ) Dec 6 05:07:03 localhost podman[298552]: 2025-12-06 10:07:03.084068844 +0000 UTC m=+0.166795519 container start fc31a9b04a3a29a488005d8205bdc8f4100d6f62af8b9293fa7a15910a34b090 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mon-np0005548789, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, vcs-type=git, CEPH_POINT_RELEASE=, architecture=x86_64, RELEASE=main, release=1763362218, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , ceph=True, summary=Provides 
the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True) Dec 6 05:07:03 localhost bash[298552]: fc31a9b04a3a29a488005d8205bdc8f4100d6f62af8b9293fa7a15910a34b090 Dec 6 05:07:03 localhost systemd[1]: Started Ceph mon.np0005548789 for 1939e851-b10c-5c3b-9bb7-8e7f380233e8. Dec 6 05:07:03 localhost podman[298565]: 2025-12-06 10:07:03.121583315 +0000 UTC m=+0.083680054 container health_status d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Dec 6 05:07:03 localhost 
ceph-mon[298582]: set uid:gid to 167:167 (ceph:ceph) Dec 6 05:07:03 localhost ceph-mon[298582]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-mon, pid 2 Dec 6 05:07:03 localhost ceph-mon[298582]: pidfile_write: ignore empty --pid-file Dec 6 05:07:03 localhost ceph-mon[298582]: load: jerasure load: lrc Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: RocksDB version: 7.9.2 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Git sha 0 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Compile date 2025-09-23 00:00:00 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: DB SUMMARY Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: DB Session ID: JMBO5KX1IJCJ8FWC64EX Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: CURRENT file: CURRENT Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: IDENTITY file: IDENTITY Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: MANIFEST file: MANIFEST-000005 size: 59 Bytes Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: SST files in /var/lib/ceph/mon/ceph-np0005548789/store.db dir, Total Num: 0, files: Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-np0005548789/store.db: 000004.log size: 761 ; Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.error_if_exists: 0 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.create_if_missing: 0 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.paranoid_checks: 1 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.flush_verify_memtable_count: 1 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.track_and_verify_wals_in_manifest: 0 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.env: 0x55b607e7d9e0 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.fs: PosixFileSystem Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: 
Options.info_log: 0x55b608f2cd20 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.max_file_opening_threads: 16 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.statistics: (nil) Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.use_fsync: 0 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.max_log_file_size: 0 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.max_manifest_file_size: 1073741824 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.log_file_time_to_roll: 0 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.keep_log_file_num: 1000 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.recycle_log_file_num: 0 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.allow_fallocate: 1 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.allow_mmap_reads: 0 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.allow_mmap_writes: 0 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.use_direct_reads: 0 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.create_missing_column_families: 0 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.db_log_dir: Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.wal_dir: Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.table_cache_numshardbits: 6 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.WAL_ttl_seconds: 0 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.WAL_size_limit_MB: 0 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.manifest_preallocation_size: 4194304 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.is_fd_close_on_exec: 1 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.advise_random_on_open: 1 Dec 6 05:07:03 localhost 
ceph-mon[298582]: rocksdb: Options.db_write_buffer_size: 0 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.write_buffer_manager: 0x55b608f3d540 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.access_hint_on_compaction_start: 1 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.random_access_max_buffer_size: 1048576 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.use_adaptive_mutex: 0 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.rate_limiter: (nil) Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.wal_recovery_mode: 2 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.enable_thread_tracking: 0 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.enable_pipelined_write: 0 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.unordered_write: 0 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.allow_concurrent_memtable_write: 1 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.enable_write_thread_adaptive_yield: 1 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.write_thread_max_yield_usec: 100 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.write_thread_slow_yield_usec: 3 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.row_cache: None Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.wal_filter: None Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.avoid_flush_during_recovery: 0 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.allow_ingest_behind: 0 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.two_write_queues: 0 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.manual_wal_flush: 0 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.wal_compression: 0 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.atomic_flush: 0 Dec 6 05:07:03 localhost 
ceph-mon[298582]: rocksdb: Options.avoid_unnecessary_blocking_io: 0 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.persist_stats_to_disk: 0 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.write_dbid_to_manifest: 0 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.log_readahead_size: 0 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.file_checksum_gen_factory: Unknown Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.best_efforts_recovery: 0 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.max_bgerror_resume_count: 2147483647 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.bgerror_resume_retry_interval: 1000000 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.allow_data_in_errors: 0 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.db_host_id: __hostname__ Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.enforce_single_del_contracts: true Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.max_background_jobs: 2 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.max_background_compactions: -1 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.max_subcompactions: 1 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.avoid_flush_during_shutdown: 0 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.writable_file_max_buffer_size: 1048576 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.delayed_write_rate : 16777216 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.max_total_wal_size: 0 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.stats_dump_period_sec: 600 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.stats_persist_period_sec: 600 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.stats_history_buffer_size: 1048576 Dec 6 05:07:03 localhost 
ceph-mon[298582]: rocksdb: Options.max_open_files: -1 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.bytes_per_sync: 0 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.wal_bytes_per_sync: 0 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.strict_bytes_per_sync: 0 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.compaction_readahead_size: 0 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.max_background_flushes: -1 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Compression algorithms supported: Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: #011kZSTD supported: 0 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: #011kXpressCompression supported: 0 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: #011kBZip2Compression supported: 0 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: #011kZSTDNotFinalCompression supported: 0 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: #011kLZ4Compression supported: 1 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: #011kZlibCompression supported: 1 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: #011kLZ4HCCompression supported: 1 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: #011kSnappyCompression supported: 1 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Fast CRC32 supported: Supported on x86 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: DMutex implementation: pthread_mutex_t Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-np0005548789/store.db/MANIFEST-000005 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.comparator: leveldb.BytewiseComparator Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.merge_operator: Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.compaction_filter: None Dec 6 
05:07:03 localhost ceph-mon[298582]: rocksdb: Options.compaction_filter_factory: None Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.sst_partitioner_factory: None Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.memtable_factory: SkipListFactory Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.table_factory: BlockBasedTable Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b608f2c980)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55b608f29350#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.write_buffer_size: 33554432 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.max_write_buffer_number: 2 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.compression: NoCompression Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.bottommost_compression: Disabled Dec 
6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.prefix_extractor: nullptr Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.num_levels: 7 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.min_write_buffer_number_to_merge: 1 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.bottommost_compression_opts.level: 32767 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.bottommost_compression_opts.enabled: false Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.compression_opts.window_bits: -14 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.compression_opts.level: 32767 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.compression_opts.strategy: 0 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Dec 6 
05:07:03 localhost ceph-mon[298582]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.compression_opts.parallel_threads: 1 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.compression_opts.enabled: false Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.level0_file_num_compaction_trigger: 4 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.level0_stop_writes_trigger: 36 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.target_file_size_base: 67108864 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.target_file_size_multiplier: 1 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.max_bytes_for_level_base: 268435456 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.max_bytes_for_level_multiplier: 10.000000 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Dec 6 05:07:03 localhost 
ceph-mon[298582]: rocksdb: Options.max_compaction_bytes: 1677721600 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.arena_block_size: 1048576 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.disable_auto_compactions: 0 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.compaction_style: kCompactionStyleLevel Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.table_properties_collectors: Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.inplace_update_support: 0 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.inplace_update_num_locks: 10000 Dec 6 05:07:03 localhost ceph-mon[298582]: 
rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.memtable_whole_key_filtering: 0 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.memtable_huge_page_size: 0 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.bloom_locality: 0 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.max_successive_merges: 0 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.optimize_filters_for_hits: 0 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.paranoid_file_checks: 0 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.force_consistency_checks: 1 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.report_bg_io_stats: 0 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.ttl: 2592000 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.periodic_compaction_seconds: 0 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.preclude_last_level_data_seconds: 0 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.preserve_internal_time_seconds: 0 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.enable_blob_files: false Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.min_blob_size: 0 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.blob_file_size: 268435456 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.blob_compression_type: NoCompression Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.enable_blob_garbage_collection: false Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.blob_compaction_readahead_size: 0 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: Options.blob_file_starting_level: 0 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: 
Options.experimental_mempurge_threshold: 0.000000 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-np0005548789/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 1ec2a3a6-c7d0-4f1e-9923-5a3b782725ae Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015623136019, "job": 1, "event": "recovery_started", "wal_files": [4]} Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015623138512, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1887, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 773, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 651, "raw_average_value_size": 130, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; 
max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765015623, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1ec2a3a6-c7d0-4f1e-9923-5a3b782725ae", "db_session_id": "JMBO5KX1IJCJ8FWC64EX", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}} Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015623138622, "job": 1, "event": "recovery_finished"} Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: [db/version_set.cc:5047] Creating manifest 10 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548789/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55b608f50e00 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: DB pointer 0x55b609046000 Dec 6 05:07:03 localhost ceph-mon[298582]: mon.np0005548789 does not exist in monmap, will attempt to join an existing cluster Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats 
[default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 1/0 1.84 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.7 0.00 0.00 1 0.002 0 0 0.0 0.0#012 Sum 1/0 1.84 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.7 0.00 0.00 1 0.002 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.7 0.00 0.00 1 0.002 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.7 0.00 0.00 1 0.002 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.14 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.14 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 
0 total count#012Block cache BinnedLRUCache@0x55b608f29350#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 2.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] ** Dec 6 05:07:03 localhost ceph-mon[298582]: using public_addr v2:172.18.0.104:0/0 -> [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] Dec 6 05:07:03 localhost ceph-mon[298582]: starting mon.np0005548789 rank -1 at public addrs [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] at bind addrs [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon_data /var/lib/ceph/mon/ceph-np0005548789 fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8 Dec 6 05:07:03 localhost ceph-mon[298582]: mon.np0005548789@-1(???) e0 preinit fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8 Dec 6 05:07:03 localhost podman[298565]: 2025-12-06 10:07:03.155637181 +0000 UTC m=+0.117733890 container exec_died d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', 
'--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Dec 6 05:07:03 localhost ceph-mon[298582]: mon.np0005548789@-1(synchronizing) e11 sync_obtain_latest_monmap Dec 6 05:07:03 localhost ceph-mon[298582]: mon.np0005548789@-1(synchronizing) e11 sync_obtain_latest_monmap obtained monmap e11 Dec 6 05:07:03 localhost systemd[1]: d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538.service: Deactivated successfully. Dec 6 05:07:03 localhost podman[298636]: Dec 6 05:07:03 localhost podman[298636]: 2025-12-06 10:07:03.234683633 +0000 UTC m=+0.064480784 container create 5b2578171d9c75a479c1559e942a76336a44dc660c216cd71d2b9264fd37055d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=great_colden, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, GIT_CLEAN=True, GIT_BRANCH=main, 
io.buildah.version=1.41.4, distribution-scope=public, architecture=x86_64, maintainer=Guillaume Abrioux , name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, vcs-type=git, description=Red Hat Ceph Storage 7) Dec 6 05:07:03 localhost systemd[1]: Started libpod-conmon-5b2578171d9c75a479c1559e942a76336a44dc660c216cd71d2b9264fd37055d.scope. Dec 6 05:07:03 localhost systemd[1]: Started libcrun container. Dec 6 05:07:03 localhost podman[298636]: 2025-12-06 10:07:03.297888389 +0000 UTC m=+0.127685540 container init 5b2578171d9c75a479c1559e942a76336a44dc660c216cd71d2b9264fd37055d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=great_colden, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, architecture=x86_64, RELEASE=main, vcs-type=git, GIT_CLEAN=True, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , name=rhceph, release=1763362218, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z) Dec 6 05:07:03 localhost systemd[1]: tmp-crun.6w0dbN.mount: Deactivated successfully. Dec 6 05:07:03 localhost systemd[1]: libpod-5b2578171d9c75a479c1559e942a76336a44dc660c216cd71d2b9264fd37055d.scope: Deactivated successfully. 
Dec 6 05:07:03 localhost great_colden[298651]: 167 167 Dec 6 05:07:03 localhost podman[298636]: 2025-12-06 10:07:03.217203517 +0000 UTC m=+0.047000688 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 6 05:07:03 localhost podman[298636]: 2025-12-06 10:07:03.318849491 +0000 UTC m=+0.148646652 container start 5b2578171d9c75a479c1559e942a76336a44dc660c216cd71d2b9264fd37055d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=great_colden, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, io.openshift.expose-services=, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., version=7, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , ceph=True, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, GIT_CLEAN=True, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Dec 6 05:07:03 localhost podman[298636]: 2025-12-06 10:07:03.319023927 +0000 UTC m=+0.148821108 container attach 5b2578171d9c75a479c1559e942a76336a44dc660c216cd71d2b9264fd37055d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=great_colden, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, 
org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, release=1763362218, name=rhceph, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, RELEASE=main, com.redhat.component=rhceph-container, ceph=True, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 6 05:07:03 localhost podman[298636]: 2025-12-06 10:07:03.32013119 +0000 UTC m=+0.149928341 container died 5b2578171d9c75a479c1559e942a76336a44dc660c216cd71d2b9264fd37055d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=great_colden, GIT_BRANCH=main, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., 
url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, RELEASE=main, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 6 05:07:03 localhost podman[298656]: 2025-12-06 10:07:03.406927966 +0000 UTC m=+0.080493308 container remove 5b2578171d9c75a479c1559e942a76336a44dc660c216cd71d2b9264fd37055d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=great_colden, io.buildah.version=1.41.4, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, vcs-type=git, GIT_BRANCH=main, architecture=x86_64, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, maintainer=Guillaume Abrioux , GIT_CLEAN=True, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Dec 6 05:07:03 localhost systemd[1]: libpod-conmon-5b2578171d9c75a479c1559e942a76336a44dc660c216cd71d2b9264fd37055d.scope: Deactivated successfully. 
Dec 6 05:07:03 localhost ceph-mon[298582]: mon.np0005548789@-1(synchronizing).mds e16 new map Dec 6 05:07:03 localhost ceph-mon[298582]: mon.np0005548789@-1(synchronizing).mds e16 print_map#012e16#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#01116#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-12-06T08:18:49.925523+0000#012modified#0112025-12-06T10:03:02.051468+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#01187#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=26356}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[6]#012metadata_pool#0117#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 26356 members: 26356#012[mds.mds.np0005548790.vhcezv{0:26356} state up:active seq 16 addr [v2:172.18.0.108:6808/1621657194,v1:172.18.0.108:6809/1621657194] compat {c=[1],r=[1],i=[17ff]}]#012 #012 #012Standby daemons:#012 #012[mds.mds.np0005548789.vxwwsq{-1:16884} state up:standby seq 1 addr [v2:172.18.0.107:6808/3033303281,v1:172.18.0.107:6809/3033303281] compat {c=[1],r=[1],i=[17ff]}]#012[mds.mds.np0005548788.erzujf{-1:16890} state up:standby seq 1 addr 
[v2:172.18.0.106:6808/309324236,v1:172.18.0.106:6809/309324236] compat {c=[1],r=[1],i=[17ff]}] Dec 6 05:07:03 localhost ceph-mon[298582]: mon.np0005548789@-1(synchronizing).osd e89 crush map has features 3314933000854323200, adjusting msgr requires Dec 6 05:07:03 localhost ceph-mon[298582]: mon.np0005548789@-1(synchronizing).osd e89 crush map has features 432629239337189376, adjusting msgr requires Dec 6 05:07:03 localhost ceph-mon[298582]: mon.np0005548789@-1(synchronizing).osd e89 crush map has features 432629239337189376, adjusting msgr requires Dec 6 05:07:03 localhost ceph-mon[298582]: mon.np0005548789@-1(synchronizing).osd e89 crush map has features 432629239337189376, adjusting msgr requires Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #13. Immutable memtables: 0. Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:07:03.570689) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 13 Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015623570797, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 10565, "num_deletes": 254, "total_data_size": 13575200, "memory_usage": 14243736, "flush_reason": "Manual Compaction"} Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #14: started Dec 6 05:07:03 localhost ceph-mon[298582]: Reconfiguring crash.np0005548789 (monmap changed)... 
Dec 6 05:07:03 localhost ceph-mon[298582]: Reconfiguring daemon crash.np0005548789 on np0005548789.localdomain Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Dec 6 05:07:03 localhost ceph-mon[298582]: Reconfiguring osd.1 (monmap changed)... Dec 6 05:07:03 localhost ceph-mon[298582]: Reconfiguring daemon osd.1 on np0005548789.localdomain Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Dec 6 05:07:03 localhost ceph-mon[298582]: Reconfiguring osd.4 (monmap changed)... 
Dec 6 05:07:03 localhost ceph-mon[298582]: Reconfiguring daemon osd.4 on np0005548789.localdomain Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:03 localhost ceph-mon[298582]: Reconfiguring mds.mds.np0005548789.vxwwsq (monmap changed)... Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005548789.vxwwsq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Dec 6 05:07:03 localhost ceph-mon[298582]: Reconfiguring daemon mds.mds.np0005548789.vxwwsq on np0005548789.localdomain Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:03 localhost ceph-mon[298582]: Reconfiguring mgr.np0005548789.mzhmje (monmap changed)... 
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548789.mzhmje", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 6 05:07:03 localhost ceph-mon[298582]: Reconfiguring daemon mgr.np0005548789.mzhmje on np0005548789.localdomain Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:03 localhost ceph-mon[298582]: Reconfiguring crash.np0005548790 (monmap changed)... Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548790.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 6 05:07:03 localhost ceph-mon[298582]: Reconfiguring daemon crash.np0005548790 on np0005548790.localdomain Dec 6 05:07:03 localhost ceph-mon[298582]: Saving service mon spec with placement label:mon Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch Dec 6 05:07:03 localhost ceph-mon[298582]: Reconfiguring osd.0 (monmap changed)... 
Dec 6 05:07:03 localhost ceph-mon[298582]: Reconfiguring daemon osd.0 on np0005548790.localdomain Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch Dec 6 05:07:03 localhost ceph-mon[298582]: Reconfiguring osd.3 (monmap changed)... Dec 6 05:07:03 localhost ceph-mon[298582]: Reconfiguring daemon osd.3 on np0005548790.localdomain Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005548790.vhcezv", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Dec 6 05:07:03 localhost ceph-mon[298582]: Reconfiguring mds.mds.np0005548790.vhcezv (monmap changed)... 
Dec 6 05:07:03 localhost ceph-mon[298582]: Reconfiguring daemon mds.mds.np0005548790.vhcezv on np0005548790.localdomain Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548790.kvkfyr", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 6 05:07:03 localhost ceph-mon[298582]: Reconfiguring mgr.np0005548790.kvkfyr (monmap changed)... Dec 6 05:07:03 localhost ceph-mon[298582]: Reconfiguring daemon mgr.np0005548790.kvkfyr on np0005548790.localdomain Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Dec 6 05:07:03 localhost ceph-mon[298582]: Reconfiguring mon.np0005548790 (monmap changed)... 
Dec 6 05:07:03 localhost ceph-mon[298582]: Reconfiguring daemon mon.np0005548790 on np0005548790.localdomain Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:03 localhost ceph-mon[298582]: Reconfiguring mon.np0005548786 (monmap changed)... Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Dec 6 05:07:03 localhost ceph-mon[298582]: Reconfiguring daemon mon.np0005548786 on np0005548786.localdomain Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Dec 6 05:07:03 localhost ceph-mon[298582]: Reconfiguring mon.np0005548787 (monmap changed)... 
Dec 6 05:07:03 localhost ceph-mon[298582]: Reconfiguring daemon mon.np0005548787 on np0005548787.localdomain Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Dec 6 05:07:03 localhost ceph-mon[298582]: Reconfiguring mon.np0005548788 (monmap changed)... Dec 6 05:07:03 localhost ceph-mon[298582]: Reconfiguring daemon mon.np0005548788 on np0005548788.localdomain Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Dec 6 05:07:03 localhost ceph-mon[298582]: Reconfiguring mon.np0005548789 (monmap changed)... Dec 6 05:07:03 localhost ceph-mon[298582]: Reconfiguring daemon mon.np0005548789 on np0005548789.localdomain Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:03 localhost ceph-mon[298582]: Remove daemons mon.np0005548786 Dec 6 05:07:03 localhost ceph-mon[298582]: Safe to remove mon.np0005548786: new quorum should be ['np0005548787', 'np0005548790', 'np0005548789', 'np0005548788'] (from ['np0005548787', 'np0005548790', 'np0005548789', 'np0005548788']) Dec 6 05:07:03 localhost ceph-mon[298582]: Removing monitor np0005548786 from monmap... 
Dec 6 05:07:03 localhost ceph-mon[298582]: Removing daemon mon.np0005548786 from np0005548786.localdomain -- ports [] Dec 6 05:07:03 localhost ceph-mon[298582]: mon.np0005548789 calling monitor election Dec 6 05:07:03 localhost ceph-mon[298582]: mon.np0005548790 calling monitor election Dec 6 05:07:03 localhost ceph-mon[298582]: mon.np0005548788 calling monitor election Dec 6 05:07:03 localhost ceph-mon[298582]: mon.np0005548787 calling monitor election Dec 6 05:07:03 localhost ceph-mon[298582]: mon.np0005548787 is new leader, mons np0005548787,np0005548790,np0005548789,np0005548788 in quorum (ranks 0,1,2,3) Dec 6 05:07:03 localhost ceph-mon[298582]: Health detail: HEALTH_WARN 1 stray daemon(s) not managed by cephadm; 1 stray host(s) with 1 daemon(s) not managed by cephadm Dec 6 05:07:03 localhost ceph-mon[298582]: [WRN] CEPHADM_STRAY_DAEMON: 1 stray daemon(s) not managed by cephadm Dec 6 05:07:03 localhost ceph-mon[298582]: stray daemon mgr.np0005548785.vhqlsq on host np0005548785.localdomain not managed by cephadm Dec 6 05:07:03 localhost ceph-mon[298582]: [WRN] CEPHADM_STRAY_HOST: 1 stray host(s) with 1 daemon(s) not managed by cephadm Dec 6 05:07:03 localhost ceph-mon[298582]: stray host np0005548785.localdomain has 1 stray daemons: ['mgr.np0005548785.vhqlsq'] Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:03 localhost ceph-mon[298582]: Updating np0005548786.localdomain:/etc/ceph/ceph.conf Dec 6 05:07:03 localhost ceph-mon[298582]: Updating 
np0005548787.localdomain:/etc/ceph/ceph.conf Dec 6 05:07:03 localhost ceph-mon[298582]: Updating np0005548788.localdomain:/etc/ceph/ceph.conf Dec 6 05:07:03 localhost ceph-mon[298582]: Updating np0005548789.localdomain:/etc/ceph/ceph.conf Dec 6 05:07:03 localhost ceph-mon[298582]: Updating np0005548790.localdomain:/etc/ceph/ceph.conf Dec 6 05:07:03 localhost ceph-mon[298582]: Removed label mon from host np0005548786.localdomain Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:03 localhost ceph-mon[298582]: Updating np0005548786.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf Dec 6 05:07:03 localhost ceph-mon[298582]: Updating np0005548787.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf Dec 6 05:07:03 localhost ceph-mon[298582]: Updating np0005548790.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf Dec 6 05:07:03 localhost ceph-mon[298582]: Updating np0005548789.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf Dec 6 05:07:03 localhost ceph-mon[298582]: Updating np0005548788.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:03 localhost ceph-mon[298582]: Removed label mgr from host np0005548786.localdomain Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:03 localhost 
ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:03 localhost ceph-mon[298582]: Removing daemon mgr.np0005548786.mczynb from np0005548786.localdomain -- ports [8765] Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:03 localhost ceph-mon[298582]: Removed label _admin from host np0005548786.localdomain Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth rm", "entity": "mgr.np0005548786.mczynb"} : dispatch Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd='[{"prefix": "auth rm", "entity": "mgr.np0005548786.mczynb"}]': finished Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:03 localhost ceph-mon[298582]: Removing key for mgr.np0005548786.mczynb Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:03 
localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548786.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 6 05:07:03 localhost ceph-mon[298582]: Removing np0005548786.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf Dec 6 05:07:03 localhost ceph-mon[298582]: Removing np0005548786.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 6 05:07:03 localhost ceph-mon[298582]: Removing np0005548786.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring Dec 6 05:07:03 localhost ceph-mon[298582]: Reconfiguring crash.np0005548786 (monmap changed)... Dec 6 05:07:03 localhost ceph-mon[298582]: Reconfiguring daemon crash.np0005548786 on np0005548786.localdomain Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Dec 6 05:07:03 localhost ceph-mon[298582]: Reconfiguring mon.np0005548787 (monmap changed)... 
Dec 6 05:07:03 localhost ceph-mon[298582]: Reconfiguring daemon mon.np0005548787 on np0005548787.localdomain
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra'
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra'
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra'
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548787.umwsra", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 6 05:07:03 localhost ceph-mon[298582]: Reconfiguring mgr.np0005548787.umwsra (monmap changed)...
Dec 6 05:07:03 localhost ceph-mon[298582]: Reconfiguring daemon mgr.np0005548787.umwsra on np0005548787.localdomain
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra'
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra'
Dec 6 05:07:03 localhost ceph-mon[298582]: Reconfiguring crash.np0005548787 (monmap changed)...
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548787.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 6 05:07:03 localhost ceph-mon[298582]: Reconfiguring daemon crash.np0005548787 on np0005548787.localdomain
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra'
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra'
Dec 6 05:07:03 localhost ceph-mon[298582]: Reconfiguring crash.np0005548788 (monmap changed)...
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548788.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 6 05:07:03 localhost ceph-mon[298582]: Reconfiguring daemon crash.np0005548788 on np0005548788.localdomain
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra'
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra'
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Dec 6 05:07:03 localhost ceph-mon[298582]: Reconfiguring osd.2 (monmap changed)...
Dec 6 05:07:03 localhost ceph-mon[298582]: Reconfiguring daemon osd.2 on np0005548788.localdomain
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra'
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra'
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Dec 6 05:07:03 localhost ceph-mon[298582]: Reconfiguring osd.5 (monmap changed)...
Dec 6 05:07:03 localhost ceph-mon[298582]: Reconfiguring daemon osd.5 on np0005548788.localdomain
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra'
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra'
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005548788.erzujf", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 6 05:07:03 localhost ceph-mon[298582]: Reconfiguring mds.mds.np0005548788.erzujf (monmap changed)...
Dec 6 05:07:03 localhost ceph-mon[298582]: Reconfiguring daemon mds.mds.np0005548788.erzujf on np0005548788.localdomain
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra'
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra'
Dec 6 05:07:03 localhost ceph-mon[298582]: Reconfiguring mgr.np0005548788.yvwbqq (monmap changed)...
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548788.yvwbqq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 6 05:07:03 localhost ceph-mon[298582]: Reconfiguring daemon mgr.np0005548788.yvwbqq on np0005548788.localdomain
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra'
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra'
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra'
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra'
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 6 05:07:03 localhost ceph-mon[298582]: Added label _no_schedule to host np0005548786.localdomain
Dec 6 05:07:03 localhost ceph-mon[298582]: Added label SpecialHostLabels.DRAIN_CONF_KEYRING to host np0005548786.localdomain
Dec 6 05:07:03 localhost ceph-mon[298582]: Reconfiguring mon.np0005548788 (monmap changed)...
Dec 6 05:07:03 localhost ceph-mon[298582]: Reconfiguring daemon mon.np0005548788 on np0005548788.localdomain
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra'
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra'
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548789.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 6 05:07:03 localhost ceph-mon[298582]: Reconfiguring crash.np0005548789 (monmap changed)...
Dec 6 05:07:03 localhost ceph-mon[298582]: Reconfiguring daemon crash.np0005548789 on np0005548789.localdomain
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra'
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra'
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Dec 6 05:07:03 localhost ceph-mon[298582]: Reconfiguring osd.1 (monmap changed)...
Dec 6 05:07:03 localhost ceph-mon[298582]: Reconfiguring daemon osd.1 on np0005548789.localdomain
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra'
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005548786.localdomain"} : dispatch
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005548786.localdomain"}]': finished
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra'
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra'
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch
Dec 6 05:07:03 localhost ceph-mon[298582]: Removed host np0005548786.localdomain
Dec 6 05:07:03 localhost ceph-mon[298582]: Reconfiguring osd.4 (monmap changed)...
Dec 6 05:07:03 localhost ceph-mon[298582]: Reconfiguring daemon osd.4 on np0005548789.localdomain
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra'
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra'
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005548789.vxwwsq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 6 05:07:03 localhost ceph-mon[298582]: Reconfiguring mds.mds.np0005548789.vxwwsq (monmap changed)...
Dec 6 05:07:03 localhost ceph-mon[298582]: Reconfiguring daemon mds.mds.np0005548789.vxwwsq on np0005548789.localdomain
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra'
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra'
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548789.mzhmje", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 6 05:07:03 localhost ceph-mon[298582]: Reconfiguring mgr.np0005548789.mzhmje (monmap changed)...
Dec 6 05:07:03 localhost ceph-mon[298582]: Reconfiguring daemon mgr.np0005548789.mzhmje on np0005548789.localdomain
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra'
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra'
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 6 05:07:03 localhost ceph-mon[298582]: Reconfiguring mon.np0005548789 (monmap changed)...
Dec 6 05:07:03 localhost ceph-mon[298582]: Reconfiguring daemon mon.np0005548789 on np0005548789.localdomain
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra'
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra'
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548790.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 6 05:07:03 localhost ceph-mon[298582]: Reconfiguring crash.np0005548790 (monmap changed)...
Dec 6 05:07:03 localhost ceph-mon[298582]: Reconfiguring daemon crash.np0005548790 on np0005548790.localdomain
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra'
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra'
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Dec 6 05:07:03 localhost ceph-mon[298582]: Reconfiguring osd.0 (monmap changed)...
Dec 6 05:07:03 localhost ceph-mon[298582]: Reconfiguring daemon osd.0 on np0005548790.localdomain
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra'
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra'
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch
Dec 6 05:07:03 localhost ceph-mon[298582]: Reconfiguring osd.3 (monmap changed)...
Dec 6 05:07:03 localhost ceph-mon[298582]: Reconfiguring daemon osd.3 on np0005548790.localdomain
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra'
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra'
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005548790.vhcezv", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 6 05:07:03 localhost ceph-mon[298582]: Reconfiguring mds.mds.np0005548790.vhcezv (monmap changed)...
Dec 6 05:07:03 localhost ceph-mon[298582]: Reconfiguring daemon mds.mds.np0005548790.vhcezv on np0005548790.localdomain
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra'
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra'
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra'
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548790.kvkfyr", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 6 05:07:03 localhost ceph-mon[298582]: Saving service mon spec with placement label:mon
Dec 6 05:07:03 localhost ceph-mon[298582]: Reconfiguring mgr.np0005548790.kvkfyr (monmap changed)...
Dec 6 05:07:03 localhost ceph-mon[298582]: Reconfiguring daemon mgr.np0005548790.kvkfyr on np0005548790.localdomain
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra'
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra'
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra'
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra'
Dec 6 05:07:03 localhost ceph-mon[298582]: mon.np0005548790 calling monitor election
Dec 6 05:07:03 localhost ceph-mon[298582]: mon.np0005548788 calling monitor election
Dec 6 05:07:03 localhost ceph-mon[298582]: mon.np0005548790 calling monitor election
Dec 6 05:07:03 localhost ceph-mon[298582]: Health check failed: 1/3 mons down, quorum np0005548787,np0005548790 (MON_DOWN)
Dec 6 05:07:03 localhost ceph-mon[298582]: Health detail: HEALTH_WARN 1 stray daemon(s) not managed by cephadm; 1 stray host(s) with 1 daemon(s) not managed by cephadm
Dec 6 05:07:03 localhost ceph-mon[298582]: [WRN] CEPHADM_STRAY_DAEMON: 1 stray daemon(s) not managed by cephadm
Dec 6 05:07:03 localhost ceph-mon[298582]: stray daemon mgr.np0005548785.vhqlsq on host np0005548785.localdomain not managed by cephadm
Dec 6 05:07:03 localhost ceph-mon[298582]: [WRN] CEPHADM_STRAY_HOST: 1 stray host(s) with 1 daemon(s) not managed by cephadm
Dec 6 05:07:03 localhost ceph-mon[298582]: stray host np0005548785.localdomain has 1 stray daemons: ['mgr.np0005548785.vhqlsq']
Dec 6 05:07:03 localhost ceph-mon[298582]: mon.np0005548787 calling monitor election
Dec 6 05:07:03 localhost ceph-mon[298582]: mon.np0005548787 is new leader, mons np0005548787,np0005548790,np0005548788 in quorum (ranks 0,1,2)
Dec 6 05:07:03 localhost ceph-mon[298582]: Health check cleared: MON_DOWN (was: 1/3 mons down, quorum np0005548787,np0005548790)
Dec 6 05:07:03 localhost ceph-mon[298582]: Health detail: HEALTH_WARN 1 stray daemon(s) not managed by cephadm; 1 stray host(s) with 1 daemon(s) not managed by cephadm
Dec 6 05:07:03 localhost ceph-mon[298582]: [WRN] CEPHADM_STRAY_DAEMON: 1 stray daemon(s) not managed by cephadm
Dec 6 05:07:03 localhost ceph-mon[298582]: stray daemon mgr.np0005548785.vhqlsq on host np0005548785.localdomain not managed by cephadm
Dec 6 05:07:03 localhost ceph-mon[298582]: [WRN] CEPHADM_STRAY_HOST: 1 stray host(s) with 1 daemon(s) not managed by cephadm
Dec 6 05:07:03 localhost ceph-mon[298582]: stray host np0005548785.localdomain has 1 stray daemons: ['mgr.np0005548785.vhqlsq']
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra'
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra'
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 6 05:07:03 localhost ceph-mon[298582]: Updating np0005548787.localdomain:/etc/ceph/ceph.conf
Dec 6 05:07:03 localhost ceph-mon[298582]: Updating np0005548788.localdomain:/etc/ceph/ceph.conf
Dec 6 05:07:03 localhost ceph-mon[298582]: Updating np0005548789.localdomain:/etc/ceph/ceph.conf
Dec 6 05:07:03 localhost ceph-mon[298582]: Updating np0005548790.localdomain:/etc/ceph/ceph.conf
Dec 6 05:07:03 localhost ceph-mon[298582]: Updating np0005548788.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 6 05:07:03 localhost ceph-mon[298582]: Updating np0005548787.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 6 05:07:03 localhost ceph-mon[298582]: Updating np0005548790.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 6 05:07:03 localhost ceph-mon[298582]: Updating np0005548789.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra'
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra'
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra'
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra'
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra'
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra'
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra'
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra'
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra'
Dec 6 05:07:03 localhost ceph-mon[298582]: Reconfiguring mgr.np0005548787.umwsra (monmap changed)...
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548787.umwsra", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 6 05:07:03 localhost ceph-mon[298582]: Reconfiguring daemon mgr.np0005548787.umwsra on np0005548787.localdomain
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra'
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra'
Dec 6 05:07:03 localhost ceph-mon[298582]: Reconfiguring crash.np0005548787 (monmap changed)...
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548787.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 6 05:07:03 localhost ceph-mon[298582]: Reconfiguring daemon crash.np0005548787 on np0005548787.localdomain
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra'
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra'
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548788.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 6 05:07:03 localhost ceph-mon[298582]: Reconfiguring crash.np0005548788 (monmap changed)...
Dec 6 05:07:03 localhost ceph-mon[298582]: Reconfiguring daemon crash.np0005548788 on np0005548788.localdomain
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra'
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra'
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra'
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Dec 6 05:07:03 localhost ceph-mon[298582]: Reconfiguring osd.2 (monmap changed)...
Dec 6 05:07:03 localhost ceph-mon[298582]: Reconfiguring daemon osd.2 on np0005548788.localdomain
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra'
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra'
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Dec 6 05:07:03 localhost ceph-mon[298582]: Reconfiguring osd.5 (monmap changed)...
Dec 6 05:07:03 localhost ceph-mon[298582]: Reconfiguring daemon osd.5 on np0005548788.localdomain
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra'
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra'
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005548788.erzujf", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 6 05:07:03 localhost ceph-mon[298582]: Reconfiguring mds.mds.np0005548788.erzujf (monmap changed)...
Dec 6 05:07:03 localhost ceph-mon[298582]: Reconfiguring daemon mds.mds.np0005548788.erzujf on np0005548788.localdomain
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra'
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra'
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548788.yvwbqq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 6 05:07:03 localhost ceph-mon[298582]: Reconfiguring mgr.np0005548788.yvwbqq (monmap changed)...
Dec 6 05:07:03 localhost ceph-mon[298582]: Reconfiguring daemon mgr.np0005548788.yvwbqq on np0005548788.localdomain
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra'
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra'
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra'
Dec 6 05:07:03 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548789.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 6 05:07:03 localhost ceph-mon[298582]: Deploying daemon mon.np0005548789 on np0005548789.localdomain
Dec 6 05:07:03 localhost ceph-mon[298582]: Reconfiguring crash.np0005548789 (monmap changed)...
Dec 6 05:07:03 localhost ceph-mon[298582]: Reconfiguring daemon crash.np0005548789 on np0005548789.localdomain
Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015623639395, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 14, "file_size": 13470744, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 6, "largest_seqno": 10570, "table_properties": {"data_size": 13410806, "index_size": 32440, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 26437, "raw_key_size": 282537, "raw_average_key_size": 26, "raw_value_size": 13230942, "raw_average_value_size": 1253, "num_data_blocks": 1228, "num_entries": 10554, "num_filter_entries": 10554, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765015623, "oldest_key_time": 1765015623, "file_creation_time": 1765015623, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1ec2a3a6-c7d0-4f1e-9923-5a3b782725ae", "db_session_id": "JMBO5KX1IJCJ8FWC64EX", "orig_file_number": 14, "seqno_to_time_mapping": "N/A"}}
Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 68757 microseconds, and 18247 cpu microseconds.
Dec 6 05:07:03 localhost ceph-mon[298582]: mon.np0005548789@-1(synchronizing).paxosservice(auth 1..38) refresh upgraded, format 0 -> 3
Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:07:03.639451) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #14: 13470744 bytes OK
Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:07:03.639474) [db/memtable_list.cc:519] [default] Level-0 commit table #14 started
Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:07:03.641197) [db/memtable_list.cc:722] [default] Level-0 commit table #14: memtable #1 done
Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:07:03.641214) EVENT_LOG_v1 {"time_micros": 1765015623641210, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [2, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:07:03.641232) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[2 0 0 0 0 0 0] max score 0.50
Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 13501231, prev total WAL file size 13501231, number of live WAL files 2.
Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548789/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:07:03.642973) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003130353432' seq:72057594037927935, type:22 .. '7061786F73003130373934' seq:0, type:0; will stop at (end)
Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 2@0 files to L6, score -1.00
Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [14(12MB) 8(1887B)]
Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015623643079, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [14, 8], "score": -1, "input_data_size": 13472631, "oldest_snapshot_seqno": -1}
Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #15: 10305 keys, 13467468 bytes, temperature: kUnknown
Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015623735710, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 15, "file_size": 13467468, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13408207, "index_size": 32408, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25797, "raw_key_size": 277795, "raw_average_key_size": 26, "raw_value_size": 13231593, "raw_average_value_size": 1283, "num_data_blocks": 1227, "num_entries": 10305, "num_filter_entries": 10305, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765015623, "oldest_key_time": 0, "file_creation_time": 1765015623, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1ec2a3a6-c7d0-4f1e-9923-5a3b782725ae", "db_session_id": "JMBO5KX1IJCJ8FWC64EX", "orig_file_number": 15, "seqno_to_time_mapping": "N/A"}}
Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:07:03.736138) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 2@0 files to L6 => 13467468 bytes
Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:07:03.737994) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 145.2 rd, 145.1 wr, level 6, files in(2, 0) out(1 +0 blob) MB in(12.8, 0.0 +0.0 blob) out(12.8 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 10559, records dropped: 254 output_compression: NoCompression
Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:07:03.738023) EVENT_LOG_v1 {"time_micros": 1765015623738011, "job": 4, "event": "compaction_finished", "compaction_time_micros": 92803, "compaction_time_cpu_micros": 37440, "output_level": 6, "num_output_files": 1, "total_output_size": 13467468, "num_input_records": 10559, "num_output_records": 10305, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548789/store.db/000014.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015623740084, "job": 4, "event": "table_file_deletion", "file_number": 14}
Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548789/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015623740155, "job": 4, "event": "table_file_deletion", "file_number": 8}
Dec 6 05:07:03 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:07:03.642884) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 6 05:07:04 localhost podman[298725]:
Dec 6 05:07:04 localhost podman[298725]: 2025-12-06 10:07:04.043788763 +0000 UTC m=+0.090157019 container create 1c47cdca4720e7c1521a521e400d75678e3cc3373bfa8298b0d876aaf9b1dcfe (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=awesome_bell, com.redhat.component=rhceph-container, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, GIT_BRANCH=main, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, vcs-type=git, description=Red Hat Ceph Storage 7, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218)
Dec 6 05:07:04 localhost systemd[1]: Started libpod-conmon-1c47cdca4720e7c1521a521e400d75678e3cc3373bfa8298b0d876aaf9b1dcfe.scope.
Dec 6 05:07:04 localhost systemd[1]: Started libcrun container.
Dec 6 05:07:04 localhost podman[298725]: 2025-12-06 10:07:04.004262412 +0000 UTC m=+0.050630708 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 6 05:07:04 localhost podman[298725]: 2025-12-06 10:07:04.119234887 +0000 UTC m=+0.165603133 container init 1c47cdca4720e7c1521a521e400d75678e3cc3373bfa8298b0d876aaf9b1dcfe (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=awesome_bell, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, ceph=True, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, CEPH_POINT_RELEASE=, GIT_BRANCH=main, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, distribution-scope=public, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 6 05:07:04 localhost podman[298725]: 2025-12-06 10:07:04.127719943 +0000 UTC
m=+0.174088159 container start 1c47cdca4720e7c1521a521e400d75678e3cc3373bfa8298b0d876aaf9b1dcfe (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=awesome_bell, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, GIT_BRANCH=main, ceph=True, version=7, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , RELEASE=main, GIT_CLEAN=True, release=1763362218, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4) Dec 6 05:07:04 localhost podman[298725]: 2025-12-06 10:07:04.127991841 +0000 UTC m=+0.174360097 container attach 1c47cdca4720e7c1521a521e400d75678e3cc3373bfa8298b0d876aaf9b1dcfe (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=awesome_bell, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, maintainer=Guillaume Abrioux , build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, release=1763362218, version=7, architecture=x86_64, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, com.redhat.component=rhceph-container, RELEASE=main, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, ceph=True, io.openshift.expose-services=, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph) Dec 6 05:07:04 localhost awesome_bell[298741]: 167 167 Dec 6 05:07:04 localhost systemd[1]: libpod-1c47cdca4720e7c1521a521e400d75678e3cc3373bfa8298b0d876aaf9b1dcfe.scope: Deactivated successfully. Dec 6 05:07:04 localhost podman[298725]: 2025-12-06 10:07:04.132520418 +0000 UTC m=+0.178888644 container died 1c47cdca4720e7c1521a521e400d75678e3cc3373bfa8298b0d876aaf9b1dcfe (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=awesome_bell, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, distribution-scope=public, vendor=Red Hat, Inc., ceph=True, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, architecture=x86_64, GIT_CLEAN=True, 
io.openshift.expose-services=, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main) Dec 6 05:07:04 localhost systemd[1]: var-lib-containers-storage-overlay-1e459a916f5587573c12b07987f77c2114d8738bd5da4dde2c80fb1f51c79aef-merged.mount: Deactivated successfully. Dec 6 05:07:04 localhost systemd[1]: var-lib-containers-storage-overlay-709a1348378cc1ce32c956b4ca6e0317ba8edd21e8a24e892fd2a6acf0761bf1-merged.mount: Deactivated successfully. Dec 6 05:07:04 localhost podman[298746]: 2025-12-06 10:07:04.243804203 +0000 UTC m=+0.097723747 container remove 1c47cdca4720e7c1521a521e400d75678e3cc3373bfa8298b0d876aaf9b1dcfe (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=awesome_bell, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container) Dec 6 05:07:04 localhost systemd[1]: libpod-conmon-1c47cdca4720e7c1521a521e400d75678e3cc3373bfa8298b0d876aaf9b1dcfe.scope: Deactivated 
successfully. Dec 6 05:07:04 localhost nova_compute[282193]: 2025-12-06 10:07:04.919 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:07:05 localhost podman[298822]: Dec 6 05:07:05 localhost podman[298822]: 2025-12-06 10:07:05.036747795 +0000 UTC m=+0.064088853 container create 1068ab8665323a1dd198fb433dc2a5f9d3b9f76d18377601e5540405d926a74f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=reverent_gould, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., io.buildah.version=1.41.4, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, ceph=True, vcs-type=git, architecture=x86_64) Dec 6 05:07:05 localhost systemd[1]: Started libpod-conmon-1068ab8665323a1dd198fb433dc2a5f9d3b9f76d18377601e5540405d926a74f.scope. Dec 6 05:07:05 localhost systemd[1]: Started libcrun container. 
Dec 6 05:07:05 localhost podman[298822]: 2025-12-06 10:07:05.101409184 +0000 UTC m=+0.128750222 container init 1068ab8665323a1dd198fb433dc2a5f9d3b9f76d18377601e5540405d926a74f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=reverent_gould, ceph=True, RELEASE=main, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, version=7, io.openshift.expose-services=, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z) Dec 6 05:07:05 localhost podman[298822]: 2025-12-06 10:07:05.003699668 +0000 UTC m=+0.031040766 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 6 05:07:05 localhost podman[298822]: 2025-12-06 10:07:05.108163087 +0000 UTC m=+0.135504145 container start 1068ab8665323a1dd198fb433dc2a5f9d3b9f76d18377601e5540405d926a74f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=reverent_gould, distribution-scope=public, vendor=Red Hat, Inc., GIT_CLEAN=True, com.redhat.component=rhceph-container, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux , 
io.openshift.expose-services=, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, RELEASE=main) Dec 6 05:07:05 localhost podman[298822]: 2025-12-06 10:07:05.108400124 +0000 UTC m=+0.135741182 container attach 1068ab8665323a1dd198fb433dc2a5f9d3b9f76d18377601e5540405d926a74f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=reverent_gould, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, RELEASE=main, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, ceph=True, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , GIT_BRANCH=main, 
architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, vendor=Red Hat, Inc., GIT_CLEAN=True) Dec 6 05:07:05 localhost reverent_gould[298837]: 167 167 Dec 6 05:07:05 localhost systemd[1]: libpod-1068ab8665323a1dd198fb433dc2a5f9d3b9f76d18377601e5540405d926a74f.scope: Deactivated successfully. Dec 6 05:07:05 localhost podman[298822]: 2025-12-06 10:07:05.112296822 +0000 UTC m=+0.139637910 container died 1068ab8665323a1dd198fb433dc2a5f9d3b9f76d18377601e5540405d926a74f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=reverent_gould, ceph=True, CEPH_POINT_RELEASE=, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, vcs-type=git, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 6 05:07:05 localhost nova_compute[282193]: 2025-12-06 10:07:05.123 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:07:05 localhost podman[298842]: 2025-12-06 
10:07:05.207147811 +0000 UTC m=+0.089729775 container remove 1068ab8665323a1dd198fb433dc2a5f9d3b9f76d18377601e5540405d926a74f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=reverent_gould, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, version=7, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, ceph=True, architecture=x86_64, vcs-type=git, io.buildah.version=1.41.4, GIT_BRANCH=main) Dec 6 05:07:05 localhost systemd[1]: libpod-conmon-1068ab8665323a1dd198fb433dc2a5f9d3b9f76d18377601e5540405d926a74f.scope: Deactivated successfully. Dec 6 05:07:05 localhost systemd[1]: var-lib-containers-storage-overlay-925a159f2cffa5eaea442bbe0bb15122915e04eeb6941a7d1c3e124eab1b8933-merged.mount: Deactivated successfully. 
Dec 6 05:07:05 localhost podman[298917]: Dec 6 05:07:05 localhost podman[298917]: 2025-12-06 10:07:05.972265915 +0000 UTC m=+0.082544300 container create e06626dc7aa398a0e3bc4102d673ae36d62c5d9e4117ad83b9e912fc8a7084f1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=epic_thompson, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, version=7, com.redhat.component=rhceph-container, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, GIT_CLEAN=True, maintainer=Guillaume Abrioux , RELEASE=main, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64) Dec 6 05:07:06 localhost systemd[1]: Started libpod-conmon-e06626dc7aa398a0e3bc4102d673ae36d62c5d9e4117ad83b9e912fc8a7084f1.scope. Dec 6 05:07:06 localhost systemd[1]: Started libcrun container. 
Dec 6 05:07:06 localhost podman[298917]: 2025-12-06 10:07:05.937781385 +0000 UTC m=+0.048059820 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 6 05:07:06 localhost podman[298917]: 2025-12-06 10:07:06.040525382 +0000 UTC m=+0.150803767 container init e06626dc7aa398a0e3bc4102d673ae36d62c5d9e4117ad83b9e912fc8a7084f1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=epic_thompson, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, io.openshift.tags=rhceph ceph, distribution-scope=public, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, vcs-type=git, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, RELEASE=main, io.openshift.expose-services=, GIT_BRANCH=main, maintainer=Guillaume Abrioux , version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Dec 6 05:07:06 localhost podman[298917]: 2025-12-06 10:07:06.049157792 +0000 UTC m=+0.159436187 container start e06626dc7aa398a0e3bc4102d673ae36d62c5d9e4117ad83b9e912fc8a7084f1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=epic_thompson, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, description=Red Hat Ceph Storage 7, vcs-type=git, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., 
build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, io.openshift.expose-services=, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., io.buildah.version=1.41.4, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, GIT_CLEAN=True, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=) Dec 6 05:07:06 localhost podman[298917]: 2025-12-06 10:07:06.050146311 +0000 UTC m=+0.160424746 container attach e06626dc7aa398a0e3bc4102d673ae36d62c5d9e4117ad83b9e912fc8a7084f1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=epic_thompson, release=1763362218, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, name=rhceph, io.buildah.version=1.41.4, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7) Dec 6 05:07:06 localhost epic_thompson[298932]: 167 167 Dec 6 05:07:06 localhost systemd[1]: libpod-e06626dc7aa398a0e3bc4102d673ae36d62c5d9e4117ad83b9e912fc8a7084f1.scope: Deactivated successfully. Dec 6 05:07:06 localhost podman[298917]: 2025-12-06 10:07:06.055887405 +0000 UTC m=+0.166165820 container died e06626dc7aa398a0e3bc4102d673ae36d62c5d9e4117ad83b9e912fc8a7084f1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=epic_thompson, version=7, RELEASE=main, architecture=x86_64, description=Red Hat Ceph Storage 7, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, maintainer=Guillaume Abrioux , ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218) Dec 6 05:07:06 localhost podman[298937]: 2025-12-06 10:07:06.15557043 +0000 UTC m=+0.087259832 container remove e06626dc7aa398a0e3bc4102d673ae36d62c5d9e4117ad83b9e912fc8a7084f1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, 
name=epic_thompson, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, CEPH_POINT_RELEASE=, name=rhceph, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , version=7, distribution-scope=public, architecture=x86_64, RELEASE=main, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, ceph=True, io.buildah.version=1.41.4, release=1763362218) Dec 6 05:07:06 localhost systemd[1]: libpod-conmon-e06626dc7aa398a0e3bc4102d673ae36d62c5d9e4117ad83b9e912fc8a7084f1.scope: Deactivated successfully. Dec 6 05:07:06 localhost systemd[1]: var-lib-containers-storage-overlay-8f4cd1ebd77010274d9e0d70325ae0673bb0ef5291b04d390f9cb0e3289de6b9-merged.mount: Deactivated successfully. 
Dec 6 05:07:06 localhost sshd[298955]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:07:06 localhost podman[299010]: Dec 6 05:07:06 localhost podman[299010]: 2025-12-06 10:07:06.885822132 +0000 UTC m=+0.079568070 container create c2af45cce646b2f8578525332813b9dfbd806d797ea58cdc3129a408110c8212 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=intelligent_brahmagupta, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, ceph=True, name=rhceph, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, vcs-type=git, RELEASE=main, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True) Dec 6 05:07:06 localhost systemd[1]: Started libpod-conmon-c2af45cce646b2f8578525332813b9dfbd806d797ea58cdc3129a408110c8212.scope. Dec 6 05:07:06 localhost systemd[1]: Started libcrun container. 
Dec 6 05:07:06 localhost podman[299010]: 2025-12-06 10:07:06.85291184 +0000 UTC m=+0.046657808 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 6 05:07:06 localhost podman[299010]: 2025-12-06 10:07:06.960115771 +0000 UTC m=+0.153861739 container init c2af45cce646b2f8578525332813b9dfbd806d797ea58cdc3129a408110c8212 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=intelligent_brahmagupta, RELEASE=main, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, GIT_BRANCH=main, ceph=True, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, version=7, vendor=Red Hat, Inc., release=1763362218, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Dec 6 05:07:06 localhost podman[299010]: 2025-12-06 10:07:06.969901697 +0000 UTC m=+0.163647645 container start c2af45cce646b2f8578525332813b9dfbd806d797ea58cdc3129a408110c8212 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=intelligent_brahmagupta, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, ceph=True, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base 
image., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, RELEASE=main, com.redhat.component=rhceph-container, GIT_BRANCH=main, distribution-scope=public, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , name=rhceph) Dec 6 05:07:06 localhost podman[299010]: 2025-12-06 10:07:06.970157144 +0000 UTC m=+0.163903082 container attach c2af45cce646b2f8578525332813b9dfbd806d797ea58cdc3129a408110c8212 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=intelligent_brahmagupta, ceph=True, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , architecture=x86_64, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, version=7, io.buildah.version=1.41.4, vcs-type=git, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, CEPH_POINT_RELEASE=, io.openshift.expose-services=, description=Red Hat 
Ceph Storage 7, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, RELEASE=main) Dec 6 05:07:06 localhost intelligent_brahmagupta[299025]: 167 167 Dec 6 05:07:06 localhost systemd[1]: libpod-c2af45cce646b2f8578525332813b9dfbd806d797ea58cdc3129a408110c8212.scope: Deactivated successfully. Dec 6 05:07:06 localhost podman[299010]: 2025-12-06 10:07:06.97433261 +0000 UTC m=+0.168078628 container died c2af45cce646b2f8578525332813b9dfbd806d797ea58cdc3129a408110c8212 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=intelligent_brahmagupta, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, version=7) Dec 6 05:07:07 localhost podman[299030]: 2025-12-06 10:07:07.082224202 +0000 UTC m=+0.092576651 container remove c2af45cce646b2f8578525332813b9dfbd806d797ea58cdc3129a408110c8212 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, 
name=intelligent_brahmagupta, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, GIT_BRANCH=main, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, distribution-scope=public, RELEASE=main, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 6 05:07:07 localhost systemd[1]: libpod-conmon-c2af45cce646b2f8578525332813b9dfbd806d797ea58cdc3129a408110c8212.scope: Deactivated successfully. Dec 6 05:07:07 localhost systemd[1]: var-lib-containers-storage-overlay-fc6f69aced3e7f8dff087557f0f70186d9a2b12da92e58b124dcaec055a2c15c-merged.mount: Deactivated successfully. Dec 6 05:07:07 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:07 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:07 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:07 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:07 localhost ceph-mon[298582]: Reconfiguring osd.1 (monmap changed)... 
Dec 6 05:07:07 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Dec 6 05:07:07 localhost ceph-mon[298582]: Reconfiguring daemon osd.1 on np0005548789.localdomain Dec 6 05:07:07 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:07 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:07 localhost ceph-mon[298582]: Reconfiguring osd.4 (monmap changed)... Dec 6 05:07:07 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Dec 6 05:07:07 localhost ceph-mon[298582]: Reconfiguring daemon osd.4 on np0005548789.localdomain Dec 6 05:07:07 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:07 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:07 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005548789.vxwwsq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Dec 6 05:07:07 localhost ceph-mon[298582]: Reconfiguring mds.mds.np0005548789.vxwwsq (monmap changed)... 
Dec 6 05:07:07 localhost ceph-mon[298582]: Reconfiguring daemon mds.mds.np0005548789.vxwwsq on np0005548789.localdomain Dec 6 05:07:07 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:07 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:07 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548789.mzhmje", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 6 05:07:07 localhost ceph-mon[298582]: Reconfiguring mgr.np0005548789.mzhmje (monmap changed)... Dec 6 05:07:07 localhost ceph-mon[298582]: Reconfiguring daemon mgr.np0005548789.mzhmje on np0005548789.localdomain Dec 6 05:07:07 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:07 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:07 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548790.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 6 05:07:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5. 
Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.913 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'name': 'test', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'np0005548789.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'hostId': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228 Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.914 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.923 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:07:07 localhost podman[299047]: 2025-12-06 10:07:07.92378358 +0000 UTC m=+0.081380304 container health_status 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, 
org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible) Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.927 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '47aeece0-e4c0-483a-a016-12f020f41712', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:07:07.914468', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '52c2699a-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 12046.163835124, 'message_signature': '9df2c86bcc3963e5ecc8e84c9f1bb45ef3fc5312858997808d1e3a3f9736530b'}]}, 'timestamp': '2025-12-06 10:07:07.924928', '_unique_id': '3b0641f2f2774f6eadba0bbcc7715f69'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.927 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:07:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.927 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.927 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.927 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.927 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.927 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.927 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.927 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.927 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.927 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.927 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.927 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.927 12 ERROR oslo_messaging.notify.messaging Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.927 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.927 12 ERROR oslo_messaging.notify.messaging Dec 6 05:07:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.927 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.927 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.927 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.927 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.927 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 
05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.927 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.927 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.927 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.927 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.927 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:07:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.927 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.927 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.927 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.927 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.927 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.927 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.927 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.927 12 ERROR oslo_messaging.notify.messaging Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:07:07.929 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.929 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.930 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.931 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '4aa5fa3d-9693-49ec-8a21-f32332cb98fb', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:07:07.930110', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 
'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '52c34e78-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 12046.163835124, 'message_signature': 'ce15bfe8ac74efae40d5b297edec05adbc83cd7e69eb2ad13fc0d0d1947cff70'}]}, 'timestamp': '2025-12-06 10:07:07.930647', '_unique_id': '221deea63e47400e9590223fe0df1485'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.931 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.931 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.931 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.931 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.931 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.931 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.931 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:07:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.931 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.931 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.931 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.931 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.931 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.931 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.931 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.931 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.931 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.931 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.931 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.931 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.931 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.931 12 ERROR oslo_messaging.notify.messaging Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.931 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.931 12 ERROR oslo_messaging.notify.messaging Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.931 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.931 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.931 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.931 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.931 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.931 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.931 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.931 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.931 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.931 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.931 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.931 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.931 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.931 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.931 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.931 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.931 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.931 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.931 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.931 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.931 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.931 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.931 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.931 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.931 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.931 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 
05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.931 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.931 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.931 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.931 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.931 12 ERROR oslo_messaging.notify.messaging Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.933 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.933 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.incoming.bytes volume: 6809 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.934 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '19c61693-4e31-4577-96a6-7289da114e23', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 6809, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:07:07.933184', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '52c3c574-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 12046.163835124, 'message_signature': '7b6cf1328bc355f16e1bba6275f291bf892e5b0d56b7d8a92d01efb3fcac1486'}]}, 'timestamp': '2025-12-06 10:07:07.933663', '_unique_id': 'e4f92ff7b3b24711b8908678f557de1f'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.934 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:07:07.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.934 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.934 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.934 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.934 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.934 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.934 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.934 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.934 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.934 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.934 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.934 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.934 12 ERROR oslo_messaging.notify.messaging Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.934 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.934 12 ERROR oslo_messaging.notify.messaging Dec 6 05:07:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.934 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.934 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.934 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.934 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.934 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 
05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.934 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.934 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.934 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.934 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.934 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:07:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.934 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.934 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.934 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.934 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.934 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.934 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.934 12 ERROR oslo_messaging.notify.messaging Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:07:07.935 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters Dec 6 05:07:07 localhost podman[299047]: 2025-12-06 10:07:07.958046013 +0000 UTC m=+0.115642767 container exec_died 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_controller) Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.959 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.latency volume: 1525105336 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.960 12 DEBUG 
ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.latency volume: 106716064 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.962 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'd5b786b6-7792-4235-82c6-e47d33a57f57', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1525105336, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:07:07.936013', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '52c7d93e-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 12046.185389074, 'message_signature': 'f4de788b1886a1e94d1d95a60f2c5e7a74b413bffb229f9214729656802fffc1'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 
'counter_volume': 106716064, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:07:07.936013', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '52c7ecd0-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 12046.185389074, 'message_signature': 'efff69699c1eaf2ff8269a76c2504737b43b52142b99a4faf4d83efc233a2a03'}]}, 'timestamp': '2025-12-06 10:07:07.960898', '_unique_id': '2cc6d37ad82a469e9669463a43d8eacb'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.962 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.962 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.962 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.962 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.962 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.962 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.962 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.962 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:07:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.962 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.962 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.962 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.962 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.962 12 ERROR oslo_messaging.notify.messaging Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.962 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.962 12 ERROR oslo_messaging.notify.messaging Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.962 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.962 12 ERROR oslo_messaging.notify.messaging 
self.transport._send_notification(target, ctxt, message, Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.962 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.962 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.962 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.962 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.962 
12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.962 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.962 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.962 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.962 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.962 12 ERROR oslo_messaging.notify.messaging 
self._ensure_connection(*args, **kwargs) Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.962 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.962 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.962 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.962 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.962 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.962 12 ERROR oslo_messaging.notify.messaging Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.963 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.963 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.outgoing.bytes volume: 9770 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:07:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.965 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '366c9c85-0239-4f74-af23-38c37ed100ca', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 9770, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:07:07.963895', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '52c875c4-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 12046.163835124, 'message_signature': 'f2c890f52661e91c8c7cabf35442a6381ecda47b15a55f16874e905d65b50c28'}]}, 'timestamp': '2025-12-06 10:07:07.964474', '_unique_id': '48f021bf456e4817ab25b8d1a942c6be'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 
10:07:07.965 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.965 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.965 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.965 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.965 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.965 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:07:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.965 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.965 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.965 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.965 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.965 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.965 12 ERROR oslo_messaging.notify.messaging Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.965 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 
10:07:07.965 12 ERROR oslo_messaging.notify.messaging Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.965 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.965 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.965 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.965 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.965 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.965 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.965 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.965 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.965 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.965 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.965 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.965 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.965 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.965 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.965 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.965 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.965 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.965 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:07:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.965 12 ERROR oslo_messaging.notify.messaging Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.966 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.966 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.968 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '181a0bc5-31e8-4967-8914-d1e99c6dd6bb', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:07:07.966715', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 
'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '52c8e5d6-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 12046.163835124, 'message_signature': '41bcfad1c70daca735e12623cff0867492f976420d7a8d731e8fb966f142071c'}]}, 'timestamp': '2025-12-06 10:07:07.967266', '_unique_id': '653bf89300534705b8392a2f9811a0f9'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.968 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.968 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.968 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.968 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in 
_connection_factory Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.968 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.968 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.968 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.968 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.968 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.968 12 ERROR 
oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.968 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.968 12 ERROR oslo_messaging.notify.messaging Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.968 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.968 12 ERROR oslo_messaging.notify.messaging Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.968 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.968 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.968 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.968 12 ERROR 
oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.968 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.968 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.968 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.968 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.968 12 ERROR 
oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.968 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.968 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.968 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.968 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.968 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:07:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.968 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.968 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.968 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.968 12 ERROR oslo_messaging.notify.messaging Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.969 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters Dec 6 05:07:07 localhost systemd[1]: 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.service: Deactivated successfully. Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.978 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.979 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.981 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '69e8e7ea-9f9a-419b-a8f4-45a4ca85c848', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:07:07.969384', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '52cabc80-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 12046.21875784, 'message_signature': '3b1acce951daeb8f2df45fdb37c669f4b6e8784839305f96ea0e71c45d21aeb2'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:07:07.969384', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 
'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '52cad44a-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 12046.21875784, 'message_signature': '787994e7d589db1c16d97fec03494b1aa7832f643ccbb2937eb29e0be9944251'}]}, 'timestamp': '2025-12-06 10:07:07.980006', '_unique_id': '66fdd1e49dda442c8d446ec1d8659a09'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.981 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.981 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.981 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.981 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.981 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.981 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.981 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.981 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.981 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:07:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.981 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.981 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.981 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.981 12 ERROR oslo_messaging.notify.messaging Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.981 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.981 12 ERROR oslo_messaging.notify.messaging Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.981 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.981 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.981 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.981 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.981 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.981 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.981 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:07:07.981 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.981 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.981 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.981 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.981 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.981 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.981 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.981 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.981 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.981 12 ERROR oslo_messaging.notify.messaging Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.983 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.983 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.984 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'b76ba004-8bec-4940-ad6c-2d772732ed07', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:07:07.983186', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '52cb6860-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 12046.163835124, 'message_signature': '4a9228bf33b64cb4a02fd747e2cb6a9feaede9180858a850c8d22a4b222dccf6'}]}, 'timestamp': '2025-12-06 10:07:07.983799', '_unique_id': 'ef572655a1d940ad9b07c774bd294a91'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.984 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:07:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.984 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.984 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.984 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.984 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.984 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.984 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.984 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.984 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.984 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.984 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.984 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.984 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.984 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.984 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.984 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.984 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.984 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.984 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.984 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.984 12 ERROR oslo_messaging.notify.messaging Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.984 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.984 12 ERROR oslo_messaging.notify.messaging Dec 6 05:07:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.984 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.984 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.984 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.984 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.984 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.984 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.984 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.984 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.984 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.984 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 
05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.984 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.984 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.984 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.984 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.984 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.984 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.984 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.984 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.984 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.984 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:07:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.984 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.984 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.984 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.984 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.984 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.984 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.984 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.984 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.984 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.984 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.984 12 ERROR oslo_messaging.notify.messaging Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:07:07.986 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.986 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.986 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.requests volume: 47 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.987 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.989 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '9747ef9b-6cc9-47d2-8179-8c723eb84c1b', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 47, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:07:07.986692', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '52cbf230-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 12046.185389074, 'message_signature': '544b904a258ed0f47d778c3e8ba7e9ccbf5ad56b617eb9fb8c0ce4a1830e00bd'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:07:07.986692', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '52cc05b8-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 12046.185389074, 'message_signature': '552e1901d2ea4d7b71ca8c5616ce7576f997590aee4f84cd768e1906d1a84c22'}]}, 'timestamp': '2025-12-06 10:07:07.987805', '_unique_id': '1b3eae1e797842719f7fc35956c1a9ca'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.989 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.989 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.989 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.989 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.989 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.989 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.989 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.989 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.989 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.989 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.989 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.989 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.989 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.989 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.989 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.989 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:07:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.989 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.989 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.989 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.989 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.989 12 ERROR oslo_messaging.notify.messaging Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.989 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.989 12 ERROR oslo_messaging.notify.messaging Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.989 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.989 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.989 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.989 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.989 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.989 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.989 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.989 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.989 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.989 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.989 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.989 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.989 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.989 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:07:07.989 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.989 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.989 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.989 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.989 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.989 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.989 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.989 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.989 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.989 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.989 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.989 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.989 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.989 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.989 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.989 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.989 12 ERROR oslo_messaging.notify.messaging Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.990 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.990 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.latency volume: 1252245154 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.991 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.latency volume: 27668224 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.993 12 ERROR oslo_messaging.notify.messaging 
[-] Could not send notification to notifications. Payload={'message_id': 'd65cf7b0-1c58-4a60-8988-60c94180c835', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1252245154, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:07:07.990807', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '52cc9384-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 12046.185389074, 'message_signature': 'a57e63e5bcd451e8438a6042df4b2205753ef145f5de862e3a4b8b65465b64cb'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 27668224, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:07:07.990807', 'resource_metadata': {'display_name': 'test', 'name': 
'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '52cca946-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 12046.185389074, 'message_signature': '36c731f72f04ca2876d8316b6e24c69c31b9feb9d0a6e79988321146d4c1e23d'}]}, 'timestamp': '2025-12-06 10:07:07.992185', '_unique_id': '25757193fe4a4e298a53984d0f654000'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.993 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.993 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.993 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.993 12 
ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.993 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.993 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.993 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.993 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.993 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 
05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.993 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.993 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.993 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.993 12 ERROR oslo_messaging.notify.messaging Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.993 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.993 12 ERROR oslo_messaging.notify.messaging Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.993 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.993 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:07:07.993 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.993 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.993 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.993 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.993 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:07:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.993 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.993 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.993 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.993 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.993 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.993 12 
ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.993 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.993 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.993 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.993 12 ERROR oslo_messaging.notify.messaging Dec 6 05:07:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:07.994 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.007 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/memory.usage volume: 51.80859375 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.007 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'e48e2965-5612-4722-896e-9bb8a3f84e2b', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 51.80859375, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'timestamp': '2025-12-06T10:07:07.994980', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': '52cf05b0-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 12046.25626355, 'message_signature': 'fd1a18b72fb96db055035bc7713555f59f97d23098e40bfa64bd1fe85ed62056'}]}, 'timestamp': '2025-12-06 10:07:08.007283', '_unique_id': '82f35eb76d5840feb6532773f863c9ca'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.007 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in 
_reraise_as_library_errors Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.007 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.007 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.007 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.007 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.007 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.007 12 ERROR 
oslo_messaging.notify.messaging conn.connect() Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.007 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.007 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.007 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.007 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.007 12 ERROR oslo_messaging.notify.messaging Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.007 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.007 12 ERROR oslo_messaging.notify.messaging Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.007 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.007 
12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.007 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.007 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.007 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.007 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.007 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:07:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.007 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.007 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.007 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.007 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.007 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 
10:07:08.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.007 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.007 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.007 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.007 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.007 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.007 12 ERROR oslo_messaging.notify.messaging Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.008 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.008 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.008 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.bytes volume: 35560448 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.008 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.bytes volume: 2154496 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.009 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '8bf035a5-de72-45ac-868b-f63d1cff6898', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 35560448, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:07:08.008552', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '52cf4048-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 12046.185389074, 'message_signature': '664176e37d12d22f2c44bc873ee2bc0fc4b61b4867a68538a53d254183b2f5cc'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 2154496, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:07:08.008552', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '52cf4890-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 12046.185389074, 'message_signature': '01e06cb25b31774a357500f488d2410b6aa9f461dc7aa5af9fc029d3ef72f98f'}]}, 'timestamp': '2025-12-06 10:07:08.008966', '_unique_id': '7c66c6e5ee7944769d3b55ae14b21215'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.009 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.009 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.009 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.009 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.009 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.009 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.009 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.009 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.009 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.009 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.009 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.009 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.009 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.009 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.009 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.009 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:07:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.009 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.009 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.009 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.009 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.009 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.009 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.009 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.009 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.009 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.009 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.009 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.009 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.009 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.009 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.009 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.009 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.009 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.009 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.009 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.009 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.009 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.009 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.009 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.009 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.009 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.009 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.009 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.009 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.009 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.009 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.009 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.009 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.009 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.009 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.009 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.009 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.009 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.009 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.009 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.009 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.incoming.packets volume: 60 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.010 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'de83d252-486e-430d-a008-3f58dbc5cb58', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 60, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:07:08.009968', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '52cf7798-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 12046.163835124, 'message_signature': '7fe4f6e1f0b8ac5ac2071b024875808265d82accd4c3e90930d7e9a78c5786c3'}]}, 'timestamp': '2025-12-06 10:07:08.010182', '_unique_id': 'c91fcc958f6b49a9988e3a3666fd9389'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.010 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.010 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.010 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.010 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.010 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.010 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.010 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.010 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.010 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.010 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.010 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.010 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.010 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.010 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.010 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.010 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.010 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.010 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.010 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.010 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.010 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.010 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.010 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.010 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.010 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.010 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.010 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.010 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.010 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.010 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.010 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.011 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.011 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/cpu volume: 13670000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.011 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '5e8c6139-6268-4de2-bd97-3b63156d2cb0', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 13670000000, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'timestamp': '2025-12-06T10:07:08.011240', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': '52cfa920-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 12046.25626355, 'message_signature': '68e736bf6745c7355e638f3829b03f6bf6eab0a874f7d10f3d9b42a31ec0562d'}]}, 'timestamp': '2025-12-06 10:07:08.011445', '_unique_id': '6212b64a1ad4491fb1574aab97e4beb1'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.011 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.011 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.011 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.011 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.011 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.011 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.011 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.011 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.011 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.011 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.011 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.011 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.011 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.011 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.011 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.011 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.011 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.011 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.011 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.011 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.011 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.011 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.011 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.011 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.011 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.011 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.011 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.011 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.011 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.011 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.011 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.012 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.012 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.bytes volume: 389120 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.012 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.013 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '5072cef7-c845-496f-a522-a21854996f5a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 389120, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:07:08.012564', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '52cfdcec-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 12046.185389074, 'message_signature': '27d404b395208f8e877a8ed987cdc8ad9d69372ad430fc0f5215f5da8b462806'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:07:08.012564', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '52cfe5f2-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 12046.185389074, 'message_signature': 'a518218deb226d4ae0711f5988166ffbe189305d9458b65fd72801d087f37846'}]}, 'timestamp': '2025-12-06 10:07:08.012997', '_unique_id': 'ffebc53351d5460a8f409eedc70486d4'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.013 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.013 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.013 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.013 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.013 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.013 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.013 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.013 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.013 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.013 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.013 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.013 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.013 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.013 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.013 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.013 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.013 12
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.013 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.013 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.013 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.013 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:07:08.013 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.013 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.013 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.013 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.013 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.013 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.013 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.013 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.013 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.013 12 ERROR oslo_messaging.notify.messaging Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.013 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.014 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.014 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.014 12 ERROR oslo_messaging.notify.messaging [-] 
Could not send notification to notifications. Payload={'message_id': '08bdcd33-17f6-401d-8a4a-f459a1a2e34f', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:07:08.014001', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '52d01504-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 12046.21875784, 'message_signature': '5fd38e117d64e7affdf65b24021e75eed25af34d4630ead6a2fc4bffe0bac8fb'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:07:08.014001', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 
'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '52d01c7a-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 12046.21875784, 'message_signature': 'a23372ac6a336bf7026dcc3f60cdce7420823c85b087b4a54ffd3f068f5f4070'}]}, 'timestamp': '2025-12-06 10:07:08.014389', '_unique_id': '0fa6aad46944445480fdc614ef3c0cfc'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.014 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.014 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.014 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.014 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.014 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.014 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.014 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.014 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.014 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:07:08 
localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.014 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.014 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.014 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.014 12 ERROR oslo_messaging.notify.messaging Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.014 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.014 12 ERROR oslo_messaging.notify.messaging Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.014 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.014 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 
10:07:08.014 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.014 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.014 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.014 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.014 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:07:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.014 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.014 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.014 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.014 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.014 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.014 12 
ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.014 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.014 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.014 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.014 12 ERROR oslo_messaging.notify.messaging Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.015 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.015 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.015 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.requests volume: 1272 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.015 12 DEBUG ceilometer.compute.pollsters 
[-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.016 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '1c736b2d-ae7f-438e-afa3-d2b5c0414910', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1272, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:07:08.015498', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '52d04f6a-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 12046.185389074, 'message_signature': '6c457674122fbdfc6856651d05c19f3ea638951eb7632f1df738915d15da0c61'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 124, 
'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:07:08.015498', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '52d05758-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 12046.185389074, 'message_signature': '9c25934eaf7ac0f78990159a7f822e57121e6c8d3370b4c800f3a6f0e502ec9e'}]}, 'timestamp': '2025-12-06 10:07:08.015897', '_unique_id': 'eb4c72d13593417ca28dafb68450d6fb'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.016 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.016 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.016 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.016 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.016 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.016 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.016 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.016 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.016 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.016 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.016 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.016 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.016 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.016 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:07:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.016 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.016 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.016 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.016 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.016 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.016 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.016 12 ERROR oslo_messaging.notify.messaging Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.016 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.016 12 ERROR oslo_messaging.notify.messaging Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.016 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.016 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.016 12 ERROR oslo_messaging.notify.messaging 
self.transport._send_notification(target, ctxt, message, Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.016 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.016 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.016 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.016 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.016 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.016 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.016 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.016 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.016 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.016 
12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.016 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.016 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.016 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.016 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.016 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.016 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.016 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.016 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.016 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.016 12 ERROR oslo_messaging.notify.messaging 
self._ensure_connection(*args, **kwargs) Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.016 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.016 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.016 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.016 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.016 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.016 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.016 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.016 12 ERROR oslo_messaging.notify.messaging Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.016 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.016 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:07:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.017 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '9294a527-e638-4bbb-9fd7-7a7a4f0a11bf', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:07:08.016883', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '52d085ac-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 12046.163835124, 'message_signature': '1c0f2e7d629ed50e88425a4277fbf380b8df158e8996488ca43d293173325742'}]}, 'timestamp': '2025-12-06 10:07:08.017097', '_unique_id': '43f90849c27640dd895276aa66782ac5'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:07:08.017 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.017 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.017 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.017 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.017 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.017 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:07:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.017 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.017 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.017 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.017 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.017 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.017 12 ERROR oslo_messaging.notify.messaging Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.017 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 
10:07:08.017 12 ERROR oslo_messaging.notify.messaging Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.017 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.017 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.017 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.017 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.017 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.017 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.017 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.017 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.017 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.017 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.017 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.017 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.017 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.017 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.017 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.017 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.017 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.017 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:07:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.017 12 ERROR oslo_messaging.notify.messaging Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.018 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.018 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.outgoing.packets volume: 114 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.018 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '4ce4dd8d-f8a6-4fd1-87ac-db8156d93ea1', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 114, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:07:08.018068', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 
'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '52d0b3d8-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 12046.163835124, 'message_signature': '5c4745602151b45dafbf4342c4ab953d584e21266576a62041485fc393539af9'}]}, 'timestamp': '2025-12-06 10:07:08.018278', '_unique_id': 'b4cbf195fcec4c3bbd88f964d3b48d06'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.018 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.018 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.018 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.018 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, 
in _connection_factory Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.018 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.018 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.018 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.018 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.018 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.018 12 ERROR 
oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.018 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.018 12 ERROR oslo_messaging.notify.messaging Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.018 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.018 12 ERROR oslo_messaging.notify.messaging Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.018 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.018 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.018 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.018 12 ERROR 
oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.018 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.018 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.018 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.018 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.018 12 ERROR 
oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.018 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.018 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.018 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.018 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.018 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:07:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.018 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.018 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.018 12 ERROR oslo_messaging.notify.messaging Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.019 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.019 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.019 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.020 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '39ec4d80-0b26-440b-96e4-8389f7669a88', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:07:08.019288', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '52d0e376-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 12046.21875784, 'message_signature': '7e815b6366e8b3c25972e2176d3a8ba766c034a1f4342b8e3dd63ce838ec0bc2'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:07:08.019288', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 
'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '52d0eb00-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 12046.21875784, 'message_signature': 'f63d245b5934bf0908f1a305a7258ebdd07abd195b70b3541714fcdc547486b7'}]}, 'timestamp': '2025-12-06 10:07:08.019676', '_unique_id': '24b7aec310e246d19b7d594c1c8ac6e1'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.020 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.020 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.020 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.020 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.020 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.020 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.020 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.020 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.020 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:07:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.020 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.020 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.020 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.020 12 ERROR oslo_messaging.notify.messaging Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.020 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.020 12 ERROR oslo_messaging.notify.messaging Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.020 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.020 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.020 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.020 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.020 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.020 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.020 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:07:08.020 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.020 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.020 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.020 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.020 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.020 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.020 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.020 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.020 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.020 12 ERROR oslo_messaging.notify.messaging Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.020 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.020 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.021 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '9a9aa897-6a87-444c-8c35-0ea4886203f1', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:07:08.020671', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '52d11a8a-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 12046.163835124, 'message_signature': '9fe26e5f08f52558c860a3ac1f5ac0ebfe4cb9b842e026119194719f03230108'}]}, 'timestamp': '2025-12-06 10:07:08.020909', '_unique_id': 'e27de6cf164846758c4e8f74a4a59b2e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.021 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:07:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.021 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.021 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.021 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.021 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.021 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.021 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.021 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.021 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.021 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.021 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.021 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.021 12 ERROR oslo_messaging.notify.messaging Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.021 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.021 12 ERROR oslo_messaging.notify.messaging Dec 6 05:07:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.021 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.021 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.021 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.021 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.021 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 
05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.021 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.021 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.021 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.021 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.021 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:07:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.021 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.021 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.021 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.021 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.021 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.021 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:07:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:07:08.021 12 ERROR oslo_messaging.notify.messaging Dec 6 05:07:09 localhost nova_compute[282193]: 2025-12-06 
10:07:09.957 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:07:10 localhost nova_compute[282193]: 2025-12-06 10:07:10.126 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:07:11 localhost ceph-mon[298582]: Reconfiguring crash.np0005548790 (monmap changed)... Dec 6 05:07:11 localhost ceph-mon[298582]: Reconfiguring daemon crash.np0005548790 on np0005548790.localdomain Dec 6 05:07:11 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:11 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:11 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch Dec 6 05:07:11 localhost ceph-mon[298582]: Reconfiguring osd.0 (monmap changed)... Dec 6 05:07:11 localhost ceph-mon[298582]: Reconfiguring daemon osd.0 on np0005548790.localdomain Dec 6 05:07:11 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:11 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:11 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch Dec 6 05:07:11 localhost ceph-mon[298582]: Reconfiguring osd.3 (monmap changed)... 
Dec 6 05:07:11 localhost ceph-mon[298582]: Reconfiguring daemon osd.3 on np0005548790.localdomain Dec 6 05:07:11 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:11 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:11 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005548790.vhcezv", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Dec 6 05:07:11 localhost ceph-mon[298582]: Reconfiguring mds.mds.np0005548790.vhcezv (monmap changed)... Dec 6 05:07:11 localhost ceph-mon[298582]: Reconfiguring daemon mds.mds.np0005548790.vhcezv on np0005548790.localdomain Dec 6 05:07:11 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:11 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:11 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548790.kvkfyr", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 6 05:07:12 localhost systemd[1]: tmp-crun.vYInN6.mount: Deactivated successfully. 
Dec 6 05:07:12 localhost podman[299181]: 2025-12-06 10:07:12.85781303 +0000 UTC m=+0.102067398 container exec ecefb99ff092b207f760d60e8fd97152c624df0e1016549024435ae641ee2e06 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548789, distribution-scope=public, maintainer=Guillaume Abrioux , RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, architecture=x86_64, io.openshift.expose-services=, vcs-type=git, release=1763362218, vendor=Red Hat, Inc., name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, ceph=True, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 6 05:07:12 localhost podman[299181]: 2025-12-06 10:07:12.958085932 +0000 UTC m=+0.202340270 container exec_died ecefb99ff092b207f760d60e8fd97152c624df0e1016549024435ae641ee2e06 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548789, distribution-scope=public, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, 
io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, version=7, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , ceph=True, CEPH_POINT_RELEASE=) Dec 6 05:07:14 localhost nova_compute[282193]: 2025-12-06 10:07:14.993 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:07:15 localhost nova_compute[282193]: 2025-12-06 10:07:15.129 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:07:15 localhost ceph-mon[298582]: Reconfiguring mgr.np0005548790.kvkfyr (monmap changed)... 
Dec 6 05:07:15 localhost ceph-mon[298582]: Reconfiguring daemon mgr.np0005548790.kvkfyr on np0005548790.localdomain Dec 6 05:07:15 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:15 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:15 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:15 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:15 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 6 05:07:15 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:16 localhost openstack_network_exporter[243110]: ERROR 10:07:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:07:16 localhost openstack_network_exporter[243110]: ERROR 10:07:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:07:16 localhost openstack_network_exporter[243110]: ERROR 10:07:16 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 6 05:07:16 localhost openstack_network_exporter[243110]: ERROR 10:07:16 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 6 05:07:16 localhost openstack_network_exporter[243110]: Dec 6 05:07:16 localhost openstack_network_exporter[243110]: ERROR 10:07:16 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 6 05:07:16 localhost openstack_network_exporter[243110]: Dec 6 05:07:17 localhost ceph-mon[298582]: Reconfig service osd.default_drive_group Dec 6 05:07:17 localhost ceph-mon[298582]: 
from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:17 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:17 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 6 05:07:17 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:17 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:17 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:17 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:17 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:17 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:17 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:17 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:17 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:17 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:17 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:17 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch Dec 6 05:07:17 localhost ceph-mon[298582]: Reconfiguring daemon osd.2 on np0005548788.localdomain Dec 6 05:07:17 localhost 
ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:17 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:17 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:17 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:17 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' Dec 6 05:07:17 localhost ceph-mon[298582]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch Dec 6 05:07:17 localhost ceph-mon[298582]: Reconfiguring daemon osd.5 on np0005548788.localdomain Dec 6 05:07:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999. Dec 6 05:07:17 localhost systemd[1]: session-66.scope: Deactivated successfully. Dec 6 05:07:17 localhost systemd[1]: session-66.scope: Consumed 28.576s CPU time. Dec 6 05:07:17 localhost systemd-logind[766]: Session 66 logged out. Waiting for processes to exit. Dec 6 05:07:17 localhost systemd-logind[766]: Removed session 66. 
Dec 6 05:07:17 localhost podman[299405]: 2025-12-06 10:07:17.941265102 +0000 UTC m=+0.098001465 container health_status 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Dec 6 05:07:17 localhost podman[299405]: 2025-12-06 10:07:17.951260824 +0000 UTC 
m=+0.107997247 container exec_died 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:07:17 localhost sshd[299424]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:07:17 localhost systemd[1]: 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.service: 
Deactivated successfully. Dec 6 05:07:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941. Dec 6 05:07:18 localhost systemd[1]: tmp-crun.d5zoP1.mount: Deactivated successfully. Dec 6 05:07:18 localhost podman[299426]: 2025-12-06 10:07:18.948401661 +0000 UTC m=+0.108569284 container health_status b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 6 05:07:18 localhost podman[299426]: 2025-12-06 10:07:18.957746202 +0000 UTC m=+0.117913825 container exec_died b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 
'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Dec 6 05:07:18 localhost systemd[1]: b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941.service: Deactivated successfully. Dec 6 05:07:20 localhost nova_compute[282193]: 2025-12-06 10:07:20.026 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:07:20 localhost nova_compute[282193]: 2025-12-06 10:07:20.132 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:07:23 localhost sshd[299449]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:07:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e. Dec 6 05:07:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094. Dec 6 05:07:23 localhost podman[241090]: time="2025-12-06T10:07:23Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 6 05:07:23 localhost systemd[1]: tmp-crun.ggeiID.mount: Deactivated successfully. 
Dec 6 05:07:23 localhost podman[299451]: 2025-12-06 10:07:23.942999576 +0000 UTC m=+0.102150911 container health_status 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vendor=Red Hat, Inc., distribution-scope=public, name=ubi9-minimal, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, vcs-type=git, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64) Dec 6 05:07:23 localhost podman[299452]: 2025-12-06 10:07:23.995325622 +0000 UTC m=+0.152396644 container health_status bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck 
compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm) Dec 6 05:07:24 localhost podman[299451]: 2025-12-06 10:07:24.024051349 +0000 UTC m=+0.183202714 container exec_died 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, build-date=2025-08-20T13:12:41, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, com.redhat.component=ubi9-minimal-container, architecture=x86_64, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, managed_by=edpm_ansible, maintainer=Red Hat, Inc., io.buildah.version=1.33.7) Dec 6 05:07:24 localhost systemd[1]: 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e.service: Deactivated successfully. 
Dec 6 05:07:24 localhost podman[241090]: @ - - [06/Dec/2025:10:07:23 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156104 "" "Go-http-client/1.1" Dec 6 05:07:24 localhost podman[299452]: 2025-12-06 10:07:24.080312825 +0000 UTC m=+0.237383797 container exec_died bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, 
io.buildah.version=1.41.3, org.label-schema.build-date=20251125) Dec 6 05:07:24 localhost systemd[1]: bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094.service: Deactivated successfully. Dec 6 05:07:24 localhost podman[241090]: @ - - [06/Dec/2025:10:07:24 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19210 "" "Go-http-client/1.1" Dec 6 05:07:25 localhost nova_compute[282193]: 2025-12-06 10:07:25.062 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:07:25 localhost nova_compute[282193]: 2025-12-06 10:07:25.135 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:07:25 localhost nova_compute[282193]: 2025-12-06 10:07:25.181 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:07:25 localhost nova_compute[282193]: 2025-12-06 10:07:25.181 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:07:25 localhost nova_compute[282193]: 2025-12-06 10:07:25.210 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 05:07:25 localhost nova_compute[282193]: 2025-12-06 10:07:25.211 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - 
-] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 05:07:25 localhost nova_compute[282193]: 2025-12-06 10:07:25.211 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 05:07:25 localhost nova_compute[282193]: 2025-12-06 10:07:25.211 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Auditing locally available compute resources for np0005548789.localdomain (node: np0005548789.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 6 05:07:25 localhost nova_compute[282193]: 2025-12-06 10:07:25.212 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 6 05:07:25 localhost ceph-mon[298582]: mon.np0005548789@-1(probing) e11 handle_auth_request failed to assign global_id Dec 6 05:07:25 localhost nova_compute[282193]: 2025-12-06 10:07:25.670 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 6 05:07:25 localhost nova_compute[282193]: 2025-12-06 10:07:25.739 282197 DEBUG nova.virt.libvirt.driver [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] skipping disk for 
instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 6 05:07:25 localhost nova_compute[282193]: 2025-12-06 10:07:25.739 282197 DEBUG nova.virt.libvirt.driver [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 6 05:07:25 localhost nova_compute[282193]: 2025-12-06 10:07:25.967 282197 WARNING nova.virt.libvirt.driver [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 6 05:07:25 localhost nova_compute[282193]: 2025-12-06 10:07:25.969 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Hypervisor/Node resource view: name=np0005548789.localdomain free_ram=11511MB free_disk=41.83699035644531GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", 
"address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 6 05:07:25 localhost nova_compute[282193]: 2025-12-06 10:07:25.970 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 05:07:25 localhost nova_compute[282193]: 2025-12-06 10:07:25.970 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 05:07:26 localhost nova_compute[282193]: 2025-12-06 10:07:26.067 282197 DEBUG nova.compute.resource_tracker [None 
req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Instance b7ed0a2e-9350-4933-9334-4e5e08d3e6aa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 6 05:07:26 localhost nova_compute[282193]: 2025-12-06 10:07:26.068 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 6 05:07:26 localhost nova_compute[282193]: 2025-12-06 10:07:26.068 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Final resource view: name=np0005548789.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 6 05:07:26 localhost nova_compute[282193]: 2025-12-06 10:07:26.108 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 6 05:07:26 localhost ceph-mon[298582]: mon.np0005548789@-1(probing) e11 handle_auth_request failed to assign global_id Dec 6 05:07:26 localhost nova_compute[282193]: 2025-12-06 10:07:26.552 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 6 05:07:26 localhost nova_compute[282193]: 2025-12-06 
10:07:26.560 282197 DEBUG nova.compute.provider_tree [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Inventory has not changed in ProviderTree for provider: 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 6 05:07:26 localhost nova_compute[282193]: 2025-12-06 10:07:26.585 282197 DEBUG nova.scheduler.client.report [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Inventory has not changed for provider 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 6 05:07:26 localhost nova_compute[282193]: 2025-12-06 10:07:26.588 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Compute_service record updated for np0005548789.localdomain:np0005548789.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 6 05:07:26 localhost nova_compute[282193]: 2025-12-06 10:07:26.589 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.619s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 05:07:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6. 
Dec 6 05:07:26 localhost podman[299529]: 2025-12-06 10:07:26.924113647 +0000 UTC m=+0.081839748 container health_status 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Dec 6 05:07:26 localhost podman[299529]: 2025-12-06 10:07:26.94214079 +0000 UTC m=+0.099866881 container exec_died 
44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3) Dec 6 05:07:26 localhost systemd[1]: 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6.service: Deactivated successfully. 
Dec 6 05:07:27 localhost nova_compute[282193]: 2025-12-06 10:07:27.590 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:07:27 localhost nova_compute[282193]: 2025-12-06 10:07:27.591 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:07:27 localhost nova_compute[282193]: 2025-12-06 10:07:27.591 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:07:27 localhost nova_compute[282193]: 2025-12-06 10:07:27.591 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 6 05:07:28 localhost systemd[1]: Stopping User Manager for UID 1002... Dec 6 05:07:28 localhost systemd[26209]: Activating special unit Exit the Session... Dec 6 05:07:28 localhost systemd[26209]: Removed slice User Background Tasks Slice. Dec 6 05:07:28 localhost systemd[26209]: Stopped target Main User Target. Dec 6 05:07:28 localhost systemd[26209]: Stopped target Basic System. Dec 6 05:07:28 localhost systemd[26209]: Stopped target Paths. Dec 6 05:07:28 localhost systemd[26209]: Stopped target Sockets. Dec 6 05:07:28 localhost systemd[26209]: Stopped target Timers. Dec 6 05:07:28 localhost systemd[26209]: Stopped Mark boot as successful after the user session has run 2 minutes. 
Dec 6 05:07:28 localhost systemd[26209]: Stopped Daily Cleanup of User's Temporary Directories. Dec 6 05:07:28 localhost systemd[26209]: Closed D-Bus User Message Bus Socket. Dec 6 05:07:28 localhost systemd[26209]: Stopped Create User's Volatile Files and Directories. Dec 6 05:07:28 localhost systemd[26209]: Removed slice User Application Slice. Dec 6 05:07:28 localhost systemd[26209]: Reached target Shutdown. Dec 6 05:07:28 localhost systemd[26209]: Finished Exit the Session. Dec 6 05:07:28 localhost systemd[26209]: Reached target Exit the Session. Dec 6 05:07:28 localhost systemd[1]: user@1002.service: Deactivated successfully. Dec 6 05:07:28 localhost systemd[1]: Stopped User Manager for UID 1002. Dec 6 05:07:28 localhost systemd[1]: user@1002.service: Consumed 13.160s CPU time, read 188.0K from disk, written 7.0K to disk. Dec 6 05:07:28 localhost systemd[1]: Stopping User Runtime Directory /run/user/1002... Dec 6 05:07:28 localhost systemd[1]: run-user-1002.mount: Deactivated successfully. Dec 6 05:07:28 localhost systemd[1]: user-runtime-dir@1002.service: Deactivated successfully. Dec 6 05:07:28 localhost systemd[1]: Stopped User Runtime Directory /run/user/1002. Dec 6 05:07:28 localhost systemd[1]: Removed slice User Slice of UID 1002. Dec 6 05:07:28 localhost systemd[1]: user-1002.slice: Consumed 4min 24.672s CPU time. 
Dec 6 05:07:28 localhost nova_compute[282193]: 2025-12-06 10:07:28.181 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:07:28 localhost nova_compute[282193]: 2025-12-06 10:07:28.181 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 6 05:07:28 localhost nova_compute[282193]: 2025-12-06 10:07:28.182 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 6 05:07:30 localhost nova_compute[282193]: 2025-12-06 10:07:30.010 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 6 05:07:30 localhost nova_compute[282193]: 2025-12-06 10:07:30.012 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquired lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 6 05:07:30 localhost nova_compute[282193]: 2025-12-06 10:07:30.012 282197 DEBUG nova.network.neutron [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 6 05:07:30 localhost nova_compute[282193]: 2025-12-06 10:07:30.012 282197 
DEBUG nova.objects.instance [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b7ed0a2e-9350-4933-9334-4e5e08d3e6aa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 6 05:07:30 localhost nova_compute[282193]: 2025-12-06 10:07:30.103 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:07:30 localhost nova_compute[282193]: 2025-12-06 10:07:30.138 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:07:31 localhost nova_compute[282193]: 2025-12-06 10:07:31.014 282197 DEBUG nova.network.neutron [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Updating instance_info_cache with network_info: [{"id": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "address": "fa:16:3e:64:77:f3", "network": {"id": "652b6bdc-40ce-45b7-8aa5-3bca79987993", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.162", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "3d603431c0bb4967bafc7a0aa6108bfe", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86fc0b7a-fb", "ovs_interfaceid": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 6 05:07:31 localhost nova_compute[282193]: 2025-12-06 10:07:31.033 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Releasing lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 6 05:07:31 localhost nova_compute[282193]: 2025-12-06 10:07:31.033 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 6 05:07:31 localhost nova_compute[282193]: 2025-12-06 10:07:31.034 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:07:31 localhost nova_compute[282193]: 2025-12-06 10:07:31.035 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:07:31 localhost nova_compute[282193]: 2025-12-06 10:07:31.035 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:07:33 localhost ceph-mon[298582]: mon.np0005548789@-1(synchronizing).osd e89 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375 Dec 6 05:07:33 localhost ceph-mon[298582]: 
mon.np0005548789@-1(synchronizing).osd e89 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1 Dec 6 05:07:33 localhost ceph-mon[298582]: mon.np0005548789@-1(synchronizing).osd e90 e90: 6 total, 6 up, 6 in Dec 6 05:07:33 localhost ceph-mon[298582]: from='client.? 172.18.0.200:0/2080000025' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Dec 6 05:07:33 localhost ceph-mon[298582]: from='client.? ' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Dec 6 05:07:33 localhost ceph-mon[298582]: Activating manager daemon np0005548786.mczynb Dec 6 05:07:33 localhost ceph-mon[298582]: from='client.? ' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished Dec 6 05:07:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538. Dec 6 05:07:33 localhost systemd[1]: tmp-crun.XdrC4i.mount: Deactivated successfully. Dec 6 05:07:33 localhost podman[299549]: 2025-12-06 10:07:33.928378479 +0000 UTC m=+0.088562131 container health_status d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', 
'--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Dec 6 05:07:33 localhost podman[299549]: 2025-12-06 10:07:33.96524644 +0000 UTC m=+0.125430072 container exec_died d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 
6 05:07:33 localhost systemd[1]: d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538.service: Deactivated successfully. Dec 6 05:07:35 localhost nova_compute[282193]: 2025-12-06 10:07:35.139 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:07:35 localhost nova_compute[282193]: 2025-12-06 10:07:35.146 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:07:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5. Dec 6 05:07:38 localhost podman[299572]: 2025-12-06 10:07:38.920319554 +0000 UTC m=+0.077329982 container health_status 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:07:38 localhost ceph-mon[298582]: mon.np0005548789@-1(probing) e11 handle_auth_request failed to assign global_id Dec 6 05:07:39 localhost podman[299572]: 2025-12-06 10:07:39.014831313 +0000 UTC m=+0.171841731 container exec_died 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller) Dec 6 05:07:39 localhost systemd[1]: 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.service: Deactivated successfully. 
Dec 6 05:07:40 localhost nova_compute[282193]: 2025-12-06 10:07:40.147 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:07:40 localhost nova_compute[282193]: 2025-12-06 10:07:40.149 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:07:40 localhost nova_compute[282193]: 2025-12-06 10:07:40.150 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 6 05:07:40 localhost nova_compute[282193]: 2025-12-06 10:07:40.150 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:07:40 localhost nova_compute[282193]: 2025-12-06 10:07:40.184 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:07:40 localhost nova_compute[282193]: 2025-12-06 10:07:40.185 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:07:43 localhost systemd[1]: session-67.scope: Deactivated successfully. Dec 6 05:07:43 localhost systemd[1]: session-67.scope: Consumed 1.754s CPU time. Dec 6 05:07:43 localhost systemd-logind[766]: Session 67 logged out. Waiting for processes to exit. Dec 6 05:07:43 localhost systemd-logind[766]: Removed session 67. 
Dec 6 05:07:43 localhost ceph-mgr[288591]: ms_deliver_dispatch: unhandled message 0x56140ed13600 mon_map magic: 0 from mon.0 v2:172.18.0.105:3300/0 Dec 6 05:07:43 localhost ceph-mon[298582]: mon.np0005548789@-1(probing) e12 my rank is now 3 (was -1) Dec 6 05:07:43 localhost ceph-mon[298582]: log_channel(cluster) log [INF] : mon.np0005548789 calling monitor election Dec 6 05:07:43 localhost ceph-mon[298582]: paxos.3).electionLogic(0) init, first boot, initializing epoch at 1 Dec 6 05:07:43 localhost ceph-mon[298582]: mon.np0005548789@3(electing) e12 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 6 05:07:45 localhost nova_compute[282193]: 2025-12-06 10:07:45.186 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:07:45 localhost nova_compute[282193]: 2025-12-06 10:07:45.189 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:07:45 localhost nova_compute[282193]: 2025-12-06 10:07:45.189 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 6 05:07:45 localhost nova_compute[282193]: 2025-12-06 10:07:45.189 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:07:45 localhost nova_compute[282193]: 2025-12-06 10:07:45.235 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:07:45 localhost nova_compute[282193]: 2025-12-06 10:07:45.236 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:07:46 localhost 
openstack_network_exporter[243110]: ERROR 10:07:46 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 6 05:07:46 localhost openstack_network_exporter[243110]: ERROR 10:07:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:07:46 localhost openstack_network_exporter[243110]: ERROR 10:07:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:07:46 localhost openstack_network_exporter[243110]: ERROR 10:07:46 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 6 05:07:46 localhost openstack_network_exporter[243110]: Dec 6 05:07:46 localhost openstack_network_exporter[243110]: ERROR 10:07:46 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 6 05:07:46 localhost openstack_network_exporter[243110]: Dec 6 05:07:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:07:47.297 160509 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 05:07:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:07:47.298 160509 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 05:07:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:07:47.299 160509 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 05:07:48 localhost ceph-mds[287313]: 
mds.beacon.mds.np0005548789.vxwwsq missed beacon ack from the monitors Dec 6 05:07:48 localhost ceph-mon[298582]: mon.np0005548789@3(electing) e12 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 6 05:07:48 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code} Dec 6 05:07:48 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout} Dec 6 05:07:48 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 6 05:07:48 localhost ceph-mon[298582]: mgrc update_daemon_metadata mon.np0005548789 metadata {addrs=[v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable),ceph_version_short=18.2.1-361.el9cp,compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=np0005548789.localdomain,container_image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest,cpu=AMD EPYC-Rome Processor,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=rhel,distro_description=Red Hat Enterprise Linux 9.7 (Plow),distro_version=9.7,hostname=np0005548789.localdomain,kernel_description=#1 SMP PREEMPT_DYNAMIC Wed Apr 12 10:45:03 EDT 2023,kernel_version=5.14.0-284.11.1.el9_2.x86_64,mem_swap_kb=1048572,mem_total_kb=16116612,os=Linux} Dec 6 05:07:48 localhost ceph-mon[298582]: mon.np0005548787 calling monitor election Dec 6 
05:07:48 localhost ceph-mon[298582]: mon.np0005548788 calling monitor election Dec 6 05:07:48 localhost ceph-mon[298582]: mon.np0005548790 calling monitor election Dec 6 05:07:48 localhost ceph-mon[298582]: mon.np0005548789 calling monitor election Dec 6 05:07:48 localhost ceph-mon[298582]: mon.np0005548787 is new leader, mons np0005548787,np0005548790,np0005548788,np0005548789 in quorum (ranks 0,1,2,3) Dec 6 05:07:48 localhost ceph-mon[298582]: Health detail: HEALTH_WARN 1 stray daemon(s) not managed by cephadm; 1 stray host(s) with 1 daemon(s) not managed by cephadm Dec 6 05:07:48 localhost ceph-mon[298582]: [WRN] CEPHADM_STRAY_DAEMON: 1 stray daemon(s) not managed by cephadm Dec 6 05:07:48 localhost ceph-mon[298582]: stray daemon mgr.np0005548785.vhqlsq on host np0005548785.localdomain not managed by cephadm Dec 6 05:07:48 localhost ceph-mon[298582]: [WRN] CEPHADM_STRAY_HOST: 1 stray host(s) with 1 daemon(s) not managed by cephadm Dec 6 05:07:48 localhost ceph-mon[298582]: stray host np0005548785.localdomain has 1 stray daemons: ['mgr.np0005548785.vhqlsq'] Dec 6 05:07:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999. 
Dec 6 05:07:48 localhost podman[299595]: 2025-12-06 10:07:48.923936718 +0000 UTC m=+0.084450876 container health_status 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2) Dec 6 05:07:48 localhost podman[299595]: 2025-12-06 10:07:48.932052283 +0000 UTC 
m=+0.092566471 container exec_died 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:07:48 localhost systemd[1]: 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.service: Deactivated successfully. 
Dec 6 05:07:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941. Dec 6 05:07:49 localhost podman[299614]: 2025-12-06 10:07:49.921547329 +0000 UTC m=+0.083548559 container health_status b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Dec 6 05:07:49 localhost systemd[1]: tmp-crun.E1j6X4.mount: Deactivated successfully. 
Dec 6 05:07:49 localhost podman[299614]: 2025-12-06 10:07:49.953611666 +0000 UTC m=+0.115612906 container exec_died b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 6 05:07:49 localhost systemd[1]: b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941.service: Deactivated successfully. 
Dec 6 05:07:50 localhost nova_compute[282193]: 2025-12-06 10:07:50.236 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:07:50 localhost nova_compute[282193]: 2025-12-06 10:07:50.238 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:07:50 localhost nova_compute[282193]: 2025-12-06 10:07:50.238 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 6 05:07:50 localhost nova_compute[282193]: 2025-12-06 10:07:50.238 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:07:50 localhost nova_compute[282193]: 2025-12-06 10:07:50.275 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:07:50 localhost nova_compute[282193]: 2025-12-06 10:07:50.276 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:07:53 localhost ceph-mon[298582]: mon.np0005548789@3(peon).osd e90 _set_new_cache_sizes cache_size:1019699809 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:07:53 localhost systemd[1]: Stopping User Manager for UID 1003... Dec 6 05:07:53 localhost systemd[296743]: Activating special unit Exit the Session... Dec 6 05:07:53 localhost systemd[296743]: Stopped target Main User Target. Dec 6 05:07:53 localhost systemd[296743]: Stopped target Basic System. Dec 6 05:07:53 localhost systemd[296743]: Stopped target Paths. Dec 6 05:07:53 localhost systemd[296743]: Stopped target Sockets. 
Dec 6 05:07:53 localhost systemd[296743]: Stopped target Timers. Dec 6 05:07:53 localhost systemd[296743]: Stopped Mark boot as successful after the user session has run 2 minutes. Dec 6 05:07:53 localhost systemd[296743]: Stopped Daily Cleanup of User's Temporary Directories. Dec 6 05:07:53 localhost systemd[296743]: Closed D-Bus User Message Bus Socket. Dec 6 05:07:53 localhost systemd[296743]: Stopped Create User's Volatile Files and Directories. Dec 6 05:07:53 localhost systemd[296743]: Removed slice User Application Slice. Dec 6 05:07:53 localhost systemd[296743]: Reached target Shutdown. Dec 6 05:07:53 localhost systemd[296743]: Finished Exit the Session. Dec 6 05:07:53 localhost systemd[296743]: Reached target Exit the Session. Dec 6 05:07:53 localhost systemd[1]: user@1003.service: Deactivated successfully. Dec 6 05:07:53 localhost systemd[1]: Stopped User Manager for UID 1003. Dec 6 05:07:53 localhost systemd[1]: Stopping User Runtime Directory /run/user/1003... Dec 6 05:07:53 localhost systemd[1]: run-user-1003.mount: Deactivated successfully. Dec 6 05:07:53 localhost systemd[1]: user-runtime-dir@1003.service: Deactivated successfully. Dec 6 05:07:53 localhost systemd[1]: Stopped User Runtime Directory /run/user/1003. Dec 6 05:07:53 localhost systemd[1]: Removed slice User Slice of UID 1003. Dec 6 05:07:53 localhost systemd[1]: user-1003.slice: Consumed 2.288s CPU time. 
Dec 6 05:07:53 localhost podman[241090]: time="2025-12-06T10:07:53Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 6 05:07:53 localhost podman[241090]: @ - - [06/Dec/2025:10:07:53 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156104 "" "Go-http-client/1.1" Dec 6 05:07:53 localhost podman[241090]: @ - - [06/Dec/2025:10:07:53 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19220 "" "Go-http-client/1.1" Dec 6 05:07:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e. Dec 6 05:07:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094. Dec 6 05:07:54 localhost systemd[1]: tmp-crun.rzP3aA.mount: Deactivated successfully. Dec 6 05:07:54 localhost podman[299638]: 2025-12-06 10:07:54.927718992 +0000 UTC m=+0.087527559 container health_status 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, managed_by=edpm_ansible, name=ubi9-minimal, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': 
['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git, io.openshift.tags=minimal rhel9, config_id=edpm) Dec 6 05:07:54 localhost podman[299638]: 2025-12-06 10:07:54.964185262 +0000 UTC m=+0.123993759 container exec_died 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, version=9.6, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, config_id=edpm, vcs-type=git, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers) Dec 6 05:07:54 localhost podman[299639]: 2025-12-06 10:07:54.973943966 +0000 UTC m=+0.129946379 container health_status bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': 
['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS) Dec 6 05:07:54 localhost systemd[1]: 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e.service: Deactivated successfully. Dec 6 05:07:55 localhost podman[299639]: 2025-12-06 10:07:55.008943951 +0000 UTC m=+0.164946374 container exec_died bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': 
['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2) Dec 6 05:07:55 localhost systemd[1]: bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094.service: Deactivated successfully. Dec 6 05:07:55 localhost nova_compute[282193]: 2025-12-06 10:07:55.277 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:07:55 localhost nova_compute[282193]: 2025-12-06 10:07:55.279 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:07:55 localhost nova_compute[282193]: 2025-12-06 10:07:55.279 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 6 05:07:55 localhost nova_compute[282193]: 2025-12-06 10:07:55.279 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:07:55 localhost nova_compute[282193]: 2025-12-06 10:07:55.325 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:07:55 
localhost nova_compute[282193]: 2025-12-06 10:07:55.326 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:07:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6. Dec 6 05:07:57 localhost podman[299677]: 2025-12-06 10:07:57.932041814 +0000 UTC m=+0.089300103 container health_status 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', 
'/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd) Dec 6 05:07:57 localhost podman[299677]: 2025-12-06 10:07:57.944595692 +0000 UTC m=+0.101853971 container exec_died 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes 
Operator team, org.label-schema.build-date=20251125) Dec 6 05:07:57 localhost systemd[1]: 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6.service: Deactivated successfully. Dec 6 05:07:58 localhost ceph-mon[298582]: mon.np0005548789@3(peon).osd e90 _set_new_cache_sizes cache_size:1020047688 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:08:00 localhost nova_compute[282193]: 2025-12-06 10:08:00.327 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:08:00 localhost nova_compute[282193]: 2025-12-06 10:08:00.329 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:08:00 localhost nova_compute[282193]: 2025-12-06 10:08:00.329 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 6 05:08:00 localhost nova_compute[282193]: 2025-12-06 10:08:00.329 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:08:00 localhost nova_compute[282193]: 2025-12-06 10:08:00.349 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:08:00 localhost nova_compute[282193]: 2025-12-06 10:08:00.350 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:08:03 localhost ceph-mon[298582]: mon.np0005548789@3(peon).osd e90 _set_new_cache_sizes cache_size:1020054592 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:08:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538. Dec 6 05:08:04 localhost podman[299696]: 2025-12-06 10:08:04.933354327 +0000 UTC m=+0.090529160 container health_status d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Dec 6 05:08:04 localhost podman[299696]: 2025-12-06 10:08:04.969158187 +0000 UTC m=+0.126332940 container exec_died d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, 
config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Dec 6 05:08:04 localhost systemd[1]: d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538.service: Deactivated successfully. 
Dec 6 05:08:05 localhost nova_compute[282193]: 2025-12-06 10:08:05.350 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4996-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:08:05 localhost nova_compute[282193]: 2025-12-06 10:08:05.352 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:08:05 localhost nova_compute[282193]: 2025-12-06 10:08:05.353 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 6 05:08:05 localhost nova_compute[282193]: 2025-12-06 10:08:05.353 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:08:05 localhost nova_compute[282193]: 2025-12-06 10:08:05.390 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:08:05 localhost nova_compute[282193]: 2025-12-06 10:08:05.390 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:08:05 localhost sshd[299720]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:08:06 localhost ceph-mon[298582]: mon.np0005548789@3(peon).osd e91 e91: 6 total, 6 up, 6 in Dec 6 05:08:06 localhost sshd[299722]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:08:06 localhost systemd-logind[766]: New session 69 of user ceph-admin. Dec 6 05:08:06 localhost systemd[1]: Created slice User Slice of UID 1002. Dec 6 05:08:06 localhost systemd[1]: Starting User Runtime Directory /run/user/1002... Dec 6 05:08:06 localhost systemd[1]: Finished User Runtime Directory /run/user/1002. Dec 6 05:08:06 localhost systemd[1]: Starting User Manager for UID 1002... 
Dec 6 05:08:07 localhost systemd[299726]: Queued start job for default target Main User Target. Dec 6 05:08:07 localhost systemd[299726]: Created slice User Application Slice. Dec 6 05:08:07 localhost systemd[299726]: Started Mark boot as successful after the user session has run 2 minutes. Dec 6 05:08:07 localhost systemd[299726]: Started Daily Cleanup of User's Temporary Directories. Dec 6 05:08:07 localhost systemd[299726]: Reached target Paths. Dec 6 05:08:07 localhost systemd[299726]: Reached target Timers. Dec 6 05:08:07 localhost systemd[299726]: Starting D-Bus User Message Bus Socket... Dec 6 05:08:07 localhost systemd[299726]: Starting Create User's Volatile Files and Directories... Dec 6 05:08:07 localhost systemd[299726]: Listening on D-Bus User Message Bus Socket. Dec 6 05:08:07 localhost systemd[299726]: Finished Create User's Volatile Files and Directories. Dec 6 05:08:07 localhost systemd[299726]: Reached target Sockets. Dec 6 05:08:07 localhost systemd[299726]: Reached target Basic System. Dec 6 05:08:07 localhost systemd[1]: Started User Manager for UID 1002. Dec 6 05:08:07 localhost systemd[299726]: Reached target Main User Target. Dec 6 05:08:07 localhost systemd[299726]: Startup finished in 158ms. Dec 6 05:08:07 localhost systemd[1]: Started Session 69 of User ceph-admin. Dec 6 05:08:07 localhost ceph-mon[298582]: from='client.? 172.18.0.200:0/1889957737' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Dec 6 05:08:07 localhost ceph-mon[298582]: from='client.? ' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Dec 6 05:08:07 localhost ceph-mon[298582]: Activating manager daemon np0005548790.kvkfyr Dec 6 05:08:07 localhost ceph-mon[298582]: from='client.? 
' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished Dec 6 05:08:07 localhost ceph-mon[298582]: Manager daemon np0005548790.kvkfyr is now available Dec 6 05:08:07 localhost ceph-mon[298582]: from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005548786.localdomain.devices.0"} : dispatch Dec 6 05:08:07 localhost ceph-mon[298582]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005548786.localdomain.devices.0"} : dispatch Dec 6 05:08:07 localhost ceph-mon[298582]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005548786.localdomain.devices.0"}]': finished Dec 6 05:08:07 localhost ceph-mon[298582]: from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005548786.localdomain.devices.0"} : dispatch Dec 6 05:08:07 localhost ceph-mon[298582]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005548786.localdomain.devices.0"} : dispatch Dec 6 05:08:07 localhost ceph-mon[298582]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005548786.localdomain.devices.0"}]': finished Dec 6 05:08:07 localhost ceph-mon[298582]: from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005548790.kvkfyr/mirror_snapshot_schedule"} : dispatch Dec 6 05:08:07 localhost ceph-mon[298582]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005548790.kvkfyr/mirror_snapshot_schedule"} : dispatch Dec 6 05:08:07 localhost ceph-mon[298582]: from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix":"config 
rm","who":"mgr","name":"mgr/rbd_support/np0005548790.kvkfyr/trash_purge_schedule"} : dispatch
Dec 6 05:08:07 localhost ceph-mon[298582]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005548790.kvkfyr/trash_purge_schedule"} : dispatch
Dec 6 05:08:08 localhost ceph-mon[298582]: mon.np0005548789@3(peon).osd e91 _set_new_cache_sizes cache_size:1020054729 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 6 05:08:08 localhost ceph-mon[298582]: removing stray HostCache host record np0005548786.localdomain.devices.0
Dec 6 05:08:08 localhost podman[299851]: 2025-12-06 10:08:08.234800743 +0000 UTC m=+0.102218351 container exec ecefb99ff092b207f760d60e8fd97152c624df0e1016549024435ae641ee2e06 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548789, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, GIT_BRANCH=main, GIT_CLEAN=True, architecture=x86_64, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, release=1763362218, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Dec 6 05:08:08 localhost podman[299851]: 2025-12-06 10:08:08.366835313 +0000 UTC m=+0.234252901 container exec_died ecefb99ff092b207f760d60e8fd97152c624df0e1016549024435ae641ee2e06 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548789, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, ceph=True, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, io.buildah.version=1.41.4, vendor=Red Hat, Inc., RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7)
Dec 6 05:08:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.
Dec 6 05:08:09 localhost ceph-mon[298582]: [06/Dec/2025:10:08:07] ENGINE Bus STARTING
Dec 6 05:08:09 localhost ceph-mon[298582]: [06/Dec/2025:10:08:07] ENGINE Serving on https://172.18.0.108:7150
Dec 6 05:08:09 localhost ceph-mon[298582]: [06/Dec/2025:10:08:07] ENGINE Client ('172.18.0.108', 58740) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 6 05:08:09 localhost ceph-mon[298582]: [06/Dec/2025:10:08:07] ENGINE Serving on http://172.18.0.108:8765
Dec 6 05:08:09 localhost ceph-mon[298582]: [06/Dec/2025:10:08:07] ENGINE Bus STARTED
Dec 6 05:08:09 localhost ceph-mon[298582]: Health check cleared: CEPHADM_STRAY_DAEMON (was: 1 stray daemon(s) not managed by cephadm)
Dec 6 05:08:09 localhost ceph-mon[298582]: Health check cleared: CEPHADM_STRAY_HOST (was: 1 stray host(s) with 1 daemon(s) not managed by cephadm)
Dec 6 05:08:09 localhost ceph-mon[298582]: Cluster is now healthy
Dec 6 05:08:09 localhost ceph-mon[298582]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr'
Dec 6 05:08:09 localhost ceph-mon[298582]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr'
Dec 6 05:08:09 localhost ceph-mon[298582]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr'
Dec 6 05:08:09 localhost ceph-mon[298582]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr'
Dec 6 05:08:09 localhost ceph-mon[298582]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr'
Dec 6 05:08:09 localhost ceph-mon[298582]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr'
Dec 6 05:08:09 localhost ceph-mon[298582]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr'
Dec 6 05:08:09 localhost ceph-mon[298582]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr'
Dec 6 05:08:09 localhost podman[299989]: 2025-12-06 10:08:09.244037536 +0000 UTC m=+0.091801418 container health_status 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 6 05:08:09 localhost podman[299989]: 2025-12-06 10:08:09.324386038 +0000 UTC m=+0.172149910 container exec_died 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 6 05:08:09 localhost systemd[1]: 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.service: Deactivated successfully.
Dec 6 05:08:10 localhost ceph-mon[298582]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr'
Dec 6 05:08:10 localhost ceph-mon[298582]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr'
Dec 6 05:08:10 localhost ceph-mon[298582]: from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd/host:np0005548787", "name": "osd_memory_target"} : dispatch
Dec 6 05:08:10 localhost ceph-mon[298582]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd/host:np0005548787", "name": "osd_memory_target"} : dispatch
Dec 6 05:08:10 localhost nova_compute[282193]: 2025-12-06 10:08:10.430 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 6 05:08:10 localhost nova_compute[282193]: 2025-12-06 10:08:10.432 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 6 05:08:10 localhost nova_compute[282193]: 2025-12-06 10:08:10.432 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5041 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Dec 6 05:08:10 localhost nova_compute[282193]: 2025-12-06 10:08:10.432 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec 6 05:08:10 localhost nova_compute[282193]: 2025-12-06 10:08:10.439 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 6 05:08:10 localhost nova_compute[282193]: 2025-12-06 10:08:10.439 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec 6 05:08:10 localhost nova_compute[282193]: 2025-12-06 10:08:10.442 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 6 05:08:11 localhost ceph-mon[298582]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr'
Dec 6 05:08:11 localhost ceph-mon[298582]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr'
Dec 6 05:08:11 localhost ceph-mon[298582]: from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Dec 6 05:08:11 localhost ceph-mon[298582]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Dec 6 05:08:11 localhost ceph-mon[298582]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr'
Dec 6 05:08:11 localhost ceph-mon[298582]: from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Dec 6 05:08:11 localhost ceph-mon[298582]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Dec 6 05:08:11 localhost ceph-mon[298582]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr'
Dec 6 05:08:11 localhost ceph-mon[298582]: from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Dec 6 05:08:11 localhost ceph-mon[298582]: Adjusting osd_memory_target on np0005548790.localdomain to 836.6M
Dec 6 05:08:11 localhost ceph-mon[298582]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Dec 6 05:08:11 localhost ceph-mon[298582]: from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Dec 6 05:08:11 localhost ceph-mon[298582]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Dec 6 05:08:11 localhost ceph-mon[298582]: Unable to set osd_memory_target on np0005548790.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 6 05:08:11 localhost ceph-mon[298582]: Adjusting osd_memory_target on np0005548788.localdomain to 836.6M
Dec 6 05:08:11 localhost ceph-mon[298582]: Unable to set osd_memory_target on np0005548788.localdomain to 877243801: error parsing value: Value '877243801' is below minimum 939524096
Dec 6 05:08:11 localhost ceph-mon[298582]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr'
Dec 6 05:08:11 localhost ceph-mon[298582]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr'
Dec 6 05:08:11 localhost ceph-mon[298582]: from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Dec 6 05:08:11 localhost ceph-mon[298582]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Dec 6 05:08:11 localhost ceph-mon[298582]: from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Dec 6 05:08:11 localhost ceph-mon[298582]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Dec 6 05:08:11 localhost ceph-mon[298582]: Adjusting osd_memory_target on np0005548789.localdomain to 836.6M
Dec 6 05:08:11 localhost ceph-mon[298582]: Unable to set osd_memory_target on np0005548789.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 6 05:08:11 localhost ceph-mon[298582]: from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 6 05:08:11 localhost ceph-mon[298582]: Updating np0005548787.localdomain:/etc/ceph/ceph.conf
Dec 6 05:08:11 localhost ceph-mon[298582]: Updating np0005548788.localdomain:/etc/ceph/ceph.conf
Dec 6 05:08:11 localhost ceph-mon[298582]: Updating np0005548789.localdomain:/etc/ceph/ceph.conf
Dec 6 05:08:11 localhost ceph-mon[298582]: Updating np0005548790.localdomain:/etc/ceph/ceph.conf
Dec 6 05:08:11 localhost ceph-mon[298582]: Saving service mon spec with placement label:mon
Dec 6 05:08:11 localhost ceph-mon[298582]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr'
Dec 6 05:08:11 localhost ceph-mon[298582]: Updating np0005548787.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 6 05:08:12 localhost ceph-mon[298582]: Updating np0005548789.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 6 05:08:12 localhost ceph-mon[298582]: Updating np0005548788.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 6 05:08:12 localhost ceph-mon[298582]: Updating np0005548790.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 6 05:08:12 localhost ceph-mon[298582]: Updating np0005548787.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 6 05:08:12 localhost ceph-mon[298582]: Updating np0005548789.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 6 05:08:12 localhost ceph-mon[298582]: Updating np0005548788.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 6 05:08:12 localhost ceph-mon[298582]: Updating np0005548790.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 6 05:08:13 localhost ceph-mon[298582]: mon.np0005548789@3(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 6 05:08:14 localhost ceph-mon[298582]: Updating np0005548787.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring
Dec 6 05:08:14 localhost ceph-mon[298582]: Updating np0005548789.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring
Dec 6 05:08:14 localhost ceph-mon[298582]: Updating np0005548788.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring
Dec 6 05:08:14 localhost ceph-mon[298582]: Updating np0005548790.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring
Dec 6 05:08:14 localhost ceph-mon[298582]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr'
Dec 6 05:08:14 localhost ceph-mon[298582]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr'
Dec 6 05:08:14 localhost ceph-mon[298582]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr'
Dec 6 05:08:14 localhost ceph-mon[298582]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr'
Dec 6 05:08:14 localhost ceph-mon[298582]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr'
Dec 6 05:08:14 localhost ceph-mon[298582]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr'
Dec 6 05:08:14 localhost ceph-mon[298582]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr'
Dec 6 05:08:14 localhost ceph-mon[298582]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr'
Dec 6 05:08:14 localhost ceph-mon[298582]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr'
Dec 6 05:08:14 localhost ceph-mon[298582]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr'
Dec 6 05:08:14 localhost ceph-mon[298582]: from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 6 05:08:15 localhost ceph-mon[298582]: Reconfiguring mon.np0005548787 (monmap changed)...
Dec 6 05:08:15 localhost ceph-mon[298582]: Reconfiguring daemon mon.np0005548787 on np0005548787.localdomain
Dec 6 05:08:15 localhost ceph-mon[298582]: Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)
Dec 6 05:08:15 localhost ceph-mon[298582]: Health check failed: 1 stray host(s) with 1 daemon(s) not managed by cephadm (CEPHADM_STRAY_HOST)
Dec 6 05:08:15 localhost ceph-mon[298582]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr'
Dec 6 05:08:15 localhost ceph-mon[298582]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr'
Dec 6 05:08:15 localhost ceph-mon[298582]: from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548787.umwsra", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 6 05:08:15 localhost ceph-mon[298582]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548787.umwsra", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 6 05:08:15 localhost nova_compute[282193]: 2025-12-06 10:08:15.480 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 6 05:08:15 localhost nova_compute[282193]: 2025-12-06 10:08:15.482 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 6 05:08:15 localhost nova_compute[282193]: 2025-12-06 10:08:15.482 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5039 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Dec 6 05:08:15 localhost nova_compute[282193]: 2025-12-06 10:08:15.482 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec 6 05:08:15 localhost nova_compute[282193]: 2025-12-06 10:08:15.487 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 6 05:08:15 localhost nova_compute[282193]: 2025-12-06 10:08:15.487 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec 6 05:08:16 localhost ceph-mon[298582]: Reconfiguring mgr.np0005548787.umwsra (monmap changed)...
Dec 6 05:08:16 localhost ceph-mon[298582]: Reconfiguring daemon mgr.np0005548787.umwsra on np0005548787.localdomain
Dec 6 05:08:16 localhost ceph-mon[298582]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr'
Dec 6 05:08:16 localhost ceph-mon[298582]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr'
Dec 6 05:08:16 localhost ceph-mon[298582]: from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548787.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 6 05:08:16 localhost ceph-mon[298582]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548787.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 6 05:08:16 localhost openstack_network_exporter[243110]: ERROR 10:08:16 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 6 05:08:16 localhost openstack_network_exporter[243110]: ERROR 10:08:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 6 05:08:16 localhost openstack_network_exporter[243110]: ERROR 10:08:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 6 05:08:16 localhost openstack_network_exporter[243110]: ERROR 10:08:16 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 6 05:08:16 localhost openstack_network_exporter[243110]:
Dec 6 05:08:16 localhost openstack_network_exporter[243110]: ERROR 10:08:16 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 6 05:08:16 localhost openstack_network_exporter[243110]:
Dec 6 05:08:17 localhost ceph-mon[298582]: Reconfiguring crash.np0005548787 (monmap changed)...
Dec 6 05:08:17 localhost ceph-mon[298582]: Reconfiguring daemon crash.np0005548787 on np0005548787.localdomain
Dec 6 05:08:17 localhost ceph-mon[298582]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr'
Dec 6 05:08:17 localhost ceph-mon[298582]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr'
Dec 6 05:08:17 localhost ceph-mon[298582]: from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548788.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 6 05:08:17 localhost ceph-mon[298582]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548788.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 6 05:08:17 localhost ceph-mon[298582]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr'
Dec 6 05:08:17 localhost ceph-mon[298582]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr'
Dec 6 05:08:17 localhost ceph-mon[298582]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr'
Dec 6 05:08:17 localhost ceph-mon[298582]: from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Dec 6 05:08:18 localhost ceph-mon[298582]: mon.np0005548789@3(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 6 05:08:18 localhost ceph-mon[298582]: Reconfiguring crash.np0005548788 (monmap changed)...
Dec 6 05:08:18 localhost ceph-mon[298582]: Reconfiguring daemon crash.np0005548788 on np0005548788.localdomain
Dec 6 05:08:18 localhost ceph-mon[298582]: Reconfiguring osd.2 (monmap changed)...
Dec 6 05:08:18 localhost ceph-mon[298582]: Reconfiguring daemon osd.2 on np0005548788.localdomain
Dec 6 05:08:18 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command({"prefix": "mgr stat", "format": "json"} v 0)
Dec 6 05:08:18 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 172.18.0.200:0/327302380' entity='client.admin' cmd={"prefix": "mgr stat", "format": "json"} : dispatch
Dec 6 05:08:19 localhost ceph-mon[298582]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr'
Dec 6 05:08:19 localhost ceph-mon[298582]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr'
Dec 6 05:08:19 localhost ceph-mon[298582]: Reconfiguring osd.5 (monmap changed)...
Dec 6 05:08:19 localhost ceph-mon[298582]: from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Dec 6 05:08:19 localhost ceph-mon[298582]: Reconfiguring daemon osd.5 on np0005548788.localdomain
Dec 6 05:08:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.
Dec 6 05:08:19 localhost podman[300776]: 2025-12-06 10:08:19.922827012 +0000 UTC m=+0.082687643 container health_status 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 6 05:08:19 localhost podman[300776]: 2025-12-06 10:08:19.956133686 +0000 UTC m=+0.115994257 container exec_died 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 6 05:08:19 localhost systemd[1]: 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.service: Deactivated successfully.
Dec 6 05:08:20 localhost nova_compute[282193]: 2025-12-06 10:08:20.181 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 6 05:08:20 localhost nova_compute[282193]: 2025-12-06 10:08:20.182 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec 6 05:08:20 localhost nova_compute[282193]: 2025-12-06 10:08:20.214 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec 6 05:08:20 localhost ceph-mon[298582]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr'
Dec 6 05:08:20 localhost ceph-mon[298582]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr'
Dec 6 05:08:20 localhost ceph-mon[298582]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr'
Dec 6 05:08:20 localhost ceph-mon[298582]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr'
Dec 6 05:08:20 localhost ceph-mon[298582]: Reconfiguring mds.mds.np0005548788.erzujf (monmap changed)...
Dec 6 05:08:20 localhost ceph-mon[298582]: from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005548788.erzujf", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 6 05:08:20 localhost ceph-mon[298582]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005548788.erzujf", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 6 05:08:20 localhost ceph-mon[298582]: Reconfiguring daemon mds.mds.np0005548788.erzujf on np0005548788.localdomain
Dec 6 05:08:20 localhost ceph-mon[298582]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr'
Dec 6 05:08:20 localhost ceph-mon[298582]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr'
Dec 6 05:08:20 localhost ceph-mon[298582]: from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548788.yvwbqq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 6 05:08:20 localhost ceph-mon[298582]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548788.yvwbqq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 6 05:08:20 localhost ceph-mon[298582]: mon.np0005548789@3(peon).osd e92 e92: 6 total, 6 up, 6 in
Dec 6 05:08:20 localhost ceph-mgr[288591]: mgr handle_mgr_map Activating!
Dec 6 05:08:20 localhost ceph-mgr[288591]: mgr handle_mgr_map I am now activating
Dec 6 05:08:20 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command({"prefix": "mon metadata", "id": "np0005548787"} v 0)
Dec 6 05:08:20 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "mon metadata", "id": "np0005548787"} : dispatch
Dec 6 05:08:20 localhost nova_compute[282193]: 2025-12-06 10:08:20.491 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 6 05:08:20 localhost nova_compute[282193]: 2025-12-06 10:08:20.493 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 6 05:08:20 localhost nova_compute[282193]: 2025-12-06 10:08:20.493 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Dec 6 05:08:20 localhost nova_compute[282193]: 2025-12-06 10:08:20.493 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec 6 05:08:20 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command({"prefix": "mon metadata", "id": "np0005548788"} v 0)
Dec 6 05:08:20 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "mon metadata", "id": "np0005548788"} : dispatch
Dec 6 05:08:20 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command({"prefix": "mon metadata", "id": "np0005548789"} v 0)
Dec 6 05:08:20 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "mon metadata", "id": "np0005548789"} : dispatch
Dec 6 05:08:20 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command({"prefix": "mon metadata", "id": "np0005548790"} v 0)
Dec 6 05:08:20 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "mon metadata", "id": "np0005548790"} : dispatch
Dec 6 05:08:20 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command({"prefix": "mds metadata", "who": "mds.np0005548789.vxwwsq"} v 0)
Dec 6 05:08:20 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "mds metadata", "who": "mds.np0005548789.vxwwsq"} : dispatch
Dec 6 05:08:20 localhost ceph-mon[298582]: mon.np0005548789@3(peon).mds e16 all = 0
Dec 6 05:08:20 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command({"prefix": "mds metadata", "who": "mds.np0005548788.erzujf"} v 0)
Dec 6 05:08:20 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "mds metadata", "who": "mds.np0005548788.erzujf"} : dispatch
Dec 6 05:08:20 localhost ceph-mon[298582]: mon.np0005548789@3(peon).mds e16 all = 0
Dec 6 05:08:20 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command({"prefix": "mds metadata", "who": "mds.np0005548790.vhcezv"} v 0)
Dec 6 05:08:20 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "mds metadata", "who": "mds.np0005548790.vhcezv"} : dispatch
Dec 6 05:08:20 localhost ceph-mon[298582]: mon.np0005548789@3(peon).mds e16 all = 0
Dec 6 05:08:20 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005548789.mzhmje", "id": "np0005548789.mzhmje"} v 0)
Dec 6 05:08:20 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "mgr metadata", "who": "np0005548789.mzhmje", "id": "np0005548789.mzhmje"} : dispatch
Dec 6 05:08:20 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005548785.vhqlsq", "id": "np0005548785.vhqlsq"} v 0)
Dec 6 05:08:20 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "mgr metadata", "who": "np0005548785.vhqlsq", "id": "np0005548785.vhqlsq"} : dispatch
Dec 6 05:08:20 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005548788.yvwbqq", "id": "np0005548788.yvwbqq"} v 0)
Dec 6 05:08:20 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "mgr metadata", "who": "np0005548788.yvwbqq", "id": "np0005548788.yvwbqq"} : dispatch
Dec 6 05:08:20 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005548787.umwsra", "id": "np0005548787.umwsra"} v 0)
Dec 6 05:08:20 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "mgr metadata", "who": "np0005548787.umwsra", "id": "np0005548787.umwsra"} : dispatch
Dec 6 05:08:20 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 6 05:08:20 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec
6 05:08:20 localhost nova_compute[282193]: 2025-12-06 10:08:20.526 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:08:20 localhost nova_compute[282193]: 2025-12-06 10:08:20.527 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:08:20 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) Dec 6 05:08:20 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "osd metadata", "id": 1} : dispatch Dec 6 05:08:20 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) Dec 6 05:08:20 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "osd metadata", "id": 2} : dispatch Dec 6 05:08:20 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command({"prefix": "osd metadata", "id": 3} v 0) Dec 6 05:08:20 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "osd metadata", "id": 3} : dispatch Dec 6 05:08:20 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command({"prefix": "osd metadata", "id": 4} v 0) Dec 6 05:08:20 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "osd metadata", "id": 4} : dispatch Dec 6 05:08:20 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command({"prefix": "osd metadata", "id": 5} v 0) Dec 6 05:08:20 localhost ceph-mon[298582]: 
log_channel(audit) log [DBG] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "osd metadata", "id": 5} : dispatch Dec 6 05:08:20 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command({"prefix": "mds metadata"} v 0) Dec 6 05:08:20 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "mds metadata"} : dispatch Dec 6 05:08:20 localhost ceph-mon[298582]: mon.np0005548789@3(peon).mds e16 all = 1 Dec 6 05:08:20 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command({"prefix": "osd metadata"} v 0) Dec 6 05:08:20 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "osd metadata"} : dispatch Dec 6 05:08:20 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command({"prefix": "mon metadata"} v 0) Dec 6 05:08:20 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "mon metadata"} : dispatch Dec 6 05:08:20 localhost systemd[1]: session-69.scope: Deactivated successfully. Dec 6 05:08:20 localhost systemd[1]: session-69.scope: Consumed 6.077s CPU time. Dec 6 05:08:20 localhost systemd-logind[766]: Session 69 logged out. Waiting for processes to exit. Dec 6 05:08:20 localhost ceph-mgr[288591]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5) Dec 6 05:08:20 localhost ceph-mgr[288591]: mgr load Constructed class from module: balancer Dec 6 05:08:20 localhost ceph-mgr[288591]: [balancer INFO root] Starting Dec 6 05:08:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941. 
Dec 6 05:08:20 localhost ceph-mgr[288591]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5) Dec 6 05:08:20 localhost systemd-logind[766]: Removed session 69. Dec 6 05:08:20 localhost ceph-mgr[288591]: [balancer INFO root] Optimize plan auto_2025-12-06_10:08:20 Dec 6 05:08:20 localhost ceph-mgr[288591]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Dec 6 05:08:20 localhost ceph-mgr[288591]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later Dec 6 05:08:20 localhost ceph-mgr[288591]: mgr load Constructed class from module: cephadm Dec 6 05:08:20 localhost ceph-mgr[288591]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5) Dec 6 05:08:20 localhost ceph-mgr[288591]: mgr load Constructed class from module: crash Dec 6 05:08:20 localhost ceph-mgr[288591]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5) Dec 6 05:08:20 localhost ceph-mgr[288591]: mgr load Constructed class from module: devicehealth Dec 6 05:08:20 localhost ceph-mgr[288591]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5) Dec 6 05:08:20 localhost ceph-mgr[288591]: mgr load Constructed class from module: iostat Dec 6 05:08:20 localhost ceph-mgr[288591]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5) Dec 6 05:08:20 localhost ceph-mgr[288591]: mgr load Constructed class from module: nfs Dec 6 05:08:20 localhost ceph-mgr[288591]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5) Dec 6 05:08:20 localhost ceph-mgr[288591]: mgr load Constructed class from module: orchestrator Dec 6 05:08:20 localhost ceph-mgr[288591]: [devicehealth INFO root] Starting Dec 6 05:08:20 localhost ceph-mgr[288591]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5) Dec 6 05:08:20 localhost ceph-mgr[288591]: mgr load Constructed class from module: pg_autoscaler Dec 6 05:08:20 localhost ceph-mgr[288591]: [progress DEBUG root] setting log level based on 
debug_mgr: INFO (2/5) Dec 6 05:08:20 localhost ceph-mgr[288591]: mgr load Constructed class from module: progress Dec 6 05:08:20 localhost ceph-mgr[288591]: [progress INFO root] Loading... Dec 6 05:08:20 localhost ceph-mgr[288591]: [progress INFO root] Loaded [, , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , ] historic events Dec 6 05:08:20 localhost ceph-mgr[288591]: [progress INFO root] Loaded OSDMap, ready. Dec 6 05:08:20 localhost ceph-mgr[288591]: [pg_autoscaler INFO root] _maybe_adjust Dec 6 05:08:20 localhost ceph-mgr[288591]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5) Dec 6 05:08:20 localhost ceph-mgr[288591]: [rbd_support INFO root] recovery thread starting Dec 6 05:08:20 localhost ceph-mgr[288591]: [rbd_support INFO root] starting setup Dec 6 05:08:20 localhost ceph-mgr[288591]: mgr load Constructed class from module: rbd_support Dec 6 05:08:20 localhost ceph-mgr[288591]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5) Dec 6 05:08:20 localhost ceph-mgr[288591]: mgr load Constructed class from module: restful Dec 6 05:08:20 localhost ceph-mgr[288591]: [restful INFO root] server_addr: :: server_port: 8003 Dec 6 05:08:20 localhost ceph-mgr[288591]: [restful WARNING root] server not running: no certificate configured Dec 6 05:08:20 localhost ceph-mgr[288591]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5) Dec 6 05:08:20 localhost ceph-mgr[288591]: mgr load Constructed class from module: status Dec 6 05:08:20 localhost ceph-mgr[288591]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5) Dec 6 05:08:20 localhost ceph-mgr[288591]: mgr load Constructed class from module: telemetry Dec 6 05:08:20 localhost ceph-mgr[288591]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5) Dec 6 05:08:20 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command({"prefix":"config 
rm","who":"mgr","name":"mgr/rbd_support/np0005548789.mzhmje/mirror_snapshot_schedule"} v 0) Dec 6 05:08:20 localhost ceph-mon[298582]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005548789.mzhmje/mirror_snapshot_schedule"} : dispatch Dec 6 05:08:20 localhost ceph-mgr[288591]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Dec 6 05:08:20 localhost podman[300811]: 2025-12-06 10:08:20.660297753 +0000 UTC m=+0.089699475 container health_status b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 6 05:08:20 localhost ceph-mgr[288591]: [rbd_support INFO root] load_schedules: vms, start_after= Dec 6 05:08:20 localhost ceph-mgr[288591]: [rbd_support INFO root] load_schedules: volumes, start_after= Dec 6 05:08:20 localhost podman[300811]: 2025-12-06 10:08:20.672173911 +0000 UTC m=+0.101575633 container exec_died b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 
(image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Dec 6 05:08:20 localhost ceph-mgr[288591]: [rbd_support INFO root] load_schedules: images, start_after= Dec 6 05:08:20 localhost ceph-mgr[288591]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 6 05:08:20 localhost ceph-mgr[288591]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Dec 6 05:08:20 localhost ceph-mgr[288591]: mgr load Constructed class from module: volumes Dec 6 05:08:20 localhost ceph-mgr[288591]: client.0 error registering admin socket command: (17) File exists Dec 6 05:08:20 localhost ceph-mgr[288591]: client.0 error registering admin socket command: (17) File exists Dec 6 05:08:20 localhost ceph-mgr[288591]: client.0 error registering admin socket command: (17) File exists Dec 6 05:08:20 localhost ceph-mgr[288591]: client.0 error registering admin socket command: (17) File exists Dec 6 05:08:20 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548789-mzhmje[288579]: 2025-12-06T10:08:20.682+0000 7f03f12ef640 -1 client.0 error registering admin socket command: (17) File exists Dec 6 05:08:20 localhost ceph-mgr[288591]: client.0 error registering 
admin socket command: (17) File exists Dec 6 05:08:20 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548789-mzhmje[288579]: 2025-12-06T10:08:20.682+0000 7f03f12ef640 -1 client.0 error registering admin socket command: (17) File exists Dec 6 05:08:20 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548789-mzhmje[288579]: 2025-12-06T10:08:20.682+0000 7f03f12ef640 -1 client.0 error registering admin socket command: (17) File exists Dec 6 05:08:20 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548789-mzhmje[288579]: 2025-12-06T10:08:20.682+0000 7f03f12ef640 -1 client.0 error registering admin socket command: (17) File exists Dec 6 05:08:20 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548789-mzhmje[288579]: 2025-12-06T10:08:20.682+0000 7f03f12ef640 -1 client.0 error registering admin socket command: (17) File exists Dec 6 05:08:20 localhost systemd[1]: b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941.service: Deactivated successfully. 
Dec 6 05:08:20 localhost ceph-mgr[288591]: [rbd_support INFO root] load_schedules: backups, start_after= Dec 6 05:08:20 localhost ceph-mgr[288591]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting Dec 6 05:08:20 localhost ceph-mgr[288591]: [rbd_support INFO root] PerfHandler: starting Dec 6 05:08:20 localhost ceph-mgr[288591]: client.0 error registering admin socket command: (17) File exists Dec 6 05:08:20 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548789-mzhmje[288579]: 2025-12-06T10:08:20.692+0000 7f03eeaea640 -1 client.0 error registering admin socket command: (17) File exists Dec 6 05:08:20 localhost ceph-mgr[288591]: client.0 error registering admin socket command: (17) File exists Dec 6 05:08:20 localhost ceph-mgr[288591]: client.0 error registering admin socket command: (17) File exists Dec 6 05:08:20 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548789-mzhmje[288579]: 2025-12-06T10:08:20.692+0000 7f03eeaea640 -1 client.0 error registering admin socket command: (17) File exists Dec 6 05:08:20 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548789-mzhmje[288579]: 2025-12-06T10:08:20.692+0000 7f03eeaea640 -1 client.0 error registering admin socket command: (17) File exists Dec 6 05:08:20 localhost ceph-mgr[288591]: client.0 error registering admin socket command: (17) File exists Dec 6 05:08:20 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548789-mzhmje[288579]: 2025-12-06T10:08:20.692+0000 7f03eeaea640 -1 client.0 error registering admin socket command: (17) File exists Dec 6 05:08:20 localhost ceph-mgr[288591]: client.0 error registering admin socket command: (17) File exists Dec 6 05:08:20 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548789-mzhmje[288579]: 2025-12-06T10:08:20.692+0000 7f03eeaea640 -1 client.0 error registering admin socket command: (17) File exists Dec 6 05:08:20 localhost ceph-mgr[288591]: [rbd_support INFO root] load_task_task: vms, 
start_after= Dec 6 05:08:20 localhost ceph-mgr[288591]: [rbd_support INFO root] load_task_task: volumes, start_after= Dec 6 05:08:20 localhost ceph-mgr[288591]: [rbd_support INFO root] load_task_task: images, start_after= Dec 6 05:08:20 localhost ceph-mgr[288591]: [rbd_support INFO root] load_task_task: backups, start_after= Dec 6 05:08:20 localhost ceph-mgr[288591]: [rbd_support INFO root] TaskHandler: starting Dec 6 05:08:20 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005548789.mzhmje/trash_purge_schedule"} v 0) Dec 6 05:08:20 localhost ceph-mon[298582]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005548789.mzhmje/trash_purge_schedule"} : dispatch Dec 6 05:08:20 localhost ceph-mgr[288591]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Dec 6 05:08:20 localhost ceph-mgr[288591]: [rbd_support INFO root] load_schedules: vms, start_after= Dec 6 05:08:20 localhost ceph-mgr[288591]: [rbd_support INFO root] load_schedules: volumes, start_after= Dec 6 05:08:20 localhost ceph-mgr[288591]: [rbd_support INFO root] load_schedules: images, start_after= Dec 6 05:08:20 localhost ceph-mgr[288591]: [rbd_support INFO root] load_schedules: backups, start_after= Dec 6 05:08:20 localhost ceph-mgr[288591]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting Dec 6 05:08:20 localhost ceph-mgr[288591]: [rbd_support INFO root] setup complete Dec 6 05:08:20 localhost sshd[300957]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:08:20 localhost systemd-logind[766]: New session 71 of user ceph-admin. Dec 6 05:08:20 localhost systemd[1]: Started Session 71 of User ceph-admin. 
Dec 6 05:08:21 localhost nova_compute[282193]: 2025-12-06 10:08:21.181 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:08:21 localhost ceph-mon[298582]: from='client.? 172.18.0.200:0/2304971504' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Dec 6 05:08:21 localhost ceph-mon[298582]: from='client.? ' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Dec 6 05:08:21 localhost ceph-mon[298582]: Activating manager daemon np0005548789.mzhmje Dec 6 05:08:21 localhost ceph-mon[298582]: from='client.? ' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished Dec 6 05:08:21 localhost ceph-mon[298582]: Manager daemon np0005548789.mzhmje is now available Dec 6 05:08:21 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005548789.mzhmje/mirror_snapshot_schedule"} : dispatch Dec 6 05:08:21 localhost ceph-mon[298582]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005548789.mzhmje/mirror_snapshot_schedule"} : dispatch Dec 6 05:08:21 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005548789.mzhmje/trash_purge_schedule"} : dispatch Dec 6 05:08:21 localhost ceph-mon[298582]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005548789.mzhmje/trash_purge_schedule"} : dispatch Dec 6 05:08:21 localhost ceph-mgr[288591]: log_channel(cluster) log [DBG] : pgmap v3: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail Dec 6 05:08:21 localhost 
ceph-mgr[288591]: [cephadm INFO cherrypy.error] [06/Dec/2025:10:08:21] ENGINE Bus STARTING Dec 6 05:08:21 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : [06/Dec/2025:10:08:21] ENGINE Bus STARTING Dec 6 05:08:21 localhost podman[301071]: 2025-12-06 10:08:21.795005367 +0000 UTC m=+0.068918429 container exec ecefb99ff092b207f760d60e8fd97152c624df0e1016549024435ae641ee2e06 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548789, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, release=1763362218, version=7, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, distribution-scope=public, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, GIT_CLEAN=True, maintainer=Guillaume Abrioux , io.openshift.expose-services=, CEPH_POINT_RELEASE=, name=rhceph, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z) Dec 6 05:08:21 localhost ceph-mgr[288591]: [cephadm INFO cherrypy.error] [06/Dec/2025:10:08:21] ENGINE Serving on https://172.18.0.107:7150 Dec 6 05:08:21 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : [06/Dec/2025:10:08:21] ENGINE Serving on https://172.18.0.107:7150 Dec 6 05:08:21 localhost ceph-mgr[288591]: [cephadm INFO cherrypy.error] [06/Dec/2025:10:08:21] ENGINE Client 
('172.18.0.107', 48298) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') Dec 6 05:08:21 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : [06/Dec/2025:10:08:21] ENGINE Client ('172.18.0.107', 48298) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') Dec 6 05:08:21 localhost podman[301071]: 2025-12-06 10:08:21.86310668 +0000 UTC m=+0.137019782 container exec_died ecefb99ff092b207f760d60e8fd97152c624df0e1016549024435ae641ee2e06 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548789, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, maintainer=Guillaume Abrioux , GIT_BRANCH=main, io.openshift.tags=rhceph ceph, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., io.buildah.version=1.41.4, com.redhat.component=rhceph-container, distribution-scope=public, architecture=x86_64, vcs-type=git, GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Dec 6 05:08:21 localhost ceph-mgr[288591]: [cephadm INFO cherrypy.error] [06/Dec/2025:10:08:21] ENGINE Serving on http://172.18.0.107:8765 Dec 6 05:08:21 localhost ceph-mgr[288591]: 
log_channel(cephadm) log [INF] : [06/Dec/2025:10:08:21] ENGINE Serving on http://172.18.0.107:8765 Dec 6 05:08:21 localhost ceph-mgr[288591]: [cephadm INFO cherrypy.error] [06/Dec/2025:10:08:21] ENGINE Bus STARTED Dec 6 05:08:21 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : [06/Dec/2025:10:08:21] ENGINE Bus STARTED Dec 6 05:08:22 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548787.localdomain.devices.0}] v 0) Dec 6 05:08:22 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548787.localdomain}] v 0) Dec 6 05:08:22 localhost ceph-mon[298582]: [06/Dec/2025:10:08:21] ENGINE Bus STARTING Dec 6 05:08:22 localhost ceph-mon[298582]: [06/Dec/2025:10:08:21] ENGINE Serving on https://172.18.0.107:7150 Dec 6 05:08:22 localhost ceph-mon[298582]: [06/Dec/2025:10:08:21] ENGINE Client ('172.18.0.107', 48298) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') Dec 6 05:08:22 localhost ceph-mon[298582]: [06/Dec/2025:10:08:21] ENGINE Serving on http://172.18.0.107:8765 Dec 6 05:08:22 localhost ceph-mon[298582]: [06/Dec/2025:10:08:21] ENGINE Bus STARTED Dec 6 05:08:22 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548789.localdomain.devices.0}] v 0) Dec 6 05:08:22 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548789.localdomain}] v 0) Dec 6 05:08:22 localhost ceph-mgr[288591]: log_channel(cluster) log [DBG] : pgmap v4: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail Dec 6 05:08:22 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command([{prefix=config-key set, 
key=mgr/cephadm/host.np0005548788.localdomain.devices.0}] v 0) Dec 6 05:08:22 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548788.localdomain}] v 0) Dec 6 05:08:22 localhost ceph-mgr[288591]: [devicehealth INFO root] Check health Dec 6 05:08:22 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548790.localdomain.devices.0}] v 0) Dec 6 05:08:22 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548790.localdomain}] v 0) Dec 6 05:08:23 localhost ceph-mon[298582]: mon.np0005548789@3(peon).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:08:23 localhost ceph-mon[298582]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' Dec 6 05:08:23 localhost ceph-mon[298582]: Health check cleared: CEPHADM_STRAY_DAEMON (was: 1 stray daemon(s) not managed by cephadm) Dec 6 05:08:23 localhost ceph-mon[298582]: Health check cleared: CEPHADM_STRAY_HOST (was: 1 stray host(s) with 1 daemon(s) not managed by cephadm) Dec 6 05:08:23 localhost ceph-mon[298582]: Cluster is now healthy Dec 6 05:08:23 localhost ceph-mon[298582]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' Dec 6 05:08:23 localhost ceph-mon[298582]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' Dec 6 05:08:23 localhost ceph-mon[298582]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' Dec 6 05:08:23 localhost ceph-mon[298582]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' Dec 6 05:08:23 localhost ceph-mon[298582]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' Dec 6 05:08:23 localhost ceph-mon[298582]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' Dec 6 05:08:23 localhost ceph-mon[298582]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' Dec 6 05:08:23 localhost 
podman[241090]: time="2025-12-06T10:08:23Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 6 05:08:23 localhost podman[241090]: @ - - [06/Dec/2025:10:08:23 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156104 "" "Go-http-client/1.1" Dec 6 05:08:23 localhost podman[241090]: @ - - [06/Dec/2025:10:08:23 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19220 "" "Go-http-client/1.1" Dec 6 05:08:24 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548788.localdomain.devices.0}] v 0) Dec 6 05:08:24 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548789.localdomain.devices.0}] v 0) Dec 6 05:08:24 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548788.localdomain}] v 0) Dec 6 05:08:24 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548789.localdomain}] v 0) Dec 6 05:08:24 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0) Dec 6 05:08:24 localhost ceph-mon[298582]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Dec 6 05:08:24 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548787.localdomain.devices.0}] v 0) Dec 6 05:08:24 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command({"prefix": "config rm", "who": 
"osd.1", "name": "osd_memory_target"} v 0) Dec 6 05:08:24 localhost ceph-mon[298582]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Dec 6 05:08:24 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command({"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} v 0) Dec 6 05:08:24 localhost ceph-mon[298582]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Dec 6 05:08:24 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548787.localdomain}] v 0) Dec 6 05:08:24 localhost ceph-mgr[288591]: [cephadm INFO root] Adjusting osd_memory_target on np0005548788.localdomain to 836.6M Dec 6 05:08:24 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005548788.localdomain to 836.6M Dec 6 05:08:24 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) Dec 6 05:08:24 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command({"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} v 0) Dec 6 05:08:24 localhost ceph-mon[298582]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Dec 6 05:08:24 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command({"prefix": "config rm", "who": "osd/host:np0005548787", "name": "osd_memory_target"} v 0) Dec 6 05:08:24 localhost ceph-mon[298582]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' 
entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config rm", "who": "osd/host:np0005548787", "name": "osd_memory_target"} : dispatch Dec 6 05:08:24 localhost ceph-mgr[288591]: [cephadm INFO root] Adjusting osd_memory_target on np0005548789.localdomain to 836.6M Dec 6 05:08:24 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005548789.localdomain to 836.6M Dec 6 05:08:24 localhost ceph-mgr[288591]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005548788.localdomain to 877243801: error parsing value: Value '877243801' is below minimum 939524096 Dec 6 05:08:24 localhost ceph-mgr[288591]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005548788.localdomain to 877243801: error parsing value: Value '877243801' is below minimum 939524096 Dec 6 05:08:24 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) Dec 6 05:08:24 localhost ceph-mgr[288591]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005548789.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Dec 6 05:08:24 localhost ceph-mgr[288591]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005548789.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Dec 6 05:08:24 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548790.localdomain.devices.0}] v 0) Dec 6 05:08:24 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548790.localdomain}] v 0) Dec 6 05:08:24 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0) Dec 6 05:08:24 localhost ceph-mon[298582]: 
log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Dec 6 05:08:24 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command({"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} v 0) Dec 6 05:08:24 localhost ceph-mon[298582]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Dec 6 05:08:24 localhost ceph-mgr[288591]: [cephadm INFO root] Adjusting osd_memory_target on np0005548790.localdomain to 836.6M Dec 6 05:08:24 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005548790.localdomain to 836.6M Dec 6 05:08:24 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) Dec 6 05:08:24 localhost ceph-mgr[288591]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005548790.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Dec 6 05:08:24 localhost ceph-mgr[288591]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005548790.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Dec 6 05:08:24 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 6 05:08:24 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 6 05:08:24 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Dec 6 05:08:24 localhost 
ceph-mon[298582]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 6 05:08:24 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Updating np0005548787.localdomain:/etc/ceph/ceph.conf Dec 6 05:08:24 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Updating np0005548787.localdomain:/etc/ceph/ceph.conf Dec 6 05:08:24 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Updating np0005548788.localdomain:/etc/ceph/ceph.conf Dec 6 05:08:24 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Updating np0005548789.localdomain:/etc/ceph/ceph.conf Dec 6 05:08:24 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Updating np0005548788.localdomain:/etc/ceph/ceph.conf Dec 6 05:08:24 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Updating np0005548789.localdomain:/etc/ceph/ceph.conf Dec 6 05:08:24 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Updating np0005548790.localdomain:/etc/ceph/ceph.conf Dec 6 05:08:24 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Updating np0005548790.localdomain:/etc/ceph/ceph.conf Dec 6 05:08:24 localhost ceph-mgr[288591]: log_channel(cluster) log [DBG] : pgmap v5: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail Dec 6 05:08:24 localhost ceph-mon[298582]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' Dec 6 05:08:24 localhost ceph-mon[298582]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' Dec 6 05:08:24 localhost ceph-mon[298582]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' Dec 6 05:08:24 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Dec 6 05:08:24 localhost ceph-mon[298582]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config rm", "who": "osd.2", 
"name": "osd_memory_target"} : dispatch Dec 6 05:08:24 localhost ceph-mon[298582]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' Dec 6 05:08:24 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Dec 6 05:08:24 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Dec 6 05:08:24 localhost ceph-mon[298582]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Dec 6 05:08:24 localhost ceph-mon[298582]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Dec 6 05:08:24 localhost ceph-mon[298582]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' Dec 6 05:08:24 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Dec 6 05:08:24 localhost ceph-mon[298582]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Dec 6 05:08:24 localhost ceph-mon[298582]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' Dec 6 05:08:24 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config rm", "who": "osd/host:np0005548787", "name": "osd_memory_target"} : dispatch Dec 6 05:08:24 localhost ceph-mon[298582]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config rm", "who": "osd/host:np0005548787", "name": "osd_memory_target"} : dispatch Dec 6 05:08:24 localhost ceph-mon[298582]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' Dec 6 05:08:24 localhost 
ceph-mon[298582]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' Dec 6 05:08:24 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Dec 6 05:08:24 localhost ceph-mon[298582]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Dec 6 05:08:24 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Dec 6 05:08:24 localhost ceph-mon[298582]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Dec 6 05:08:24 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 6 05:08:25 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Updating np0005548788.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf Dec 6 05:08:25 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Updating np0005548788.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf Dec 6 05:08:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e. Dec 6 05:08:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094. 
Dec 6 05:08:25 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Updating np0005548789.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf Dec 6 05:08:25 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Updating np0005548789.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf Dec 6 05:08:25 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Updating np0005548787.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf Dec 6 05:08:25 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Updating np0005548787.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf Dec 6 05:08:25 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Updating np0005548790.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf Dec 6 05:08:25 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Updating np0005548790.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf Dec 6 05:08:25 localhost podman[301502]: 2025-12-06 10:08:25.171037723 +0000 UTC m=+0.100588293 container health_status 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., distribution-scope=public, architecture=x86_64, name=ubi9-minimal, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, release=1755695350, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, config_id=edpm, maintainer=Red Hat, Inc.) Dec 6 05:08:25 localhost podman[301502]: 2025-12-06 10:08:25.183598681 +0000 UTC m=+0.113149161 container exec_died 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, vcs-type=git, vendor=Red Hat, Inc., version=9.6, distribution-scope=public, io.openshift.expose-services=, maintainer=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal) Dec 6 05:08:25 localhost systemd[1]: 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e.service: Deactivated successfully. Dec 6 05:08:25 localhost podman[301503]: 2025-12-06 10:08:25.27410705 +0000 UTC m=+0.201860456 container health_status bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', 
'/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Dec 6 05:08:25 localhost podman[301503]: 2025-12-06 10:08:25.282419941 +0000 UTC m=+0.210173377 container exec_died bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:08:25 localhost systemd[1]: bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094.service: Deactivated successfully. Dec 6 05:08:25 localhost sshd[301630]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:08:25 localhost nova_compute[282193]: 2025-12-06 10:08:25.528 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:08:25 localhost nova_compute[282193]: 2025-12-06 10:08:25.530 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:08:25 localhost nova_compute[282193]: 2025-12-06 10:08:25.530 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 6 05:08:25 localhost nova_compute[282193]: 2025-12-06 10:08:25.531 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:08:25 localhost nova_compute[282193]: 2025-12-06 10:08:25.559 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:08:25 localhost nova_compute[282193]: 2025-12-06 10:08:25.559 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 
05:08:25 localhost ceph-mgr[288591]: mgr.server handle_open ignoring open from mgr.np0005548790.kvkfyr 172.18.0.108:0/2122066654; not ready for session (expect reconnect) Dec 6 05:08:25 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Updating np0005548788.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 6 05:08:25 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Updating np0005548788.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 6 05:08:25 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Updating np0005548787.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 6 05:08:25 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Updating np0005548787.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 6 05:08:25 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Updating np0005548789.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 6 05:08:25 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Updating np0005548789.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 6 05:08:25 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Updating np0005548790.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 6 05:08:25 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Updating np0005548790.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 6 05:08:25 localhost ceph-mon[298582]: Adjusting osd_memory_target on np0005548788.localdomain to 836.6M Dec 6 05:08:25 localhost ceph-mon[298582]: Adjusting osd_memory_target on np0005548789.localdomain to 836.6M Dec 6 05:08:25 localhost ceph-mon[298582]: Unable to set osd_memory_target on np0005548788.localdomain to 877243801: error parsing value: Value '877243801' is below minimum 939524096 Dec 6 05:08:25 localhost ceph-mon[298582]: Unable to set osd_memory_target on np0005548789.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Dec 6 05:08:25 localhost ceph-mon[298582]: Adjusting osd_memory_target on 
np0005548790.localdomain to 836.6M Dec 6 05:08:25 localhost ceph-mon[298582]: Unable to set osd_memory_target on np0005548790.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Dec 6 05:08:25 localhost ceph-mon[298582]: Updating np0005548787.localdomain:/etc/ceph/ceph.conf Dec 6 05:08:25 localhost ceph-mon[298582]: Updating np0005548788.localdomain:/etc/ceph/ceph.conf Dec 6 05:08:25 localhost ceph-mon[298582]: Updating np0005548789.localdomain:/etc/ceph/ceph.conf Dec 6 05:08:25 localhost ceph-mon[298582]: Updating np0005548790.localdomain:/etc/ceph/ceph.conf Dec 6 05:08:25 localhost ceph-mon[298582]: Updating np0005548788.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf Dec 6 05:08:25 localhost ceph-mon[298582]: Updating np0005548789.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf Dec 6 05:08:25 localhost ceph-mon[298582]: Updating np0005548787.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf Dec 6 05:08:25 localhost ceph-mon[298582]: Updating np0005548790.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf Dec 6 05:08:25 localhost ceph-mon[298582]: Updating np0005548788.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 6 05:08:25 localhost ceph-mon[298582]: Updating np0005548787.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 6 05:08:25 localhost ceph-mon[298582]: Updating np0005548789.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 6 05:08:25 localhost ceph-mon[298582]: Updating np0005548790.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 6 05:08:26 localhost nova_compute[282193]: 2025-12-06 10:08:26.204 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:08:26 localhost 
nova_compute[282193]: 2025-12-06 10:08:26.226 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 05:08:26 localhost nova_compute[282193]: 2025-12-06 10:08:26.226 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 05:08:26 localhost nova_compute[282193]: 2025-12-06 10:08:26.226 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 05:08:26 localhost nova_compute[282193]: 2025-12-06 10:08:26.227 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Auditing locally available compute resources for np0005548789.localdomain (node: np0005548789.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 6 05:08:26 localhost nova_compute[282193]: 2025-12-06 10:08:26.227 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 6 05:08:26 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Updating 
np0005548788.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring Dec 6 05:08:26 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Updating np0005548788.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring Dec 6 05:08:26 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005548790.kvkfyr", "id": "np0005548790.kvkfyr"} v 0) Dec 6 05:08:26 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "mgr metadata", "who": "np0005548790.kvkfyr", "id": "np0005548790.kvkfyr"} : dispatch Dec 6 05:08:26 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Updating np0005548787.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring Dec 6 05:08:26 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Updating np0005548787.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring Dec 6 05:08:26 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Updating np0005548789.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring Dec 6 05:08:26 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Updating np0005548789.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring Dec 6 05:08:26 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Updating np0005548790.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring Dec 6 05:08:26 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Updating np0005548790.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring Dec 6 05:08:26 localhost ceph-mgr[288591]: log_channel(cluster) log [DBG] : pgmap v6: 177 pgs: 177 
active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail; 41 KiB/s rd, 0 B/s wr, 23 op/s Dec 6 05:08:26 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 6 05:08:26 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/1224196971' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 6 05:08:26 localhost nova_compute[282193]: 2025-12-06 10:08:26.717 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 6 05:08:26 localhost nova_compute[282193]: 2025-12-06 10:08:26.786 282197 DEBUG nova.virt.libvirt.driver [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 6 05:08:26 localhost nova_compute[282193]: 2025-12-06 10:08:26.787 282197 DEBUG nova.virt.libvirt.driver [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 6 05:08:26 localhost nova_compute[282193]: 2025-12-06 10:08:26.982 282197 WARNING nova.virt.libvirt.driver [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 6 05:08:26 localhost nova_compute[282193]: 2025-12-06 10:08:26.983 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Hypervisor/Node resource view: name=np0005548789.localdomain free_ram=11480MB free_disk=41.83699035644531GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": 
"1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 6 05:08:26 localhost nova_compute[282193]: 2025-12-06 10:08:26.983 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 05:08:26 localhost nova_compute[282193]: 2025-12-06 10:08:26.983 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 05:08:27 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548788.localdomain.devices.0}] v 0) Dec 6 05:08:27 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548788.localdomain}] v 0) Dec 6 05:08:27 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548789.localdomain.devices.0}] v 0) Dec 6 05:08:27 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548787.localdomain.devices.0}] v 0) Dec 6 05:08:27 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command([{prefix=config-key set, 
key=mgr/cephadm/host.np0005548789.localdomain}] v 0) Dec 6 05:08:27 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548787.localdomain}] v 0) Dec 6 05:08:27 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548790.localdomain.devices.0}] v 0) Dec 6 05:08:27 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548790.localdomain}] v 0) Dec 6 05:08:27 localhost ceph-mgr[288591]: log_channel(cluster) log [DBG] : pgmap v7: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail; 36 KiB/s rd, 0 B/s wr, 20 op/s Dec 6 05:08:27 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Dec 6 05:08:27 localhost ceph-mgr[288591]: [progress INFO root] update: starting ev e5b18223-81f9-4f8a-872e-b03e40a6e131 (Updating node-proxy deployment (+4 -> 4)) Dec 6 05:08:27 localhost ceph-mgr[288591]: [progress INFO root] complete: finished ev e5b18223-81f9-4f8a-872e-b03e40a6e131 (Updating node-proxy deployment (+4 -> 4)) Dec 6 05:08:27 localhost ceph-mgr[288591]: [progress INFO root] Completed event e5b18223-81f9-4f8a-872e-b03e40a6e131 (Updating node-proxy deployment (+4 -> 4)) in 0 seconds Dec 6 05:08:27 localhost nova_compute[282193]: 2025-12-06 10:08:27.292 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Instance b7ed0a2e-9350-4933-9334-4e5e08d3e6aa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 6 05:08:27 localhost nova_compute[282193]: 2025-12-06 10:08:27.293 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 6 05:08:27 localhost nova_compute[282193]: 2025-12-06 10:08:27.293 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Final resource view: name=np0005548789.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 6 05:08:27 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Dec 6 05:08:27 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Dec 6 05:08:27 localhost nova_compute[282193]: 2025-12-06 10:08:27.362 282197 DEBUG nova.scheduler.client.report [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Refreshing inventories for resource provider 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Dec 6 05:08:27 localhost ceph-mon[298582]: Updating np0005548788.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring Dec 6 05:08:27 localhost ceph-mon[298582]: Updating np0005548787.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring Dec 6 05:08:27 localhost ceph-mon[298582]: 
Updating np0005548789.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring Dec 6 05:08:27 localhost ceph-mon[298582]: Updating np0005548790.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring Dec 6 05:08:27 localhost ceph-mon[298582]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' Dec 6 05:08:27 localhost ceph-mon[298582]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' Dec 6 05:08:27 localhost ceph-mon[298582]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' Dec 6 05:08:27 localhost ceph-mon[298582]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' Dec 6 05:08:27 localhost ceph-mon[298582]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' Dec 6 05:08:27 localhost ceph-mon[298582]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' Dec 6 05:08:27 localhost ceph-mon[298582]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' Dec 6 05:08:27 localhost ceph-mon[298582]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' Dec 6 05:08:27 localhost ceph-mon[298582]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' Dec 6 05:08:27 localhost nova_compute[282193]: 2025-12-06 10:08:27.445 282197 DEBUG nova.scheduler.client.report [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Updating ProviderTree inventory for provider 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Dec 6 05:08:27 localhost nova_compute[282193]: 2025-12-06 10:08:27.445 282197 DEBUG nova.compute.provider_tree [None 
req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Updating inventory in ProviderTree for provider 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Dec 6 05:08:27 localhost nova_compute[282193]: 2025-12-06 10:08:27.459 282197 DEBUG nova.scheduler.client.report [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Refreshing aggregate associations for resource provider 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Dec 6 05:08:27 localhost nova_compute[282193]: 2025-12-06 10:08:27.483 282197 DEBUG nova.scheduler.client.report [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Refreshing trait associations for resource provider 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad, traits: 
HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_FDC,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_RESCUE_BFV,HW_CPU_X86_AVX2,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SHA,HW_CPU_X86_BMI2,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_CLMUL,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSSE3,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_AVX,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE4A,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AMD_SVM,HW_CPU_X86_FMA3,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_F16C,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_ABM,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_VIRTIO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Dec 6 05:08:27 localhost nova_compute[282193]: 2025-12-06 10:08:27.523 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 6 05:08:27 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005548788.yvwbqq (monmap changed)... 
Dec 6 05:08:27 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005548788.yvwbqq (monmap changed)... Dec 6 05:08:27 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005548788.yvwbqq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) Dec 6 05:08:27 localhost ceph-mon[298582]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548788.yvwbqq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 6 05:08:27 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command({"prefix": "mgr services"} v 0) Dec 6 05:08:27 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "mgr services"} : dispatch Dec 6 05:08:27 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 6 05:08:27 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 6 05:08:27 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005548788.yvwbqq on np0005548788.localdomain Dec 6 05:08:27 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005548788.yvwbqq on np0005548788.localdomain Dec 6 05:08:27 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 6 05:08:27 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.107:0/3249850813' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 6 05:08:27 localhost nova_compute[282193]: 2025-12-06 10:08:27.974 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 6 05:08:27 localhost nova_compute[282193]: 2025-12-06 10:08:27.981 282197 DEBUG nova.compute.provider_tree [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Inventory has not changed in ProviderTree for provider: 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 6 05:08:28 localhost nova_compute[282193]: 2025-12-06 10:08:28.005 282197 DEBUG nova.scheduler.client.report [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Inventory has not changed for provider 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 6 05:08:28 localhost nova_compute[282193]: 2025-12-06 10:08:28.008 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Compute_service record updated for np0005548789.localdomain:np0005548789.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 6 05:08:28 localhost nova_compute[282193]: 2025-12-06 10:08:28.008 282197 DEBUG 
oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.025s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 05:08:28 localhost ceph-mon[298582]: mon.np0005548789@3(peon).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:08:28 localhost ceph-mon[298582]: Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON) Dec 6 05:08:28 localhost ceph-mon[298582]: Health check failed: 1 stray host(s) with 1 daemon(s) not managed by cephadm (CEPHADM_STRAY_HOST) Dec 6 05:08:28 localhost ceph-mon[298582]: Reconfiguring mgr.np0005548788.yvwbqq (monmap changed)... Dec 6 05:08:28 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548788.yvwbqq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 6 05:08:28 localhost ceph-mon[298582]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548788.yvwbqq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 6 05:08:28 localhost ceph-mon[298582]: Reconfiguring daemon mgr.np0005548788.yvwbqq on np0005548788.localdomain Dec 6 05:08:28 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548788.localdomain.devices.0}] v 0) Dec 6 05:08:28 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548788.localdomain}] v 0) Dec 6 05:08:28 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring mon.np0005548788 (monmap changed)... 
Dec 6 05:08:28 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring mon.np0005548788 (monmap changed)... Dec 6 05:08:28 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) Dec 6 05:08:28 localhost ceph-mon[298582]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Dec 6 05:08:28 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) Dec 6 05:08:28 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch Dec 6 05:08:28 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 6 05:08:28 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 6 05:08:28 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.np0005548788 on np0005548788.localdomain Dec 6 05:08:28 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.np0005548788 on np0005548788.localdomain Dec 6 05:08:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6. 
Dec 6 05:08:28 localhost podman[302085]: 2025-12-06 10:08:28.918078552 +0000 UTC m=+0.079182798 container health_status 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd) Dec 6 05:08:28 localhost podman[302085]: 2025-12-06 10:08:28.929703003 +0000 UTC m=+0.090807259 container exec_died 
44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Dec 6 05:08:28 localhost systemd[1]: 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6.service: Deactivated successfully. 
Dec 6 05:08:28 localhost nova_compute[282193]: 2025-12-06 10:08:28.986 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:08:28 localhost nova_compute[282193]: 2025-12-06 10:08:28.987 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 6 05:08:28 localhost nova_compute[282193]: 2025-12-06 10:08:28.987 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 6 05:08:29 localhost nova_compute[282193]: 2025-12-06 10:08:29.054 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 6 05:08:29 localhost nova_compute[282193]: 2025-12-06 10:08:29.054 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquired lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 6 05:08:29 localhost nova_compute[282193]: 2025-12-06 10:08:29.054 282197 DEBUG nova.network.neutron [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 6 05:08:29 localhost nova_compute[282193]: 2025-12-06 10:08:29.055 282197 
DEBUG nova.objects.instance [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b7ed0a2e-9350-4933-9334-4e5e08d3e6aa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 6 05:08:29 localhost sshd[302105]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:08:29 localhost sshd[302106]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:08:29 localhost ceph-mgr[288591]: log_channel(cluster) log [DBG] : pgmap v8: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail; 27 KiB/s rd, 0 B/s wr, 14 op/s Dec 6 05:08:29 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548788.localdomain.devices.0}] v 0) Dec 6 05:08:29 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548788.localdomain}] v 0) Dec 6 05:08:29 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005548789 (monmap changed)... Dec 6 05:08:29 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005548789 (monmap changed)... 
Dec 6 05:08:29 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005548789.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) Dec 6 05:08:29 localhost ceph-mon[298582]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548789.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 6 05:08:29 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 6 05:08:29 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 6 05:08:29 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005548789 on np0005548789.localdomain Dec 6 05:08:29 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005548789 on np0005548789.localdomain Dec 6 05:08:29 localhost ceph-mon[298582]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' Dec 6 05:08:29 localhost ceph-mon[298582]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' Dec 6 05:08:29 localhost ceph-mon[298582]: Reconfiguring mon.np0005548788 (monmap changed)... 
Dec 6 05:08:29 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Dec 6 05:08:29 localhost ceph-mon[298582]: Reconfiguring daemon mon.np0005548788 on np0005548788.localdomain Dec 6 05:08:29 localhost ceph-mon[298582]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' Dec 6 05:08:29 localhost ceph-mon[298582]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' Dec 6 05:08:29 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548789.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 6 05:08:29 localhost ceph-mon[298582]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548789.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 6 05:08:29 localhost nova_compute[282193]: 2025-12-06 10:08:29.733 282197 DEBUG nova.network.neutron [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Updating instance_info_cache with network_info: [{"id": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "address": "fa:16:3e:64:77:f3", "network": {"id": "652b6bdc-40ce-45b7-8aa5-3bca79987993", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.162", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "3d603431c0bb4967bafc7a0aa6108bfe", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", 
"details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86fc0b7a-fb", "ovs_interfaceid": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 6 05:08:29 localhost nova_compute[282193]: 2025-12-06 10:08:29.763 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Releasing lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 6 05:08:29 localhost nova_compute[282193]: 2025-12-06 10:08:29.763 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 6 05:08:29 localhost nova_compute[282193]: 2025-12-06 10:08:29.764 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:08:29 localhost nova_compute[282193]: 2025-12-06 10:08:29.764 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:08:29 localhost nova_compute[282193]: 2025-12-06 10:08:29.764 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task 
ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:08:29 localhost nova_compute[282193]: 2025-12-06 10:08:29.765 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:08:29 localhost nova_compute[282193]: 2025-12-06 10:08:29.765 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 6 05:08:29 localhost nova_compute[282193]: 2025-12-06 10:08:29.954 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:08:30 localhost podman[302161]: Dec 6 05:08:30 localhost podman[302161]: 2025-12-06 10:08:30.045334052 +0000 UTC m=+0.075230809 container create 5dc8672dcc55de10c05d73a12defb379021b8c2ffc14b1de89fbefa3cd7e65e5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sweet_beaver, CEPH_POINT_RELEASE=, vcs-type=git, release=1763362218, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, 
description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, maintainer=Guillaume Abrioux , io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True) Dec 6 05:08:30 localhost systemd[1]: Started libpod-conmon-5dc8672dcc55de10c05d73a12defb379021b8c2ffc14b1de89fbefa3cd7e65e5.scope. Dec 6 05:08:30 localhost systemd[1]: Started libcrun container. Dec 6 05:08:30 localhost podman[302161]: 2025-12-06 10:08:30.017578875 +0000 UTC m=+0.047475702 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 6 05:08:30 localhost podman[302161]: 2025-12-06 10:08:30.120937311 +0000 UTC m=+0.150834088 container init 5dc8672dcc55de10c05d73a12defb379021b8c2ffc14b1de89fbefa3cd7e65e5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sweet_beaver, architecture=x86_64, distribution-scope=public, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, name=rhceph, io.buildah.version=1.41.4, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, release=1763362218, vcs-type=git, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported 
base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Dec 6 05:08:30 localhost podman[302161]: 2025-12-06 10:08:30.134237392 +0000 UTC m=+0.164134169 container start 5dc8672dcc55de10c05d73a12defb379021b8c2ffc14b1de89fbefa3cd7e65e5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sweet_beaver, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., GIT_CLEAN=True, release=1763362218, RELEASE=main, version=7, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, io.openshift.expose-services=, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, CEPH_POINT_RELEASE=, architecture=x86_64, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Dec 6 05:08:30 localhost podman[302161]: 2025-12-06 10:08:30.13450555 +0000 UTC m=+0.164402337 container attach 5dc8672dcc55de10c05d73a12defb379021b8c2ffc14b1de89fbefa3cd7e65e5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sweet_beaver, GIT_BRANCH=main, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, version=7, 
org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , ceph=True, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 6 05:08:30 localhost systemd[1]: tmp-crun.2Z1ppV.mount: Deactivated successfully. Dec 6 05:08:30 localhost systemd[1]: libpod-5dc8672dcc55de10c05d73a12defb379021b8c2ffc14b1de89fbefa3cd7e65e5.scope: Deactivated successfully. 
Dec 6 05:08:30 localhost sweet_beaver[302176]: 167 167 Dec 6 05:08:30 localhost podman[302161]: 2025-12-06 10:08:30.139354125 +0000 UTC m=+0.169250932 container died 5dc8672dcc55de10c05d73a12defb379021b8c2ffc14b1de89fbefa3cd7e65e5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sweet_beaver, build-date=2025-11-26T19:44:28Z, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, CEPH_POINT_RELEASE=, release=1763362218, architecture=x86_64, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , distribution-scope=public, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, version=7) Dec 6 05:08:30 localhost nova_compute[282193]: 2025-12-06 10:08:30.181 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:08:30 localhost nova_compute[282193]: 2025-12-06 10:08:30.182 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks 
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:08:30 localhost podman[302181]: 2025-12-06 10:08:30.242218056 +0000 UTC m=+0.092216211 container remove 5dc8672dcc55de10c05d73a12defb379021b8c2ffc14b1de89fbefa3cd7e65e5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sweet_beaver, architecture=x86_64, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, name=rhceph, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, ceph=True, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, version=7, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , GIT_CLEAN=True, release=1763362218, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Dec 6 05:08:30 localhost systemd[1]: libpod-conmon-5dc8672dcc55de10c05d73a12defb379021b8c2ffc14b1de89fbefa3cd7e65e5.scope: Deactivated successfully. 
Dec 6 05:08:30 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548789.localdomain.devices.0}] v 0) Dec 6 05:08:30 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548789.localdomain}] v 0) Dec 6 05:08:30 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring osd.1 (monmap changed)... Dec 6 05:08:30 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring osd.1 (monmap changed)... Dec 6 05:08:30 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0) Dec 6 05:08:30 localhost ceph-mon[298582]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Dec 6 05:08:30 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 6 05:08:30 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 6 05:08:30 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.1 on np0005548789.localdomain Dec 6 05:08:30 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.1 on np0005548789.localdomain Dec 6 05:08:30 localhost ceph-mon[298582]: Reconfiguring crash.np0005548789 (monmap changed)... 
Dec 6 05:08:30 localhost ceph-mon[298582]: Reconfiguring daemon crash.np0005548789 on np0005548789.localdomain Dec 6 05:08:30 localhost ceph-mon[298582]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' Dec 6 05:08:30 localhost ceph-mon[298582]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' Dec 6 05:08:30 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Dec 6 05:08:30 localhost nova_compute[282193]: 2025-12-06 10:08:30.558 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:08:30 localhost nova_compute[282193]: 2025-12-06 10:08:30.559 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:08:30 localhost ceph-mgr[288591]: [progress INFO root] Writing back 50 completed events Dec 6 05:08:30 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Dec 6 05:08:31 localhost podman[302250]: Dec 6 05:08:31 localhost podman[302250]: 2025-12-06 10:08:31.027411004 +0000 UTC m=+0.076423664 container create c57769c602aa8fc07e7d2b1598e3b0868c82b8e00754fb14f03609c43673f914 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vibrant_shaw, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, name=rhceph, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, release=1763362218, version=7, RELEASE=main, io.buildah.version=1.41.4, architecture=x86_64, build-date=2025-11-26T19:44:28Z, 
distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 6 05:08:31 localhost systemd[1]: var-lib-containers-storage-overlay-a1faae6de27bb8ff17635f97614e04310a7989935942b68c5b17c874b9a4b85a-merged.mount: Deactivated successfully. Dec 6 05:08:31 localhost systemd[1]: Started libpod-conmon-c57769c602aa8fc07e7d2b1598e3b0868c82b8e00754fb14f03609c43673f914.scope. Dec 6 05:08:31 localhost systemd[1]: Started libcrun container. Dec 6 05:08:31 localhost podman[302250]: 2025-12-06 10:08:30.993863863 +0000 UTC m=+0.042876533 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 6 05:08:31 localhost podman[302250]: 2025-12-06 10:08:31.099072005 +0000 UTC m=+0.148084665 container init c57769c602aa8fc07e7d2b1598e3b0868c82b8e00754fb14f03609c43673f914 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vibrant_shaw, GIT_CLEAN=True, release=1763362218, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, architecture=x86_64, com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, RELEASE=main, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported 
base image., vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Dec 6 05:08:31 localhost systemd[1]: tmp-crun.qHMFqX.mount: Deactivated successfully. Dec 6 05:08:31 localhost podman[302250]: 2025-12-06 10:08:31.111593602 +0000 UTC m=+0.160606272 container start c57769c602aa8fc07e7d2b1598e3b0868c82b8e00754fb14f03609c43673f914 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vibrant_shaw, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, GIT_BRANCH=main, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., RELEASE=main, GIT_CLEAN=True, release=1763362218, version=7, maintainer=Guillaume Abrioux , architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7) Dec 6 05:08:31 localhost vibrant_shaw[302267]: 167 167 Dec 6 05:08:31 localhost podman[302250]: 2025-12-06 10:08:31.112362266 +0000 UTC m=+0.161374926 container attach 
c57769c602aa8fc07e7d2b1598e3b0868c82b8e00754fb14f03609c43673f914 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vibrant_shaw, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, distribution-scope=public, architecture=x86_64, maintainer=Guillaume Abrioux , ceph=True, name=rhceph, io.buildah.version=1.41.4, GIT_BRANCH=main, release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 6 05:08:31 localhost systemd[1]: libpod-c57769c602aa8fc07e7d2b1598e3b0868c82b8e00754fb14f03609c43673f914.scope: Deactivated successfully. 
Dec 6 05:08:31 localhost podman[302250]: 2025-12-06 10:08:31.11448107 +0000 UTC m=+0.163493740 container died c57769c602aa8fc07e7d2b1598e3b0868c82b8e00754fb14f03609c43673f914 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vibrant_shaw, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, com.redhat.component=rhceph-container, release=1763362218, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, vendor=Red Hat, Inc., io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main) Dec 6 05:08:31 localhost podman[302272]: 2025-12-06 10:08:31.25748279 +0000 UTC m=+0.133203276 container remove c57769c602aa8fc07e7d2b1598e3b0868c82b8e00754fb14f03609c43673f914 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vibrant_shaw, architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, release=1763362218, 
url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, version=7, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, distribution-scope=public, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, CEPH_POINT_RELEASE=, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, io.openshift.expose-services=, io.buildah.version=1.41.4) Dec 6 05:08:31 localhost systemd[1]: libpod-conmon-c57769c602aa8fc07e7d2b1598e3b0868c82b8e00754fb14f03609c43673f914.scope: Deactivated successfully. Dec 6 05:08:31 localhost ceph-mgr[288591]: log_channel(cluster) log [DBG] : pgmap v9: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 0 B/s wr, 11 op/s Dec 6 05:08:31 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548789.localdomain.devices.0}] v 0) Dec 6 05:08:31 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548789.localdomain}] v 0) Dec 6 05:08:31 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548789.localdomain.devices.0}] v 0) Dec 6 05:08:31 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548789.localdomain}] v 0) Dec 6 05:08:31 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring osd.4 (monmap changed)... 
Dec 6 05:08:31 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring osd.4 (monmap changed)... Dec 6 05:08:31 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command({"prefix": "auth get", "entity": "osd.4"} v 0) Dec 6 05:08:31 localhost ceph-mon[298582]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Dec 6 05:08:31 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 6 05:08:31 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 6 05:08:31 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.4 on np0005548789.localdomain Dec 6 05:08:31 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.4 on np0005548789.localdomain Dec 6 05:08:31 localhost ceph-mon[298582]: Reconfiguring osd.1 (monmap changed)... 
Dec 6 05:08:31 localhost ceph-mon[298582]: Reconfiguring daemon osd.1 on np0005548789.localdomain Dec 6 05:08:31 localhost ceph-mon[298582]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' Dec 6 05:08:31 localhost ceph-mon[298582]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' Dec 6 05:08:31 localhost ceph-mon[298582]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' Dec 6 05:08:31 localhost ceph-mon[298582]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' Dec 6 05:08:31 localhost ceph-mon[298582]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' Dec 6 05:08:31 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Dec 6 05:08:32 localhost systemd[1]: var-lib-containers-storage-overlay-2511db499222bfceb614c3294a4a3aac44d207f9f92cae0ea8d7358c72b4c212-merged.mount: Deactivated successfully. Dec 6 05:08:32 localhost ceph-mgr[288591]: log_channel(audit) log [DBG] : from='client.44410 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch Dec 6 05:08:32 localhost podman[302350]: Dec 6 05:08:32 localhost podman[302350]: 2025-12-06 10:08:32.1578425 +0000 UTC m=+0.079646131 container create 70160df232d73ab8d4735fbd215f2d538fca8d687fdbb9baecff2d9d848a0738 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=lucid_bouman, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, RELEASE=main, vcs-type=git, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, io.openshift.tags=rhceph ceph, version=7, vendor=Red Hat, Inc., release=1763362218, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, 
architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, GIT_CLEAN=True, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, name=rhceph, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux ) Dec 6 05:08:32 localhost nova_compute[282193]: 2025-12-06 10:08:32.178 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:08:32 localhost nova_compute[282193]: 2025-12-06 10:08:32.196 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:08:32 localhost nova_compute[282193]: 2025-12-06 10:08:32.197 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m Dec 6 05:08:32 localhost systemd[1]: Started libpod-conmon-70160df232d73ab8d4735fbd215f2d538fca8d687fdbb9baecff2d9d848a0738.scope. Dec 6 05:08:32 localhost systemd[1]: Started libcrun container. 
Dec 6 05:08:32 localhost podman[302350]: 2025-12-06 10:08:32.127245178 +0000 UTC m=+0.049048859 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 6 05:08:32 localhost podman[302350]: 2025-12-06 10:08:32.237522461 +0000 UTC m=+0.159326092 container init 70160df232d73ab8d4735fbd215f2d538fca8d687fdbb9baecff2d9d848a0738 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=lucid_bouman, io.openshift.expose-services=, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, ceph=True, GIT_BRANCH=main, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux , vcs-type=git, distribution-scope=public, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, name=rhceph) Dec 6 05:08:32 localhost podman[302350]: 2025-12-06 10:08:32.246628337 +0000 UTC m=+0.168431968 container start 70160df232d73ab8d4735fbd215f2d538fca8d687fdbb9baecff2d9d848a0738 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=lucid_bouman, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux , io.buildah.version=1.41.4, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, version=7, RELEASE=main, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, GIT_CLEAN=True, vendor=Red Hat, Inc., ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Dec 6 05:08:32 localhost podman[302350]: 2025-12-06 10:08:32.247145582 +0000 UTC m=+0.168949243 container attach 70160df232d73ab8d4735fbd215f2d538fca8d687fdbb9baecff2d9d848a0738 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=lucid_bouman, version=7, RELEASE=main, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph)
Dec 6 05:08:32 localhost lucid_bouman[302365]: 167 167
Dec 6 05:08:32 localhost systemd[1]: libpod-70160df232d73ab8d4735fbd215f2d538fca8d687fdbb9baecff2d9d848a0738.scope: Deactivated successfully.
Dec 6 05:08:32 localhost podman[302350]: 2025-12-06 10:08:32.250709039 +0000 UTC m=+0.172512710 container died 70160df232d73ab8d4735fbd215f2d538fca8d687fdbb9baecff2d9d848a0738 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=lucid_bouman, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., ceph=True, io.openshift.expose-services=, GIT_BRANCH=main, architecture=x86_64, name=rhceph, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, RELEASE=main, vcs-type=git, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7)
Dec 6 05:08:32 localhost podman[302370]: 2025-12-06 10:08:32.34228331 +0000 UTC m=+0.082324443 container remove 70160df232d73ab8d4735fbd215f2d538fca8d687fdbb9baecff2d9d848a0738 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=lucid_bouman, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, distribution-scope=public, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, com.redhat.component=rhceph-container, name=rhceph, version=7, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, ceph=True)
Dec 6 05:08:32 localhost systemd[1]: libpod-conmon-70160df232d73ab8d4735fbd215f2d538fca8d687fdbb9baecff2d9d848a0738.scope: Deactivated successfully.
Dec 6 05:08:32 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548789.localdomain.devices.0}] v 0)
Dec 6 05:08:32 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548789.localdomain}] v 0)
Dec 6 05:08:32 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548789.localdomain.devices.0}] v 0)
Dec 6 05:08:32 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548789.localdomain}] v 0)
Dec 6 05:08:32 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005548789.vxwwsq (monmap changed)...
Dec 6 05:08:32 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005548789.vxwwsq (monmap changed)...
Dec 6 05:08:32 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005548789.vxwwsq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Dec 6 05:08:32 localhost ceph-mon[298582]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005548789.vxwwsq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 6 05:08:32 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 6 05:08:32 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 6 05:08:32 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring daemon mds.mds.np0005548789.vxwwsq on np0005548789.localdomain
Dec 6 05:08:32 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring daemon mds.mds.np0005548789.vxwwsq on np0005548789.localdomain
Dec 6 05:08:32 localhost ceph-mon[298582]: Reconfiguring osd.4 (monmap changed)...
Dec 6 05:08:32 localhost ceph-mon[298582]: Reconfiguring daemon osd.4 on np0005548789.localdomain
Dec 6 05:08:32 localhost ceph-mon[298582]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje'
Dec 6 05:08:32 localhost ceph-mon[298582]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje'
Dec 6 05:08:32 localhost ceph-mon[298582]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje'
Dec 6 05:08:32 localhost ceph-mon[298582]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje'
Dec 6 05:08:32 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005548789.vxwwsq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 6 05:08:32 localhost ceph-mon[298582]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005548789.vxwwsq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 6 05:08:33 localhost systemd[1]: var-lib-containers-storage-overlay-ecacb212a165db74fc22ba98139a6a8f12b12d9e169791da2c0eab0a950b21fb-merged.mount: Deactivated successfully.
Dec 6 05:08:33 localhost ceph-mon[298582]: mon.np0005548789@3(peon).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 6 05:08:33 localhost podman[302448]:
Dec 6 05:08:33 localhost podman[302448]: 2025-12-06 10:08:33.211545262 +0000 UTC m=+0.070206627 container create bb23283739c220df69ac7cb4b33992937af847fccc471d58831b3f293be512f1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_wright, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, name=rhceph, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, CEPH_POINT_RELEASE=, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, RELEASE=main)
Dec 6 05:08:33 localhost systemd[1]: Started libpod-conmon-bb23283739c220df69ac7cb4b33992937af847fccc471d58831b3f293be512f1.scope.
Dec 6 05:08:33 localhost systemd[1]: Started libcrun container.
Dec 6 05:08:33 localhost podman[302448]: 2025-12-06 10:08:33.276226422 +0000 UTC m=+0.134887797 container init bb23283739c220df69ac7cb4b33992937af847fccc471d58831b3f293be512f1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_wright, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, release=1763362218, io.openshift.expose-services=, GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, distribution-scope=public, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, RELEASE=main, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7)
Dec 6 05:08:33 localhost ceph-mgr[288591]: log_channel(cluster) log [DBG] : pgmap v10: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 0 B/s wr, 10 op/s
Dec 6 05:08:33 localhost podman[302448]: 2025-12-06 10:08:33.18360686 +0000 UTC m=+0.042268245 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 6 05:08:33 localhost goofy_wright[302464]: 167 167
Dec 6 05:08:33 localhost systemd[1]: libpod-bb23283739c220df69ac7cb4b33992937af847fccc471d58831b3f293be512f1.scope: Deactivated successfully.
Dec 6 05:08:33 localhost podman[302448]: 2025-12-06 10:08:33.293664357 +0000 UTC m=+0.152325722 container start bb23283739c220df69ac7cb4b33992937af847fccc471d58831b3f293be512f1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_wright, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., distribution-scope=public, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, version=7, ceph=True, architecture=x86_64, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True)
Dec 6 05:08:33 localhost podman[302448]: 2025-12-06 10:08:33.293855023 +0000 UTC m=+0.152516398 container attach bb23283739c220df69ac7cb4b33992937af847fccc471d58831b3f293be512f1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_wright, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, version=7, release=1763362218, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, vendor=Red Hat, Inc., GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, name=rhceph)
Dec 6 05:08:33 localhost podman[302448]: 2025-12-06 10:08:33.295468122 +0000 UTC m=+0.154129497 container died bb23283739c220df69ac7cb4b33992937af847fccc471d58831b3f293be512f1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_wright, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, release=1763362218, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, RELEASE=main, io.openshift.expose-services=, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., GIT_CLEAN=True, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, name=rhceph, maintainer=Guillaume Abrioux , ceph=True, version=7)
Dec 6 05:08:33 localhost podman[302469]: 2025-12-06 10:08:33.378890317 +0000 UTC m=+0.084548229 container remove bb23283739c220df69ac7cb4b33992937af847fccc471d58831b3f293be512f1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_wright, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, build-date=2025-11-26T19:44:28Z, vcs-type=git, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, distribution-scope=public, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, ceph=True, description=Red Hat Ceph Storage 7, version=7, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux )
Dec 6 05:08:33 localhost systemd[1]: libpod-conmon-bb23283739c220df69ac7cb4b33992937af847fccc471d58831b3f293be512f1.scope: Deactivated successfully.
Dec 6 05:08:33 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548789.localdomain.devices.0}] v 0)
Dec 6 05:08:33 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548789.localdomain}] v 0)
Dec 6 05:08:33 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005548789.mzhmje (monmap changed)...
Dec 6 05:08:33 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005548789.mzhmje (monmap changed)...
Dec 6 05:08:33 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005548789.mzhmje", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec 6 05:08:33 localhost ceph-mon[298582]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548789.mzhmje", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 6 05:08:33 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 6 05:08:33 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "mgr services"} : dispatch
Dec 6 05:08:33 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 6 05:08:33 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 6 05:08:33 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005548789.mzhmje on np0005548789.localdomain
Dec 6 05:08:33 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005548789.mzhmje on np0005548789.localdomain
Dec 6 05:08:33 localhost ceph-mon[298582]: Reconfiguring mds.mds.np0005548789.vxwwsq (monmap changed)...
Dec 6 05:08:33 localhost ceph-mon[298582]: Reconfiguring daemon mds.mds.np0005548789.vxwwsq on np0005548789.localdomain
Dec 6 05:08:33 localhost ceph-mon[298582]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje'
Dec 6 05:08:33 localhost ceph-mon[298582]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje'
Dec 6 05:08:33 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548789.mzhmje", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 6 05:08:33 localhost ceph-mon[298582]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548789.mzhmje", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 6 05:08:34 localhost systemd[1]: var-lib-containers-storage-overlay-ec464f3513e5c3afa95c75e49fa45403e3e530dfd4f37daad73d1739d88190da-merged.mount: Deactivated successfully.
Dec 6 05:08:34 localhost podman[302537]:
Dec 6 05:08:34 localhost podman[302537]: 2025-12-06 10:08:34.07067186 +0000 UTC m=+0.076189328 container create 2f3d8dc3c20a1ca236e1c6eaddf9538f0c868bac6a90acc5b4c7a6488b807a9e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_kepler, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, maintainer=Guillaume Abrioux , release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., architecture=x86_64, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, version=7, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, io.openshift.expose-services=, GIT_BRANCH=main, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, vcs-type=git)
Dec 6 05:08:34 localhost systemd[1]: Started libpod-conmon-2f3d8dc3c20a1ca236e1c6eaddf9538f0c868bac6a90acc5b4c7a6488b807a9e.scope.
Dec 6 05:08:34 localhost systemd[1]: Started libcrun container.
Dec 6 05:08:34 localhost podman[302537]: 2025-12-06 10:08:34.039009975 +0000 UTC m=+0.044527483 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 6 05:08:34 localhost podman[302537]: 2025-12-06 10:08:34.144315119 +0000 UTC m=+0.149832577 container init 2f3d8dc3c20a1ca236e1c6eaddf9538f0c868bac6a90acc5b4c7a6488b807a9e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_kepler, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , RELEASE=main, io.openshift.expose-services=, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, distribution-scope=public, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, release=1763362218, name=rhceph)
Dec 6 05:08:34 localhost podman[302537]: 2025-12-06 10:08:34.15528214 +0000 UTC m=+0.160799598 container start 2f3d8dc3c20a1ca236e1c6eaddf9538f0c868bac6a90acc5b4c7a6488b807a9e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_kepler, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, GIT_BRANCH=main, version=7, GIT_CLEAN=True, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, RELEASE=main, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, distribution-scope=public, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux )
Dec 6 05:08:34 localhost silly_kepler[302552]: 167 167
Dec 6 05:08:34 localhost systemd[1]: libpod-2f3d8dc3c20a1ca236e1c6eaddf9538f0c868bac6a90acc5b4c7a6488b807a9e.scope: Deactivated successfully.
Dec 6 05:08:34 localhost podman[302537]: 2025-12-06 10:08:34.155578199 +0000 UTC m=+0.161095697 container attach 2f3d8dc3c20a1ca236e1c6eaddf9538f0c868bac6a90acc5b4c7a6488b807a9e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_kepler, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, vendor=Red Hat, Inc., release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, GIT_CLEAN=True)
Dec 6 05:08:34 localhost podman[302537]: 2025-12-06 10:08:34.158956921 +0000 UTC m=+0.164474419 container died 2f3d8dc3c20a1ca236e1c6eaddf9538f0c868bac6a90acc5b4c7a6488b807a9e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_kepler, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, vcs-type=git, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, com.redhat.component=rhceph-container, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , GIT_BRANCH=main, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True)
Dec 6 05:08:34 localhost podman[302557]: 2025-12-06 10:08:34.261933085 +0000 UTC m=+0.094550121 container remove 2f3d8dc3c20a1ca236e1c6eaddf9538f0c868bac6a90acc5b4c7a6488b807a9e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_kepler, version=7, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, GIT_CLEAN=True, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, io.openshift.expose-services=, architecture=x86_64, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, name=rhceph)
Dec 6 05:08:34 localhost systemd[1]: libpod-conmon-2f3d8dc3c20a1ca236e1c6eaddf9538f0c868bac6a90acc5b4c7a6488b807a9e.scope: Deactivated successfully.
Dec 6 05:08:34 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548789.localdomain.devices.0}] v 0)
Dec 6 05:08:34 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548789.localdomain}] v 0)
Dec 6 05:08:34 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring mon.np0005548789 (monmap changed)...
Dec 6 05:08:34 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring mon.np0005548789 (monmap changed)...
Dec 6 05:08:34 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Dec 6 05:08:34 localhost ceph-mon[298582]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 6 05:08:34 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Dec 6 05:08:34 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 6 05:08:34 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 6 05:08:34 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 6 05:08:34 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.np0005548789 on np0005548789.localdomain
Dec 6 05:08:34 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.np0005548789 on np0005548789.localdomain
Dec 6 05:08:34 localhost ceph-mon[298582]: Reconfiguring mgr.np0005548789.mzhmje (monmap changed)...
Dec 6 05:08:34 localhost ceph-mon[298582]: Reconfiguring daemon mgr.np0005548789.mzhmje on np0005548789.localdomain
Dec 6 05:08:34 localhost ceph-mon[298582]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje'
Dec 6 05:08:34 localhost ceph-mon[298582]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje'
Dec 6 05:08:34 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 6 05:08:34 localhost ceph-mgr[288591]: log_channel(audit) log [DBG] : from='client.44426 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "daemon_id": "np0005548787", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 6 05:08:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538.
Dec 6 05:08:35 localhost systemd[1]: var-lib-containers-storage-overlay-ce761430d69334dd94a31804d71973070c50376e38dd1b1c3f5c1f6dc6ff4a15-merged.mount: Deactivated successfully.
Dec 6 05:08:35 localhost podman[302628]:
Dec 6 05:08:35 localhost podman[302628]: 2025-12-06 10:08:35.101898465 +0000 UTC m=+0.078148487 container create 210320d4ba75a9e4beba51d8c7e11f3b04123b8d733dc4f0bf68baab67644166 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sad_elbakyan, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, architecture=x86_64, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, distribution-scope=public, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, vcs-type=git, build-date=2025-11-26T19:44:28Z, name=rhceph, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, ceph=True, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, GIT_CLEAN=True, RELEASE=main)
Dec 6 05:08:35 localhost systemd[1]: tmp-crun.613DUo.mount: Deactivated successfully.
Dec 6 05:08:35 localhost podman[302627]: 2025-12-06 10:08:35.130889978 +0000 UTC m=+0.112726069 container health_status d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors )
Dec 6 05:08:35 localhost systemd[1]: Started libpod-conmon-210320d4ba75a9e4beba51d8c7e11f3b04123b8d733dc4f0bf68baab67644166.scope.
Dec 6 05:08:35 localhost systemd[1]: Started libcrun container.
Dec 6 05:08:35 localhost podman[302627]: 2025-12-06 10:08:35.166561604 +0000 UTC m=+0.148397655 container exec_died d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible)
Dec 6 05:08:35 localhost podman[302628]: 2025-12-06 10:08:35.074400455 +0000 UTC m=+0.050650537 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 6 05:08:35 localhost podman[302628]: 2025-12-06 10:08:35.177603947 +0000 UTC m=+0.153853989 container init 210320d4ba75a9e4beba51d8c7e11f3b04123b8d733dc4f0bf68baab67644166 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sad_elbakyan, CEPH_POINT_RELEASE=, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, maintainer=Guillaume Abrioux , io.openshift.expose-services=, distribution-scope=public, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., ceph=True, release=1763362218, com.redhat.component=rhceph-container, GIT_BRANCH=main, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 6 05:08:35 localhost systemd[1]: d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538.service: Deactivated successfully.
Dec 6 05:08:35 localhost podman[302628]: 2025-12-06 10:08:35.186128233 +0000 UTC m=+0.162378295 container start 210320d4ba75a9e4beba51d8c7e11f3b04123b8d733dc4f0bf68baab67644166 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sad_elbakyan, GIT_CLEAN=True, vcs-type=git, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, distribution-scope=public, RELEASE=main, version=7, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, architecture=x86_64, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 6 05:08:35 localhost podman[302628]: 2025-12-06 10:08:35.186515385 +0000 UTC m=+0.162765407 container attach 210320d4ba75a9e4beba51d8c7e11f3b04123b8d733dc4f0bf68baab67644166 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sad_elbakyan, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, io.buildah.version=1.41.4, version=7, architecture=x86_64, description=Red Hat Ceph Storage 7, RELEASE=main, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, release=1763362218, com.redhat.component=rhceph-container, distribution-scope=public, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, GIT_BRANCH=main) Dec 6 05:08:35 localhost sad_elbakyan[302666]: 167 167 Dec 6 05:08:35 localhost systemd[1]: libpod-210320d4ba75a9e4beba51d8c7e11f3b04123b8d733dc4f0bf68baab67644166.scope: Deactivated successfully. Dec 6 05:08:35 localhost podman[302628]: 2025-12-06 10:08:35.190140574 +0000 UTC m=+0.166390656 container died 210320d4ba75a9e4beba51d8c7e11f3b04123b8d733dc4f0bf68baab67644166 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sad_elbakyan, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, vcs-type=git, com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, architecture=x86_64, 
org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, version=7) Dec 6 05:08:35 localhost ceph-mgr[288591]: log_channel(cluster) log [DBG] : pgmap v11: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 0 B/s wr, 10 op/s Dec 6 05:08:35 localhost podman[302671]: 2025-12-06 10:08:35.280686934 +0000 UTC m=+0.080176308 container remove 210320d4ba75a9e4beba51d8c7e11f3b04123b8d733dc4f0bf68baab67644166 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sad_elbakyan, version=7, io.openshift.expose-services=, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, distribution-scope=public, vcs-type=git, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, release=1763362218, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7) Dec 6 05:08:35 localhost systemd[1]: libpod-conmon-210320d4ba75a9e4beba51d8c7e11f3b04123b8d733dc4f0bf68baab67644166.scope: Deactivated successfully. 
Dec 6 05:08:35 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548789.localdomain.devices.0}] v 0) Dec 6 05:08:35 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548789.localdomain}] v 0) Dec 6 05:08:35 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005548790 (monmap changed)... Dec 6 05:08:35 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005548790 (monmap changed)... Dec 6 05:08:35 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005548790.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) Dec 6 05:08:35 localhost ceph-mon[298582]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548790.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 6 05:08:35 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 6 05:08:35 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 6 05:08:35 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005548790 on np0005548790.localdomain Dec 6 05:08:35 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005548790 on np0005548790.localdomain Dec 6 05:08:35 localhost nova_compute[282193]: 2025-12-06 10:08:35.560 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4996-ms timeout __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:08:35 localhost nova_compute[282193]: 2025-12-06 10:08:35.563 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:08:35 localhost nova_compute[282193]: 2025-12-06 10:08:35.563 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 6 05:08:35 localhost nova_compute[282193]: 2025-12-06 10:08:35.563 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:08:35 localhost nova_compute[282193]: 2025-12-06 10:08:35.592 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:08:35 localhost nova_compute[282193]: 2025-12-06 10:08:35.593 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:08:35 localhost ceph-mon[298582]: Reconfiguring mon.np0005548789 (monmap changed)... 
Dec 6 05:08:35 localhost ceph-mon[298582]: Reconfiguring daemon mon.np0005548789 on np0005548789.localdomain Dec 6 05:08:35 localhost ceph-mon[298582]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' Dec 6 05:08:35 localhost ceph-mon[298582]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' Dec 6 05:08:35 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548790.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 6 05:08:35 localhost ceph-mon[298582]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548790.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 6 05:08:36 localhost systemd[1]: var-lib-containers-storage-overlay-cfb8622338409a0aa7cb66aee8e9c1c408ae7d98bc5f4ace1d1c936a1a8635fe-merged.mount: Deactivated successfully. Dec 6 05:08:36 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548790.localdomain.devices.0}] v 0) Dec 6 05:08:36 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548790.localdomain}] v 0) Dec 6 05:08:36 localhost ceph-mgr[288591]: log_channel(audit) log [DBG] : from='client.34469 -' entity='client.admin' cmd=[{"prefix": "orch daemon rm", "names": ["mon.np0005548787"], "force": true, "target": ["mon-mgr", ""]}]: dispatch Dec 6 05:08:36 localhost ceph-mgr[288591]: [cephadm INFO root] Remove daemons mon.np0005548787 Dec 6 05:08:36 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Remove daemons mon.np0005548787 Dec 6 05:08:36 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command({"prefix": "quorum_status"} v 0) Dec 6 05:08:36 localhost ceph-mon[298582]: log_channel(audit) 
log [DBG] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "quorum_status"} : dispatch Dec 6 05:08:36 localhost ceph-mgr[288591]: [cephadm INFO cephadm.services.cephadmservice] Safe to remove mon.np0005548787: new quorum should be ['np0005548790', 'np0005548788', 'np0005548789'] (from ['np0005548790', 'np0005548788', 'np0005548789']) Dec 6 05:08:36 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Safe to remove mon.np0005548787: new quorum should be ['np0005548790', 'np0005548788', 'np0005548789'] (from ['np0005548790', 'np0005548788', 'np0005548789']) Dec 6 05:08:36 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring osd.0 (monmap changed)... Dec 6 05:08:36 localhost ceph-mgr[288591]: [cephadm INFO cephadm.services.cephadmservice] Removing monitor np0005548787 from monmap... Dec 6 05:08:36 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring osd.0 (monmap changed)... Dec 6 05:08:36 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Removing monitor np0005548787 from monmap... 
Dec 6 05:08:36 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0) Dec 6 05:08:36 localhost ceph-mon[298582]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch Dec 6 05:08:36 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command({"prefix": "mon rm", "name": "np0005548787"} v 0) Dec 6 05:08:36 localhost ceph-mon[298582]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "mon rm", "name": "np0005548787"} : dispatch Dec 6 05:08:36 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e12 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Dec 6 05:08:36 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch Dec 6 05:08:36 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Removing daemon mon.np0005548787 from np0005548787.localdomain -- ports [] Dec 6 05:08:36 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Removing daemon mon.np0005548787 from np0005548787.localdomain -- ports [] Dec 6 05:08:36 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.0 on np0005548790.localdomain Dec 6 05:08:36 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.0 on np0005548790.localdomain Dec 6 05:08:36 localhost ceph-mgr[288591]: client.44398 ms_handle_reset on v2:172.18.0.103:3300/0 Dec 6 05:08:36 localhost ceph-mon[298582]: mon.np0005548789@3(peon) e13 my rank is now 2 (was 3) Dec 6 05:08:36 localhost ceph-mgr[288591]: client.44398 ms_handle_reset on v2:172.18.0.104:3300/0 Dec 6 05:08:36 localhost ceph-mgr[288591]: client.54179 ms_handle_reset on 
v2:172.18.0.104:3300/0 Dec 6 05:08:36 localhost ceph-mgr[288591]: client.0 ms_handle_reset on v2:172.18.0.104:3300/0 Dec 6 05:08:36 localhost ceph-mgr[288591]: client.0 ms_handle_reset on v2:172.18.0.104:3300/0 Dec 6 05:08:36 localhost ceph-mon[298582]: log_channel(cluster) log [INF] : mon.np0005548789 calling monitor election Dec 6 05:08:36 localhost ceph-mon[298582]: paxos.2).electionLogic(56) init, last seen epoch 56 Dec 6 05:08:36 localhost ceph-mon[298582]: mon.np0005548789@2(electing) e13 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 6 05:08:36 localhost ceph-mon[298582]: mon.np0005548789@2(electing) e13 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 6 05:08:37 localhost ceph-mgr[288591]: log_channel(cluster) log [DBG] : pgmap v12: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail Dec 6 05:08:38 localhost ceph-mon[298582]: mon.np0005548789@2(electing) e13 handle_auth_request failed to assign global_id Dec 6 05:08:39 localhost ceph-mon[298582]: mon.np0005548789@2(electing) e13 handle_auth_request failed to assign global_id Dec 6 05:08:39 localhost ceph-mgr[288591]: log_channel(cluster) log [DBG] : pgmap v13: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail Dec 6 05:08:39 localhost ceph-mon[298582]: mon.np0005548789@2(electing) e13 handle_auth_request failed to assign global_id Dec 6 05:08:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5. 
Dec 6 05:08:39 localhost podman[302687]: 2025-12-06 10:08:39.928394061 +0000 UTC m=+0.081974731 container health_status 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125) Dec 6 05:08:39 localhost podman[302687]: 2025-12-06 10:08:39.971381267 +0000 UTC m=+0.124961917 container exec_died 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, 
org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Dec 6 05:08:39 localhost systemd[1]: 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.service: Deactivated successfully. Dec 6 05:08:40 localhost ceph-mon[298582]: mon.np0005548789@2(electing) e13 handle_auth_request failed to assign global_id Dec 6 05:08:40 localhost ceph-mds[287313]: mds.beacon.mds.np0005548789.vxwwsq missed beacon ack from the monitors Dec 6 05:08:40 localhost nova_compute[282193]: 2025-12-06 10:08:40.593 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:08:41 localhost ceph-mgr[288591]: log_channel(cluster) log [DBG] : pgmap v14: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail Dec 6 05:08:41 localhost ceph-mon[298582]: mon.np0005548789@2(peon) e13 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 6 05:08:41 localhost ceph-mon[298582]: Reconfiguring crash.np0005548790 (monmap changed)... 
Dec 6 05:08:41 localhost ceph-mon[298582]: Reconfiguring daemon crash.np0005548790 on np0005548790.localdomain Dec 6 05:08:41 localhost ceph-mon[298582]: Remove daemons mon.np0005548787 Dec 6 05:08:41 localhost ceph-mon[298582]: Safe to remove mon.np0005548787: new quorum should be ['np0005548790', 'np0005548788', 'np0005548789'] (from ['np0005548790', 'np0005548788', 'np0005548789']) Dec 6 05:08:41 localhost ceph-mon[298582]: Reconfiguring osd.0 (monmap changed)... Dec 6 05:08:41 localhost ceph-mon[298582]: Removing monitor np0005548787 from monmap... Dec 6 05:08:41 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch Dec 6 05:08:41 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "mon rm", "name": "np0005548787"} : dispatch Dec 6 05:08:41 localhost ceph-mon[298582]: Removing daemon mon.np0005548787 from np0005548787.localdomain -- ports [] Dec 6 05:08:41 localhost ceph-mon[298582]: Reconfiguring daemon osd.0 on np0005548790.localdomain Dec 6 05:08:41 localhost ceph-mon[298582]: mon.np0005548790 calling monitor election Dec 6 05:08:41 localhost ceph-mon[298582]: mon.np0005548789 calling monitor election Dec 6 05:08:41 localhost ceph-mon[298582]: mon.np0005548790 is new leader, mons np0005548790,np0005548789 in quorum (ranks 0,2) Dec 6 05:08:41 localhost ceph-mon[298582]: Health check failed: 1/3 mons down, quorum np0005548790,np0005548789 (MON_DOWN) Dec 6 05:08:41 localhost ceph-mon[298582]: Health detail: HEALTH_WARN 1 stray daemon(s) not managed by cephadm; 1 stray host(s) with 1 daemon(s) not managed by cephadm; 1/3 mons down, quorum np0005548790,np0005548789 Dec 6 05:08:41 localhost ceph-mon[298582]: [WRN] CEPHADM_STRAY_DAEMON: 1 stray daemon(s) not managed by cephadm Dec 6 05:08:41 localhost ceph-mon[298582]: stray daemon mgr.np0005548785.vhqlsq on host np0005548785.localdomain not 
managed by cephadm Dec 6 05:08:41 localhost ceph-mon[298582]: [WRN] CEPHADM_STRAY_HOST: 1 stray host(s) with 1 daemon(s) not managed by cephadm Dec 6 05:08:41 localhost ceph-mon[298582]: stray host np0005548785.localdomain has 1 stray daemons: ['mgr.np0005548785.vhqlsq'] Dec 6 05:08:41 localhost ceph-mon[298582]: [WRN] MON_DOWN: 1/3 mons down, quorum np0005548790,np0005548789 Dec 6 05:08:41 localhost ceph-mon[298582]: mon.np0005548788 (rank 1) addr [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] is down (out of quorum) Dec 6 05:08:41 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' Dec 6 05:08:41 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' Dec 6 05:08:41 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring osd.3 (monmap changed)... Dec 6 05:08:41 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring osd.3 (monmap changed)... Dec 6 05:08:41 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.3 on np0005548790.localdomain Dec 6 05:08:41 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.3 on np0005548790.localdomain Dec 6 05:08:41 localhost ceph-mon[298582]: mon.np0005548789@2(peon) e13 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Dec 6 05:08:41 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2722608319' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Dec 6 05:08:41 localhost ceph-mon[298582]: mon.np0005548789@2(peon) e13 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Dec 6 05:08:41 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/2722608319' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Dec 6 05:08:42 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' Dec 6 05:08:42 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' Dec 6 05:08:42 localhost ceph-mon[298582]: Reconfiguring osd.3 (monmap changed)... Dec 6 05:08:42 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch Dec 6 05:08:42 localhost ceph-mon[298582]: Reconfiguring daemon osd.3 on np0005548790.localdomain Dec 6 05:08:42 localhost ceph-mon[298582]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #16. Immutable memtables: 0. Dec 6 05:08:42 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:08:42.478207) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Dec 6 05:08:42 localhost ceph-mon[298582]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 16 Dec 6 05:08:42 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015722478260, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 2567, "num_deletes": 272, "total_data_size": 10789467, "memory_usage": 11593992, "flush_reason": "Manual Compaction"} Dec 6 05:08:42 localhost ceph-mon[298582]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #17: started Dec 6 05:08:42 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015722520175, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 17, "file_size": 7196772, "file_checksum": "", 
"file_checksum_func_name": "Unknown", "smallest_seqno": 10571, "largest_seqno": 13137, "table_properties": {"data_size": 7184980, "index_size": 7273, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3461, "raw_key_size": 31353, "raw_average_key_size": 22, "raw_value_size": 7158759, "raw_average_value_size": 5221, "num_data_blocks": 313, "num_entries": 1371, "num_filter_entries": 1371, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765015627, "oldest_key_time": 1765015627, "file_creation_time": 1765015722, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1ec2a3a6-c7d0-4f1e-9923-5a3b782725ae", "db_session_id": "JMBO5KX1IJCJ8FWC64EX", "orig_file_number": 17, "seqno_to_time_mapping": "N/A"}} Dec 6 05:08:42 localhost ceph-mon[298582]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 42018 microseconds, and 9532 cpu microseconds. Dec 6 05:08:42 localhost ceph-mon[298582]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Dec 6 05:08:42 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:08:42.520230) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #17: 7196772 bytes OK
Dec 6 05:08:42 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:08:42.520253) [db/memtable_list.cc:519] [default] Level-0 commit table #17 started
Dec 6 05:08:42 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:08:42.521657) [db/memtable_list.cc:722] [default] Level-0 commit table #17: memtable #1 done
Dec 6 05:08:42 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:08:42.521675) EVENT_LOG_v1 {"time_micros": 1765015722521670, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 6 05:08:42 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:08:42.521693) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 6 05:08:42 localhost ceph-mon[298582]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 10775666, prev total WAL file size 10781099, number of live WAL files 2.
Dec 6 05:08:42 localhost ceph-mon[298582]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548789/store.db/000013.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 6 05:08:42 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:08:42.523596) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003130373933' seq:72057594037927935, type:22 .. '7061786F73003131303435' seq:0, type:0; will stop at (end)
Dec 6 05:08:42 localhost ceph-mon[298582]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 6 05:08:42 localhost ceph-mon[298582]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [17(7028KB)], [15(12MB)]
Dec 6 05:08:42 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015722523681, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [17], "files_L6": [15], "score": -1, "input_data_size": 20664240, "oldest_snapshot_seqno": -1}
Dec 6 05:08:42 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005548790.vhcezv (monmap changed)...
Dec 6 05:08:42 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005548790.vhcezv (monmap changed)...
Dec 6 05:08:42 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring daemon mds.mds.np0005548790.vhcezv on np0005548790.localdomain
Dec 6 05:08:42 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring daemon mds.mds.np0005548790.vhcezv on np0005548790.localdomain
Dec 6 05:08:42 localhost ceph-mon[298582]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #18: 11123 keys, 17441653 bytes, temperature: kUnknown
Dec 6 05:08:42 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015722623010, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 18, "file_size": 17441653, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17376655, "index_size": 36097, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27845, "raw_key_size": 297757, "raw_average_key_size": 26, "raw_value_size": 17185459, "raw_average_value_size": 1545, "num_data_blocks": 1384, "num_entries": 11123, "num_filter_entries": 11123, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765015623, "oldest_key_time": 0, "file_creation_time": 1765015722, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1ec2a3a6-c7d0-4f1e-9923-5a3b782725ae", "db_session_id": "JMBO5KX1IJCJ8FWC64EX", "orig_file_number": 18, "seqno_to_time_mapping": "N/A"}}
Dec 6 05:08:42 localhost ceph-mon[298582]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 6 05:08:42 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:08:42.623245) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 17441653 bytes
Dec 6 05:08:42 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:08:42.624804) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 208.5 rd, 176.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(6.9, 12.8 +0.0 blob) out(16.6 +0.0 blob), read-write-amplify(5.3) write-amplify(2.4) OK, records in: 11676, records dropped: 553 output_compression: NoCompression
Dec 6 05:08:42 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:08:42.624825) EVENT_LOG_v1 {"time_micros": 1765015722624816, "job": 6, "event": "compaction_finished", "compaction_time_micros": 99113, "compaction_time_cpu_micros": 45640, "output_level": 6, "num_output_files": 1, "total_output_size": 17441653, "num_input_records": 11676, "num_output_records": 11123, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 6 05:08:42 localhost ceph-mon[298582]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548789/store.db/000017.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 6 05:08:42 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015722625543, "job": 6, "event": "table_file_deletion", "file_number": 17}
Dec 6 05:08:42 localhost ceph-mon[298582]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548789/store.db/000015.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 6 05:08:42 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015722626979, "job": 6, "event": "table_file_deletion", "file_number": 15}
Dec 6 05:08:42 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:08:42.523446) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 6 05:08:42 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:08:42.627004) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 6 05:08:42 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:08:42.627008) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 6 05:08:42 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:08:42.627010) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 6 05:08:42 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:08:42.627012) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 6 05:08:42 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:08:42.627014) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 6 05:08:42 localhost ceph-mgr[288591]: log_channel(audit) log [DBG] : from='client.34476 -' entity='client.admin' cmd=[{"prefix": "orch host label rm", "hostname": "np0005548787.localdomain", "label": "mon", "target": ["mon-mgr", ""]}]: dispatch
Dec 6 05:08:42 localhost ceph-mgr[288591]: [cephadm INFO root] Removed label mon from host np0005548787.localdomain
Dec 6 05:08:42 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Removed label mon from host np0005548787.localdomain
Dec 6 05:08:43 localhost ceph-mon[298582]: mon.np0005548789@2(peon).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 6 05:08:43 localhost ceph-mgr[288591]: log_channel(cluster) log [DBG] : pgmap v15: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail
Dec 6 05:08:43 localhost ceph-mon[298582]: mon.np0005548789@2(electing) e13 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Dec 6 05:08:43 localhost ceph-mon[298582]: mon.np0005548789@2(electing) e13 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Dec 6 05:08:43 localhost ceph-mon[298582]: mon.np0005548789@2(peon) e13 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Dec 6 05:08:43 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005548790.kvkfyr (monmap changed)...
Dec 6 05:08:43 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005548790.kvkfyr (monmap changed)...
Dec 6 05:08:43 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005548790.kvkfyr on np0005548790.localdomain
Dec 6 05:08:43 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005548790.kvkfyr on np0005548790.localdomain
Dec 6 05:08:43 localhost ceph-mon[298582]: mon.np0005548788 calling monitor election
Dec 6 05:08:43 localhost ceph-mon[298582]: mon.np0005548790 calling monitor election
Dec 6 05:08:43 localhost ceph-mon[298582]: mon.np0005548790 is new leader, mons np0005548790,np0005548788,np0005548789 in quorum (ranks 0,1,2)
Dec 6 05:08:43 localhost ceph-mon[298582]: Health check cleared: MON_DOWN (was: 1/3 mons down, quorum np0005548790,np0005548789)
Dec 6 05:08:43 localhost ceph-mon[298582]: Health detail: HEALTH_WARN 1 stray daemon(s) not managed by cephadm; 1 stray host(s) with 1 daemon(s) not managed by cephadm
Dec 6 05:08:43 localhost ceph-mon[298582]: [WRN] CEPHADM_STRAY_DAEMON: 1 stray daemon(s) not managed by cephadm
Dec 6 05:08:43 localhost ceph-mon[298582]: stray daemon mgr.np0005548785.vhqlsq on host np0005548785.localdomain not managed by cephadm
Dec 6 05:08:43 localhost ceph-mon[298582]: [WRN] CEPHADM_STRAY_HOST: 1 stray host(s) with 1 daemon(s) not managed by cephadm
Dec 6 05:08:43 localhost ceph-mon[298582]: stray host np0005548785.localdomain has 1 stray daemons: ['mgr.np0005548785.vhqlsq']
Dec 6 05:08:43 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje'
Dec 6 05:08:43 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje'
Dec 6 05:08:43 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548790.kvkfyr", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 6 05:08:44 localhost ceph-mgr[288591]: log_channel(audit) log [DBG] : from='client.44452 -' entity='client.admin' cmd=[{"prefix": "orch host label rm", "hostname": "np0005548787.localdomain", "label": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Dec 6 05:08:44 localhost ceph-mgr[288591]: [cephadm INFO root] Removed label mgr from host np0005548787.localdomain
Dec 6 05:08:44 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Removed label mgr from host np0005548787.localdomain
Dec 6 05:08:44 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring mon.np0005548790 (monmap changed)...
Dec 6 05:08:44 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring mon.np0005548790 (monmap changed)...
Dec 6 05:08:44 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.np0005548790 on np0005548790.localdomain
Dec 6 05:08:44 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.np0005548790 on np0005548790.localdomain
Dec 6 05:08:44 localhost ceph-mon[298582]: Reconfiguring mgr.np0005548790.kvkfyr (monmap changed)...
Dec 6 05:08:44 localhost ceph-mon[298582]: Reconfiguring daemon mgr.np0005548790.kvkfyr on np0005548790.localdomain
Dec 6 05:08:44 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje'
Dec 6 05:08:44 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje'
Dec 6 05:08:44 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje'
Dec 6 05:08:44 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 6 05:08:45 localhost ceph-mgr[288591]: log_channel(cluster) log [DBG] : pgmap v16: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail
Dec 6 05:08:45 localhost ceph-mgr[288591]: log_channel(audit) log [DBG] : from='client.44458 -' entity='client.admin' cmd=[{"prefix": "orch host label rm", "hostname": "np0005548787.localdomain", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 6 05:08:45 localhost ceph-mgr[288591]: [cephadm INFO root] Removed label _admin from host np0005548787.localdomain
Dec 6 05:08:45 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Removed label _admin from host np0005548787.localdomain
Dec 6 05:08:45 localhost nova_compute[282193]: 2025-12-06 10:08:45.596 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 6 05:08:45 localhost nova_compute[282193]: 2025-12-06 10:08:45.599 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 6 05:08:45 localhost nova_compute[282193]: 2025-12-06 10:08:45.599 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Dec 6 05:08:45 localhost nova_compute[282193]: 2025-12-06 10:08:45.599 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec 6 05:08:45 localhost nova_compute[282193]: 2025-12-06 10:08:45.631 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 6 05:08:45 localhost nova_compute[282193]: 2025-12-06 10:08:45.632 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec 6 05:08:45 localhost ceph-mon[298582]: Removed label mgr from host np0005548787.localdomain
Dec 6 05:08:45 localhost ceph-mon[298582]: Reconfiguring mon.np0005548790 (monmap changed)...
Dec 6 05:08:45 localhost ceph-mon[298582]: Reconfiguring daemon mon.np0005548790 on np0005548790.localdomain
Dec 6 05:08:45 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje'
Dec 6 05:08:45 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje'
Dec 6 05:08:45 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje'
Dec 6 05:08:46 localhost openstack_network_exporter[243110]: ERROR 10:08:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 6 05:08:46 localhost openstack_network_exporter[243110]: ERROR 10:08:46 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 6 05:08:46 localhost openstack_network_exporter[243110]: ERROR 10:08:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 6 05:08:46 localhost openstack_network_exporter[243110]: ERROR 10:08:46 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 6 05:08:46 localhost openstack_network_exporter[243110]:
Dec 6 05:08:46 localhost openstack_network_exporter[243110]: ERROR 10:08:46 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 6 05:08:46 localhost openstack_network_exporter[243110]:
Dec 6 05:08:47 localhost ceph-mgr[288591]: log_channel(cluster) log [DBG] : pgmap v17: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail
Dec 6 05:08:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:08:47.299 160509 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 6 05:08:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:08:47.299 160509 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 6 05:08:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:08:47.300 160509 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 6 05:08:48 localhost ceph-mon[298582]: mon.np0005548789@2(peon).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 6 05:08:48 localhost ceph-mon[298582]: Removed label _admin from host np0005548787.localdomain
Dec 6 05:08:48 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Removing np0005548787.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 6 05:08:48 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Removing np0005548787.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 6 05:08:48 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Updating np0005548788.localdomain:/etc/ceph/ceph.conf
Dec 6 05:08:48 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Updating np0005548788.localdomain:/etc/ceph/ceph.conf
Dec 6 05:08:48 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Updating np0005548789.localdomain:/etc/ceph/ceph.conf
Dec 6 05:08:48 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Updating np0005548790.localdomain:/etc/ceph/ceph.conf
Dec 6 05:08:48 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Updating np0005548789.localdomain:/etc/ceph/ceph.conf
Dec 6 05:08:48 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Updating np0005548790.localdomain:/etc/ceph/ceph.conf
Dec 6 05:08:48 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Removing np0005548787.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 6 05:08:48 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Removing np0005548787.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 6 05:08:48 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Removing np0005548787.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring
Dec 6 05:08:48 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Removing np0005548787.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring
Dec 6 05:08:49 localhost ceph-mgr[288591]: log_channel(cluster) log [DBG] : pgmap v18: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail
Dec 6 05:08:49 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Updating np0005548790.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 6 05:08:49 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Updating np0005548790.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 6 05:08:49 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Updating np0005548789.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 6 05:08:49 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Updating np0005548789.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 6 05:08:49 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Updating np0005548788.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 6 05:08:49 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Updating np0005548788.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 6 05:08:49 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje'
Dec 6 05:08:49 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje'
Dec 6 05:08:49 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 6 05:08:49 localhost ceph-mon[298582]: Removing np0005548787.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 6 05:08:49 localhost ceph-mon[298582]: Updating np0005548788.localdomain:/etc/ceph/ceph.conf
Dec 6 05:08:49 localhost ceph-mon[298582]: Updating np0005548789.localdomain:/etc/ceph/ceph.conf
Dec 6 05:08:49 localhost ceph-mon[298582]: Updating np0005548790.localdomain:/etc/ceph/ceph.conf
Dec 6 05:08:49 localhost ceph-mon[298582]: Removing np0005548787.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 6 05:08:49 localhost ceph-mon[298582]: Removing np0005548787.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring
Dec 6 05:08:49 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje'
Dec 6 05:08:49 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje'
Dec 6 05:08:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.
Dec 6 05:08:50 localhost podman[303015]: 2025-12-06 10:08:50.129629781 +0000 UTC m=+0.119757470 container health_status 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Dec 6 05:08:50 localhost podman[303015]: 2025-12-06 10:08:50.163021858 +0000 UTC m=+0.153149517 container exec_died 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec 6 05:08:50 localhost systemd[1]: 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.service: Deactivated successfully.
Dec 6 05:08:50 localhost ceph-mgr[288591]: [progress INFO root] update: starting ev 4259bd03-1a0b-415c-a9e9-199a778c03cd (Updating mgr deployment (-1 -> 3))
Dec 6 05:08:50 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Removing daemon mgr.np0005548787.umwsra from np0005548787.localdomain -- ports [8765]
Dec 6 05:08:50 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Removing daemon mgr.np0005548787.umwsra from np0005548787.localdomain -- ports [8765]
Dec 6 05:08:50 localhost nova_compute[282193]: 2025-12-06 10:08:50.633 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 6 05:08:50 localhost nova_compute[282193]: 2025-12-06 10:08:50.635 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 6 05:08:50 localhost nova_compute[282193]: 2025-12-06 10:08:50.635 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Dec 6 05:08:50 localhost nova_compute[282193]: 2025-12-06 10:08:50.635 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec 6 05:08:50 localhost ceph-mgr[288591]: [volumes INFO mgr_util] scanning for idle connections..
Dec 6 05:08:50 localhost ceph-mgr[288591]: [volumes INFO mgr_util] cleaning up connections: []
Dec 6 05:08:50 localhost nova_compute[282193]: 2025-12-06 10:08:50.671 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 6 05:08:50 localhost ceph-mgr[288591]: [volumes INFO mgr_util] scanning for idle connections..
Dec 6 05:08:50 localhost ceph-mgr[288591]: [volumes INFO mgr_util] cleaning up connections: []
Dec 6 05:08:50 localhost nova_compute[282193]: 2025-12-06 10:08:50.672 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec 6 05:08:50 localhost ceph-mgr[288591]: [volumes INFO mgr_util] scanning for idle connections..
Dec 6 05:08:50 localhost ceph-mgr[288591]: [volumes INFO mgr_util] cleaning up connections: []
Dec 6 05:08:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941.
Dec 6 05:08:50 localhost systemd[1]: tmp-crun.kVY8vI.mount: Deactivated successfully.
Dec 6 05:08:50 localhost podman[303050]: 2025-12-06 10:08:50.909421027 +0000 UTC m=+0.078815978 container health_status b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 6 05:08:50 localhost ceph-mon[298582]: Updating np0005548790.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 6 05:08:50 localhost ceph-mon[298582]: Updating np0005548789.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 6 05:08:50 localhost ceph-mon[298582]: Updating np0005548788.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 6 05:08:50 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje'
Dec 6 05:08:50 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje'
Dec 6 05:08:50 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje'
Dec 6 05:08:50 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje'
Dec 6 05:08:50 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje'
Dec 6 05:08:50 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje'
Dec 6 05:08:50 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje'
Dec 6 05:08:50 localhost podman[303050]: 2025-12-06 10:08:50.950439433 +0000 UTC m=+0.119834374 container exec_died b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 6 05:08:50 localhost systemd[1]: b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941.service: Deactivated successfully.
Dec 6 05:08:51 localhost ceph-mgr[288591]: log_channel(cluster) log [DBG] : pgmap v19: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail
Dec 6 05:08:51 localhost ceph-mon[298582]: Removing daemon mgr.np0005548787.umwsra from np0005548787.localdomain -- ports [8765]
Dec 6 05:08:51 localhost ceph-mon[298582]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #19. Immutable memtables: 0.
Dec 6 05:08:51 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:08:51.957432) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 6 05:08:51 localhost ceph-mon[298582]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 19
Dec 6 05:08:51 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015731957480, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 630, "num_deletes": 250, "total_data_size": 765562, "memory_usage": 778872, "flush_reason": "Manual Compaction"}
Dec 6 05:08:51 localhost ceph-mon[298582]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #20: started
Dec 6 05:08:51 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015731963321, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 20, "file_size": 453098, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13142, "largest_seqno": 13767, "table_properties": {"data_size": 449748, "index_size": 1205, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8100, "raw_average_key_size": 19, "raw_value_size": 442687, "raw_average_value_size": 1054, "num_data_blocks": 48, "num_entries": 420, "num_filter_entries": 420, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765015722, "oldest_key_time": 1765015722, "file_creation_time": 1765015731, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1ec2a3a6-c7d0-4f1e-9923-5a3b782725ae", "db_session_id": "JMBO5KX1IJCJ8FWC64EX", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Dec 6 05:08:51 localhost ceph-mon[298582]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 5912 microseconds, and 2282 cpu microseconds.
Dec 6 05:08:51 localhost ceph-mon[298582]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 6 05:08:51 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:08:51.963350) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #20: 453098 bytes OK
Dec 6 05:08:51 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:08:51.963364) [db/memtable_list.cc:519] [default] Level-0 commit table #20 started
Dec 6 05:08:51 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:08:51.964934) [db/memtable_list.cc:722] [default] Level-0 commit table #20: memtable #1 done
Dec 6 05:08:51 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:08:51.964946) EVENT_LOG_v1 {"time_micros": 1765015731964943, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 6 05:08:51 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:08:51.964959) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 6 05:08:51 localhost ceph-mon[298582]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 761862, prev total WAL file size 761862, number of live WAL files 2.
Dec 6 05:08:51 localhost ceph-mon[298582]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548789/store.db/000016.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 6 05:08:51 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:08:51.965291) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760031323836' seq:72057594037927935, type:22 .. '6B760031353337' seq:0, type:0; will stop at (end)
Dec 6 05:08:51 localhost ceph-mon[298582]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 6 05:08:51 localhost ceph-mon[298582]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [20(442KB)], [18(16MB)]
Dec 6 05:08:51 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015731965351, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [20], "files_L6": [18], "score": -1, "input_data_size": 17894751, "oldest_snapshot_seqno": -1}
Dec 6 05:08:52 localhost ceph-mon[298582]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #21: 11016 keys, 16889865 bytes, temperature: kUnknown
Dec 6 05:08:52 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015732047602, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 21, "file_size": 16889865, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16826454, "index_size": 34766, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27589, "raw_key_size": 297268, "raw_average_key_size": 26, "raw_value_size": 16637818, "raw_average_value_size": 1510, "num_data_blocks": 1311, "num_entries": 11016, "num_filter_entries": 11016, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765015623, "oldest_key_time": 0, "file_creation_time": 1765015731, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1ec2a3a6-c7d0-4f1e-9923-5a3b782725ae", "db_session_id": "JMBO5KX1IJCJ8FWC64EX", "orig_file_number": 21, "seqno_to_time_mapping": "N/A"}}
Dec 6 05:08:52 localhost ceph-mon[298582]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 6 05:08:52 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:08:52.047944) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 16889865 bytes
Dec 6 05:08:52 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:08:52.049890) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 217.3 rd, 205.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 16.6 +0.0 blob) out(16.1 +0.0 blob), read-write-amplify(76.8) write-amplify(37.3) OK, records in: 11543, records dropped: 527 output_compression: NoCompression
Dec 6 05:08:52 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:08:52.049922) EVENT_LOG_v1 {"time_micros": 1765015732049908, "job": 8, "event": "compaction_finished", "compaction_time_micros": 82339, "compaction_time_cpu_micros": 36245, "output_level": 6, "num_output_files": 1, "total_output_size": 16889865, "num_input_records": 11543, "num_output_records": 11016, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 6 05:08:52 localhost ceph-mon[298582]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548789/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 6 05:08:52 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015732050137, "job": 8, "event": "table_file_deletion", "file_number": 20}
Dec 6 05:08:52 localhost ceph-mon[298582]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548789/store.db/000018.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 6 05:08:52 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015732053118, "job": 8, "event": "table_file_deletion", "file_number": 18}
Dec 6 05:08:52 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:08:51.965229) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 6 05:08:52 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:08:52.053160) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 6 05:08:52 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:08:52.053164) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 6 05:08:52 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:08:52.053167) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 6 05:08:52 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:08:52.053168) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 6 05:08:52 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:08:52.053170) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 6 05:08:52 localhost ceph-mgr[288591]: [cephadm INFO cephadm.services.cephadmservice] Removing key for mgr.np0005548787.umwsra
Dec 6 05:08:52 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Removing key for mgr.np0005548787.umwsra
Dec 6 05:08:52 localhost ceph-mgr[288591]: [progress INFO root] complete: finished ev 4259bd03-1a0b-415c-a9e9-199a778c03cd (Updating mgr deployment (-1 -> 3))
Dec 6 05:08:52 localhost ceph-mgr[288591]: [progress INFO root] Completed event 4259bd03-1a0b-415c-a9e9-199a778c03cd (Updating mgr deployment (-1 -> 3)) in 2 seconds
Dec 6 05:08:52 localhost ceph-mgr[288591]: [progress INFO root] update: starting ev d378aec2-e8d0-4148-bcb8-8a9d09701394 (Updating node-proxy deployment (+4 -> 4))
Dec 6 05:08:52 localhost ceph-mgr[288591]: [progress INFO root] complete: finished ev d378aec2-e8d0-4148-bcb8-8a9d09701394 (Updating node-proxy deployment (+4 -> 4))
Dec 6 05:08:52 localhost ceph-mgr[288591]: [progress INFO root] Completed event d378aec2-e8d0-4148-bcb8-8a9d09701394 (Updating node-proxy deployment (+4 -> 4)) in 0 seconds
Dec 6 05:08:52 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth rm", "entity": "mgr.np0005548787.umwsra"} : dispatch
Dec 6 05:08:52 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd='[{"prefix": "auth rm", "entity": "mgr.np0005548787.umwsra"}]': finished
Dec 6 05:08:52 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje'
Dec 6 05:08:52 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje'
Dec 6 05:08:53 localhost ceph-mon[298582]: mon.np0005548789@2(peon).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 6 05:08:53 localhost ceph-mgr[288591]: log_channel(cluster) log [DBG] : pgmap v20: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail
Dec 6 05:08:53 localhost podman[241090]: time="2025-12-06T10:08:53Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 6 05:08:53 localhost podman[241090]: @ - - [06/Dec/2025:10:08:53 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156104 "" "Go-http-client/1.1"
Dec 6 05:08:53 localhost ceph-mon[298582]: Removing key for mgr.np0005548787.umwsra
Dec 6 05:08:53 localhost podman[241090]: @ - - [06/Dec/2025:10:08:53 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19228 "" "Go-http-client/1.1"
Dec 6 05:08:54 localhost ceph-mgr[288591]: [progress INFO root] update: starting ev 9c69ec7c-cfed-449b-bbea-090d12f32dd8 (Updating node-proxy deployment (+4 -> 4))
Dec 6 05:08:54 localhost ceph-mgr[288591]: [progress INFO root] complete: finished ev 9c69ec7c-cfed-449b-bbea-090d12f32dd8 (Updating node-proxy deployment (+4 -> 4))
Dec 6 05:08:54 localhost ceph-mgr[288591]: [progress INFO root] Completed event 9c69ec7c-cfed-449b-bbea-090d12f32dd8 (Updating node-proxy deployment (+4 -> 4)) in 0 seconds
Dec 6 05:08:54 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005548787 (monmap changed)...
Dec 6 05:08:54 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005548787 (monmap changed)...
Dec 6 05:08:54 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005548787 on np0005548787.localdomain
Dec 6 05:08:54 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005548787 on np0005548787.localdomain
Dec 6 05:08:55 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje'
Dec 6 05:08:55 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje'
Dec 6 05:08:55 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 6 05:08:55 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje'
Dec 6 05:08:55 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548787.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 6 05:08:55 localhost nova_compute[282193]: 2025-12-06 10:08:55.073 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 6 05:08:55 localhost nova_compute[282193]: 2025-12-06 10:08:55.093 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Triggering sync for uuid b7ed0a2e-9350-4933-9334-4e5e08d3e6aa _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Dec 6 05:08:55 localhost nova_compute[282193]: 2025-12-06 10:08:55.094 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" by "nova.compute.manager.ComputeManager._sync_power_states.._sync..query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 6 05:08:55 localhost nova_compute[282193]: 2025-12-06 10:08:55.094 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" acquired by "nova.compute.manager.ComputeManager._sync_power_states.._sync..query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 6 05:08:55 localhost nova_compute[282193]: 2025-12-06 10:08:55.121 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" "released" by "nova.compute.manager.ComputeManager._sync_power_states.._sync..query_driver_power_state_and_sync" :: held 0.027s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 6 05:08:55 localhost ceph-mgr[288591]: log_channel(cluster) log [DBG] : pgmap v21: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail
Dec 6 05:08:55 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005548788 (monmap changed)...
Dec 6 05:08:55 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005548788 (monmap changed)...
Dec 6 05:08:55 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005548788 on np0005548788.localdomain
Dec 6 05:08:55 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005548788 on np0005548788.localdomain
Dec 6 05:08:55 localhost ceph-mgr[288591]: [progress INFO root] Writing back 50 completed events
Dec 6 05:08:55 localhost nova_compute[282193]: 2025-12-06 10:08:55.673 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 6 05:08:55 localhost nova_compute[282193]: 2025-12-06 10:08:55.675 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 6 05:08:55 localhost nova_compute[282193]: 2025-12-06 10:08:55.676 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Dec 6 05:08:55 localhost nova_compute[282193]: 2025-12-06 10:08:55.676 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec 6 05:08:55 localhost nova_compute[282193]: 2025-12-06 10:08:55.708 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 6 05:08:55 localhost nova_compute[282193]: 2025-12-06 10:08:55.709 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec 6 05:08:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e.
Dec 6 05:08:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094.
Dec 6 05:08:55 localhost podman[303112]: 2025-12-06 10:08:55.927228761 +0000 UTC m=+0.084020664 container health_status bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, container_name=ceilometer_agent_compute)
Dec 6 05:08:55 localhost podman[303112]: 2025-12-06 10:08:55.965104752 +0000 UTC m=+0.121896665 container exec_died bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.license=GPLv2)
Dec 6 05:08:55 localhost systemd[1]: tmp-crun.RrNnuT.mount: Deactivated successfully.
Dec 6 05:08:55 localhost podman[303111]: 2025-12-06 10:08:55.986957951 +0000 UTC m=+0.146855677 container health_status 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, version=9.6, name=ubi9-minimal, architecture=x86_64, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, release=1755695350, distribution-scope=public, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 6 05:08:55 localhost systemd[1]: bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094.service: Deactivated successfully.
Dec 6 05:08:56 localhost podman[303111]: 2025-12-06 10:08:56.00019173 +0000 UTC m=+0.160089516 container exec_died 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, release=1755695350, version=9.6, managed_by=edpm_ansible, maintainer=Red Hat, Inc., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, architecture=x86_64, build-date=2025-08-20T13:12:41, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., container_name=openstack_network_exporter, io.buildah.version=1.33.7, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 6 05:08:56 localhost systemd[1]: 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e.service: Deactivated successfully.
Dec 6 05:08:56 localhost ceph-mon[298582]: Reconfiguring crash.np0005548787 (monmap changed)...
Dec 6 05:08:56 localhost ceph-mon[298582]: Reconfiguring daemon crash.np0005548787 on np0005548787.localdomain
Dec 6 05:08:56 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje'
Dec 6 05:08:56 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje'
Dec 6 05:08:56 localhost ceph-mon[298582]: Reconfiguring crash.np0005548788 (monmap changed)...
Dec 6 05:08:56 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548788.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 6 05:08:56 localhost ceph-mon[298582]: Reconfiguring daemon crash.np0005548788 on np0005548788.localdomain
Dec 6 05:08:56 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje'
Dec 6 05:08:56 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring osd.2 (monmap changed)...
Dec 6 05:08:56 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring osd.2 (monmap changed)...
Dec 6 05:08:56 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.2 on np0005548788.localdomain
Dec 6 05:08:56 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.2 on np0005548788.localdomain
Dec 6 05:08:56 localhost ceph-mgr[288591]: log_channel(audit) log [DBG] : from='client.54257 -' entity='client.admin' cmd=[{"prefix": "orch host drain", "hostname": "np0005548787.localdomain", "target": ["mon-mgr", ""]}]: dispatch
Dec 6 05:08:56 localhost ceph-mgr[288591]: [cephadm INFO root] Added label _no_schedule to host np0005548787.localdomain
Dec 6 05:08:56 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Added label _no_schedule to host np0005548787.localdomain
Dec 6 05:08:56 localhost ceph-mgr[288591]: [cephadm INFO root] Added label SpecialHostLabels.DRAIN_CONF_KEYRING to host np0005548787.localdomain
Dec 6 05:08:56 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Added label SpecialHostLabels.DRAIN_CONF_KEYRING to host np0005548787.localdomain
Dec 6 05:08:57 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje'
Dec 6 05:08:57 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje'
Dec 6 05:08:57 localhost ceph-mon[298582]: Reconfiguring osd.2 (monmap changed)...
Dec 6 05:08:57 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Dec 6 05:08:57 localhost ceph-mon[298582]: Reconfiguring daemon osd.2 on np0005548788.localdomain
Dec 6 05:08:57 localhost ceph-mon[298582]: Added label _no_schedule to host np0005548787.localdomain
Dec 6 05:08:57 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje'
Dec 6 05:08:57 localhost ceph-mon[298582]: Added label SpecialHostLabels.DRAIN_CONF_KEYRING to host np0005548787.localdomain
Dec 6 05:08:57 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje'
Dec 6 05:08:57 localhost ceph-mgr[288591]: log_channel(cluster) log [DBG] : pgmap v22: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail
Dec 6 05:08:57 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring osd.5 (monmap changed)...
Dec 6 05:08:57 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring osd.5 (monmap changed)...
Dec 6 05:08:57 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.5 on np0005548788.localdomain
Dec 6 05:08:57 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.5 on np0005548788.localdomain
Dec 6 05:08:57 localhost sshd[303151]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 05:08:58 localhost ceph-mon[298582]: mon.np0005548789@2(peon).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 6 05:08:58 localhost ceph-mgr[288591]: log_channel(audit) log [DBG] : from='client.44467 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "host_pattern": "np0005548787.localdomain", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 6 05:08:58 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005548788.erzujf (monmap changed)...
Dec 6 05:08:58 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005548788.erzujf (monmap changed)...
Dec 6 05:08:58 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring daemon mds.mds.np0005548788.erzujf on np0005548788.localdomain
Dec 6 05:08:58 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring daemon mds.mds.np0005548788.erzujf on np0005548788.localdomain
Dec 6 05:08:58 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje'
Dec 6 05:08:58 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje'
Dec 6 05:08:58 localhost ceph-mon[298582]: Reconfiguring osd.5 (monmap changed)...
Dec 6 05:08:58 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Dec 6 05:08:58 localhost ceph-mon[298582]: Reconfiguring daemon osd.5 on np0005548788.localdomain
Dec 6 05:08:58 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje'
Dec 6 05:08:58 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje'
Dec 6 05:08:58 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005548788.erzujf", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 6 05:08:59 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005548788.yvwbqq (monmap changed)...
Dec 6 05:08:59 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005548788.yvwbqq (monmap changed)...
Dec 6 05:08:59 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005548788.yvwbqq on np0005548788.localdomain
Dec 6 05:08:59 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005548788.yvwbqq on np0005548788.localdomain
Dec 6 05:08:59 localhost ceph-mgr[288591]: log_channel(cluster) log [DBG] : pgmap v23: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail
Dec 6 05:08:59 localhost ceph-mon[298582]: Reconfiguring mds.mds.np0005548788.erzujf (monmap changed)...
Dec 6 05:08:59 localhost ceph-mon[298582]: Reconfiguring daemon mds.mds.np0005548788.erzujf on np0005548788.localdomain Dec 6 05:08:59 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' Dec 6 05:08:59 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' Dec 6 05:08:59 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548788.yvwbqq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 6 05:08:59 localhost ceph-mgr[288591]: log_channel(audit) log [DBG] : from='client.44470 -' entity='client.admin' cmd=[{"prefix": "orch host rm", "hostname": "np0005548787.localdomain", "force": true, "target": ["mon-mgr", ""]}]: dispatch Dec 6 05:08:59 localhost ceph-mgr[288591]: [cephadm INFO root] Removed host np0005548787.localdomain Dec 6 05:08:59 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Removed host np0005548787.localdomain Dec 6 05:08:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6. 
Dec 6 05:08:59 localhost podman[303153]: 2025-12-06 10:08:59.934340999 +0000 UTC m=+0.089060746 container health_status 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125) Dec 6 05:08:59 localhost podman[303153]: 2025-12-06 10:08:59.954222358 +0000 UTC m=+0.108942065 container exec_died 
44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:08:59 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring mon.np0005548788 (monmap changed)... Dec 6 05:08:59 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring mon.np0005548788 (monmap changed)... 
Dec 6 05:08:59 localhost systemd[1]: 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6.service: Deactivated successfully. Dec 6 05:08:59 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.np0005548788 on np0005548788.localdomain Dec 6 05:08:59 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.np0005548788 on np0005548788.localdomain Dec 6 05:09:00 localhost ceph-mon[298582]: Reconfiguring mgr.np0005548788.yvwbqq (monmap changed)... Dec 6 05:09:00 localhost ceph-mon[298582]: Reconfiguring daemon mgr.np0005548788.yvwbqq on np0005548788.localdomain Dec 6 05:09:00 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' Dec 6 05:09:00 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005548787.localdomain"} : dispatch Dec 6 05:09:00 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005548787.localdomain"}]': finished Dec 6 05:09:00 localhost ceph-mon[298582]: Removed host np0005548787.localdomain Dec 6 05:09:00 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' Dec 6 05:09:00 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' Dec 6 05:09:00 localhost ceph-mon[298582]: Reconfiguring mon.np0005548788 (monmap changed)... 
Dec 6 05:09:00 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Dec 6 05:09:00 localhost ceph-mon[298582]: Reconfiguring daemon mon.np0005548788 on np0005548788.localdomain Dec 6 05:09:00 localhost nova_compute[282193]: 2025-12-06 10:09:00.710 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:09:00 localhost nova_compute[282193]: 2025-12-06 10:09:00.712 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:09:00 localhost nova_compute[282193]: 2025-12-06 10:09:00.713 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 6 05:09:00 localhost nova_compute[282193]: 2025-12-06 10:09:00.713 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:09:00 localhost nova_compute[282193]: 2025-12-06 10:09:00.744 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:09:00 localhost nova_compute[282193]: 2025-12-06 10:09:00.745 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:09:01 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005548789 (monmap changed)... Dec 6 05:09:01 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005548789 (monmap changed)... 
Dec 6 05:09:01 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005548789 on np0005548789.localdomain Dec 6 05:09:01 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005548789 on np0005548789.localdomain Dec 6 05:09:01 localhost ceph-mgr[288591]: log_channel(cluster) log [DBG] : pgmap v24: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail Dec 6 05:09:01 localhost podman[303224]: Dec 6 05:09:01 localhost podman[303224]: 2025-12-06 10:09:01.606156434 +0000 UTC m=+0.078029583 container create fa41c3b4817d1ba97d63748dff0081f0f0d64ae58b286e1ed7eca52c81ba70b0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_satoshi, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, RELEASE=main, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, release=1763362218, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , name=rhceph, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers) Dec 6 05:09:01 localhost systemd[1]: Started libpod-conmon-fa41c3b4817d1ba97d63748dff0081f0f0d64ae58b286e1ed7eca52c81ba70b0.scope. 
Dec 6 05:09:01 localhost podman[303224]: 2025-12-06 10:09:01.573370625 +0000 UTC m=+0.045243794 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 6 05:09:01 localhost systemd[1]: Started libcrun container. Dec 6 05:09:01 localhost podman[303224]: 2025-12-06 10:09:01.695088315 +0000 UTC m=+0.166961434 container init fa41c3b4817d1ba97d63748dff0081f0f0d64ae58b286e1ed7eca52c81ba70b0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_satoshi, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, architecture=x86_64, CEPH_POINT_RELEASE=, vcs-type=git, ceph=True, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., distribution-scope=public) Dec 6 05:09:01 localhost podman[303224]: 2025-12-06 10:09:01.70619626 +0000 UTC m=+0.178069369 container start fa41c3b4817d1ba97d63748dff0081f0f0d64ae58b286e1ed7eca52c81ba70b0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_satoshi, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, name=rhceph, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, 
GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , architecture=x86_64, release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, vcs-type=git) Dec 6 05:09:01 localhost podman[303224]: 2025-12-06 10:09:01.706379585 +0000 UTC m=+0.178252774 container attach fa41c3b4817d1ba97d63748dff0081f0f0d64ae58b286e1ed7eca52c81ba70b0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_satoshi, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., RELEASE=main, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, release=1763362218, vcs-type=git, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat 
Ceph Storage 7 on RHEL 9, version=7, maintainer=Guillaume Abrioux , architecture=x86_64, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Dec 6 05:09:01 localhost silly_satoshi[303239]: 167 167 Dec 6 05:09:01 localhost systemd[1]: libpod-fa41c3b4817d1ba97d63748dff0081f0f0d64ae58b286e1ed7eca52c81ba70b0.scope: Deactivated successfully. Dec 6 05:09:01 localhost podman[303224]: 2025-12-06 10:09:01.712275303 +0000 UTC m=+0.184148482 container died fa41c3b4817d1ba97d63748dff0081f0f0d64ae58b286e1ed7eca52c81ba70b0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_satoshi, architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , RELEASE=main, ceph=True, io.openshift.expose-services=, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., version=7, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, vcs-type=git, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4) Dec 6 05:09:01 localhost podman[303244]: 2025-12-06 10:09:01.810142813 +0000 UTC m=+0.088207771 container remove fa41c3b4817d1ba97d63748dff0081f0f0d64ae58b286e1ed7eca52c81ba70b0 
(image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_satoshi, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, release=1763362218, name=rhceph, CEPH_POINT_RELEASE=, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, GIT_CLEAN=True, maintainer=Guillaume Abrioux , RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, ceph=True) Dec 6 05:09:01 localhost systemd[1]: libpod-conmon-fa41c3b4817d1ba97d63748dff0081f0f0d64ae58b286e1ed7eca52c81ba70b0.scope: Deactivated successfully. Dec 6 05:09:01 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring osd.1 (monmap changed)... Dec 6 05:09:01 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring osd.1 (monmap changed)... 
Dec 6 05:09:01 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.1 on np0005548789.localdomain Dec 6 05:09:01 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.1 on np0005548789.localdomain Dec 6 05:09:02 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' Dec 6 05:09:02 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' Dec 6 05:09:02 localhost ceph-mon[298582]: Reconfiguring crash.np0005548789 (monmap changed)... Dec 6 05:09:02 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548789.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 6 05:09:02 localhost ceph-mon[298582]: Reconfiguring daemon crash.np0005548789 on np0005548789.localdomain Dec 6 05:09:02 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' Dec 6 05:09:02 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' Dec 6 05:09:02 localhost ceph-mon[298582]: Reconfiguring osd.1 (monmap changed)... 
Dec 6 05:09:02 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Dec 6 05:09:02 localhost ceph-mon[298582]: Reconfiguring daemon osd.1 on np0005548789.localdomain Dec 6 05:09:02 localhost podman[303315]: Dec 6 05:09:02 localhost podman[303315]: 2025-12-06 10:09:02.511174494 +0000 UTC m=+0.078339712 container create 8e8aad40d60cb7d2666c9e0298855f7e8694c601063a8c41068bdd18ec961b92 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=musing_lumiere, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , name=rhceph, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, ceph=True, distribution-scope=public) Dec 6 05:09:02 localhost systemd[1]: Started libpod-conmon-8e8aad40d60cb7d2666c9e0298855f7e8694c601063a8c41068bdd18ec961b92.scope. Dec 6 05:09:02 localhost systemd[1]: Started libcrun container. 
Dec 6 05:09:02 localhost podman[303315]: 2025-12-06 10:09:02.575822413 +0000 UTC m=+0.142987621 container init 8e8aad40d60cb7d2666c9e0298855f7e8694c601063a8c41068bdd18ec961b92 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=musing_lumiere, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, RELEASE=main, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, name=rhceph, vcs-type=git, maintainer=Guillaume Abrioux , distribution-scope=public, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, GIT_CLEAN=True) Dec 6 05:09:02 localhost podman[303315]: 2025-12-06 10:09:02.479603893 +0000 UTC m=+0.046769151 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 6 05:09:02 localhost podman[303315]: 2025-12-06 10:09:02.588178586 +0000 UTC m=+0.155343804 container start 8e8aad40d60cb7d2666c9e0298855f7e8694c601063a8c41068bdd18ec961b92 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=musing_lumiere, GIT_CLEAN=True, distribution-scope=public, version=7, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, 
io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., RELEASE=main, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7) Dec 6 05:09:02 localhost podman[303315]: 2025-12-06 10:09:02.588486295 +0000 UTC m=+0.155651553 container attach 8e8aad40d60cb7d2666c9e0298855f7e8694c601063a8c41068bdd18ec961b92 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=musing_lumiere, version=7, vcs-type=git, architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, release=1763362218, GIT_CLEAN=True, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, com.redhat.component=rhceph-container, distribution-scope=public, build-date=2025-11-26T19:44:28Z, 
description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph) Dec 6 05:09:02 localhost musing_lumiere[303331]: 167 167 Dec 6 05:09:02 localhost systemd[1]: libpod-8e8aad40d60cb7d2666c9e0298855f7e8694c601063a8c41068bdd18ec961b92.scope: Deactivated successfully. Dec 6 05:09:02 localhost podman[303315]: 2025-12-06 10:09:02.592351851 +0000 UTC m=+0.159517089 container died 8e8aad40d60cb7d2666c9e0298855f7e8694c601063a8c41068bdd18ec961b92 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=musing_lumiere, RELEASE=main, ceph=True, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, version=7, build-date=2025-11-26T19:44:28Z, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., architecture=x86_64, maintainer=Guillaume Abrioux , name=rhceph, GIT_BRANCH=main) Dec 6 05:09:02 localhost systemd[1]: var-lib-containers-storage-overlay-160781fb2c69432426e8f5bf9a30f81c6704cc23fd369d42dfdae832a22f1f35-merged.mount: Deactivated successfully. 
Dec 6 05:09:02 localhost systemd[1]: var-lib-containers-storage-overlay-1505fa216ab21d4b3717abef1ea26f4fc36c0bea09c76937a08d203c5784e4e3-merged.mount: Deactivated successfully. Dec 6 05:09:02 localhost podman[303336]: 2025-12-06 10:09:02.697648326 +0000 UTC m=+0.092896903 container remove 8e8aad40d60cb7d2666c9e0298855f7e8694c601063a8c41068bdd18ec961b92 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=musing_lumiere, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, distribution-scope=public, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, name=rhceph, version=7, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, RELEASE=main, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, io.openshift.expose-services=, architecture=x86_64, GIT_BRANCH=main, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Dec 6 05:09:02 localhost systemd[1]: libpod-conmon-8e8aad40d60cb7d2666c9e0298855f7e8694c601063a8c41068bdd18ec961b92.scope: Deactivated successfully. Dec 6 05:09:02 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring osd.4 (monmap changed)... Dec 6 05:09:02 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring osd.4 (monmap changed)... 
Dec 6 05:09:02 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.4 on np0005548789.localdomain Dec 6 05:09:02 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.4 on np0005548789.localdomain Dec 6 05:09:03 localhost ceph-mon[298582]: mon.np0005548789@2(peon).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:09:03 localhost ceph-mgr[288591]: log_channel(cluster) log [DBG] : pgmap v25: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail Dec 6 05:09:03 localhost podman[303412]: Dec 6 05:09:03 localhost podman[303412]: 2025-12-06 10:09:03.520584161 +0000 UTC m=+0.058429613 container create 1f9dde573b26de883910e7ad1b6b72c5b90046fb8228d88d567a5f83c5f03730 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_johnson, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, io.openshift.expose-services=, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, distribution-scope=public, GIT_BRANCH=main, name=rhceph, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, ceph=True, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True) Dec 6 05:09:03 
localhost systemd[1]: Started libpod-conmon-1f9dde573b26de883910e7ad1b6b72c5b90046fb8228d88d567a5f83c5f03730.scope. Dec 6 05:09:03 localhost systemd[1]: Started libcrun container. Dec 6 05:09:03 localhost podman[303412]: 2025-12-06 10:09:03.577010472 +0000 UTC m=+0.114855924 container init 1f9dde573b26de883910e7ad1b6b72c5b90046fb8228d88d567a5f83c5f03730 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_johnson, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, version=7, ceph=True, distribution-scope=public, release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , name=rhceph, RELEASE=main, vendor=Red Hat, Inc., vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 6 05:09:03 localhost podman[303412]: 2025-12-06 10:09:03.583747275 +0000 UTC m=+0.121592717 container start 1f9dde573b26de883910e7ad1b6b72c5b90046fb8228d88d567a5f83c5f03730 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_johnson, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, description=Red Hat Ceph 
Storage 7, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, GIT_CLEAN=True, ceph=True, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, name=rhceph) Dec 6 05:09:03 localhost podman[303412]: 2025-12-06 10:09:03.584055455 +0000 UTC m=+0.121900907 container attach 1f9dde573b26de883910e7ad1b6b72c5b90046fb8228d88d567a5f83c5f03730 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_johnson, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, com.redhat.component=rhceph-container, io.openshift.expose-services=, distribution-scope=public, vendor=Red Hat, Inc., version=7, maintainer=Guillaume Abrioux , GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, name=rhceph, architecture=x86_64, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, release=1763362218, 
url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7) Dec 6 05:09:03 localhost wizardly_johnson[303427]: 167 167 Dec 6 05:09:03 localhost systemd[1]: libpod-1f9dde573b26de883910e7ad1b6b72c5b90046fb8228d88d567a5f83c5f03730.scope: Deactivated successfully. Dec 6 05:09:03 localhost podman[303412]: 2025-12-06 10:09:03.586595291 +0000 UTC m=+0.124440763 container died 1f9dde573b26de883910e7ad1b6b72c5b90046fb8228d88d567a5f83c5f03730 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_johnson, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, release=1763362218, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, description=Red Hat Ceph Storage 7, ceph=True, distribution-scope=public, vcs-type=git, GIT_CLEAN=True, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=) Dec 6 05:09:03 localhost podman[303412]: 2025-12-06 10:09:03.49831047 +0000 UTC m=+0.036155952 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 6 05:09:03 localhost systemd[1]: 
var-lib-containers-storage-overlay-51fe94b548087a4d2b9c980263d69cb47e2f14c82c2e0bb7c4d98d250280af2e-merged.mount: Deactivated successfully. Dec 6 05:09:03 localhost podman[303432]: 2025-12-06 10:09:03.669101428 +0000 UTC m=+0.076931280 container remove 1f9dde573b26de883910e7ad1b6b72c5b90046fb8228d88d567a5f83c5f03730 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_johnson, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, ceph=True, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, GIT_BRANCH=main, architecture=x86_64, version=7, name=rhceph, release=1763362218) Dec 6 05:09:03 localhost systemd[1]: libpod-conmon-1f9dde573b26de883910e7ad1b6b72c5b90046fb8228d88d567a5f83c5f03730.scope: Deactivated successfully. Dec 6 05:09:03 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005548789.vxwwsq (monmap changed)... Dec 6 05:09:03 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005548789.vxwwsq (monmap changed)... 
Dec 6 05:09:03 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring daemon mds.mds.np0005548789.vxwwsq on np0005548789.localdomain Dec 6 05:09:03 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring daemon mds.mds.np0005548789.vxwwsq on np0005548789.localdomain Dec 6 05:09:03 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' Dec 6 05:09:03 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' Dec 6 05:09:03 localhost ceph-mon[298582]: Reconfiguring osd.4 (monmap changed)... Dec 6 05:09:03 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Dec 6 05:09:03 localhost ceph-mon[298582]: Reconfiguring daemon osd.4 on np0005548789.localdomain Dec 6 05:09:03 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' Dec 6 05:09:03 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' Dec 6 05:09:03 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005548789.vxwwsq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Dec 6 05:09:04 localhost podman[303508]: Dec 6 05:09:04 localhost podman[303508]: 2025-12-06 10:09:04.404136605 +0000 UTC m=+0.053227465 container create 58ed546b5412fd9d5b6c7594c0164c59f851f6c370af0aab1bd531be2c59de17 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=youthful_williamson, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, CEPH_POINT_RELEASE=, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, GIT_CLEAN=True, 
GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, distribution-scope=public, io.openshift.tags=rhceph ceph, RELEASE=main, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, ceph=True, GIT_BRANCH=main, release=1763362218, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Dec 6 05:09:04 localhost systemd[1]: Started libpod-conmon-58ed546b5412fd9d5b6c7594c0164c59f851f6c370af0aab1bd531be2c59de17.scope. Dec 6 05:09:04 localhost systemd[1]: Started libcrun container. 
Dec 6 05:09:04 localhost podman[303508]: 2025-12-06 10:09:04.470475524 +0000 UTC m=+0.119566394 container init 58ed546b5412fd9d5b6c7594c0164c59f851f6c370af0aab1bd531be2c59de17 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=youthful_williamson, ceph=True, maintainer=Guillaume Abrioux , build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, vcs-type=git, io.openshift.expose-services=, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, version=7, release=1763362218, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph) Dec 6 05:09:04 localhost youthful_williamson[303524]: 167 167 Dec 6 05:09:04 localhost podman[303508]: 2025-12-06 10:09:04.478781614 +0000 UTC m=+0.127872514 container start 58ed546b5412fd9d5b6c7594c0164c59f851f6c370af0aab1bd531be2c59de17 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=youthful_williamson, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, vcs-type=git, version=7, io.openshift.expose-services=, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, distribution-scope=public, architecture=x86_64, CEPH_POINT_RELEASE=, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., GIT_BRANCH=main, release=1763362218, RELEASE=main) Dec 6 05:09:04 localhost systemd[1]: libpod-58ed546b5412fd9d5b6c7594c0164c59f851f6c370af0aab1bd531be2c59de17.scope: Deactivated successfully. Dec 6 05:09:04 localhost podman[303508]: 2025-12-06 10:09:04.479560879 +0000 UTC m=+0.128651779 container attach 58ed546b5412fd9d5b6c7594c0164c59f851f6c370af0aab1bd531be2c59de17 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=youthful_williamson, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, version=7, com.redhat.component=rhceph-container, vcs-type=git, CEPH_POINT_RELEASE=, architecture=x86_64, name=rhceph, build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, vendor=Red Hat, Inc., 
url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Dec 6 05:09:04 localhost podman[303508]: 2025-12-06 10:09:04.381002587 +0000 UTC m=+0.030093507 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 6 05:09:04 localhost podman[303508]: 2025-12-06 10:09:04.483018683 +0000 UTC m=+0.132109593 container died 58ed546b5412fd9d5b6c7594c0164c59f851f6c370af0aab1bd531be2c59de17 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=youthful_williamson, vcs-type=git, CEPH_POINT_RELEASE=, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, vendor=Red Hat, Inc., name=rhceph, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, distribution-scope=public, io.openshift.tags=rhceph ceph, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , ceph=True, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, release=1763362218) Dec 6 05:09:04 localhost podman[303529]: 2025-12-06 10:09:04.573702386 +0000 UTC m=+0.082388024 container remove 58ed546b5412fd9d5b6c7594c0164c59f851f6c370af0aab1bd531be2c59de17 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, 
name=youthful_williamson, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, distribution-scope=public, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, vcs-type=git, maintainer=Guillaume Abrioux , build-date=2025-11-26T19:44:28Z, architecture=x86_64, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, vendor=Red Hat, Inc., name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, RELEASE=main) Dec 6 05:09:04 localhost systemd[1]: libpod-conmon-58ed546b5412fd9d5b6c7594c0164c59f851f6c370af0aab1bd531be2c59de17.scope: Deactivated successfully. Dec 6 05:09:04 localhost systemd[1]: var-lib-containers-storage-overlay-42957f4825799ccfbe6e3a866bfa2de1e7f738c2726206560a06458a98a3ac1b-merged.mount: Deactivated successfully. Dec 6 05:09:04 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005548789.mzhmje (monmap changed)... Dec 6 05:09:04 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005548789.mzhmje (monmap changed)... 
Dec 6 05:09:04 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005548789.mzhmje on np0005548789.localdomain Dec 6 05:09:04 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005548789.mzhmje on np0005548789.localdomain Dec 6 05:09:04 localhost ceph-mon[298582]: Reconfiguring mds.mds.np0005548789.vxwwsq (monmap changed)... Dec 6 05:09:04 localhost ceph-mon[298582]: Reconfiguring daemon mds.mds.np0005548789.vxwwsq on np0005548789.localdomain Dec 6 05:09:04 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' Dec 6 05:09:04 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' Dec 6 05:09:04 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548789.mzhmje", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 6 05:09:05 localhost podman[303598]: Dec 6 05:09:05 localhost podman[303598]: 2025-12-06 10:09:05.258818238 +0000 UTC m=+0.078632081 container create ed8a31d2db9ce08fa6f793d07f13d59211c11c6af2078461d266d081e6300712 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=recursing_fermat, com.redhat.component=rhceph-container, release=1763362218, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., RELEASE=main, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, vcs-type=git, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, GIT_BRANCH=main, description=Red Hat Ceph 
Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, version=7) Dec 6 05:09:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538. Dec 6 05:09:05 localhost ceph-mgr[288591]: log_channel(cluster) log [DBG] : pgmap v26: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail Dec 6 05:09:05 localhost systemd[1]: Started libpod-conmon-ed8a31d2db9ce08fa6f793d07f13d59211c11c6af2078461d266d081e6300712.scope. Dec 6 05:09:05 localhost systemd[1]: Started libcrun container. Dec 6 05:09:05 localhost podman[303598]: 2025-12-06 10:09:05.322593321 +0000 UTC m=+0.142406984 container init ed8a31d2db9ce08fa6f793d07f13d59211c11c6af2078461d266d081e6300712 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=recursing_fermat, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, release=1763362218, GIT_CLEAN=True, vendor=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64, GIT_BRANCH=main, distribution-scope=public, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, 
maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, build-date=2025-11-26T19:44:28Z, version=7, url=https://catalog.redhat.com/en/search?searchType=containers) Dec 6 05:09:05 localhost podman[303598]: 2025-12-06 10:09:05.227635138 +0000 UTC m=+0.047448811 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 6 05:09:05 localhost podman[303598]: 2025-12-06 10:09:05.33650283 +0000 UTC m=+0.156316423 container start ed8a31d2db9ce08fa6f793d07f13d59211c11c6af2078461d266d081e6300712 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=recursing_fermat, architecture=x86_64, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, version=7, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, com.redhat.component=rhceph-container, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) 
Dec 6 05:09:05 localhost podman[303598]: 2025-12-06 10:09:05.336815149 +0000 UTC m=+0.156628852 container attach ed8a31d2db9ce08fa6f793d07f13d59211c11c6af2078461d266d081e6300712 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=recursing_fermat, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, distribution-scope=public, GIT_BRANCH=main, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, vcs-type=git, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, release=1763362218, maintainer=Guillaume Abrioux , name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, CEPH_POINT_RELEASE=) Dec 6 05:09:05 localhost systemd[1]: libpod-ed8a31d2db9ce08fa6f793d07f13d59211c11c6af2078461d266d081e6300712.scope: Deactivated successfully. 
Dec 6 05:09:05 localhost recursing_fermat[303614]: 167 167 Dec 6 05:09:05 localhost podman[303598]: 2025-12-06 10:09:05.340469679 +0000 UTC m=+0.160283322 container died ed8a31d2db9ce08fa6f793d07f13d59211c11c6af2078461d266d081e6300712 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=recursing_fermat, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, version=7, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., name=rhceph, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public) Dec 6 05:09:05 localhost podman[303613]: 2025-12-06 10:09:05.415025247 +0000 UTC m=+0.110068170 container health_status d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 
'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Dec 6 05:09:05 localhost podman[303613]: 2025-12-06 10:09:05.42707519 +0000 UTC m=+0.122118123 container exec_died d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 
'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 6 05:09:05 localhost systemd[1]: d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538.service: Deactivated successfully. Dec 6 05:09:05 localhost podman[303624]: 2025-12-06 10:09:05.473448907 +0000 UTC m=+0.118732499 container remove ed8a31d2db9ce08fa6f793d07f13d59211c11c6af2078461d266d081e6300712 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=recursing_fermat, version=7, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, release=1763362218, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, name=rhceph, GIT_BRANCH=main, GIT_CLEAN=True, architecture=x86_64, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Dec 6 05:09:05 localhost systemd[1]: 
libpod-conmon-ed8a31d2db9ce08fa6f793d07f13d59211c11c6af2078461d266d081e6300712.scope: Deactivated successfully. Dec 6 05:09:05 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring mon.np0005548789 (monmap changed)... Dec 6 05:09:05 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring mon.np0005548789 (monmap changed)... Dec 6 05:09:05 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.np0005548789 on np0005548789.localdomain Dec 6 05:09:05 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.np0005548789 on np0005548789.localdomain Dec 6 05:09:05 localhost systemd[1]: var-lib-containers-storage-overlay-15d924119a66538052ca412c2fea855bdd1bc0e5a463af0ad04c1cb15818c648-merged.mount: Deactivated successfully. Dec 6 05:09:05 localhost nova_compute[282193]: 2025-12-06 10:09:05.745 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:09:05 localhost nova_compute[282193]: 2025-12-06 10:09:05.748 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:09:05 localhost nova_compute[282193]: 2025-12-06 10:09:05.748 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 6 05:09:05 localhost nova_compute[282193]: 2025-12-06 10:09:05.748 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:09:05 localhost nova_compute[282193]: 2025-12-06 10:09:05.778 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:09:05 localhost nova_compute[282193]: 2025-12-06 10:09:05.779 282197 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:09:05 localhost ceph-mon[298582]: Reconfiguring mgr.np0005548789.mzhmje (monmap changed)... Dec 6 05:09:05 localhost ceph-mon[298582]: Reconfiguring daemon mgr.np0005548789.mzhmje on np0005548789.localdomain Dec 6 05:09:05 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' Dec 6 05:09:05 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' Dec 6 05:09:05 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Dec 6 05:09:06 localhost podman[303710]: Dec 6 05:09:06 localhost podman[303710]: 2025-12-06 10:09:06.099962233 +0000 UTC m=+0.056759732 container create c3d0fcc5c5e81adb76fb46916c2076f0100700785d2c0ca57847e46b66847d3d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=peaceful_sinoussi, io.openshift.expose-services=, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, RELEASE=main, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, version=7, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, name=rhceph, GIT_BRANCH=main, distribution-scope=public, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red 
Hat Ceph Storage 7, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=) Dec 6 05:09:06 localhost systemd[1]: Started libpod-conmon-c3d0fcc5c5e81adb76fb46916c2076f0100700785d2c0ca57847e46b66847d3d.scope. Dec 6 05:09:06 localhost systemd[1]: Started libcrun container. Dec 6 05:09:06 localhost podman[303710]: 2025-12-06 10:09:06.15958262 +0000 UTC m=+0.116380149 container init c3d0fcc5c5e81adb76fb46916c2076f0100700785d2c0ca57847e46b66847d3d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=peaceful_sinoussi, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, RELEASE=main, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, distribution-scope=public, maintainer=Guillaume Abrioux , vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, com.redhat.component=rhceph-container, version=7, architecture=x86_64, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, release=1763362218, GIT_BRANCH=main) Dec 6 05:09:06 localhost systemd[1]: tmp-crun.H1ylue.mount: Deactivated successfully. 
Dec 6 05:09:06 localhost podman[303710]: 2025-12-06 10:09:06.167134178 +0000 UTC m=+0.123931677 container start c3d0fcc5c5e81adb76fb46916c2076f0100700785d2c0ca57847e46b66847d3d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=peaceful_sinoussi, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, distribution-scope=public, maintainer=Guillaume Abrioux , version=7, build-date=2025-11-26T19:44:28Z, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, GIT_CLEAN=True, io.buildah.version=1.41.4, architecture=x86_64, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, RELEASE=main, release=1763362218) Dec 6 05:09:06 localhost podman[303710]: 2025-12-06 10:09:06.167339994 +0000 UTC m=+0.124137513 container attach c3d0fcc5c5e81adb76fb46916c2076f0100700785d2c0ca57847e46b66847d3d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=peaceful_sinoussi, io.openshift.expose-services=, CEPH_POINT_RELEASE=, vcs-type=git, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, vendor=Red 
Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, build-date=2025-11-26T19:44:28Z, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git) Dec 6 05:09:06 localhost peaceful_sinoussi[303725]: 167 167 Dec 6 05:09:06 localhost systemd[1]: libpod-c3d0fcc5c5e81adb76fb46916c2076f0100700785d2c0ca57847e46b66847d3d.scope: Deactivated successfully. Dec 6 05:09:06 localhost podman[303710]: 2025-12-06 10:09:06.169045006 +0000 UTC m=+0.125842555 container died c3d0fcc5c5e81adb76fb46916c2076f0100700785d2c0ca57847e46b66847d3d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=peaceful_sinoussi, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, release=1763362218, version=7, io.buildah.version=1.41.4, io.openshift.expose-services=, architecture=x86_64, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, RELEASE=main, distribution-scope=public, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, vendor=Red Hat, Inc., ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Dec 6 05:09:06 localhost podman[303710]: 2025-12-06 10:09:06.08363124 +0000 UTC m=+0.040428749 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 6 05:09:06 localhost podman[303730]: 2025-12-06 10:09:06.223980601 +0000 UTC m=+0.050185994 container remove c3d0fcc5c5e81adb76fb46916c2076f0100700785d2c0ca57847e46b66847d3d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=peaceful_sinoussi, maintainer=Guillaume Abrioux , io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, name=rhceph, RELEASE=main, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, GIT_CLEAN=True, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, GIT_BRANCH=main, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, vcs-type=git, io.openshift.tags=rhceph ceph, ceph=True) Dec 6 05:09:06 localhost systemd[1]: libpod-conmon-c3d0fcc5c5e81adb76fb46916c2076f0100700785d2c0ca57847e46b66847d3d.scope: Deactivated successfully. 
Dec 6 05:09:06 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005548790 (monmap changed)... Dec 6 05:09:06 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005548790 (monmap changed)... Dec 6 05:09:06 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005548790 on np0005548790.localdomain Dec 6 05:09:06 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005548790 on np0005548790.localdomain Dec 6 05:09:06 localhost systemd[1]: var-lib-containers-storage-overlay-e81a04cbad3dd485b0a36ad607e5bfe38cbaa19f802f345f531188613a4eb80c-merged.mount: Deactivated successfully. Dec 6 05:09:06 localhost ceph-mon[298582]: Reconfiguring mon.np0005548789 (monmap changed)... Dec 6 05:09:06 localhost ceph-mon[298582]: Reconfiguring daemon mon.np0005548789 on np0005548789.localdomain Dec 6 05:09:06 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' Dec 6 05:09:06 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' Dec 6 05:09:06 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548790.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 6 05:09:07 localhost ceph-mgr[288591]: [progress INFO root] update: starting ev 023838e0-b52a-42a7-bd47-8afddb7f894e (Updating node-proxy deployment (+3 -> 3)) Dec 6 05:09:07 localhost ceph-mgr[288591]: [progress INFO root] complete: finished ev 023838e0-b52a-42a7-bd47-8afddb7f894e (Updating node-proxy deployment (+3 -> 3)) Dec 6 05:09:07 localhost ceph-mgr[288591]: [progress INFO root] Completed event 023838e0-b52a-42a7-bd47-8afddb7f894e (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Dec 6 05:09:07 localhost ceph-mgr[288591]: log_channel(cluster) 
log [DBG] : pgmap v27: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail Dec 6 05:09:07 localhost ceph-mgr[288591]: log_channel(audit) log [DBG] : from='client.44476 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch Dec 6 05:09:07 localhost ceph-mgr[288591]: [cephadm INFO root] Saving service mon spec with placement label:mon Dec 6 05:09:07 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Saving service mon spec with placement label:mon Dec 6 05:09:07 localhost ceph-mgr[288591]: [progress INFO root] update: starting ev dbac2153-14b5-4981-983f-b8320034ab00 (Updating node-proxy deployment (+3 -> 3)) Dec 6 05:09:07 localhost ceph-mgr[288591]: [progress INFO root] complete: finished ev dbac2153-14b5-4981-983f-b8320034ab00 (Updating node-proxy deployment (+3 -> 3)) Dec 6 05:09:07 localhost ceph-mgr[288591]: [progress INFO root] Completed event dbac2153-14b5-4981-983f-b8320034ab00 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.914 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'name': 'test', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'np0005548789.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'hostId': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228 Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.916 12 INFO 
ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.922 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.926 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '0a74db33-f5e3-4bfb-802b-56d08e915c65', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:09:07.916529', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 
'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '9a48bc6a-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 12166.165895086, 'message_signature': '604b0fe8a30e6fc746636727d66b33a6bfe1f16569108ae47a425ff14c6dca16'}]}, 'timestamp': '2025-12-06 10:09:07.923424', '_unique_id': '9ffb0f41dd004788ba0e8bf789163c47'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.926 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.926 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.926 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.926 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.926 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.926 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.926 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.926 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.926 12 ERROR oslo_messaging.notify.messaging self._connection = 
self._establish_connection() Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.926 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.926 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.926 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.926 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.926 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.926 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.926 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.926 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.926 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.926 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.926 12 ERROR oslo_messaging.notify.messaging 
ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.926 12 ERROR oslo_messaging.notify.messaging Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.926 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.926 12 ERROR oslo_messaging.notify.messaging Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.926 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.926 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.926 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.926 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.926 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.926 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.926 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.926 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.926 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.926 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.926 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.926 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.926 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.926 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.926 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.926 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.926 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.926 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.926 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.926 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.926 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.926 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.926 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.926 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.926 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.926 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.926 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.926 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:09:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.926 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.926 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.926 12 ERROR oslo_messaging.notify.messaging Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.927 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.927 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.944 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/cpu volume: 14300000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.946 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '8e6bfb6e-17a2-4d99-b9e1-2e460155dee8', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 14300000000, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'timestamp': '2025-12-06T10:09:07.927796', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': '9a4c1b58-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 12166.194140248, 'message_signature': 'b14003508cc1792cf6ab57feac829e6ef53fa4fed1213552d3036f09d9dd4981'}]}, 'timestamp': '2025-12-06 10:09:07.945382', '_unique_id': '96108dd34d754a7a8a451cdd03e2c427'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.946 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in 
_reraise_as_library_errors Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.946 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.946 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.946 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.946 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.946 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.946 12 ERROR 
oslo_messaging.notify.messaging conn.connect() Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.946 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.946 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.946 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.946 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.946 12 ERROR oslo_messaging.notify.messaging Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.946 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.946 12 ERROR oslo_messaging.notify.messaging Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.946 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.946 
12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.946 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.946 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.946 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.946 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.946 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:09:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.946 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.946 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.946 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.946 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.946 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 
10:09:07.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.946 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.946 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.946 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.946 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.946 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.946 12 ERROR oslo_messaging.notify.messaging Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.947 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 
10:09:07.970 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.latency volume: 1525105336 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.971 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.latency volume: 106716064 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.973 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '7377136b-fba2-4943-b744-fe5219b058a7', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1525105336, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:09:07.947705', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 
0, 'disk_name': 'vda'}, 'message_id': '9a50138e-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 12166.197088356, 'message_signature': '59e777b8f51ae92e5455a1ef6bcfb1ab440ee1fc84b4f1b576753a55d6248461'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 106716064, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:09:07.947705', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '9a5025d6-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 12166.197088356, 'message_signature': '893c1bb2cbcac082aaa0de0970c25271ff2ad835b95e3d0ef402a16749844ab2'}]}, 'timestamp': '2025-12-06 10:09:07.971976', '_unique_id': 'bc7eca61326d4f4fafb57daa4f9042f6'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.973 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.973 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.973 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.973 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.973 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.973 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.973 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.973 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.973 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.973 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.973 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.973 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:09:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.973 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.973 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.973 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.973 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.973 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.973 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.973 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.973 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.973 12 ERROR oslo_messaging.notify.messaging Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.973 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.973 12 ERROR oslo_messaging.notify.messaging Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.973 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:09:07 
localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.973 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.973 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.973 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.973 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.973 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.973 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.973 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.973 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.973 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.973 12 ERROR oslo_messaging.notify.messaging return 
rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.973 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.973 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.973 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.973 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.973 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.973 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.973 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.973 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.973 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.973 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( 
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.973 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.973 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.973 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.973 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.973 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.973 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.973 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.973 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.973 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.973 12 ERROR oslo_messaging.notify.messaging Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.974 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters Dec 6 
05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.974 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.outgoing.packets volume: 114 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.976 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'b2a634b5-7260-4592-b5c0-cf19da10dcc7', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 114, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:09:07.974551', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '9a50a394-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 12166.165895086, 
'message_signature': 'af0fedd61bd3b9cba1dfdb1390b80753218a8c0946d71b1303b948c7999d7a18'}]}, 'timestamp': '2025-12-06 10:09:07.975096', '_unique_id': '9fa49bfbfe1241e2a60160818fb4ff2e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.976 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.976 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.976 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.976 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.976 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.976 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.976 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.976 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.976 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.976 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.976 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.976 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.976 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.976 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.976 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.976 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.976 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.976 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.976 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.976 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:09:07.976 12 ERROR oslo_messaging.notify.messaging Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.976 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.976 12 ERROR oslo_messaging.notify.messaging Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.976 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.976 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.976 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.976 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.976 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.976 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.976 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.976 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:09:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.976 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.976 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.976 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.976 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.976 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.976 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.976 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.976 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.976 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.976 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:09:07 
localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.976 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.976 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.976 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.976 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.976 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.976 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.976 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.976 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.976 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.976 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.976 12 ERROR oslo_messaging.notify.messaging raise 
ConnectionError(str(exc)) from exc Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.976 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.976 12 ERROR oslo_messaging.notify.messaging Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.977 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.977 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.incoming.packets volume: 60 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:09:07 localhost ceph-mon[298582]: Reconfiguring crash.np0005548790 (monmap changed)... Dec 6 05:09:07 localhost ceph-mon[298582]: Reconfiguring daemon crash.np0005548790 on np0005548790.localdomain Dec 6 05:09:07 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' Dec 6 05:09:07 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' Dec 6 05:09:07 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 6 05:09:07 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' Dec 6 05:09:07 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' Dec 6 05:09:07 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 6 05:09:07 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' 
entity='mgr.np0005548789.mzhmje'
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.979 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '568db3ea-97d1-4cf8-aea8-521a383c2d1d', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 60, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:09:07.977671', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '9a511d24-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 12166.165895086, 'message_signature': '8e35e5af3533db9fe8c00799c8b7bee8cdbf5097ade216878792d00505946985'}]}, 'timestamp': '2025-12-06 10:09:07.978214', '_unique_id': 'f81f80a1261f40069449ce529ee59ccd'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.979 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.979 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.979 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.979 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.979 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.979 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.979 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.979 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.979 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.979 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.979 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.979 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.979 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.979 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.979 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.979 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.979 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.979 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.979 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.979 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.979 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.979 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.979 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.979 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.979 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.979 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.979 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.979 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.979 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.979 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.979 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.979 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.979 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.979 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.979 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.979 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.979 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.979 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.979 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.979 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.979 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.979 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.979 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.979 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.979 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.979 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.979 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.979 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.979 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.979 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.979 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.979 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.979 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.979 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.980 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.980 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/memory.usage volume: 51.80859375 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.982 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '62ec6ad3-3791-4aff-9913-af8e7250053e', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 51.80859375, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'timestamp': '2025-12-06T10:09:07.980710', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': '9a51945c-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 12166.194140248, 'message_signature': 'ee885a0b88cca73d6467c0c7289ab9b55a4e26d1253fff6cbfda2c897874d125'}]}, 'timestamp': '2025-12-06 10:09:07.981242', '_unique_id': '67d4846308544145aaf71da92743ef8e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.982 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.982 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.982 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.982 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.982 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.982 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.982 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.982 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.982 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.982 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.982 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.982 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.982 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.982 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.982 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.982 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.982 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.982 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.982 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.982 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.982 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.982 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.982 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.982 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.982 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.982 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.982 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.982 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.982 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.982 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.982 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.982 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.982 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.982 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.982 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.982 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.982 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.982 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.982 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.982 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.982 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.982 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.982 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.982 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.982 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.982 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.982 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.982 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.982 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.982 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.982 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.982 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.982 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.982 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.983 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.995 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.995 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.997 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'c2fdc82b-f56d-485b-92aa-9bf588ee50a3', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:09:07.983681', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '9a53d33e-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 12166.233193805, 'message_signature': 'ee436cc1d242b18fbd7a387e5b78bcc6cd28cd7efb083088bcedc0ae0bb803ac'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:09:07.983681', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '9a53e5ae-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 12166.233193805, 'message_signature': '2bc2e55c3ed6739033c81903590fc79e35696a6d4ee8c0ec3beb6384470a0428'}]}, 'timestamp': '2025-12-06 10:09:07.996410', '_unique_id': 'd54382dae000454ca0c4c2f73e6620f4'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.997 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.997 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.997 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.997 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.997 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.997 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.997 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.997 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 05:09:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.997 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.997 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.997 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.997 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.997 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.997 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.997 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.997 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.997 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.997 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.997 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.997 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.997 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.997 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.997 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.997 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.997 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.997 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.997 12 ERROR oslo_messaging.notify.messaging
return retry_over_time( Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.997 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.997 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.997 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.997 12 ERROR oslo_messaging.notify.messaging Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.998 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.998 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.requests volume: 1272 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:07.999 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.000 12 ERROR oslo_messaging.notify.messaging [-] Could 
not send notification to notifications. Payload={'message_id': 'aaac2366-808f-4f39-87b4-c3ba92fd5e16', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1272, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:09:07.998781', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '9a5453c2-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 12166.197088356, 'message_signature': 'a9c38a7d3598a55dab7fdccafab4b7877727ff7b63307e2d766bd3dce945afe4'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 124, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:09:07.998781', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 
'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '9a546434-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 12166.197088356, 'message_signature': 'd51b9ec1f63d1d739b9e2ac609d735b5faba4a1c87673073dc8cc5ae2f44cbf9'}]}, 'timestamp': '2025-12-06 10:09:07.999648', '_unique_id': 'e3a62b382e5f4e2898c18a2a5ac00003'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.000 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.000 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.000 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.000 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.000 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.000 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.000 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.000 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.000 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.000 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.000 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.000 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.000 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.000 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.000 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.000 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:09:08 
localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.000 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.000 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.000 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.000 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.000 12 ERROR oslo_messaging.notify.messaging Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.000 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.000 12 ERROR oslo_messaging.notify.messaging Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.000 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.000 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.000 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.000 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 
10:09:08.000 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.000 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.000 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.000 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.000 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.000 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.000 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.000 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.000 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.000 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:09:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.000 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.000 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.000 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.000 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.000 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.000 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.000 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.000 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.000 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.000 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.000 12 
ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.000 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.000 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.000 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.000 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.000 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.000 12 ERROR oslo_messaging.notify.messaging Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.001 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.001 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.requests volume: 47 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.002 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.004 12 ERROR 
oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '79533855-fca1-4678-8645-de57faa31b02', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 47, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:09:08.001896', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '9a54ccda-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 12166.197088356, 'message_signature': 'bfe68996535373d04a54e58ab8fde3ee61da3347103247466180586c6e9a1833'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:09:08.001896', 'resource_metadata': 
{'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '9a54dfb8-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 12166.197088356, 'message_signature': '389b67240902636f2bd756a12c55dc80ab13fc20863a135b87fe08e7ab1dfb1b'}]}, 'timestamp': '2025-12-06 10:09:08.002839', '_unique_id': '144cf6e118754343bc17395b376f41a2'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.004 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.004 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.004 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:09:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.004 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.004 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.004 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.004 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.004 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.004 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.004 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.004 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.004 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.004 12 ERROR oslo_messaging.notify.messaging Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.004 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.004 12 ERROR oslo_messaging.notify.messaging Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.004 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.004 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in 
_send_notification Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.004 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.004 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.004 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.004 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.004 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.004 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.004 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.004 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.004 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.004 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.004 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in 
_ensure_connection Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.004 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.004 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.004 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.004 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.004 12 ERROR oslo_messaging.notify.messaging Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.005 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.005 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.bytes volume: 389120 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.006 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.bytes volume: 512 _stats_to_sample 
/usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.007 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'db8bb643-15b7-4506-9db8-aa49ce3d13ab', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 389120, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:09:08.005837', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '9a5568de-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 12166.197088356, 'message_signature': '9195028ca352f5fafd5b7a57baf6c4d2b90165aa6d8dd80c0293b0db1345735a'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 
'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:09:08.005837', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '9a557acc-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 12166.197088356, 'message_signature': '76507ac9ddc570eb6e719e4ad71d101812320c8535b28299684f147121b77ace'}]}, 'timestamp': '2025-12-06 10:09:08.006807', '_unique_id': '6f07b9707e544abebff5fa16a060e1c8'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.007 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.007 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:09:08.007 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.007 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.007 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.007 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.007 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.007 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:09:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.007 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.007 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.007 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.007 12 ERROR oslo_messaging.notify.messaging Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.007 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.007 12 ERROR oslo_messaging.notify.messaging Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.007 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.007 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.007 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.007 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.007 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.007 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.007 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.007 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:09:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.007 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.007 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.007 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.007 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.007 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.007 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.007 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.007 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.007 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.007 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.007 12 ERROR oslo_messaging.notify.messaging Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.009 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.009 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.010 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '91d8a94d-c432-43d5-a94a-564b7198714d', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:09:08.009212', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '9a55eb7e-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 12166.165895086, 'message_signature': 'df50f57b4f088f24367f81d98729355043a2e111dc2369483459ef8ca85afefa'}]}, 'timestamp': '2025-12-06 10:09:08.009700', '_unique_id': '14bd775412214432a7c99ae24e2f5caa'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.010 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:09:08.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.010 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.010 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.010 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.010 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.010 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.010 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.010 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.010 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.010 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.010 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.010 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.010 12 ERROR oslo_messaging.notify.messaging Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.010 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.010 12 ERROR oslo_messaging.notify.messaging Dec 6 05:09:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.010 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.010 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.010 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.010 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.010 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 
05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.010 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.010 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.010 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.010 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.010 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:09:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.010 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.010 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.010 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.010 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.010 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.010 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.010 12 ERROR oslo_messaging.notify.messaging Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:09:08.011 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.012 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.bytes volume: 35560448 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.012 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.bytes volume: 2154496 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.014 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'c6402536-0e8d-4f52-a675-79edb9f7db10', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 35560448, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:09:08.012111', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 
'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '9a565c12-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 12166.197088356, 'message_signature': '1f6bdff7fc69932eb05a364e57978644b6a03cb3b666b785d95a11dcee484eca'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 2154496, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:09:08.012111', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '9a566e14-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 12166.197088356, 'message_signature': '9544c32e09cb334dd7bc09cf08545ed18d66815f973e6d935d7129c80f30347f'}]}, 'timestamp': '2025-12-06 10:09:08.013042', '_unique_id': '2b4d3158e8a149cf9325bc62dc24fe3e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.014 12 ERROR oslo_messaging.notify.messaging 
Traceback (most recent call last): Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.014 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.014 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.014 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.014 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.014 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.014 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.014 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.014 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.014 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.014 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.014 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.014 12 ERROR oslo_messaging.notify.messaging Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.014 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.014 12 ERROR oslo_messaging.notify.messaging Dec 6 05:09:08 
localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.014 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.014 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.014 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.014 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.014 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.014 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.014 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.014 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.014 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.014 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.014 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.014 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.014 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.014 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.014 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.014 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.014 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.015 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.015 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.015 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.017 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '04029069-3005-4c41-8274-5cc826e8e7e8', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:09:08.015615', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '9a56ef42-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 12166.165895086, 'message_signature': '20e482ee207a888a0fcf8ca495923d28bfbee1478b6456f5dd942c3b71f75af7'}]}, 'timestamp': '2025-12-06 10:09:08.016352', '_unique_id': '0555c68978934b578fa46c66c9ae04ca'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.017 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.017 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.017 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.017 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.017 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.017 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.017 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.017 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.017 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.017 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.017 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.017 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.017 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.017 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.017 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.017 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.017 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.017 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.017 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.017 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.017 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.017 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.017 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.017 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.017 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.017 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.017 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.017 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.017 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.017 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.017 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.018 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.018 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.020 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '04f09ab1-c9a2-4a0a-bce6-604c5eddf788', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:09:08.018809', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '9a5761fc-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 12166.165895086, 'message_signature': 'dc712f8a041299bb5a2b65b87b0df7f0a070573fb1ba66bf1324220a4e07168f'}]}, 'timestamp': '2025-12-06 10:09:08.019280', '_unique_id': '9a913791a8414cdd8719c9f2fa43391f'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.020 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.020 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.020 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.020 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.020 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.020 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.020 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.020 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.020 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.020 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.020 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.020 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.020 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.020 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.020 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.020 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.020 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.020 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.020 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.020 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.020 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.020 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.020 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.020 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.020 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.020 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.020 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.020 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.020 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.020 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.020 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.021 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.022 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.022 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.023 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'a56ad630-89f4-4c81-a291-5b58e0ee6c25', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:09:08.022265', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '9a57e9c4-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 12166.165895086, 'message_signature': 'dc6e2d90cb4ca766e5944a9eda4fd26d0c19b8167997aa46fc75b01aa435fc3d'}]}, 'timestamp': '2025-12-06 10:09:08.022784', '_unique_id': '8934bd79f96c4ece9c16db6097344789'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.023 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.023 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.023 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.023 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.023 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.023 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.023 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.023 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.023 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.023 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.023 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.023 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.023 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.023 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.023 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.023 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.023 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.023 12 ERROR oslo_messaging.notify.messaging File
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.023 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.023 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.023 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.023 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.023 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.023 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.023 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.023 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.023 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.023 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.023 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 
05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.023 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.023 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.023 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.023 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.023 12 ERROR oslo_messaging.notify.messaging Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.024 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.024 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.025 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.026 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'cddc65a1-6a4a-4dc4-8550-6e9a394b152d', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:09:08.024942', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '9a585116-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 12166.233193805, 'message_signature': '4437f6a9139835bc15d9ddf79e7d8a97a731402806be07a09a67a08a3888eec0'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:09:08.024942', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 
'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '9a586138-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 12166.233193805, 'message_signature': '6990bd1e7aeaeb0345f7b44c93311d628dc9e53aeea9663dbaff64987bef9ca7'}]}, 'timestamp': '2025-12-06 10:09:08.025815', '_unique_id': '9deeca6c202744b5b80fc8c4e40e051d'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.026 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.026 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.026 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.026 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.026 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.026 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.026 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.026 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.026 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:09:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.026 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.026 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.026 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.026 12 ERROR oslo_messaging.notify.messaging Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.026 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.026 12 ERROR oslo_messaging.notify.messaging Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.026 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.026 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.026 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.026 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.026 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.026 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.026 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:09:08.026 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.026 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.026 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.026 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.026 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.026 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.026 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.026 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.026 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.026 12 ERROR oslo_messaging.notify.messaging Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.027 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.028 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.028 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.029 12 ERROR oslo_messaging.notify.messaging [-] Could not 
send notification to notifications. Payload={'message_id': '0fa15156-881e-455e-b5a3-e57410faad47', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:09:08.028023', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '9a58c98e-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 12166.233193805, 'message_signature': 'b61c9d10450a6bd7fd201f739253e30a491d7d20176ab090ec81cab05fbf883b'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:09:08.028023', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '9a58d9ba-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 12166.233193805, 'message_signature': '600083dd58ef12f0ff6558f0326bf23cde59eb4b237f8de0ce57945ff90bc90c'}]}, 'timestamp': '2025-12-06 10:09:08.028903', '_unique_id': '0557cc51e5ca4cb480d54511b805d11d'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.029 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.029 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.029 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.029 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.029 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.029 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.029 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.029 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.029 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:09:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.029 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.029 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.029 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.029 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.029 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.029 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.029 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.029 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.029 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.029 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.029 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.029 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.029 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.029 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.029 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.029 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.029 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.029 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.029 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.029 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.029 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.029 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.029 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.030 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.031 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.032 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'e9ee31bc-2166-420d-95cc-bca96b469a3a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:09:08.031106', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '9a594238-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 12166.165895086, 'message_signature': '9f71dcd24c72e5efa69e2b07b5607330219d173162fdf46636848438a56f0746'}]}, 'timestamp': '2025-12-06 10:09:08.031503', '_unique_id': '883e5d61c461495eaadc60d23f29275d'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.032 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.032 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.032 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.032 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.032 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.032 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.032 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.032 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.032 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.032 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.032 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.032 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.032 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.032 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.032 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.032 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.032 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.032 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.032 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.032 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.032 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.032 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.032 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.032 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.032 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.032 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.032 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.032 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.032 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.032 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.032 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.032 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.033 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.latency volume: 1252245154 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.033 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.latency volume: 27668224 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.034 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '1a13b122-6252-484a-ad57-0ffb04398e1b', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1252245154, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:09:08.032999', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '9a59891e-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 12166.197088356, 'message_signature': '2a0177182359ac2fdc901523ef68d4b29855930573662d291c9e862621126893'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 27668224, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:09:08.032999', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '9a59935a-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 12166.197088356, 'message_signature': '22ef2aa59a46eee400d642b2336020e8fe817246866ee9f9f0626a3aa00e9691'}]}, 'timestamp': '2025-12-06 10:09:08.033544', '_unique_id': 'd1ea4cad9af04c7f90cb7c1c6e8af5b1'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.034 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.034 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.034 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.034 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.034 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.034 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.034 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.034 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.034 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.034 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.034 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.034 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.034 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.034 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.034 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.034 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.034 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.034 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.034 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.034 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.034 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.034 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.034 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.034 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.034 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.034 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.034 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.034 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.034 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.034 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.034 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.034 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.034 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.035 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.outgoing.bytes volume: 9770 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.035 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'b1260772-b3d8-4dd9-9a00-3840287eb1a5', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 9770, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:09:08.035027', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '9a59d7ca-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 12166.165895086, 'message_signature': '2cbd916679658742058a2777228614d77e159fd0f6f39248ed46105b9b8711ab'}]}, 'timestamp': '2025-12-06 10:09:08.035315', '_unique_id': 'f536d06add394d0cb8da1e8686c6582e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.035 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.035 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.035 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.035 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.035 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.035 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.035 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.035 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.035 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.035 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.035 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.035 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.035 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.035 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:09:08 localhost
ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.035 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.035 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.035 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.035 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.035 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 
05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.035 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.035 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.035 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.035 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.035 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:09:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.035 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.035 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.035 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.035 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.035 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.035 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.035 12 ERROR oslo_messaging.notify.messaging Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:09:08.036 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.036 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.incoming.bytes volume: 6809 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.037 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '2f975d04-3d9f-4d66-9697-0dd8e701c99e', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 6809, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:09:08.036651', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': 
None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '9a5a180c-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 12166.165895086, 'message_signature': '9cb1512d39863d528032b913d7d9c7524a8da001df0d1083a81c77bcec1bc083'}]}, 'timestamp': '2025-12-06 10:09:08.036960', '_unique_id': '59e5a15d092546bd86f80260ebad8bcc'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.037 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.037 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.037 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.037 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.037 12 ERROR oslo_messaging.notify.messaging self._connection = 
self._establish_connection() Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.037 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.037 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.037 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.037 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.037 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.037 12 ERROR oslo_messaging.notify.messaging 
ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.037 12 ERROR oslo_messaging.notify.messaging Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.037 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.037 12 ERROR oslo_messaging.notify.messaging Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.037 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.037 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.037 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.037 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.037 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.037 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.037 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.037 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.037 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.037 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.037 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.037 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.037 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.037 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.037 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.037 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:09:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.037 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.037 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:09:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:09:08.037 12 ERROR oslo_messaging.notify.messaging Dec 6 05:09:08 localhost ceph-mon[298582]: mon.np0005548789@2(peon).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:09:08 localhost ceph-mon[298582]: Saving service mon spec with placement label:mon Dec 6 05:09:09 localhost ceph-mgr[288591]: log_channel(cluster) log [DBG] : pgmap v28: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail Dec 6 05:09:10 localhost ceph-mgr[288591]: [progress INFO root] Writing back 50 completed events Dec 6 05:09:10 localhost nova_compute[282193]: 2025-12-06 10:09:10.780 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:09:10 localhost nova_compute[282193]: 2025-12-06 10:09:10.783 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:09:10 localhost nova_compute[282193]: 2025-12-06 10:09:10.783 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 6 05:09:10 localhost nova_compute[282193]: 2025-12-06 10:09:10.783 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:09:10 localhost nova_compute[282193]: 2025-12-06 10:09:10.817 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 
[POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:09:10 localhost nova_compute[282193]: 2025-12-06 10:09:10.818 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:09:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5. Dec 6 05:09:10 localhost podman[303783]: 2025-12-06 10:09:10.937997189 +0000 UTC m=+0.090768357 container health_status 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:09:10 localhost podman[303783]: 2025-12-06 
10:09:10.980255513 +0000 UTC m=+0.133026691 container exec_died 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20251125) Dec 6 05:09:10 localhost systemd[1]: 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.service: Deactivated successfully. 
Dec 6 05:09:11 localhost ceph-mgr[288591]: log_channel(cluster) log [DBG] : pgmap v29: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail Dec 6 05:09:11 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' Dec 6 05:09:11 localhost ceph-mgr[288591]: log_channel(audit) log [DBG] : from='client.44482 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "daemon_id": "np0005548790", "target": ["mon-mgr", ""], "format": "json"}]: dispatch Dec 6 05:09:13 localhost ceph-mon[298582]: mon.np0005548789@2(peon).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:09:13 localhost ceph-mgr[288591]: log_channel(cluster) log [DBG] : pgmap v30: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail Dec 6 05:09:13 localhost ceph-mgr[288591]: log_channel(audit) log [DBG] : from='client.44488 -' entity='client.admin' cmd=[{"prefix": "orch daemon rm", "names": ["mon.np0005548790"], "force": true, "target": ["mon-mgr", ""]}]: dispatch Dec 6 05:09:13 localhost ceph-mgr[288591]: [cephadm INFO root] Remove daemons mon.np0005548790 Dec 6 05:09:13 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Remove daemons mon.np0005548790 Dec 6 05:09:13 localhost ceph-mgr[288591]: [cephadm INFO cephadm.services.cephadmservice] Safe to remove mon.np0005548790: new quorum should be ['np0005548788', 'np0005548789'] (from ['np0005548788', 'np0005548789']) Dec 6 05:09:13 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Safe to remove mon.np0005548790: new quorum should be ['np0005548788', 'np0005548789'] (from ['np0005548788', 'np0005548789']) Dec 6 05:09:13 localhost ceph-mgr[288591]: [cephadm INFO cephadm.services.cephadmservice] Removing monitor np0005548790 from monmap... Dec 6 05:09:13 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Removing monitor np0005548790 from monmap... 
Dec 6 05:09:13 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Removing daemon mon.np0005548790 from np0005548790.localdomain -- ports [] Dec 6 05:09:13 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Removing daemon mon.np0005548790 from np0005548790.localdomain -- ports [] Dec 6 05:09:13 localhost ceph-mgr[288591]: client.54179 ms_handle_reset on v2:172.18.0.103:3300/0 Dec 6 05:09:13 localhost ceph-mgr[288591]: client.44398 ms_handle_reset on v2:172.18.0.104:3300/0 Dec 6 05:09:13 localhost ceph-mon[298582]: mon.np0005548789@2(peon) e14 my rank is now 1 (was 2) Dec 6 05:09:13 localhost ceph-mgr[288591]: client.0 ms_handle_reset on v2:172.18.0.104:3300/0 Dec 6 05:09:13 localhost ceph-mgr[288591]: client.0 ms_handle_reset on v2:172.18.0.104:3300/0 Dec 6 05:09:13 localhost ceph-mon[298582]: log_channel(cluster) log [INF] : mon.np0005548789 calling monitor election Dec 6 05:09:13 localhost ceph-mon[298582]: paxos.1).electionLogic(62) init, last seen epoch 62 Dec 6 05:09:13 localhost ceph-mon[298582]: mon.np0005548789@1(electing) e14 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 6 05:09:13 localhost ceph-mon[298582]: mon.np0005548789@1(electing) e14 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 6 05:09:13 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e14 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 6 05:09:13 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Updating np0005548788.localdomain:/etc/ceph/ceph.conf Dec 6 05:09:13 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Updating np0005548788.localdomain:/etc/ceph/ceph.conf Dec 6 05:09:13 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Updating np0005548789.localdomain:/etc/ceph/ceph.conf Dec 6 05:09:13 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Updating np0005548790.localdomain:/etc/ceph/ceph.conf 
Dec 6 05:09:13 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Updating np0005548789.localdomain:/etc/ceph/ceph.conf
Dec 6 05:09:13 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Updating np0005548790.localdomain:/etc/ceph/ceph.conf
Dec 6 05:09:13 localhost ceph-mon[298582]: Remove daemons mon.np0005548790
Dec 6 05:09:13 localhost ceph-mon[298582]: Safe to remove mon.np0005548790: new quorum should be ['np0005548788', 'np0005548789'] (from ['np0005548788', 'np0005548789'])
Dec 6 05:09:13 localhost ceph-mon[298582]: Removing monitor np0005548790 from monmap...
Dec 6 05:09:13 localhost ceph-mon[298582]: Removing daemon mon.np0005548790 from np0005548790.localdomain -- ports []
Dec 6 05:09:13 localhost ceph-mon[298582]: mon.np0005548789 calling monitor election
Dec 6 05:09:13 localhost ceph-mon[298582]: mon.np0005548788 calling monitor election
Dec 6 05:09:13 localhost ceph-mon[298582]: mon.np0005548788 is new leader, mons np0005548788,np0005548789 in quorum (ranks 0,1)
Dec 6 05:09:13 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 6 05:09:13 localhost ceph-mon[298582]: Health detail: HEALTH_WARN 1 stray daemon(s) not managed by cephadm; 1 stray host(s) with 1 daemon(s) not managed by cephadm
Dec 6 05:09:13 localhost ceph-mon[298582]: [WRN] CEPHADM_STRAY_DAEMON: 1 stray daemon(s) not managed by cephadm
Dec 6 05:09:13 localhost ceph-mon[298582]: stray daemon mgr.np0005548785.vhqlsq on host np0005548785.localdomain not managed by cephadm
Dec 6 05:09:13 localhost ceph-mon[298582]: [WRN] CEPHADM_STRAY_HOST: 1 stray host(s) with 1 daemon(s) not managed by cephadm
Dec 6 05:09:13 localhost ceph-mon[298582]: stray host np0005548785.localdomain has 1 stray daemons: ['mgr.np0005548785.vhqlsq']
Dec 6 05:09:13 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Updating np0005548789.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 6 05:09:13 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Updating np0005548789.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 6 05:09:14 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Updating np0005548788.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 6 05:09:14 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Updating np0005548788.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 6 05:09:14 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Updating np0005548790.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 6 05:09:14 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Updating np0005548790.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 6 05:09:14 localhost ceph-mon[298582]: Updating np0005548788.localdomain:/etc/ceph/ceph.conf
Dec 6 05:09:14 localhost ceph-mon[298582]: Updating np0005548789.localdomain:/etc/ceph/ceph.conf
Dec 6 05:09:14 localhost ceph-mon[298582]: Updating np0005548790.localdomain:/etc/ceph/ceph.conf
Dec 6 05:09:14 localhost ceph-mon[298582]: Updating np0005548789.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 6 05:09:14 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje'
Dec 6 05:09:14 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje'
Dec 6 05:09:15 localhost ceph-mgr[288591]: [progress INFO root] update: starting ev f6dd0353-489f-4408-b0d7-0fd7319bdf9d (Updating node-proxy deployment (+3 -> 3))
Dec 6 05:09:15 localhost ceph-mgr[288591]: [progress INFO root] complete: finished ev f6dd0353-489f-4408-b0d7-0fd7319bdf9d (Updating node-proxy deployment (+3 -> 3))
Dec 6 05:09:15 localhost ceph-mgr[288591]: [progress INFO root] Completed event f6dd0353-489f-4408-b0d7-0fd7319bdf9d (Updating node-proxy deployment (+3 -> 3)) in 0 seconds
Dec 6 05:09:15 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005548788 (monmap changed)...
Dec 6 05:09:15 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005548788 (monmap changed)...
Dec 6 05:09:15 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005548788 on np0005548788.localdomain
Dec 6 05:09:15 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005548788 on np0005548788.localdomain
Dec 6 05:09:15 localhost ceph-mgr[288591]: log_channel(cluster) log [DBG] : pgmap v31: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail
Dec 6 05:09:15 localhost ceph-mgr[288591]: [progress INFO root] Writing back 50 completed events
Dec 6 05:09:15 localhost ceph-mon[298582]: Updating np0005548788.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 6 05:09:15 localhost ceph-mon[298582]: Updating np0005548790.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 6 05:09:15 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje'
Dec 6 05:09:15 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje'
Dec 6 05:09:15 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje'
Dec 6 05:09:15 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje'
Dec 6 05:09:15 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje'
Dec 6 05:09:15 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548788.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 6 05:09:15 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje'
Dec 6 05:09:15 localhost nova_compute[282193]: 2025-12-06 10:09:15.820 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 6 05:09:15 localhost nova_compute[282193]: 2025-12-06 10:09:15.821 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 6 05:09:15 localhost nova_compute[282193]: 2025-12-06 10:09:15.822 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Dec 6 05:09:15 localhost nova_compute[282193]: 2025-12-06 10:09:15.822 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec 6 05:09:15 localhost nova_compute[282193]: 2025-12-06 10:09:15.864 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 6 05:09:15 localhost nova_compute[282193]: 2025-12-06 10:09:15.864 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec 6 05:09:16 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring osd.2 (monmap changed)...
Dec 6 05:09:16 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring osd.2 (monmap changed)...
Dec 6 05:09:16 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.2 on np0005548788.localdomain
Dec 6 05:09:16 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.2 on np0005548788.localdomain
Dec 6 05:09:16 localhost openstack_network_exporter[243110]: ERROR 10:09:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 6 05:09:16 localhost openstack_network_exporter[243110]: ERROR 10:09:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 6 05:09:16 localhost openstack_network_exporter[243110]: ERROR 10:09:16 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 6 05:09:16 localhost openstack_network_exporter[243110]: ERROR 10:09:16 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 6 05:09:16 localhost openstack_network_exporter[243110]:
Dec 6 05:09:16 localhost openstack_network_exporter[243110]: ERROR 10:09:16 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 6 05:09:16 localhost openstack_network_exporter[243110]:
Dec 6 05:09:16 localhost ceph-mon[298582]: Reconfiguring crash.np0005548788 (monmap changed)...
Dec 6 05:09:16 localhost ceph-mon[298582]: Reconfiguring daemon crash.np0005548788 on np0005548788.localdomain
Dec 6 05:09:16 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje'
Dec 6 05:09:16 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje'
Dec 6 05:09:16 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Dec 6 05:09:17 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring osd.5 (monmap changed)...
Dec 6 05:09:17 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring osd.5 (monmap changed)...
Dec 6 05:09:17 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.5 on np0005548788.localdomain
Dec 6 05:09:17 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.5 on np0005548788.localdomain
Dec 6 05:09:17 localhost ceph-mgr[288591]: log_channel(cluster) log [DBG] : pgmap v32: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail
Dec 6 05:09:17 localhost ceph-mon[298582]: Reconfiguring osd.2 (monmap changed)...
Dec 6 05:09:17 localhost ceph-mon[298582]: Reconfiguring daemon osd.2 on np0005548788.localdomain
Dec 6 05:09:17 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje'
Dec 6 05:09:17 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje'
Dec 6 05:09:17 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Dec 6 05:09:18 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005548788.erzujf (monmap changed)...
Dec 6 05:09:18 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005548788.erzujf (monmap changed)...
Dec 6 05:09:18 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring daemon mds.mds.np0005548788.erzujf on np0005548788.localdomain
Dec 6 05:09:18 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring daemon mds.mds.np0005548788.erzujf on np0005548788.localdomain
Dec 6 05:09:18 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 6 05:09:18 localhost ceph-mon[298582]: Reconfiguring osd.5 (monmap changed)...
Dec 6 05:09:18 localhost ceph-mon[298582]: Reconfiguring daemon osd.5 on np0005548788.localdomain
Dec 6 05:09:18 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje'
Dec 6 05:09:18 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje'
Dec 6 05:09:18 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005548788.erzujf", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 6 05:09:19 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005548788.yvwbqq (monmap changed)...
Dec 6 05:09:19 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005548788.yvwbqq (monmap changed)...
Dec 6 05:09:19 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005548788.yvwbqq on np0005548788.localdomain
Dec 6 05:09:19 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005548788.yvwbqq on np0005548788.localdomain
Dec 6 05:09:19 localhost ceph-mgr[288591]: log_channel(cluster) log [DBG] : pgmap v33: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail
Dec 6 05:09:19 localhost ceph-mon[298582]: Reconfiguring mds.mds.np0005548788.erzujf (monmap changed)...
Dec 6 05:09:19 localhost ceph-mon[298582]: Reconfiguring daemon mds.mds.np0005548788.erzujf on np0005548788.localdomain
Dec 6 05:09:19 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje'
Dec 6 05:09:19 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje'
Dec 6 05:09:19 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548788.yvwbqq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 6 05:09:20 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005548789 (monmap changed)...
Dec 6 05:09:20 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005548789 (monmap changed)...
Dec 6 05:09:20 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005548789 on np0005548789.localdomain
Dec 6 05:09:20 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005548789 on np0005548789.localdomain
Dec 6 05:09:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.
Dec 6 05:09:20 localhost podman[304164]: 2025-12-06 10:09:20.32184571 +0000 UTC m=+0.093027535 container health_status 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 6 05:09:20 localhost podman[304164]: 2025-12-06 10:09:20.355202216 +0000 UTC m=+0.126384051 container exec_died 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 6 05:09:20 localhost systemd[1]: 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.service: Deactivated successfully.
Dec 6 05:09:20 localhost ceph-mgr[288591]: [balancer INFO root] Optimize plan auto_2025-12-06_10:09:20
Dec 6 05:09:20 localhost ceph-mgr[288591]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 6 05:09:20 localhost ceph-mgr[288591]: [balancer INFO root] do_upmap
Dec 6 05:09:20 localhost ceph-mgr[288591]: [balancer INFO root] pools ['backups', 'manila_metadata', 'volumes', 'vms', 'images', 'manila_data', '.mgr']
Dec 6 05:09:20 localhost ceph-mgr[288591]: [balancer INFO root] prepared 0/10 changes
Dec 6 05:09:20 localhost ceph-mgr[288591]: [pg_autoscaler INFO root] _maybe_adjust
Dec 6 05:09:20 localhost ceph-mgr[288591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 6 05:09:20 localhost ceph-mgr[288591]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Dec 6 05:09:20 localhost ceph-mgr[288591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 6 05:09:20 localhost ceph-mgr[288591]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033250017448352874 of space, bias 1.0, pg target 0.6650003489670575 quantized to 32 (current 32)
Dec 6 05:09:20 localhost ceph-mgr[288591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 6 05:09:20 localhost ceph-mgr[288591]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 6 05:09:20 localhost ceph-mgr[288591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 6 05:09:20 localhost ceph-mgr[288591]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0014449417225013959 of space, bias 1.0, pg target 0.2885066972594454 quantized to 32 (current 32)
Dec 6 05:09:20 localhost ceph-mgr[288591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 6 05:09:20 localhost ceph-mgr[288591]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 6 05:09:20 localhost ceph-mgr[288591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 6 05:09:20 localhost ceph-mgr[288591]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 6 05:09:20 localhost ceph-mgr[288591]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 6 05:09:20 localhost ceph-mgr[288591]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 2.453674623115578e-06 of space, bias 4.0, pg target 0.0019596681323283084 quantized to 16 (current 16)
Dec 6 05:09:20 localhost ceph-mgr[288591]: [volumes INFO mgr_util] scanning for idle connections..
Dec 6 05:09:20 localhost ceph-mgr[288591]: [volumes INFO mgr_util] cleaning up connections: []
Dec 6 05:09:20 localhost ceph-mgr[288591]: [volumes INFO mgr_util] scanning for idle connections..
Dec 6 05:09:20 localhost ceph-mgr[288591]: [volumes INFO mgr_util] cleaning up connections: []
Dec 6 05:09:20 localhost ceph-mgr[288591]: [volumes INFO mgr_util] scanning for idle connections..
Dec 6 05:09:20 localhost ceph-mgr[288591]: [volumes INFO mgr_util] cleaning up connections: []
Dec 6 05:09:20 localhost ceph-mgr[288591]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 6 05:09:20 localhost ceph-mgr[288591]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 6 05:09:20 localhost ceph-mgr[288591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 6 05:09:20 localhost ceph-mgr[288591]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 6 05:09:20 localhost ceph-mgr[288591]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 6 05:09:20 localhost podman[304219]:
Dec 6 05:09:20 localhost ceph-mgr[288591]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 6 05:09:20 localhost ceph-mgr[288591]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 6 05:09:20 localhost ceph-mgr[288591]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 6 05:09:20 localhost ceph-mgr[288591]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 6 05:09:20 localhost ceph-mgr[288591]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 6 05:09:20 localhost podman[304219]: 2025-12-06 10:09:20.720957901 +0000 UTC m=+0.065901987 container create 6e4737215ee694683c0f9a82af81724c69b513afc301c921570d671606094687 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jovial_jackson, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, io.openshift.tags=rhceph ceph, vcs-type=git, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, RELEASE=main, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Dec 6 05:09:20 localhost systemd[1]: Started libpod-conmon-6e4737215ee694683c0f9a82af81724c69b513afc301c921570d671606094687.scope.
Dec 6 05:09:20 localhost systemd[1]: Started libcrun container.
Dec 6 05:09:20 localhost podman[304219]: 2025-12-06 10:09:20.699122712 +0000 UTC m=+0.044066778 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 6 05:09:20 localhost podman[304219]: 2025-12-06 10:09:20.80720159 +0000 UTC m=+0.152145716 container init 6e4737215ee694683c0f9a82af81724c69b513afc301c921570d671606094687 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jovial_jackson, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, architecture=x86_64, io.buildah.version=1.41.4, release=1763362218, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, RELEASE=main, ceph=True, vendor=Red Hat, Inc.)
Dec 6 05:09:20 localhost podman[304219]: 2025-12-06 10:09:20.815342976 +0000 UTC m=+0.160287072 container start 6e4737215ee694683c0f9a82af81724c69b513afc301c921570d671606094687 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jovial_jackson, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, vcs-type=git, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, ceph=True, GIT_BRANCH=main, release=1763362218, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, name=rhceph)
Dec 6 05:09:20 localhost podman[304219]: 2025-12-06 10:09:20.815691976 +0000 UTC m=+0.160636112 container attach 6e4737215ee694683c0f9a82af81724c69b513afc301c921570d671606094687 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jovial_jackson, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, GIT_CLEAN=True, io.buildah.version=1.41.4, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, version=7, RELEASE=main, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, release=1763362218)
Dec 6 05:09:20 localhost jovial_jackson[304234]: 167 167
Dec 6 05:09:20 localhost systemd[1]: libpod-6e4737215ee694683c0f9a82af81724c69b513afc301c921570d671606094687.scope: Deactivated successfully.
Dec 6 05:09:20 localhost podman[304219]: 2025-12-06 10:09:20.82144264 +0000 UTC m=+0.166386776 container died 6e4737215ee694683c0f9a82af81724c69b513afc301c921570d671606094687 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jovial_jackson, GIT_BRANCH=main, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, version=7, CEPH_POINT_RELEASE=, name=rhceph, io.openshift.tags=rhceph ceph, vcs-type=git, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, RELEASE=main, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Dec 6 05:09:20 localhost ceph-mon[298582]: Reconfiguring mgr.np0005548788.yvwbqq (monmap changed)...
Dec 6 05:09:20 localhost ceph-mon[298582]: Reconfiguring daemon mgr.np0005548788.yvwbqq on np0005548788.localdomain
Dec 6 05:09:20 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje'
Dec 6 05:09:20 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje'
Dec 6 05:09:20 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548789.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 6 05:09:20 localhost nova_compute[282193]: 2025-12-06 10:09:20.865 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 6 05:09:20 localhost nova_compute[282193]: 2025-12-06 10:09:20.868 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 6 05:09:20 localhost nova_compute[282193]: 2025-12-06 10:09:20.868 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Dec 6 05:09:20 localhost nova_compute[282193]: 2025-12-06 10:09:20.868 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec 6 05:09:20 localhost nova_compute[282193]: 2025-12-06 10:09:20.913 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 6 05:09:20 localhost nova_compute[282193]: 2025-12-06 10:09:20.914 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec 6 05:09:20 localhost podman[304239]: 2025-12-06 10:09:20.933313832 +0000 UTC m=+0.100067938 container remove 6e4737215ee694683c0f9a82af81724c69b513afc301c921570d671606094687 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jovial_jackson, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, CEPH_POINT_RELEASE=, GIT_CLEAN=True, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, architecture=x86_64, io.openshift.tags=rhceph ceph, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, vcs-type=git)
Dec 6 05:09:20 localhost systemd[1]: libpod-conmon-6e4737215ee694683c0f9a82af81724c69b513afc301c921570d671606094687.scope: Deactivated successfully.
Dec 6 05:09:21 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring osd.1 (monmap changed)...
Dec 6 05:09:21 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring osd.1 (monmap changed)...
Dec 6 05:09:21 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.1 on np0005548789.localdomain
Dec 6 05:09:21 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.1 on np0005548789.localdomain
Dec 6 05:09:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941.
Dec 6 05:09:21 localhost podman[304271]: 2025-12-06 10:09:21.173164342 +0000 UTC m=+0.073343171 container health_status b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 6 05:09:21 localhost podman[304271]: 2025-12-06 10:09:21.192128023 +0000 UTC m=+0.092306782 container exec_died b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible)
Dec 6 05:09:21 localhost systemd[1]: b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941.service: Deactivated successfully.
Dec 6 05:09:21 localhost ceph-mgr[288591]: log_channel(cluster) log [DBG] : pgmap v34: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail
Dec 6 05:09:21 localhost systemd[1]: var-lib-containers-storage-overlay-611581aab770f0d2936712ca6cd3f2d8c9fe954b0e76cfcbcb173c0d435482f3-merged.mount: Deactivated successfully.
Dec 6 05:09:21 localhost podman[304328]: Dec 6 05:09:21 localhost podman[304328]: 2025-12-06 10:09:21.537447893 +0000 UTC m=+0.048789082 container create a9cac8564f7819ca77ca51a7098684245a5f57c398d249d124f49ba77b1b44bd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=thirsty_diffie, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, CEPH_POINT_RELEASE=, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, RELEASE=main, distribution-scope=public, vcs-type=git, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, vendor=Red Hat, Inc., io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True) Dec 6 05:09:21 localhost systemd[1]: Started libpod-conmon-a9cac8564f7819ca77ca51a7098684245a5f57c398d249d124f49ba77b1b44bd.scope. Dec 6 05:09:21 localhost systemd[1]: Started libcrun container. 
Dec 6 05:09:21 localhost podman[304328]: 2025-12-06 10:09:21.598828033 +0000 UTC m=+0.110169202 container init a9cac8564f7819ca77ca51a7098684245a5f57c398d249d124f49ba77b1b44bd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=thirsty_diffie, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, RELEASE=main, io.openshift.expose-services=, GIT_BRANCH=main, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, distribution-scope=public, ceph=True, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64) Dec 6 05:09:21 localhost podman[304328]: 2025-12-06 10:09:21.606620487 +0000 UTC m=+0.117961656 container start a9cac8564f7819ca77ca51a7098684245a5f57c398d249d124f49ba77b1b44bd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=thirsty_diffie, io.buildah.version=1.41.4, RELEASE=main, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , io.openshift.expose-services=, io.openshift.tags=rhceph ceph, version=7, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, architecture=x86_64, release=1763362218, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph 
Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git) Dec 6 05:09:21 localhost podman[304328]: 2025-12-06 10:09:21.606734881 +0000 UTC m=+0.118076050 container attach a9cac8564f7819ca77ca51a7098684245a5f57c398d249d124f49ba77b1b44bd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=thirsty_diffie, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, architecture=x86_64, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, vcs-type=git, com.redhat.component=rhceph-container, name=rhceph, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, 
io.openshift.expose-services=) Dec 6 05:09:21 localhost thirsty_diffie[304343]: 167 167 Dec 6 05:09:21 localhost systemd[1]: libpod-a9cac8564f7819ca77ca51a7098684245a5f57c398d249d124f49ba77b1b44bd.scope: Deactivated successfully. Dec 6 05:09:21 localhost podman[304328]: 2025-12-06 10:09:21.610508515 +0000 UTC m=+0.121849674 container died a9cac8564f7819ca77ca51a7098684245a5f57c398d249d124f49ba77b1b44bd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=thirsty_diffie, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , name=rhceph, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, version=7, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, release=1763362218, vcs-type=git, ceph=True) Dec 6 05:09:21 localhost podman[304328]: 2025-12-06 10:09:21.516345766 +0000 UTC m=+0.027686935 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 6 05:09:21 localhost podman[304348]: 2025-12-06 10:09:21.706236221 +0000 UTC m=+0.082690524 container remove a9cac8564f7819ca77ca51a7098684245a5f57c398d249d124f49ba77b1b44bd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=thirsty_diffie, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, release=1763362218, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.buildah.version=1.41.4, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, name=rhceph) Dec 6 05:09:21 localhost systemd[1]: libpod-conmon-a9cac8564f7819ca77ca51a7098684245a5f57c398d249d124f49ba77b1b44bd.scope: Deactivated successfully. Dec 6 05:09:21 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring osd.4 (monmap changed)... Dec 6 05:09:21 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring osd.4 (monmap changed)... Dec 6 05:09:21 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.4 on np0005548789.localdomain Dec 6 05:09:21 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.4 on np0005548789.localdomain Dec 6 05:09:22 localhost ceph-mon[298582]: Reconfiguring crash.np0005548789 (monmap changed)... 
Dec 6 05:09:22 localhost ceph-mon[298582]: Reconfiguring daemon crash.np0005548789 on np0005548789.localdomain Dec 6 05:09:22 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' Dec 6 05:09:22 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' Dec 6 05:09:22 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Dec 6 05:09:22 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' Dec 6 05:09:22 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' Dec 6 05:09:22 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Dec 6 05:09:22 localhost systemd[1]: var-lib-containers-storage-overlay-5c7cc5a5d0690136dea4bb69342d81df8a57b4f96ab089a52c849728e0ab90c3-merged.mount: Deactivated successfully. 
Dec 6 05:09:22 localhost podman[304424]: Dec 6 05:09:22 localhost podman[304424]: 2025-12-06 10:09:22.576015889 +0000 UTC m=+0.101119050 container create 569bf7cec4cdf9541b9e157b2039f3939248bf1dde9d75105a43956e8e9155fa (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigorous_dirac, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, vcs-type=git, name=rhceph, architecture=x86_64, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, distribution-scope=public, GIT_CLEAN=True, release=1763362218, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 6 05:09:22 localhost systemd[1]: Started libpod-conmon-569bf7cec4cdf9541b9e157b2039f3939248bf1dde9d75105a43956e8e9155fa.scope. Dec 6 05:09:22 localhost podman[304424]: 2025-12-06 10:09:22.523343982 +0000 UTC m=+0.048447163 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 6 05:09:22 localhost systemd[1]: Started libcrun container. 
Dec 6 05:09:22 localhost podman[304424]: 2025-12-06 10:09:22.666823956 +0000 UTC m=+0.191927117 container init 569bf7cec4cdf9541b9e157b2039f3939248bf1dde9d75105a43956e8e9155fa (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigorous_dirac, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, GIT_CLEAN=True, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, CEPH_POINT_RELEASE=, version=7, distribution-scope=public, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, io.openshift.expose-services=, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, com.redhat.component=rhceph-container) Dec 6 05:09:22 localhost podman[304424]: 2025-12-06 10:09:22.682798738 +0000 UTC m=+0.207901899 container start 569bf7cec4cdf9541b9e157b2039f3939248bf1dde9d75105a43956e8e9155fa (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigorous_dirac, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, version=7, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, 
architecture=x86_64, io.openshift.tags=rhceph ceph, distribution-scope=public, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z) Dec 6 05:09:22 localhost podman[304424]: 2025-12-06 10:09:22.683052906 +0000 UTC m=+0.208156067 container attach 569bf7cec4cdf9541b9e157b2039f3939248bf1dde9d75105a43956e8e9155fa (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigorous_dirac, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, build-date=2025-11-26T19:44:28Z, ceph=True, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , name=rhceph, io.buildah.version=1.41.4, io.openshift.expose-services=, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, vendor=Red Hat, Inc., vcs-type=git, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red 
Hat Ceph Storage 7) Dec 6 05:09:22 localhost vigorous_dirac[304439]: 167 167 Dec 6 05:09:22 localhost systemd[1]: libpod-569bf7cec4cdf9541b9e157b2039f3939248bf1dde9d75105a43956e8e9155fa.scope: Deactivated successfully. Dec 6 05:09:22 localhost podman[304424]: 2025-12-06 10:09:22.686471549 +0000 UTC m=+0.211574710 container died 569bf7cec4cdf9541b9e157b2039f3939248bf1dde9d75105a43956e8e9155fa (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigorous_dirac, architecture=x86_64, io.buildah.version=1.41.4, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, GIT_BRANCH=main, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, GIT_CLEAN=True, RELEASE=main, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., io.openshift.expose-services=, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7) Dec 6 05:09:22 localhost podman[304444]: 2025-12-06 10:09:22.770146311 +0000 UTC m=+0.076164797 container remove 569bf7cec4cdf9541b9e157b2039f3939248bf1dde9d75105a43956e8e9155fa (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigorous_dirac, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux 
, RELEASE=main, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, version=7, GIT_CLEAN=True, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, ceph=True, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Dec 6 05:09:22 localhost systemd[1]: libpod-conmon-569bf7cec4cdf9541b9e157b2039f3939248bf1dde9d75105a43956e8e9155fa.scope: Deactivated successfully. Dec 6 05:09:22 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005548789.vxwwsq (monmap changed)... Dec 6 05:09:22 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005548789.vxwwsq (monmap changed)... Dec 6 05:09:22 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring daemon mds.mds.np0005548789.vxwwsq on np0005548789.localdomain Dec 6 05:09:22 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring daemon mds.mds.np0005548789.vxwwsq on np0005548789.localdomain Dec 6 05:09:23 localhost ceph-mon[298582]: Reconfiguring osd.1 (monmap changed)... Dec 6 05:09:23 localhost ceph-mon[298582]: Reconfiguring daemon osd.1 on np0005548789.localdomain Dec 6 05:09:23 localhost ceph-mon[298582]: Reconfiguring osd.4 (monmap changed)... 
Dec 6 05:09:23 localhost ceph-mon[298582]: Reconfiguring daemon osd.4 on np0005548789.localdomain Dec 6 05:09:23 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' Dec 6 05:09:23 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' Dec 6 05:09:23 localhost ceph-mon[298582]: Reconfiguring mds.mds.np0005548789.vxwwsq (monmap changed)... Dec 6 05:09:23 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005548789.vxwwsq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Dec 6 05:09:23 localhost ceph-mon[298582]: Reconfiguring daemon mds.mds.np0005548789.vxwwsq on np0005548789.localdomain Dec 6 05:09:23 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:09:23 localhost ceph-mgr[288591]: log_channel(cluster) log [DBG] : pgmap v35: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail Dec 6 05:09:23 localhost systemd[1]: var-lib-containers-storage-overlay-66130f7b57de69b0e3bdc61098a6f4dbf902ffa1162a5e5655a4955145a639f2-merged.mount: Deactivated successfully. 
Dec 6 05:09:23 localhost podman[304522]: Dec 6 05:09:23 localhost podman[304522]: 2025-12-06 10:09:23.609961006 +0000 UTC m=+0.072236788 container create e8649da0f2dff4fe654db5f6715d5360bce5b3ed589db7263bcb4f7da69a14ae (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_curie, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, RELEASE=main, maintainer=Guillaume Abrioux , GIT_CLEAN=True, release=1763362218, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, version=7, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, vendor=Red Hat, Inc.) Dec 6 05:09:23 localhost systemd[1]: Started libpod-conmon-e8649da0f2dff4fe654db5f6715d5360bce5b3ed589db7263bcb4f7da69a14ae.scope. Dec 6 05:09:23 localhost systemd[1]: Started libcrun container. 
Dec 6 05:09:23 localhost podman[304522]: 2025-12-06 10:09:23.576404715 +0000 UTC m=+0.038680527 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 6 05:09:23 localhost podman[304522]: 2025-12-06 10:09:23.681796051 +0000 UTC m=+0.144071833 container init e8649da0f2dff4fe654db5f6715d5360bce5b3ed589db7263bcb4f7da69a14ae (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_curie, version=7, name=rhceph, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, com.redhat.component=rhceph-container, architecture=x86_64, vcs-type=git, RELEASE=main, GIT_CLEAN=True, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) 
Dec 6 05:09:23 localhost elastic_curie[304537]: 167 167 Dec 6 05:09:23 localhost podman[304522]: 2025-12-06 10:09:23.693795273 +0000 UTC m=+0.156071045 container start e8649da0f2dff4fe654db5f6715d5360bce5b3ed589db7263bcb4f7da69a14ae (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_curie, build-date=2025-11-26T19:44:28Z, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, vcs-type=git, description=Red Hat Ceph Storage 7, name=rhceph, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, RELEASE=main, version=7, distribution-scope=public, io.openshift.tags=rhceph ceph) Dec 6 05:09:23 localhost systemd[1]: libpod-e8649da0f2dff4fe654db5f6715d5360bce5b3ed589db7263bcb4f7da69a14ae.scope: Deactivated successfully. 
Dec 6 05:09:23 localhost podman[304522]: 2025-12-06 10:09:23.694136144 +0000 UTC m=+0.156412156 container attach e8649da0f2dff4fe654db5f6715d5360bce5b3ed589db7263bcb4f7da69a14ae (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_curie, build-date=2025-11-26T19:44:28Z, version=7, architecture=x86_64, ceph=True, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, release=1763362218, maintainer=Guillaume Abrioux ) Dec 6 05:09:23 localhost podman[304522]: 2025-12-06 10:09:23.697966479 +0000 UTC m=+0.160242321 container died e8649da0f2dff4fe654db5f6715d5360bce5b3ed589db7263bcb4f7da69a14ae (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_curie, vendor=Red Hat, Inc., architecture=x86_64, vcs-type=git, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, release=1763362218, distribution-scope=public, CEPH_POINT_RELEASE=, 
GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, RELEASE=main, name=rhceph, io.buildah.version=1.41.4, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7) Dec 6 05:09:23 localhost podman[304542]: 2025-12-06 10:09:23.79886214 +0000 UTC m=+0.089565101 container remove e8649da0f2dff4fe654db5f6715d5360bce5b3ed589db7263bcb4f7da69a14ae (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_curie, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, vcs-type=git, io.buildah.version=1.41.4, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, GIT_CLEAN=True, release=1763362218, CEPH_POINT_RELEASE=, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, 
cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Dec 6 05:09:23 localhost systemd[1]: libpod-conmon-e8649da0f2dff4fe654db5f6715d5360bce5b3ed589db7263bcb4f7da69a14ae.scope: Deactivated successfully. Dec 6 05:09:23 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005548789.mzhmje (monmap changed)... Dec 6 05:09:23 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005548789.mzhmje (monmap changed)... Dec 6 05:09:23 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005548789.mzhmje on np0005548789.localdomain Dec 6 05:09:23 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005548789.mzhmje on np0005548789.localdomain Dec 6 05:09:23 localhost podman[241090]: time="2025-12-06T10:09:23Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 6 05:09:23 localhost podman[241090]: @ - - [06/Dec/2025:10:09:23 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156104 "" "Go-http-client/1.1" Dec 6 05:09:23 localhost podman[241090]: @ - - [06/Dec/2025:10:09:23 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19241 "" "Go-http-client/1.1" Dec 6 05:09:24 localhost systemd[1]: var-lib-containers-storage-overlay-32cf92c56e986e9cf0392dcebe075c9952a8504acfcf5920943c983ef8630b5c-merged.mount: Deactivated successfully. 
Dec 6 05:09:24 localhost podman[304610]: Dec 6 05:09:24 localhost podman[304610]: 2025-12-06 10:09:24.534701231 +0000 UTC m=+0.066310230 container create 590e29022460d2dde343d7968e63e9b340f4a4fddf6dd24e3b4368d22477ba34 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_pascal, name=rhceph, description=Red Hat Ceph Storage 7, RELEASE=main, release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, build-date=2025-11-26T19:44:28Z, vcs-type=git, GIT_BRANCH=main, io.openshift.expose-services=, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, ceph=True, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 6 05:09:24 localhost systemd[1]: Started libpod-conmon-590e29022460d2dde343d7968e63e9b340f4a4fddf6dd24e3b4368d22477ba34.scope. Dec 6 05:09:24 localhost systemd[1]: Started libcrun container. 
Dec 6 05:09:24 localhost podman[304610]: 2025-12-06 10:09:24.501750948 +0000 UTC m=+0.033359977 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 6 05:09:24 localhost podman[304610]: 2025-12-06 10:09:24.605195176 +0000 UTC m=+0.136804165 container init 590e29022460d2dde343d7968e63e9b340f4a4fddf6dd24e3b4368d22477ba34 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_pascal, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, GIT_BRANCH=main, release=1763362218, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, name=rhceph, RELEASE=main, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, distribution-scope=public, version=7, vcs-type=git) Dec 6 05:09:24 localhost podman[304610]: 2025-12-06 10:09:24.615247298 +0000 UTC m=+0.146856287 container start 590e29022460d2dde343d7968e63e9b340f4a4fddf6dd24e3b4368d22477ba34 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_pascal, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, io.openshift.expose-services=, maintainer=Guillaume Abrioux , 
vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, release=1763362218, RELEASE=main, version=7, name=rhceph, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4) Dec 6 05:09:24 localhost podman[304610]: 2025-12-06 10:09:24.61560394 +0000 UTC m=+0.147212929 container attach 590e29022460d2dde343d7968e63e9b340f4a4fddf6dd24e3b4368d22477ba34 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_pascal, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, io.openshift.expose-services=, GIT_BRANCH=main, io.buildah.version=1.41.4, ceph=True, version=7, release=1763362218, architecture=x86_64, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, 
com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph) Dec 6 05:09:24 localhost elastic_pascal[304625]: 167 167 Dec 6 05:09:24 localhost systemd[1]: libpod-590e29022460d2dde343d7968e63e9b340f4a4fddf6dd24e3b4368d22477ba34.scope: Deactivated successfully. Dec 6 05:09:24 localhost podman[304610]: 2025-12-06 10:09:24.620148756 +0000 UTC m=+0.151757755 container died 590e29022460d2dde343d7968e63e9b340f4a4fddf6dd24e3b4368d22477ba34 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_pascal, RELEASE=main, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., vcs-type=git, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, CEPH_POINT_RELEASE=, GIT_BRANCH=main, version=7, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, distribution-scope=public, com.redhat.component=rhceph-container, release=1763362218) Dec 6 05:09:24 localhost podman[304630]: 2025-12-06 10:09:24.728129061 +0000 UTC m=+0.095405126 container remove 590e29022460d2dde343d7968e63e9b340f4a4fddf6dd24e3b4368d22477ba34 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_pascal, io.k8s.display-name=Red Hat 
Ceph Storage 7 on RHEL 9, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, distribution-scope=public, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, RELEASE=main, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, io.buildah.version=1.41.4, architecture=x86_64, vcs-type=git) Dec 6 05:09:24 localhost systemd[1]: libpod-conmon-590e29022460d2dde343d7968e63e9b340f4a4fddf6dd24e3b4368d22477ba34.scope: Deactivated successfully. Dec 6 05:09:24 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005548790 (monmap changed)... Dec 6 05:09:24 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005548790 (monmap changed)... 
Dec 6 05:09:24 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005548790 on np0005548790.localdomain Dec 6 05:09:24 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005548790 on np0005548790.localdomain Dec 6 05:09:24 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' Dec 6 05:09:24 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' Dec 6 05:09:24 localhost ceph-mon[298582]: Reconfiguring mgr.np0005548789.mzhmje (monmap changed)... Dec 6 05:09:24 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548789.mzhmje", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 6 05:09:24 localhost ceph-mon[298582]: Reconfiguring daemon mgr.np0005548789.mzhmje on np0005548789.localdomain Dec 6 05:09:24 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' Dec 6 05:09:24 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' Dec 6 05:09:24 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548790.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 6 05:09:25 localhost ceph-mgr[288591]: log_channel(cluster) log [DBG] : pgmap v36: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail Dec 6 05:09:25 localhost systemd[1]: var-lib-containers-storage-overlay-2fff68c4181a8b7a4e6d1d95fc9f12e1e4bd51405bbc9ff7551e61a53747ac48-merged.mount: Deactivated successfully. Dec 6 05:09:25 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring osd.0 (monmap changed)... 
Dec 6 05:09:25 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring osd.0 (monmap changed)... Dec 6 05:09:25 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.0 on np0005548790.localdomain Dec 6 05:09:25 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.0 on np0005548790.localdomain Dec 6 05:09:25 localhost nova_compute[282193]: 2025-12-06 10:09:25.915 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:09:25 localhost nova_compute[282193]: 2025-12-06 10:09:25.918 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:09:25 localhost nova_compute[282193]: 2025-12-06 10:09:25.919 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5005 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 6 05:09:25 localhost nova_compute[282193]: 2025-12-06 10:09:25.919 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:09:25 localhost ceph-mon[298582]: Reconfiguring crash.np0005548790 (monmap changed)... 
Dec 6 05:09:25 localhost ceph-mon[298582]: Reconfiguring daemon crash.np0005548790 on np0005548790.localdomain Dec 6 05:09:25 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' Dec 6 05:09:25 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' Dec 6 05:09:25 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch Dec 6 05:09:25 localhost nova_compute[282193]: 2025-12-06 10:09:25.961 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:09:25 localhost nova_compute[282193]: 2025-12-06 10:09:25.963 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:09:26 localhost ceph-mgr[288591]: log_channel(audit) log [DBG] : from='client.54285 -' entity='client.admin' cmd=[{"prefix": "orch daemon add", "daemon_type": "mon", "placement": "np0005548790.localdomain:172.18.0.105", "target": ["mon-mgr", ""]}]: dispatch Dec 6 05:09:26 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Deploying daemon mon.np0005548790 on np0005548790.localdomain Dec 6 05:09:26 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Deploying daemon mon.np0005548790 on np0005548790.localdomain Dec 6 05:09:26 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring osd.3 (monmap changed)... Dec 6 05:09:26 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring osd.3 (monmap changed)... 
Dec 6 05:09:26 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.3 on np0005548790.localdomain Dec 6 05:09:26 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.3 on np0005548790.localdomain Dec 6 05:09:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e. Dec 6 05:09:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094. Dec 6 05:09:26 localhost ceph-mon[298582]: Reconfiguring osd.0 (monmap changed)... Dec 6 05:09:26 localhost ceph-mon[298582]: Reconfiguring daemon osd.0 on np0005548790.localdomain Dec 6 05:09:26 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' Dec 6 05:09:26 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Dec 6 05:09:26 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' Dec 6 05:09:26 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' Dec 6 05:09:26 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch Dec 6 05:09:26 localhost podman[304648]: 2025-12-06 10:09:26.937839209 +0000 UTC m=+0.093655174 container health_status bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 
'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:09:26 localhost podman[304648]: 2025-12-06 10:09:26.976890216 +0000 UTC m=+0.132706171 container exec_died bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3) Dec 6 05:09:26 localhost podman[304647]: 2025-12-06 10:09:26.984403883 +0000 UTC m=+0.142878278 container health_status 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vcs-type=git, build-date=2025-08-20T13:12:41, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., config_data={'image': 
'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, io.openshift.expose-services=, release=1755695350, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.33.7, config_id=edpm, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.) 
Dec 6 05:09:26 localhost podman[304647]: 2025-12-06 10:09:26.996479407 +0000 UTC m=+0.154953792 container exec_died 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, maintainer=Red Hat, Inc., vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, version=9.6, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9) Dec 6 05:09:27 localhost systemd[1]: 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e.service: Deactivated successfully. Dec 6 05:09:27 localhost systemd[1]: bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094.service: Deactivated successfully. Dec 6 05:09:27 localhost ceph-mgr[288591]: log_channel(cluster) log [DBG] : pgmap v37: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail Dec 6 05:09:27 localhost ceph-mon[298582]: Deploying daemon mon.np0005548790 on np0005548790.localdomain Dec 6 05:09:27 localhost ceph-mon[298582]: Reconfiguring osd.3 (monmap changed)... 
Dec 6 05:09:27 localhost ceph-mon[298582]: Reconfiguring daemon osd.3 on np0005548790.localdomain Dec 6 05:09:28 localhost nova_compute[282193]: 2025-12-06 10:09:28.181 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:09:28 localhost nova_compute[282193]: 2025-12-06 10:09:28.181 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 6 05:09:28 localhost nova_compute[282193]: 2025-12-06 10:09:28.182 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 6 05:09:28 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:09:28 localhost nova_compute[282193]: 2025-12-06 10:09:28.586 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 6 05:09:28 localhost nova_compute[282193]: 2025-12-06 10:09:28.587 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquired lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 6 05:09:28 localhost nova_compute[282193]: 2025-12-06 10:09:28.587 282197 DEBUG nova.network.neutron [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - 
- - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 6 05:09:28 localhost nova_compute[282193]: 2025-12-06 10:09:28.587 282197 DEBUG nova.objects.instance [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b7ed0a2e-9350-4933-9334-4e5e08d3e6aa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 6 05:09:28 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e14 adding peer [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] to list of hints Dec 6 05:09:28 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e14 adding peer [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] to list of hints Dec 6 05:09:28 localhost nova_compute[282193]: 2025-12-06 10:09:28.986 282197 DEBUG nova.network.neutron [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Updating instance_info_cache with network_info: [{"id": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "address": "fa:16:3e:64:77:f3", "network": {"id": "652b6bdc-40ce-45b7-8aa5-3bca79987993", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.162", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "3d603431c0bb4967bafc7a0aa6108bfe", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86fc0b7a-fb", "ovs_interfaceid": 
"86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 6 05:09:29 localhost nova_compute[282193]: 2025-12-06 10:09:29.000 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Releasing lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 6 05:09:29 localhost nova_compute[282193]: 2025-12-06 10:09:29.001 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 6 05:09:29 localhost nova_compute[282193]: 2025-12-06 10:09:29.002 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:09:29 localhost nova_compute[282193]: 2025-12-06 10:09:29.018 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 05:09:29 localhost nova_compute[282193]: 2025-12-06 10:09:29.019 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 05:09:29 localhost nova_compute[282193]: 2025-12-06 10:09:29.019 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 05:09:29 localhost nova_compute[282193]: 2025-12-06 10:09:29.020 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Auditing locally available compute resources for np0005548789.localdomain (node: np0005548789.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 6 05:09:29 localhost nova_compute[282193]: 2025-12-06 10:09:29.020 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 6 05:09:29 localhost ceph-mgr[288591]: log_channel(cluster) log [DBG] : pgmap v38: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail Dec 6 05:09:29 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e14 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 6 05:09:29 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.107:0/788914456' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 6 05:09:29 localhost nova_compute[282193]: 2025-12-06 10:09:29.472 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 6 05:09:29 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e14 adding peer [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] to list of hints Dec 6 05:09:29 localhost ceph-mgr[288591]: mgr.server handle_open ignoring open from mon.np0005548790 172.18.0.108:0/3264568232; not ready for session (expect reconnect) Dec 6 05:09:29 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005548790.vhcezv (monmap changed)... Dec 6 05:09:29 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005548790.vhcezv (monmap changed)... 
Dec 6 05:09:29 localhost ceph-mgr[288591]: mgr finish mon failed to return metadata for mon.np0005548790: (2) No such file or directory Dec 6 05:09:29 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring daemon mds.mds.np0005548790.vhcezv on np0005548790.localdomain Dec 6 05:09:29 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring daemon mds.mds.np0005548790.vhcezv on np0005548790.localdomain Dec 6 05:09:29 localhost ceph-mon[298582]: log_channel(cluster) log [INF] : mon.np0005548789 calling monitor election Dec 6 05:09:29 localhost ceph-mon[298582]: paxos.1).electionLogic(64) init, last seen epoch 64 Dec 6 05:09:29 localhost ceph-mgr[288591]: mgr finish mon failed to return metadata for mon.np0005548790: (22) Invalid argument Dec 6 05:09:29 localhost ceph-mon[298582]: mon.np0005548789@1(electing) e15 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 6 05:09:29 localhost ceph-mon[298582]: mon.np0005548789@1(electing) e15 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 6 05:09:29 localhost nova_compute[282193]: 2025-12-06 10:09:29.532 282197 DEBUG nova.virt.libvirt.driver [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 6 05:09:29 localhost nova_compute[282193]: 2025-12-06 10:09:29.533 282197 DEBUG nova.virt.libvirt.driver [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 6 05:09:29 localhost nova_compute[282193]: 2025-12-06 10:09:29.687 282197 WARNING nova.virt.libvirt.driver [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] This host appears to have 
multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 6 05:09:29 localhost nova_compute[282193]: 2025-12-06 10:09:29.688 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Hypervisor/Node resource view: name=np0005548789.localdomain free_ram=11483MB free_disk=41.83699035644531GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", 
"address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 6 05:09:29 localhost nova_compute[282193]: 2025-12-06 10:09:29.688 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 05:09:29 localhost nova_compute[282193]: 2025-12-06 10:09:29.689 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 05:09:29 localhost nova_compute[282193]: 2025-12-06 10:09:29.754 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Instance b7ed0a2e-9350-4933-9334-4e5e08d3e6aa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 6 05:09:29 localhost nova_compute[282193]: 2025-12-06 10:09:29.754 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 6 05:09:29 localhost nova_compute[282193]: 2025-12-06 10:09:29.755 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Final resource view: name=np0005548789.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 6 05:09:29 localhost nova_compute[282193]: 2025-12-06 10:09:29.799 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 6 05:09:29 localhost ceph-mon[298582]: mon.np0005548789@1(electing) e15 handle_auth_request failed to assign global_id Dec 6 05:09:30 localhost ceph-mon[298582]: mon.np0005548789@1(electing) e15 handle_auth_request failed to assign global_id Dec 6 05:09:30 localhost ceph-mgr[288591]: mgr.server handle_open ignoring open from mon.np0005548790 172.18.0.108:0/3264568232; not ready for session (expect reconnect) Dec 6 05:09:30 localhost ceph-mgr[288591]: mgr finish mon failed to return metadata for mon.np0005548790: (22) Invalid argument Dec 6 05:09:30 localhost ceph-mon[298582]: mon.np0005548789@1(electing) e15 handle_auth_request failed to assign global_id Dec 6 05:09:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6. Dec 6 05:09:30 localhost podman[304718]: 2025-12-06 10:09:30.921888532 +0000 UTC m=+0.081279311 container health_status 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd) Dec 6 05:09:30 localhost podman[304718]: 2025-12-06 10:09:30.935971787 +0000 UTC 
m=+0.095362566 container exec_died 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125) Dec 6 05:09:30 localhost systemd[1]: 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6.service: Deactivated successfully. 
Dec 6 05:09:30 localhost nova_compute[282193]: 2025-12-06 10:09:30.964 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:09:31 localhost nova_compute[282193]: 2025-12-06 10:09:30.966 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:09:31 localhost nova_compute[282193]: 2025-12-06 10:09:30.966 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 6 05:09:31 localhost nova_compute[282193]: 2025-12-06 10:09:30.967 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:09:31 localhost nova_compute[282193]: 2025-12-06 10:09:31.001 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:09:31 localhost nova_compute[282193]: 2025-12-06 10:09:31.001 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:09:31 localhost ceph-mgr[288591]: log_channel(cluster) log [DBG] : pgmap v39: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail Dec 6 05:09:31 localhost ceph-mon[298582]: mon.np0005548789@1(electing) e15 handle_auth_request failed to assign global_id Dec 6 05:09:31 localhost ceph-mgr[288591]: mgr.server handle_open ignoring open from mon.np0005548790 172.18.0.108:0/3264568232; not ready for session (expect reconnect) Dec 6 05:09:31 localhost ceph-mgr[288591]: mgr finish mon failed to return metadata for mon.np0005548790: (22) Invalid argument Dec 6 05:09:32 localhost ceph-mgr[288591]: mgr.server handle_open ignoring open from 
mon.np0005548790 172.18.0.108:0/3264568232; not ready for session (expect reconnect) Dec 6 05:09:32 localhost ceph-mgr[288591]: mgr finish mon failed to return metadata for mon.np0005548790: (22) Invalid argument Dec 6 05:09:32 localhost ceph-mon[298582]: mon.np0005548789@1(electing) e15 handle_auth_request failed to assign global_id Dec 6 05:09:33 localhost ceph-mon[298582]: mon.np0005548789@1(electing) e15 handle_auth_request failed to assign global_id Dec 6 05:09:33 localhost ceph-mgr[288591]: log_channel(cluster) log [DBG] : pgmap v40: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail Dec 6 05:09:33 localhost ceph-mgr[288591]: mgr.server handle_open ignoring open from mon.np0005548790 172.18.0.108:0/3264568232; not ready for session (expect reconnect) Dec 6 05:09:33 localhost ceph-mgr[288591]: mgr finish mon failed to return metadata for mon.np0005548790: (22) Invalid argument Dec 6 05:09:33 localhost ceph-mon[298582]: mon.np0005548789@1(electing) e15 handle_auth_request failed to assign global_id Dec 6 05:09:34 localhost ceph-mon[298582]: mon.np0005548789@1(electing) e15 handle_auth_request failed to assign global_id Dec 6 05:09:34 localhost ceph-mgr[288591]: mgr.server handle_open ignoring open from mon.np0005548790 172.18.0.108:0/3264568232; not ready for session (expect reconnect) Dec 6 05:09:34 localhost ceph-mgr[288591]: mgr finish mon failed to return metadata for mon.np0005548790: (22) Invalid argument Dec 6 05:09:34 localhost ceph-mon[298582]: mon.np0005548789@1(electing) e15 handle_auth_request failed to assign global_id Dec 6 05:09:34 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Dec 6 05:09:34 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005548790.kvkfyr (monmap changed)... 
Dec 6 05:09:34 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005548790.kvkfyr (monmap changed)... Dec 6 05:09:34 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005548790.kvkfyr on np0005548790.localdomain Dec 6 05:09:34 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005548790.kvkfyr on np0005548790.localdomain Dec 6 05:09:34 localhost ceph-mon[298582]: Reconfiguring mds.mds.np0005548790.vhcezv (monmap changed)... Dec 6 05:09:34 localhost ceph-mon[298582]: Reconfiguring daemon mds.mds.np0005548790.vhcezv on np0005548790.localdomain Dec 6 05:09:34 localhost ceph-mon[298582]: mon.np0005548788 calling monitor election Dec 6 05:09:34 localhost ceph-mon[298582]: mon.np0005548789 calling monitor election Dec 6 05:09:34 localhost ceph-mon[298582]: mon.np0005548788 is new leader, mons np0005548788,np0005548789 in quorum (ranks 0,1) Dec 6 05:09:34 localhost ceph-mon[298582]: Health check failed: 1/3 mons down, quorum np0005548788,np0005548789 (MON_DOWN) Dec 6 05:09:34 localhost ceph-mon[298582]: Health detail: HEALTH_WARN 1 stray daemon(s) not managed by cephadm; 1 stray host(s) with 1 daemon(s) not managed by cephadm; 1/3 mons down, quorum np0005548788,np0005548789 Dec 6 05:09:34 localhost ceph-mon[298582]: [WRN] CEPHADM_STRAY_DAEMON: 1 stray daemon(s) not managed by cephadm Dec 6 05:09:34 localhost ceph-mon[298582]: stray daemon mgr.np0005548785.vhqlsq on host np0005548785.localdomain not managed by cephadm Dec 6 05:09:34 localhost ceph-mon[298582]: [WRN] CEPHADM_STRAY_HOST: 1 stray host(s) with 1 daemon(s) not managed by cephadm Dec 6 05:09:34 localhost ceph-mon[298582]: stray host np0005548785.localdomain has 1 stray daemons: ['mgr.np0005548785.vhqlsq'] Dec 6 05:09:34 localhost ceph-mon[298582]: [WRN] MON_DOWN: 1/3 mons down, quorum np0005548788,np0005548789 Dec 6 05:09:34 localhost ceph-mon[298582]: mon.np0005548790 (rank 2) addr 
[v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] is down (out of quorum) Dec 6 05:09:34 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' Dec 6 05:09:34 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' Dec 6 05:09:34 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548790.kvkfyr", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 6 05:09:35 localhost ceph-mgr[288591]: log_channel(cluster) log [DBG] : pgmap v41: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail Dec 6 05:09:35 localhost ceph-mgr[288591]: mgr.server handle_open ignoring open from mon.np0005548790 172.18.0.108:0/3264568232; not ready for session (expect reconnect) Dec 6 05:09:35 localhost ceph-mgr[288591]: mgr finish mon failed to return metadata for mon.np0005548790: (22) Invalid argument Dec 6 05:09:35 localhost ceph-mon[298582]: Reconfiguring mgr.np0005548790.kvkfyr (monmap changed)... Dec 6 05:09:35 localhost ceph-mon[298582]: Reconfiguring daemon mgr.np0005548790.kvkfyr on np0005548790.localdomain Dec 6 05:09:35 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' Dec 6 05:09:35 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' Dec 6 05:09:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538. 
Dec 6 05:09:35 localhost podman[304756]: 2025-12-06 10:09:35.732268884 +0000 UTC m=+0.084020984 container health_status d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 6 05:09:35 localhost podman[304756]: 2025-12-06 10:09:35.742482381 +0000 UTC m=+0.094234481 container exec_died d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Dec 6 05:09:35 localhost systemd[1]: d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538.service: Deactivated successfully. Dec 6 05:09:35 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 6 05:09:35 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/3172055493' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 6 05:09:36 localhost nova_compute[282193]: 2025-12-06 10:09:36.002 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:09:36 localhost nova_compute[282193]: 2025-12-06 10:09:36.004 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:09:36 localhost nova_compute[282193]: 2025-12-06 10:09:36.004 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 6 05:09:36 localhost nova_compute[282193]: 2025-12-06 10:09:36.004 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:09:36 localhost nova_compute[282193]: 2025-12-06 10:09:36.036 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:09:36 localhost nova_compute[282193]: 2025-12-06 10:09:36.037 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:09:36 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 6 05:09:36 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.107:0/2247057544' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 6 05:09:36 localhost nova_compute[282193]: 2025-12-06 10:09:36.297 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 6.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 6 05:09:36 localhost nova_compute[282193]: 2025-12-06 10:09:36.303 282197 DEBUG nova.compute.provider_tree [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Inventory has not changed in ProviderTree for provider: 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 6 05:09:36 localhost nova_compute[282193]: 2025-12-06 10:09:36.320 282197 DEBUG nova.scheduler.client.report [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Inventory has not changed for provider 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 6 05:09:36 localhost nova_compute[282193]: 2025-12-06 10:09:36.322 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Compute_service record updated for np0005548789.localdomain:np0005548789.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 6 05:09:36 localhost nova_compute[282193]: 2025-12-06 10:09:36.323 282197 DEBUG 
oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 6.634s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 05:09:36 localhost ceph-mgr[288591]: mgr.server handle_open ignoring open from mon.np0005548790 172.18.0.108:0/3264568232; not ready for session (expect reconnect) Dec 6 05:09:36 localhost ceph-mgr[288591]: mgr finish mon failed to return metadata for mon.np0005548790: (22) Invalid argument Dec 6 05:09:36 localhost nova_compute[282193]: 2025-12-06 10:09:36.502 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:09:36 localhost nova_compute[282193]: 2025-12-06 10:09:36.503 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:09:36 localhost nova_compute[282193]: 2025-12-06 10:09:36.503 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:09:36 localhost nova_compute[282193]: 2025-12-06 10:09:36.504 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:09:36 localhost nova_compute[282193]: 2025-12-06 10:09:36.504 282197 DEBUG 
oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 6 05:09:36 localhost nova_compute[282193]: 2025-12-06 10:09:36.504 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 6 05:09:36 localhost nova_compute[282193]: 2025-12-06 10:09:36.504 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 6 05:09:36 localhost nova_compute[282193]: 2025-12-06 10:09:36.505 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 6 05:09:36 localhost ceph-mon[298582]: log_channel(cluster) log [INF] : mon.np0005548789 calling monitor election
Dec 6 05:09:36 localhost ceph-mon[298582]: paxos.1).electionLogic(66) init, last seen epoch 66
Dec 6 05:09:36 localhost ceph-mon[298582]: mon.np0005548789@1(electing) e15 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Dec 6 05:09:36 localhost ceph-mon[298582]: mon.np0005548789@1(electing) e15 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Dec 6 05:09:36 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Dec 6 05:09:36 localhost ceph-mon[298582]: mon.np0005548790 calling monitor election
Dec 6 05:09:36 localhost ceph-mon[298582]: mon.np0005548790 calling monitor election
Dec 6 05:09:36 localhost ceph-mon[298582]: mon.np0005548789 calling monitor election
Dec 6 05:09:36 localhost ceph-mon[298582]: mon.np0005548788 calling monitor election
Dec 6 05:09:36 localhost ceph-mon[298582]: mon.np0005548788 is new leader, mons np0005548788,np0005548789,np0005548790 in quorum (ranks 0,1,2)
Dec 6 05:09:36 localhost ceph-mon[298582]: Health check cleared: MON_DOWN (was: 1/3 mons down, quorum np0005548788,np0005548789)
Dec 6 05:09:36 localhost ceph-mon[298582]: Health detail: HEALTH_WARN 1 stray daemon(s) not managed by cephadm; 1 stray host(s) with 1 daemon(s) not managed by cephadm
Dec 6 05:09:36 localhost ceph-mon[298582]: [WRN] CEPHADM_STRAY_DAEMON: 1 stray daemon(s) not managed by cephadm
Dec 6 05:09:36 localhost ceph-mon[298582]: stray daemon mgr.np0005548785.vhqlsq on host np0005548785.localdomain not managed by cephadm
Dec 6 05:09:36 localhost ceph-mon[298582]: [WRN] CEPHADM_STRAY_HOST: 1 stray host(s) with 1 daemon(s) not managed by cephadm
Dec 6 05:09:36 localhost ceph-mon[298582]: stray host np0005548785.localdomain has 1 stray daemons: ['mgr.np0005548785.vhqlsq']
Dec 6 05:09:37 localhost sshd[304839]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 05:09:37 localhost ceph-mgr[288591]: log_channel(cluster) log [DBG] : pgmap v42: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail
Dec 6 05:09:37 localhost ceph-mgr[288591]: mgr.server handle_open ignoring open from mon.np0005548790 172.18.0.108:0/3264568232; not ready for session (expect reconnect)
Dec 6 05:09:37 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje'
Dec 6 05:09:37 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje'
Dec 6 05:09:38 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 6 05:09:38 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Updating np0005548788.localdomain:/etc/ceph/ceph.conf
Dec 6 05:09:38 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Updating np0005548788.localdomain:/etc/ceph/ceph.conf
Dec 6 05:09:38 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Updating np0005548789.localdomain:/etc/ceph/ceph.conf
Dec 6 05:09:38 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Updating np0005548789.localdomain:/etc/ceph/ceph.conf
Dec 6 05:09:38 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Updating np0005548790.localdomain:/etc/ceph/ceph.conf
Dec 6 05:09:38 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Updating np0005548790.localdomain:/etc/ceph/ceph.conf
Dec 6 05:09:38 localhost ceph-mgr[288591]: mgr.server handle_report got status from non-daemon mon.np0005548790
Dec 6 05:09:38 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548789-mzhmje[288579]: 2025-12-06T10:09:38.480+0000 7f041dcc8640 -1 mgr.server handle_report got status from non-daemon mon.np0005548790
Dec 6 05:09:38 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 6 05:09:38 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 6 05:09:38 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2596033626' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 6 05:09:38 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 6 05:09:38 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2596033626' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 6 05:09:39 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Updating np0005548789.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 6 05:09:39 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Updating np0005548789.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 6 05:09:39 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Updating np0005548788.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 6 05:09:39 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Updating np0005548788.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 6 05:09:39 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Updating np0005548790.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 6 05:09:39 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Updating np0005548790.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 6 05:09:39 localhost ceph-mgr[288591]: log_channel(cluster) log [DBG] : pgmap v43: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail
Dec 6 05:09:39 localhost sshd[305161]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 05:09:39 localhost ceph-mgr[288591]: [progress INFO root] update: starting ev 1f510e1d-7b73-4412-b039-2de466b2d3e6 (Updating node-proxy deployment (+3 -> 3))
Dec 6 05:09:39 localhost ceph-mgr[288591]: [progress INFO root] complete: finished ev 1f510e1d-7b73-4412-b039-2de466b2d3e6 (Updating node-proxy deployment (+3 -> 3))
Dec 6 05:09:39 localhost ceph-mgr[288591]: [progress INFO root] Completed event 1f510e1d-7b73-4412-b039-2de466b2d3e6 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds
Dec 6 05:09:39 localhost ceph-mon[298582]: Updating np0005548788.localdomain:/etc/ceph/ceph.conf
Dec 6 05:09:39 localhost ceph-mon[298582]: Updating np0005548789.localdomain:/etc/ceph/ceph.conf
Dec 6 05:09:39 localhost ceph-mon[298582]: Updating np0005548790.localdomain:/etc/ceph/ceph.conf
Dec 6 05:09:39 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje'
Dec 6 05:09:39 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje'
Dec 6 05:09:39 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje'
Dec 6 05:09:39 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje'
Dec 6 05:09:39 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje'
Dec 6 05:09:39 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje'
Dec 6 05:09:40 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005548788 (monmap changed)...
Dec 6 05:09:40 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005548788 (monmap changed)...
Dec 6 05:09:40 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005548788 on np0005548788.localdomain
Dec 6 05:09:40 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005548788 on np0005548788.localdomain
Dec 6 05:09:40 localhost ceph-mgr[288591]: [progress INFO root] Writing back 50 completed events
Dec 6 05:09:40 localhost ceph-mon[298582]: Updating np0005548789.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 6 05:09:40 localhost ceph-mon[298582]: Updating np0005548788.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 6 05:09:40 localhost ceph-mon[298582]: Updating np0005548790.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 6 05:09:40 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje'
Dec 6 05:09:40 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548788.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 6 05:09:40 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje'
Dec 6 05:09:41 localhost nova_compute[282193]: 2025-12-06 10:09:41.081 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 6 05:09:41 localhost nova_compute[282193]: 2025-12-06 10:09:41.084 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 6 05:09:41 localhost nova_compute[282193]: 2025-12-06 10:09:41.084 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5047 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 6 05:09:41 localhost nova_compute[282193]: 2025-12-06 10:09:41.085 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 6 05:09:41 localhost nova_compute[282193]: 2025-12-06 10:09:41.086 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 6 05:09:41 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring osd.2 (monmap changed)...
Dec 6 05:09:41 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring osd.2 (monmap changed)...
Dec 6 05:09:41 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.2 on np0005548788.localdomain
Dec 6 05:09:41 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.2 on np0005548788.localdomain
Dec 6 05:09:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.
Dec 6 05:09:41 localhost ceph-mgr[288591]: log_channel(cluster) log [DBG] : pgmap v44: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail
Dec 6 05:09:41 localhost podman[305181]: 2025-12-06 10:09:41.377075778 +0000 UTC m=+0.086380745 container health_status 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 6 05:09:41 localhost podman[305181]: 2025-12-06 10:09:41.413865687 +0000 UTC m=+0.123170694 container exec_died 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0)
Dec 6 05:09:41 localhost systemd[1]: 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.service: Deactivated successfully.
Dec 6 05:09:41 localhost ceph-mon[298582]: Reconfiguring crash.np0005548788 (monmap changed)...
Dec 6 05:09:41 localhost ceph-mon[298582]: Reconfiguring daemon crash.np0005548788 on np0005548788.localdomain
Dec 6 05:09:41 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje'
Dec 6 05:09:41 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje'
Dec 6 05:09:41 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Dec 6 05:09:42 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring osd.5 (monmap changed)...
Dec 6 05:09:42 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring osd.5 (monmap changed)...
Dec 6 05:09:42 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.5 on np0005548788.localdomain
Dec 6 05:09:42 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.5 on np0005548788.localdomain
Dec 6 05:09:43 localhost ceph-mon[298582]: Reconfiguring osd.2 (monmap changed)...
Dec 6 05:09:43 localhost ceph-mon[298582]: Reconfiguring daemon osd.2 on np0005548788.localdomain
Dec 6 05:09:43 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje'
Dec 6 05:09:43 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje'
Dec 6 05:09:43 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Dec 6 05:09:43 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005548788.erzujf (monmap changed)...
Dec 6 05:09:43 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005548788.erzujf (monmap changed)...
Dec 6 05:09:43 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring daemon mds.mds.np0005548788.erzujf on np0005548788.localdomain
Dec 6 05:09:43 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring daemon mds.mds.np0005548788.erzujf on np0005548788.localdomain
Dec 6 05:09:43 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 6 05:09:43 localhost ceph-mgr[288591]: log_channel(cluster) log [DBG] : pgmap v45: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail
Dec 6 05:09:43 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005548788.yvwbqq (monmap changed)...
Dec 6 05:09:43 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005548788.yvwbqq (monmap changed)...
Dec 6 05:09:43 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005548788.yvwbqq on np0005548788.localdomain
Dec 6 05:09:43 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005548788.yvwbqq on np0005548788.localdomain
Dec 6 05:09:44 localhost ceph-mon[298582]: Reconfiguring osd.5 (monmap changed)...
Dec 6 05:09:44 localhost ceph-mon[298582]: Reconfiguring daemon osd.5 on np0005548788.localdomain
Dec 6 05:09:44 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje'
Dec 6 05:09:44 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje'
Dec 6 05:09:44 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005548788.erzujf", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 6 05:09:44 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje'
Dec 6 05:09:44 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje'
Dec 6 05:09:44 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548788.yvwbqq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 6 05:09:44 localhost ceph-mgr[288591]: log_channel(audit) log [DBG] : from='client.44541 -' entity='client.admin' cmd=[{"prefix": "orch", "action": "reconfig", "service_name": "osd.default_drive_group", "target": ["mon-mgr", ""]}]: dispatch
Dec 6 05:09:44 localhost ceph-mgr[288591]: [cephadm INFO root] Reconfig service osd.default_drive_group
Dec 6 05:09:44 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfig service osd.default_drive_group
Dec 6 05:09:44 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005548789 (monmap changed)...
Dec 6 05:09:44 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005548789 (monmap changed)...
Dec 6 05:09:44 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005548789 on np0005548789.localdomain
Dec 6 05:09:44 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005548789 on np0005548789.localdomain
Dec 6 05:09:45 localhost ceph-mon[298582]: Reconfiguring mds.mds.np0005548788.erzujf (monmap changed)...
Dec 6 05:09:45 localhost ceph-mon[298582]: Reconfiguring daemon mds.mds.np0005548788.erzujf on np0005548788.localdomain
Dec 6 05:09:45 localhost ceph-mon[298582]: Reconfiguring mgr.np0005548788.yvwbqq (monmap changed)...
Dec 6 05:09:45 localhost ceph-mon[298582]: Reconfiguring daemon mgr.np0005548788.yvwbqq on np0005548788.localdomain
Dec 6 05:09:45 localhost ceph-mon[298582]: Reconfig service osd.default_drive_group
Dec 6 05:09:45 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje'
Dec 6 05:09:45 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje'
Dec 6 05:09:45 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje'
Dec 6 05:09:45 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje'
Dec 6 05:09:45 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje'
Dec 6 05:09:45 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje'
Dec 6 05:09:45 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje'
Dec 6 05:09:45 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje'
Dec 6 05:09:45 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje'
Dec 6 05:09:45 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje'
Dec 6 05:09:45 localhost ceph-mon[298582]: Reconfiguring crash.np0005548789 (monmap changed)...
Dec 6 05:09:45 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548789.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 6 05:09:45 localhost ceph-mon[298582]: Reconfiguring daemon crash.np0005548789 on np0005548789.localdomain
Dec 6 05:09:45 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje'
Dec 6 05:09:45 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje'
Dec 6 05:09:45 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje'
Dec 6 05:09:45 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje'
Dec 6 05:09:45 localhost sshd[305242]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 05:09:45 localhost ceph-mgr[288591]: log_channel(cluster) log [DBG] : pgmap v46: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail
Dec 6 05:09:45 localhost podman[305261]:
Dec 6 05:09:45 localhost podman[305261]: 2025-12-06 10:09:45.461872219 +0000 UTC m=+0.081794076 container create 1453efeb067aa7ef657551153a9eeb617d8cd97f6a690338d20648c06f0e9657 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=priceless_kare, CEPH_POINT_RELEASE=, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, release=1763362218, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, maintainer=Guillaume Abrioux , version=7, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, GIT_CLEAN=True)
Dec 6 05:09:45 localhost systemd[1]: Started libpod-conmon-1453efeb067aa7ef657551153a9eeb617d8cd97f6a690338d20648c06f0e9657.scope.
Dec 6 05:09:45 localhost systemd[1]: Started libcrun container.
Dec 6 05:09:45 localhost podman[305261]: 2025-12-06 10:09:45.427890195 +0000 UTC m=+0.047812092 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 6 05:09:45 localhost podman[305261]: 2025-12-06 10:09:45.539310034 +0000 UTC m=+0.159231901 container init 1453efeb067aa7ef657551153a9eeb617d8cd97f6a690338d20648c06f0e9657 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=priceless_kare, RELEASE=main, build-date=2025-11-26T19:44:28Z, version=7, GIT_BRANCH=main, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, ceph=True, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, name=rhceph, architecture=x86_64, io.openshift.expose-services=, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 6 05:09:45 localhost systemd[1]: tmp-crun.XTJiFQ.mount: Deactivated successfully.
Dec 6 05:09:45 localhost podman[305261]: 2025-12-06 10:09:45.562128241 +0000 UTC m=+0.182050108 container start 1453efeb067aa7ef657551153a9eeb617d8cd97f6a690338d20648c06f0e9657 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=priceless_kare, version=7, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., RELEASE=main, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, maintainer=Guillaume Abrioux , release=1763362218, name=rhceph, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, io.buildah.version=1.41.4, vcs-type=git, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 6 05:09:45 localhost podman[305261]: 2025-12-06 10:09:45.562530393 +0000 UTC m=+0.182452300 container attach 1453efeb067aa7ef657551153a9eeb617d8cd97f6a690338d20648c06f0e9657 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=priceless_kare, release=1763362218, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, ceph=True, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, CEPH_POINT_RELEASE=, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, name=rhceph, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7)
Dec 6 05:09:45 localhost priceless_kare[305276]: 167 167
Dec 6 05:09:45 localhost systemd[1]: libpod-1453efeb067aa7ef657551153a9eeb617d8cd97f6a690338d20648c06f0e9657.scope: Deactivated successfully.
Dec 6 05:09:45 localhost podman[305261]: 2025-12-06 10:09:45.570809943 +0000 UTC m=+0.190731850 container died 1453efeb067aa7ef657551153a9eeb617d8cd97f6a690338d20648c06f0e9657 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=priceless_kare, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, version=7, GIT_CLEAN=True, name=rhceph, ceph=True, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, architecture=x86_64, vendor=Red Hat, Inc., RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=)
Dec 6 05:09:45 localhost podman[305281]: 2025-12-06 10:09:45.667057814 +0000 UTC m=+0.083966412 container remove 1453efeb067aa7ef657551153a9eeb617d8cd97f6a690338d20648c06f0e9657 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=priceless_kare, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, vcs-type=git, distribution-scope=public, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, build-date=2025-11-26T19:44:28Z, RELEASE=main, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, GIT_BRANCH=main, architecture=x86_64, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Dec 6 05:09:45 localhost systemd[1]: libpod-conmon-1453efeb067aa7ef657551153a9eeb617d8cd97f6a690338d20648c06f0e9657.scope: Deactivated successfully.
Dec 6 05:09:46 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring osd.1 (monmap changed)...
Dec 6 05:09:46 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring osd.1 (monmap changed)...
Dec 6 05:09:46 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.1 on np0005548789.localdomain
Dec 6 05:09:46 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.1 on np0005548789.localdomain
Dec 6 05:09:46 localhost nova_compute[282193]: 2025-12-06 10:09:46.087 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 6 05:09:46 localhost systemd[1]: var-lib-containers-storage-overlay-679ae9f13e4f0608335b72542c899868251d91ca02cf84fe368978b2df8ba5ad-merged.mount: Deactivated successfully.
Dec 6 05:09:46 localhost openstack_network_exporter[243110]: ERROR 10:09:46 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 6 05:09:46 localhost openstack_network_exporter[243110]: ERROR 10:09:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 6 05:09:46 localhost openstack_network_exporter[243110]: ERROR 10:09:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 6 05:09:46 localhost openstack_network_exporter[243110]: ERROR 10:09:46 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 6 05:09:46 localhost openstack_network_exporter[243110]:
Dec 6 05:09:46 localhost openstack_network_exporter[243110]: ERROR 10:09:46 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 6 05:09:46 localhost openstack_network_exporter[243110]:
Dec 6 05:09:46 localhost podman[305349]:
Dec 6 05:09:46 localhost podman[305349]: 2025-12-06 10:09:46.702356822 +0000 UTC m=+0.071582469 container create 93a2256d59a75689ea06cc06c4c2e400ecceff275d86e50e434b8577bd760712 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nice_curran, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, name=rhceph, ceph=True, RELEASE=main, release=1763362218, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, vcs-type=git, com.redhat.component=rhceph-container)
Dec 6 05:09:46 localhost systemd[1]: Started libpod-conmon-93a2256d59a75689ea06cc06c4c2e400ecceff275d86e50e434b8577bd760712.scope.
Dec 6 05:09:46 localhost systemd[1]: Started libcrun container.
Dec 6 05:09:46 localhost podman[305349]: 2025-12-06 10:09:46.776688362 +0000 UTC m=+0.145913989 container init 93a2256d59a75689ea06cc06c4c2e400ecceff275d86e50e434b8577bd760712 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nice_curran, CEPH_POINT_RELEASE=, RELEASE=main, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, version=7, distribution-scope=public, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, io.openshift.expose-services=, architecture=x86_64, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 6 05:09:46 localhost podman[305349]: 2025-12-06 10:09:46.679979898 +0000 UTC m=+0.049205575 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 6 05:09:46 localhost podman[305349]: 2025-12-06 10:09:46.790726205 +0000 UTC m=+0.159951872 container start 93a2256d59a75689ea06cc06c4c2e400ecceff275d86e50e434b8577bd760712 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nice_curran, name=rhceph, GIT_BRANCH=main, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, architecture=x86_64, ceph=True, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git)
Dec 6 05:09:46 localhost podman[305349]: 2025-12-06 10:09:46.791034795 +0000 UTC m=+0.160260422 container attach 93a2256d59a75689ea06cc06c4c2e400ecceff275d86e50e434b8577bd760712 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nice_curran, build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=, architecture=x86_64, description=Red Hat Ceph Storage 7, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported
base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, io.buildah.version=1.41.4, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, vcs-type=git, com.redhat.component=rhceph-container, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, release=1763362218, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, ceph=True) Dec 6 05:09:46 localhost nice_curran[305364]: 167 167 Dec 6 05:09:46 localhost systemd[1]: libpod-93a2256d59a75689ea06cc06c4c2e400ecceff275d86e50e434b8577bd760712.scope: Deactivated successfully. Dec 6 05:09:46 localhost podman[305349]: 2025-12-06 10:09:46.795400097 +0000 UTC m=+0.164625764 container died 93a2256d59a75689ea06cc06c4c2e400ecceff275d86e50e434b8577bd760712 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nice_curran, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, vendor=Red Hat, Inc., version=7, GIT_BRANCH=main, maintainer=Guillaume Abrioux , io.openshift.expose-services=, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main) Dec 6 05:09:46 localhost podman[305369]: 2025-12-06 10:09:46.902288438 +0000 UTC m=+0.092852589 container remove 93a2256d59a75689ea06cc06c4c2e400ecceff275d86e50e434b8577bd760712 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nice_curran, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, architecture=x86_64, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, vendor=Red Hat, Inc., distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, io.openshift.expose-services=, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0) Dec 6 05:09:46 localhost systemd[1]: libpod-conmon-93a2256d59a75689ea06cc06c4c2e400ecceff275d86e50e434b8577bd760712.scope: Deactivated successfully. 
Dec 6 05:09:47 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' Dec 6 05:09:47 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' Dec 6 05:09:47 localhost ceph-mon[298582]: Reconfiguring osd.1 (monmap changed)... Dec 6 05:09:47 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Dec 6 05:09:47 localhost ceph-mon[298582]: Reconfiguring daemon osd.1 on np0005548789.localdomain Dec 6 05:09:47 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring osd.4 (monmap changed)... Dec 6 05:09:47 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring osd.4 (monmap changed)... Dec 6 05:09:47 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.4 on np0005548789.localdomain Dec 6 05:09:47 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.4 on np0005548789.localdomain Dec 6 05:09:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:09:47.300 160509 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 05:09:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:09:47.300 160509 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 05:09:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:09:47.302 160509 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s 
inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 05:09:47 localhost ceph-mgr[288591]: log_channel(cluster) log [DBG] : pgmap v47: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail Dec 6 05:09:47 localhost systemd[1]: tmp-crun.vyGaOg.mount: Deactivated successfully. Dec 6 05:09:47 localhost systemd[1]: var-lib-containers-storage-overlay-8d8c68b29bf19d373668becd5f287da884895351e5e14a57cddc81e1575095ab-merged.mount: Deactivated successfully. Dec 6 05:09:47 localhost podman[305444]: Dec 6 05:09:47 localhost podman[305444]: 2025-12-06 10:09:47.772927233 +0000 UTC m=+0.081102616 container create 68942d5883816e3d102f42db7795f09748c3590865964ed5c4cf129f257e0da3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=flamboyant_mclean, vcs-type=git, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., version=7, distribution-scope=public, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , name=rhceph, release=1763362218, architecture=x86_64, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4) Dec 6 05:09:47 localhost systemd[1]: Started 
libpod-conmon-68942d5883816e3d102f42db7795f09748c3590865964ed5c4cf129f257e0da3.scope. Dec 6 05:09:47 localhost systemd[1]: Started libcrun container. Dec 6 05:09:47 localhost podman[305444]: 2025-12-06 10:09:47.839280153 +0000 UTC m=+0.147455566 container init 68942d5883816e3d102f42db7795f09748c3590865964ed5c4cf129f257e0da3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=flamboyant_mclean, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, maintainer=Guillaume Abrioux , architecture=x86_64, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, com.redhat.component=rhceph-container, version=7, io.buildah.version=1.41.4, ceph=True, CEPH_POINT_RELEASE=, release=1763362218, GIT_CLEAN=True) Dec 6 05:09:47 localhost podman[305444]: 2025-12-06 10:09:47.740883476 +0000 UTC m=+0.049058949 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 6 05:09:47 localhost podman[305444]: 2025-12-06 10:09:47.852230433 +0000 UTC m=+0.160405836 container start 68942d5883816e3d102f42db7795f09748c3590865964ed5c4cf129f257e0da3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=flamboyant_mclean, ceph=True, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, 
url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, vcs-type=git, maintainer=Guillaume Abrioux , distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, com.redhat.component=rhceph-container, version=7, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, vendor=Red Hat, Inc.) 
Dec 6 05:09:47 localhost podman[305444]: 2025-12-06 10:09:47.852689167 +0000 UTC m=+0.160864610 container attach 68942d5883816e3d102f42db7795f09748c3590865964ed5c4cf129f257e0da3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=flamboyant_mclean, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, distribution-scope=public, vcs-type=git, io.openshift.expose-services=, name=rhceph, architecture=x86_64, io.openshift.tags=rhceph ceph, version=7, RELEASE=main, CEPH_POINT_RELEASE=, GIT_CLEAN=True) Dec 6 05:09:47 localhost flamboyant_mclean[305460]: 167 167 Dec 6 05:09:47 localhost systemd[1]: libpod-68942d5883816e3d102f42db7795f09748c3590865964ed5c4cf129f257e0da3.scope: Deactivated successfully. 
Dec 6 05:09:47 localhost podman[305444]: 2025-12-06 10:09:47.856141351 +0000 UTC m=+0.164316754 container died 68942d5883816e3d102f42db7795f09748c3590865964ed5c4cf129f257e0da3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=flamboyant_mclean, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, com.redhat.component=rhceph-container, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, io.openshift.expose-services=, architecture=x86_64, ceph=True, io.openshift.tags=rhceph ceph) Dec 6 05:09:47 localhost podman[305465]: 2025-12-06 10:09:47.943594957 +0000 UTC m=+0.079303981 container remove 68942d5883816e3d102f42db7795f09748c3590865964ed5c4cf129f257e0da3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=flamboyant_mclean, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, RELEASE=main, GIT_BRANCH=main, GIT_CLEAN=True, release=1763362218, maintainer=Guillaume Abrioux , url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, 
io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, distribution-scope=public, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, version=7, architecture=x86_64) Dec 6 05:09:47 localhost systemd[1]: libpod-conmon-68942d5883816e3d102f42db7795f09748c3590865964ed5c4cf129f257e0da3.scope: Deactivated successfully. Dec 6 05:09:48 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' Dec 6 05:09:48 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' Dec 6 05:09:48 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' Dec 6 05:09:48 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' Dec 6 05:09:48 localhost ceph-mon[298582]: Reconfiguring osd.4 (monmap changed)... Dec 6 05:09:48 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Dec 6 05:09:48 localhost ceph-mon[298582]: Reconfiguring daemon osd.4 on np0005548789.localdomain Dec 6 05:09:48 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:09:48 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005548789.vxwwsq (monmap changed)... 
Dec 6 05:09:48 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005548789.vxwwsq (monmap changed)... Dec 6 05:09:48 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring daemon mds.mds.np0005548789.vxwwsq on np0005548789.localdomain Dec 6 05:09:48 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring daemon mds.mds.np0005548789.vxwwsq on np0005548789.localdomain Dec 6 05:09:48 localhost systemd[1]: var-lib-containers-storage-overlay-bd991d8484c1e009835b28be4e3f899c0fdbc2213d3c353b5186210df8014171-merged.mount: Deactivated successfully. Dec 6 05:09:48 localhost podman[305544]: Dec 6 05:09:48 localhost podman[305544]: 2025-12-06 10:09:48.826708587 +0000 UTC m=+0.078229199 container create fab2676db316fcb64c5307ff7c4419e7fd2cf5070798ef79f13a4482dc71b669 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sweet_bell, distribution-scope=public, vcs-type=git, name=rhceph, version=7, GIT_BRANCH=main, release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, architecture=x86_64, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Dec 6 
05:09:48 localhost systemd[1]: Started libpod-conmon-fab2676db316fcb64c5307ff7c4419e7fd2cf5070798ef79f13a4482dc71b669.scope. Dec 6 05:09:48 localhost systemd[1]: Started libcrun container. Dec 6 05:09:48 localhost podman[305544]: 2025-12-06 10:09:48.796049943 +0000 UTC m=+0.047570575 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 6 05:09:48 localhost podman[305544]: 2025-12-06 10:09:48.898351517 +0000 UTC m=+0.149872129 container init fab2676db316fcb64c5307ff7c4419e7fd2cf5070798ef79f13a4482dc71b669 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sweet_bell, vendor=Red Hat, Inc., architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, RELEASE=main, release=1763362218, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, distribution-scope=public, com.redhat.component=rhceph-container, vcs-type=git, maintainer=Guillaume Abrioux ) Dec 6 05:09:48 localhost podman[305544]: 2025-12-06 10:09:48.911239955 +0000 UTC m=+0.162760557 container start fab2676db316fcb64c5307ff7c4419e7fd2cf5070798ef79f13a4482dc71b669 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sweet_bell, maintainer=Guillaume Abrioux , GIT_CLEAN=True, distribution-scope=public, 
io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, io.buildah.version=1.41.4, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, CEPH_POINT_RELEASE=, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, com.redhat.component=rhceph-container, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, GIT_BRANCH=main) Dec 6 05:09:48 localhost podman[305544]: 2025-12-06 10:09:48.911715539 +0000 UTC m=+0.163236221 container attach fab2676db316fcb64c5307ff7c4419e7fd2cf5070798ef79f13a4482dc71b669 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sweet_bell, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base 
image., GIT_CLEAN=True, release=1763362218, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, RELEASE=main, vcs-type=git, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7) Dec 6 05:09:48 localhost sweet_bell[305559]: 167 167 Dec 6 05:09:48 localhost systemd[1]: libpod-fab2676db316fcb64c5307ff7c4419e7fd2cf5070798ef79f13a4482dc71b669.scope: Deactivated successfully. Dec 6 05:09:48 localhost podman[305544]: 2025-12-06 10:09:48.916056051 +0000 UTC m=+0.167576703 container died fab2676db316fcb64c5307ff7c4419e7fd2cf5070798ef79f13a4482dc71b669 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sweet_bell, build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, ceph=True, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, distribution-scope=public, io.openshift.expose-services=, RELEASE=main, vcs-type=git, name=rhceph, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, architecture=x86_64, maintainer=Guillaume Abrioux , cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 6 05:09:49 localhost podman[305564]: 2025-12-06 10:09:49.01585783 +0000 UTC m=+0.087756947 container remove 
fab2676db316fcb64c5307ff7c4419e7fd2cf5070798ef79f13a4482dc71b669 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sweet_bell, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, io.openshift.tags=rhceph ceph, RELEASE=main, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, CEPH_POINT_RELEASE=, ceph=True, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , GIT_CLEAN=True, distribution-scope=public, version=7, build-date=2025-11-26T19:44:28Z, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Dec 6 05:09:49 localhost systemd[1]: libpod-conmon-fab2676db316fcb64c5307ff7c4419e7fd2cf5070798ef79f13a4482dc71b669.scope: Deactivated successfully. Dec 6 05:09:49 localhost ceph-mon[298582]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #22. Immutable memtables: 0. 
Dec 6 05:09:49 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:09:49.131139) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Dec 6 05:09:49 localhost ceph-mon[298582]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 22 Dec 6 05:09:49 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015789131230, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2414, "num_deletes": 252, "total_data_size": 4580199, "memory_usage": 4650600, "flush_reason": "Manual Compaction"} Dec 6 05:09:49 localhost ceph-mon[298582]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #23: started Dec 6 05:09:49 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015789146518, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 23, "file_size": 2535389, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13772, "largest_seqno": 16181, "table_properties": {"data_size": 2525616, "index_size": 5830, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2885, "raw_key_size": 25522, "raw_average_key_size": 22, "raw_value_size": 2504070, "raw_average_value_size": 2204, "num_data_blocks": 257, "num_entries": 1136, "num_filter_entries": 1136, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; 
max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765015732, "oldest_key_time": 1765015732, "file_creation_time": 1765015789, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1ec2a3a6-c7d0-4f1e-9923-5a3b782725ae", "db_session_id": "JMBO5KX1IJCJ8FWC64EX", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}} Dec 6 05:09:49 localhost ceph-mon[298582]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 15430 microseconds, and 6261 cpu microseconds. Dec 6 05:09:49 localhost ceph-mon[298582]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Dec 6 05:09:49 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:09:49.146578) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #23: 2535389 bytes OK Dec 6 05:09:49 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:09:49.146603) [db/memtable_list.cc:519] [default] Level-0 commit table #23 started Dec 6 05:09:49 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:09:49.148321) [db/memtable_list.cc:722] [default] Level-0 commit table #23: memtable #1 done Dec 6 05:09:49 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:09:49.148341) EVENT_LOG_v1 {"time_micros": 1765015789148335, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Dec 6 05:09:49 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:09:49.148364) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Dec 6 05:09:49 localhost ceph-mon[298582]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 4568461, prev total WAL file size 4584813, 
number of live WAL files 2. Dec 6 05:09:49 localhost ceph-mon[298582]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548789/store.db/000019.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 6 05:09:49 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:09:49.149373) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003131303434' seq:72057594037927935, type:22 .. '7061786F73003131323936' seq:0, type:0; will stop at (end) Dec 6 05:09:49 localhost ceph-mon[298582]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00 Dec 6 05:09:49 localhost ceph-mon[298582]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [23(2475KB)], [21(16MB)] Dec 6 05:09:49 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015789149415, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [23], "files_L6": [21], "score": -1, "input_data_size": 19425254, "oldest_snapshot_seqno": -1} Dec 6 05:09:49 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005548789.mzhmje (monmap changed)... Dec 6 05:09:49 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005548789.mzhmje (monmap changed)... 
Dec 6 05:09:49 localhost ceph-mgr[288591]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005548789.mzhmje on np0005548789.localdomain Dec 6 05:09:49 localhost ceph-mgr[288591]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005548789.mzhmje on np0005548789.localdomain Dec 6 05:09:49 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' Dec 6 05:09:49 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' Dec 6 05:09:49 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' Dec 6 05:09:49 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' Dec 6 05:09:49 localhost ceph-mon[298582]: Reconfiguring mds.mds.np0005548789.vxwwsq (monmap changed)... Dec 6 05:09:49 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005548789.vxwwsq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Dec 6 05:09:49 localhost ceph-mon[298582]: Reconfiguring daemon mds.mds.np0005548789.vxwwsq on np0005548789.localdomain Dec 6 05:09:49 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' Dec 6 05:09:49 localhost ceph-mon[298582]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #24: 11613 keys, 17625003 bytes, temperature: kUnknown Dec 6 05:09:49 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015789275779, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 24, "file_size": 17625003, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17557789, "index_size": 37097, "index_partitions": 0, 
"top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 29061, "raw_key_size": 311818, "raw_average_key_size": 26, "raw_value_size": 17358916, "raw_average_value_size": 1494, "num_data_blocks": 1411, "num_entries": 11613, "num_filter_entries": 11613, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765015623, "oldest_key_time": 0, "file_creation_time": 1765015789, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1ec2a3a6-c7d0-4f1e-9923-5a3b782725ae", "db_session_id": "JMBO5KX1IJCJ8FWC64EX", "orig_file_number": 24, "seqno_to_time_mapping": "N/A"}} Dec 6 05:09:49 localhost ceph-mon[298582]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Dec 6 05:09:49 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:09:49.276891) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 17625003 bytes Dec 6 05:09:49 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:09:49.278717) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 152.7 rd, 138.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.4, 16.1 +0.0 blob) out(16.8 +0.0 blob), read-write-amplify(14.6) write-amplify(7.0) OK, records in: 12152, records dropped: 539 output_compression: NoCompression Dec 6 05:09:49 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:09:49.278749) EVENT_LOG_v1 {"time_micros": 1765015789278736, "job": 10, "event": "compaction_finished", "compaction_time_micros": 127189, "compaction_time_cpu_micros": 49550, "output_level": 6, "num_output_files": 1, "total_output_size": 17625003, "num_input_records": 12152, "num_output_records": 11613, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Dec 6 05:09:49 localhost ceph-mon[298582]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548789/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 6 05:09:49 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015789279232, "job": 10, "event": "table_file_deletion", "file_number": 23} Dec 6 05:09:49 localhost ceph-mon[298582]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548789/store.db/000021.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 6 05:09:49 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015789281796, "job": 
10, "event": "table_file_deletion", "file_number": 21} Dec 6 05:09:49 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:09:49.149283) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:09:49 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:09:49.281909) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:09:49 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:09:49.281918) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:09:49 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:09:49.281922) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:09:49 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:09:49.281925) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:09:49 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:09:49.281928) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:09:49 localhost ceph-mgr[288591]: log_channel(cluster) log [DBG] : pgmap v48: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail Dec 6 05:09:49 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e93 e93: 6 total, 6 up, 6 in Dec 6 05:09:49 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548789-mzhmje[288579]: 2025-12-06T10:09:49.449+0000 7f047a21a640 -1 mgr handle_mgr_map I was active but no longer am Dec 6 05:09:49 localhost systemd[1]: tmp-crun.m5gbKK.mount: Deactivated successfully. Dec 6 05:09:49 localhost systemd[1]: var-lib-containers-storage-overlay-bec86e4695a48de2dcc3233b55a28b20a086b1cadb21042654e00eb469465d46-merged.mount: Deactivated successfully. Dec 6 05:09:49 localhost systemd-logind[766]: Session 71 logged out. Waiting for processes to exit. 
Dec 6 05:09:49 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548789-mzhmje[288579]: ignoring --setuser ceph since I am not root Dec 6 05:09:49 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548789-mzhmje[288579]: ignoring --setgroup ceph since I am not root Dec 6 05:09:49 localhost ceph-mgr[288591]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-mgr, pid 2 Dec 6 05:09:49 localhost ceph-mgr[288591]: pidfile_write: ignore empty --pid-file Dec 6 05:09:49 localhost ceph-mgr[288591]: mgr[py] Loading python module 'alerts' Dec 6 05:09:49 localhost ceph-osd[32665]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0. Dec 6 05:09:49 localhost ceph-mgr[288591]: mgr[py] Module alerts has missing NOTIFY_TYPES member Dec 6 05:09:49 localhost ceph-mgr[288591]: mgr[py] Loading python module 'balancer' Dec 6 05:09:49 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548789-mzhmje[288579]: 2025-12-06T10:09:49.670+0000 7f2667da7140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member Dec 6 05:09:49 localhost ceph-mgr[288591]: mgr[py] Module balancer has missing NOTIFY_TYPES member Dec 6 05:09:49 localhost ceph-mgr[288591]: mgr[py] Loading python module 'cephadm' Dec 6 05:09:49 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548789-mzhmje[288579]: 2025-12-06T10:09:49.766+0000 7f2667da7140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member Dec 6 05:09:49 localhost podman[305657]: Dec 6 05:09:49 localhost podman[305657]: 2025-12-06 10:09:49.790989804 +0000 UTC m=+0.074651811 container create e735288c00e33917682d2ac61c72386afbdfe943ce3e3fae6ceb4b55ce872562 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sharp_edison, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, 
vcs-type=git, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, version=7, name=rhceph, CEPH_POINT_RELEASE=, RELEASE=main) Dec 6 05:09:49 localhost sshd[305672]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:09:49 localhost systemd[1]: Started libpod-conmon-e735288c00e33917682d2ac61c72386afbdfe943ce3e3fae6ceb4b55ce872562.scope. Dec 6 05:09:49 localhost podman[305657]: 2025-12-06 10:09:49.754241906 +0000 UTC m=+0.037903923 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 6 05:09:49 localhost systemd[1]: Started libcrun container. 
Dec 6 05:09:49 localhost podman[305657]: 2025-12-06 10:09:49.885650618 +0000 UTC m=+0.169312615 container init e735288c00e33917682d2ac61c72386afbdfe943ce3e3fae6ceb4b55ce872562 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sharp_edison, release=1763362218, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, io.openshift.expose-services=, distribution-scope=public, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, name=rhceph, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64) Dec 6 05:09:49 localhost systemd[1]: libpod-e735288c00e33917682d2ac61c72386afbdfe943ce3e3fae6ceb4b55ce872562.scope: Deactivated successfully. 
Dec 6 05:09:49 localhost sharp_edison[305675]: 167 167 Dec 6 05:09:49 localhost podman[305657]: 2025-12-06 10:09:49.912612671 +0000 UTC m=+0.196274668 container start e735288c00e33917682d2ac61c72386afbdfe943ce3e3fae6ceb4b55ce872562 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sharp_edison, ceph=True, RELEASE=main, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, distribution-scope=public, vcs-type=git, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 6 05:09:49 localhost podman[305657]: 2025-12-06 10:09:49.912951451 +0000 UTC m=+0.196613438 container attach e735288c00e33917682d2ac61c72386afbdfe943ce3e3fae6ceb4b55ce872562 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sharp_edison, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, 
org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., version=7, maintainer=Guillaume Abrioux , RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7) Dec 6 05:09:49 localhost podman[305657]: 2025-12-06 10:09:49.914860338 +0000 UTC m=+0.198522325 container died e735288c00e33917682d2ac61c72386afbdfe943ce3e3fae6ceb4b55ce872562 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sharp_edison, distribution-scope=public, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, RELEASE=main, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume 
Abrioux , architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7) Dec 6 05:09:49 localhost systemd-logind[766]: New session 72 of user ceph-admin. Dec 6 05:09:49 localhost systemd[1]: Started Session 72 of User ceph-admin. Dec 6 05:09:50 localhost podman[305682]: 2025-12-06 10:09:50.012034947 +0000 UTC m=+0.085083405 container remove e735288c00e33917682d2ac61c72386afbdfe943ce3e3fae6ceb4b55ce872562 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sharp_edison, io.openshift.tags=rhceph ceph, ceph=True, CEPH_POINT_RELEASE=, architecture=x86_64, maintainer=Guillaume Abrioux , GIT_BRANCH=main, GIT_CLEAN=True, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, release=1763362218, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, io.buildah.version=1.41.4, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7) Dec 6 05:09:50 localhost systemd[1]: libpod-conmon-e735288c00e33917682d2ac61c72386afbdfe943ce3e3fae6ceb4b55ce872562.scope: Deactivated successfully. Dec 6 05:09:50 localhost systemd[1]: session-71.scope: Deactivated successfully. Dec 6 05:09:50 localhost systemd[1]: session-71.scope: Consumed 27.292s CPU time. Dec 6 05:09:50 localhost systemd-logind[766]: Removed session 71. 
Dec 6 05:09:50 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' Dec 6 05:09:50 localhost ceph-mon[298582]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548789.mzhmje", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 6 05:09:50 localhost ceph-mon[298582]: from='client.? 172.18.0.200:0/90840268' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Dec 6 05:09:50 localhost ceph-mon[298582]: Activating manager daemon np0005548785.vhqlsq Dec 6 05:09:50 localhost ceph-mon[298582]: from='client.? 172.18.0.200:0/90840268' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished Dec 6 05:09:50 localhost ceph-mon[298582]: Manager daemon np0005548785.vhqlsq is now available Dec 6 05:09:50 localhost ceph-mon[298582]: removing stray HostCache host record np0005548787.localdomain.devices.0 Dec 6 05:09:50 localhost ceph-mon[298582]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005548787.localdomain.devices.0"} : dispatch Dec 6 05:09:50 localhost ceph-mon[298582]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005548787.localdomain.devices.0"}]': finished Dec 6 05:09:50 localhost ceph-mon[298582]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005548787.localdomain.devices.0"} : dispatch Dec 6 05:09:50 localhost ceph-mon[298582]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005548787.localdomain.devices.0"}]': finished Dec 6 05:09:50 localhost ceph-mon[298582]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' 
cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005548785.vhqlsq/mirror_snapshot_schedule"} : dispatch Dec 6 05:09:50 localhost ceph-mon[298582]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005548785.vhqlsq/trash_purge_schedule"} : dispatch Dec 6 05:09:50 localhost ceph-mgr[288591]: mgr[py] Loading python module 'crash' Dec 6 05:09:50 localhost ceph-mgr[288591]: mgr[py] Module crash has missing NOTIFY_TYPES member Dec 6 05:09:50 localhost ceph-mgr[288591]: mgr[py] Loading python module 'dashboard' Dec 6 05:09:50 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548789-mzhmje[288579]: 2025-12-06T10:09:50.456+0000 7f2667da7140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member Dec 6 05:09:50 localhost systemd[1]: tmp-crun.akhboo.mount: Deactivated successfully. Dec 6 05:09:50 localhost systemd[1]: var-lib-containers-storage-overlay-35ec4e3289f7e23672bf5f8de3af2ac5e3fcf72697529a3b00546e774b912359-merged.mount: Deactivated successfully. Dec 6 05:09:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999. 
Dec 6 05:09:50 localhost podman[305749]: 2025-12-06 10:09:50.599220788 +0000 UTC m=+0.097267224 container health_status 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0) Dec 6 05:09:50 localhost podman[305749]: 2025-12-06 10:09:50.605004452 +0000 UTC 
m=+0.103050868 container exec_died 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true) Dec 6 05:09:50 localhost systemd[1]: 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.service: Deactivated successfully. 
Dec 6 05:09:51 localhost ceph-mgr[288591]: mgr[py] Loading python module 'devicehealth' Dec 6 05:09:51 localhost podman[305829]: 2025-12-06 10:09:51.026516907 +0000 UTC m=+0.117286456 container exec ecefb99ff092b207f760d60e8fd97152c624df0e1016549024435ae641ee2e06 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548789, GIT_CLEAN=True, architecture=x86_64, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, distribution-scope=public, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, build-date=2025-11-26T19:44:28Z, RELEASE=main, CEPH_POINT_RELEASE=) Dec 6 05:09:51 localhost ceph-mgr[288591]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member Dec 6 05:09:51 localhost ceph-mgr[288591]: mgr[py] Loading python module 'diskprediction_local' Dec 6 05:09:51 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548789-mzhmje[288579]: 2025-12-06T10:09:51.069+0000 7f2667da7140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member Dec 6 05:09:51 localhost nova_compute[282193]: 2025-12-06 10:09:51.089 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms 
timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:09:51 localhost nova_compute[282193]: 2025-12-06 10:09:51.090 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:09:51 localhost nova_compute[282193]: 2025-12-06 10:09:51.091 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 6 05:09:51 localhost nova_compute[282193]: 2025-12-06 10:09:51.091 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:09:51 localhost nova_compute[282193]: 2025-12-06 10:09:51.111 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:09:51 localhost nova_compute[282193]: 2025-12-06 10:09:51.112 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:09:51 localhost podman[305829]: 2025-12-06 10:09:51.139306238 +0000 UTC m=+0.230075827 container exec_died ecefb99ff092b207f760d60e8fd97152c624df0e1016549024435ae641ee2e06 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548789, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, ceph=True, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, vcs-type=git, version=7, io.buildah.version=1.41.4, 
GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph) Dec 6 05:09:51 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548789-mzhmje[288579]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. Dec 6 05:09:51 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548789-mzhmje[288579]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. Dec 6 05:09:51 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548789-mzhmje[288579]: from numpy import show_config as show_numpy_config Dec 6 05:09:51 localhost ceph-mgr[288591]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member Dec 6 05:09:51 localhost ceph-mgr[288591]: mgr[py] Loading python module 'influx' Dec 6 05:09:51 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548789-mzhmje[288579]: 2025-12-06T10:09:51.215+0000 7f2667da7140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member Dec 6 05:09:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941. 
Dec 6 05:09:51 localhost ceph-mgr[288591]: mgr[py] Module influx has missing NOTIFY_TYPES member Dec 6 05:09:51 localhost ceph-mgr[288591]: mgr[py] Loading python module 'insights' Dec 6 05:09:51 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548789-mzhmje[288579]: 2025-12-06T10:09:51.278+0000 7f2667da7140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member Dec 6 05:09:51 localhost ceph-mgr[288591]: mgr[py] Loading python module 'iostat' Dec 6 05:09:51 localhost podman[305875]: 2025-12-06 10:09:51.364819445 +0000 UTC m=+0.128802933 container health_status b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Dec 6 05:09:51 localhost ceph-mgr[288591]: mgr[py] Module iostat has missing NOTIFY_TYPES member Dec 6 05:09:51 localhost ceph-mgr[288591]: mgr[py] Loading python module 'k8sevents' Dec 6 05:09:51 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548789-mzhmje[288579]: 2025-12-06T10:09:51.400+0000 7f2667da7140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member Dec 6 05:09:51 localhost podman[305875]: 2025-12-06 10:09:51.402748429 +0000 UTC 
m=+0.166731857 container exec_died b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 6 05:09:51 localhost systemd[1]: b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941.service: Deactivated successfully. Dec 6 05:09:51 localhost ceph-mon[298582]: [06/Dec/2025:10:09:50] ENGINE Bus STARTING Dec 6 05:09:51 localhost systemd[1]: tmp-crun.WuPcYX.mount: Deactivated successfully. 
Dec 6 05:09:51 localhost ceph-mgr[288591]: mgr[py] Loading python module 'localpool' Dec 6 05:09:51 localhost ceph-mgr[288591]: mgr[py] Loading python module 'mds_autoscaler' Dec 6 05:09:51 localhost ceph-mgr[288591]: mgr[py] Loading python module 'mirroring' Dec 6 05:09:52 localhost ceph-mgr[288591]: mgr[py] Loading python module 'nfs' Dec 6 05:09:52 localhost ceph-mgr[288591]: mgr[py] Module nfs has missing NOTIFY_TYPES member Dec 6 05:09:52 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548789-mzhmje[288579]: 2025-12-06T10:09:52.188+0000 7f2667da7140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member Dec 6 05:09:52 localhost ceph-mgr[288591]: mgr[py] Loading python module 'orchestrator' Dec 6 05:09:52 localhost ceph-mgr[288591]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member Dec 6 05:09:52 localhost ceph-mgr[288591]: mgr[py] Loading python module 'osd_perf_query' Dec 6 05:09:52 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548789-mzhmje[288579]: 2025-12-06T10:09:52.356+0000 7f2667da7140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member Dec 6 05:09:52 localhost ceph-mgr[288591]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member Dec 6 05:09:52 localhost ceph-mgr[288591]: mgr[py] Loading python module 'osd_support' Dec 6 05:09:52 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548789-mzhmje[288579]: 2025-12-06T10:09:52.420+0000 7f2667da7140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member Dec 6 05:09:52 localhost ceph-mgr[288591]: mgr[py] Module osd_support has missing NOTIFY_TYPES member Dec 6 05:09:52 localhost ceph-mgr[288591]: mgr[py] Loading python module 'pg_autoscaler' Dec 6 05:09:52 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548789-mzhmje[288579]: 2025-12-06T10:09:52.481+0000 7f2667da7140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member Dec 6 05:09:52 localhost ceph-mgr[288591]: mgr[py] Module pg_autoscaler has missing 
NOTIFY_TYPES member Dec 6 05:09:52 localhost ceph-mgr[288591]: mgr[py] Loading python module 'progress' Dec 6 05:09:52 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548789-mzhmje[288579]: 2025-12-06T10:09:52.550+0000 7f2667da7140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member Dec 6 05:09:52 localhost ceph-mgr[288591]: mgr[py] Module progress has missing NOTIFY_TYPES member Dec 6 05:09:52 localhost ceph-mgr[288591]: mgr[py] Loading python module 'prometheus' Dec 6 05:09:52 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548789-mzhmje[288579]: 2025-12-06T10:09:52.610+0000 7f2667da7140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member Dec 6 05:09:52 localhost ceph-mon[298582]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' Dec 6 05:09:52 localhost ceph-mon[298582]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' Dec 6 05:09:52 localhost ceph-mon[298582]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' Dec 6 05:09:52 localhost ceph-mon[298582]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' Dec 6 05:09:52 localhost ceph-mon[298582]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' Dec 6 05:09:52 localhost ceph-mon[298582]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' Dec 6 05:09:52 localhost ceph-mgr[288591]: mgr[py] Module prometheus has missing NOTIFY_TYPES member Dec 6 05:09:52 localhost ceph-mgr[288591]: mgr[py] Loading python module 'rbd_support' Dec 6 05:09:52 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548789-mzhmje[288579]: 2025-12-06T10:09:52.928+0000 7f2667da7140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member Dec 6 05:09:53 localhost ceph-mgr[288591]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member Dec 6 05:09:53 localhost ceph-mgr[288591]: mgr[py] Loading python module 'restful' Dec 6 
05:09:53 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548789-mzhmje[288579]: 2025-12-06T10:09:53.014+0000 7f2667da7140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member Dec 6 05:09:53 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:09:53 localhost ceph-mgr[288591]: mgr[py] Loading python module 'rgw' Dec 6 05:09:53 localhost ceph-mgr[288591]: mgr[py] Module rgw has missing NOTIFY_TYPES member Dec 6 05:09:53 localhost ceph-mgr[288591]: mgr[py] Loading python module 'rook' Dec 6 05:09:53 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548789-mzhmje[288579]: 2025-12-06T10:09:53.366+0000 7f2667da7140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member Dec 6 05:09:53 localhost ceph-mgr[288591]: mgr[py] Module rook has missing NOTIFY_TYPES member Dec 6 05:09:53 localhost ceph-mgr[288591]: mgr[py] Loading python module 'selftest' Dec 6 05:09:53 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548789-mzhmje[288579]: 2025-12-06T10:09:53.793+0000 7f2667da7140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member Dec 6 05:09:53 localhost ceph-mon[298582]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' Dec 6 05:09:53 localhost ceph-mon[298582]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' Dec 6 05:09:53 localhost ceph-mon[298582]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' Dec 6 05:09:53 localhost ceph-mon[298582]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' Dec 6 05:09:53 localhost ceph-mon[298582]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Dec 6 05:09:53 localhost ceph-mon[298582]: from='mgr.26690 172.18.0.103:0/2299561010' 
entity='mgr.np0005548785.vhqlsq' Dec 6 05:09:53 localhost ceph-mon[298582]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Dec 6 05:09:53 localhost ceph-mon[298582]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' Dec 6 05:09:53 localhost ceph-mon[298582]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Dec 6 05:09:53 localhost ceph-mon[298582]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Dec 6 05:09:53 localhost ceph-mon[298582]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Dec 6 05:09:53 localhost ceph-mon[298582]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Dec 6 05:09:53 localhost ceph-mon[298582]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 6 05:09:53 localhost ceph-mgr[288591]: mgr[py] Module selftest has missing NOTIFY_TYPES member Dec 6 05:09:53 localhost ceph-mgr[288591]: mgr[py] Loading python module 'snap_schedule' Dec 6 05:09:53 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548789-mzhmje[288579]: 2025-12-06T10:09:53.854+0000 7f2667da7140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member Dec 6 05:09:53 localhost podman[241090]: time="2025-12-06T10:09:53Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 6 05:09:53 localhost ceph-mgr[288591]: mgr[py] Loading python module 'stats' Dec 6 05:09:53 localhost 
podman[241090]: @ - - [06/Dec/2025:10:09:53 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156104 "" "Go-http-client/1.1" Dec 6 05:09:53 localhost podman[241090]: @ - - [06/Dec/2025:10:09:53 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19235 "" "Go-http-client/1.1" Dec 6 05:09:53 localhost ceph-mgr[288591]: mgr[py] Loading python module 'status' Dec 6 05:09:54 localhost ceph-mgr[288591]: mgr[py] Module status has missing NOTIFY_TYPES member Dec 6 05:09:54 localhost ceph-mgr[288591]: mgr[py] Loading python module 'telegraf' Dec 6 05:09:54 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548789-mzhmje[288579]: 2025-12-06T10:09:54.050+0000 7f2667da7140 -1 mgr[py] Module status has missing NOTIFY_TYPES member Dec 6 05:09:54 localhost ceph-mgr[288591]: mgr[py] Module telegraf has missing NOTIFY_TYPES member Dec 6 05:09:54 localhost ceph-mgr[288591]: mgr[py] Loading python module 'telemetry' Dec 6 05:09:54 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548789-mzhmje[288579]: 2025-12-06T10:09:54.110+0000 7f2667da7140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member Dec 6 05:09:54 localhost ceph-mgr[288591]: mgr[py] Module telemetry has missing NOTIFY_TYPES member Dec 6 05:09:54 localhost ceph-mgr[288591]: mgr[py] Loading python module 'test_orchestrator' Dec 6 05:09:54 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548789-mzhmje[288579]: 2025-12-06T10:09:54.242+0000 7f2667da7140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member Dec 6 05:09:54 localhost ceph-mgr[288591]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member Dec 6 05:09:54 localhost ceph-mgr[288591]: mgr[py] Loading python module 'volumes' Dec 6 05:09:54 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548789-mzhmje[288579]: 2025-12-06T10:09:54.389+0000 7f2667da7140 -1 mgr[py] Module 
test_orchestrator has missing NOTIFY_TYPES member Dec 6 05:09:54 localhost ceph-mgr[288591]: mgr[py] Module volumes has missing NOTIFY_TYPES member Dec 6 05:09:54 localhost ceph-mgr[288591]: mgr[py] Loading python module 'zabbix' Dec 6 05:09:54 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548789-mzhmje[288579]: 2025-12-06T10:09:54.577+0000 7f2667da7140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member Dec 6 05:09:54 localhost ceph-mgr[288591]: mgr[py] Module zabbix has missing NOTIFY_TYPES member Dec 6 05:09:54 localhost ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548789-mzhmje[288579]: 2025-12-06T10:09:54.635+0000 7f2667da7140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member Dec 6 05:09:54 localhost ceph-mgr[288591]: ms_deliver_dispatch: unhandled message 0x55d45c1ff600 mon_map magic: 0 from mon.1 v2:172.18.0.104:3300/0 Dec 6 05:09:54 localhost ceph-mon[298582]: Adjusting osd_memory_target on np0005548788.localdomain to 836.6M Dec 6 05:09:54 localhost ceph-mon[298582]: Adjusting osd_memory_target on np0005548789.localdomain to 836.6M Dec 6 05:09:54 localhost ceph-mon[298582]: Adjusting osd_memory_target on np0005548790.localdomain to 836.6M Dec 6 05:09:54 localhost ceph-mon[298582]: Unable to set osd_memory_target on np0005548788.localdomain to 877243801: error parsing value: Value '877243801' is below minimum 939524096 Dec 6 05:09:54 localhost ceph-mon[298582]: Unable to set osd_memory_target on np0005548789.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Dec 6 05:09:54 localhost ceph-mon[298582]: Unable to set osd_memory_target on np0005548790.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Dec 6 05:09:54 localhost ceph-mon[298582]: Updating np0005548788.localdomain:/etc/ceph/ceph.conf Dec 6 05:09:54 localhost ceph-mon[298582]: Updating np0005548789.localdomain:/etc/ceph/ceph.conf Dec 6 05:09:54 localhost ceph-mon[298582]: Updating 
np0005548790.localdomain:/etc/ceph/ceph.conf Dec 6 05:09:55 localhost ceph-mon[298582]: Updating np0005548789.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf Dec 6 05:09:55 localhost ceph-mon[298582]: Updating np0005548790.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf Dec 6 05:09:55 localhost ceph-mon[298582]: Updating np0005548788.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf Dec 6 05:09:55 localhost ceph-mon[298582]: Updating np0005548790.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 6 05:09:55 localhost ceph-mon[298582]: Updating np0005548789.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 6 05:09:55 localhost ceph-mon[298582]: Updating np0005548788.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 6 05:09:56 localhost nova_compute[282193]: 2025-12-06 10:09:56.113 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:09:56 localhost nova_compute[282193]: 2025-12-06 10:09:56.114 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:09:56 localhost nova_compute[282193]: 2025-12-06 10:09:56.114 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 6 05:09:56 localhost nova_compute[282193]: 2025-12-06 10:09:56.115 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:09:56 localhost nova_compute[282193]: 2025-12-06 10:09:56.160 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:09:56 localhost nova_compute[282193]: 2025-12-06 10:09:56.161 282197 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:09:56 localhost ceph-mon[298582]: Updating np0005548789.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring Dec 6 05:09:56 localhost ceph-mon[298582]: Updating np0005548788.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring Dec 6 05:09:56 localhost ceph-mon[298582]: Updating np0005548790.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring Dec 6 05:09:56 localhost ceph-mon[298582]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' Dec 6 05:09:56 localhost ceph-mon[298582]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' Dec 6 05:09:56 localhost ceph-mon[298582]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' Dec 6 05:09:56 localhost ceph-mon[298582]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' Dec 6 05:09:56 localhost ceph-mon[298582]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' Dec 6 05:09:56 localhost ceph-mon[298582]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' Dec 6 05:09:56 localhost ceph-mon[298582]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' Dec 6 05:09:56 localhost ceph-mon[298582]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch Dec 6 05:09:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e. Dec 6 05:09:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094. 
Dec 6 05:09:57 localhost podman[306747]: 2025-12-06 10:09:57.923423754 +0000 UTC m=+0.078446885 container health_status bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=edpm) Dec 6 05:09:57 localhost podman[306747]: 2025-12-06 10:09:57.941233212 +0000 UTC m=+0.096256393 container exec_died 
bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2) Dec 6 05:09:57 localhost ceph-mon[298582]: [06/Dec/2025:10:09:56] ENGINE Error in 'start' listener >#012Traceback (most recent call last):#012 File "/lib/python3.9/site-packages/cherrypy/process/wspbus.py", line 230, in publish#012 output.append(listener(*args, **kwargs))#012 File 
"/lib/python3.9/site-packages/cherrypy/_cpserver.py", line 180, in start#012 super(Server, self).start()#012 File "/lib/python3.9/site-packages/cherrypy/process/servers.py", line 184, in start#012 self.wait()#012 File "/lib/python3.9/site-packages/cherrypy/process/servers.py", line 260, in wait#012 portend.occupied(*self.bound_addr, timeout=Timeouts.occupied)#012 File "/lib/python3.9/site-packages/portend.py", line 162, in occupied#012 raise Timeout("Port {port} not bound on {host}.".format(**locals()))#012portend.Timeout: Port 8765 not bound on 172.18.0.103. Dec 6 05:09:57 localhost ceph-mon[298582]: Reconfiguring daemon osd.2 on np0005548788.localdomain Dec 6 05:09:57 localhost ceph-mon[298582]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' Dec 6 05:09:57 localhost ceph-mon[298582]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' Dec 6 05:09:57 localhost ceph-mon[298582]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' Dec 6 05:09:57 localhost ceph-mon[298582]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' Dec 6 05:09:57 localhost ceph-mon[298582]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch Dec 6 05:09:57 localhost systemd[1]: bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094.service: Deactivated successfully. 
Dec 6 05:09:57 localhost podman[306746]: 2025-12-06 10:09:57.996674312 +0000 UTC m=+0.152351203 container health_status 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, io.openshift.tags=minimal rhel9, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, container_name=openstack_network_exporter, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal) Dec 6 05:09:58 localhost podman[306746]: 2025-12-06 10:09:58.013290883 +0000 UTC m=+0.168967744 container exec_died 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, maintainer=Red Hat, Inc., vcs-type=git, com.redhat.component=ubi9-minimal-container, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a 
stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, vendor=Red Hat, Inc., version=9.6, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, name=ubi9-minimal, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public) Dec 6 05:09:58 localhost systemd[1]: 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e.service: Deactivated successfully. Dec 6 05:09:58 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:09:58 localhost ceph-mon[298582]: Reconfiguring daemon osd.5 on np0005548788.localdomain Dec 6 05:09:58 localhost ceph-mon[298582]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' Dec 6 05:09:58 localhost ceph-mon[298582]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' Dec 6 05:09:58 localhost ceph-mon[298582]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' Dec 6 05:09:58 localhost ceph-mon[298582]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' Dec 6 05:09:58 localhost ceph-mon[298582]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548789.mzhmje", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Dec 6 05:09:59 localhost podman[306838]: Dec 6 05:09:59 localhost podman[306838]: 2025-12-06 10:09:59.162548117 +0000 UTC m=+0.066083433 container create 630e4f393655121908b1b1f3e21b14d4d91ffef938b2c9e4235cc5b78cffb6df (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=tender_elbakyan, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, build-date=2025-11-26T19:44:28Z, 
GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, GIT_CLEAN=True, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, architecture=x86_64, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7) Dec 6 05:09:59 localhost systemd[1]: Started libpod-conmon-630e4f393655121908b1b1f3e21b14d4d91ffef938b2c9e4235cc5b78cffb6df.scope. Dec 6 05:09:59 localhost systemd[1]: Started libcrun container. Dec 6 05:09:59 localhost podman[306838]: 2025-12-06 10:09:59.130954624 +0000 UTC m=+0.034489950 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 6 05:09:59 localhost podman[306838]: 2025-12-06 10:09:59.242709803 +0000 UTC m=+0.146245119 container init 630e4f393655121908b1b1f3e21b14d4d91ffef938b2c9e4235cc5b78cffb6df (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=tender_elbakyan, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, release=1763362218, vcs-type=git, build-date=2025-11-26T19:44:28Z, 
GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, ceph=True, io.openshift.expose-services=, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream) Dec 6 05:09:59 localhost podman[306838]: 2025-12-06 10:09:59.25457393 +0000 UTC m=+0.158109236 container start 630e4f393655121908b1b1f3e21b14d4d91ffef938b2c9e4235cc5b78cffb6df (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=tender_elbakyan, CEPH_POINT_RELEASE=, GIT_CLEAN=True, vcs-type=git, distribution-scope=public, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, version=7, release=1763362218, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, io.openshift.expose-services=, vendor=Red Hat, Inc., name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main) Dec 6 05:09:59 localhost podman[306838]: 2025-12-06 10:09:59.254966543 +0000 UTC m=+0.158501909 container attach 630e4f393655121908b1b1f3e21b14d4d91ffef938b2c9e4235cc5b78cffb6df (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=tender_elbakyan, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, 
vcs-type=git, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , RELEASE=main, ceph=True, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, distribution-scope=public, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., release=1763362218, name=rhceph, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64) Dec 6 05:09:59 localhost tender_elbakyan[306851]: 167 167 Dec 6 05:09:59 localhost podman[306838]: 2025-12-06 10:09:59.258686875 +0000 UTC m=+0.162222241 container died 630e4f393655121908b1b1f3e21b14d4d91ffef938b2c9e4235cc5b78cffb6df (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=tender_elbakyan, CEPH_POINT_RELEASE=, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, GIT_CLEAN=True, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, release=1763362218, ceph=True, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat 
Ceph Storage 7 on RHEL 9, RELEASE=main, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc.) Dec 6 05:09:59 localhost systemd[1]: libpod-630e4f393655121908b1b1f3e21b14d4d91ffef938b2c9e4235cc5b78cffb6df.scope: Deactivated successfully. Dec 6 05:09:59 localhost podman[306856]: 2025-12-06 10:09:59.369817445 +0000 UTC m=+0.099985525 container remove 630e4f393655121908b1b1f3e21b14d4d91ffef938b2c9e4235cc5b78cffb6df (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=tender_elbakyan, GIT_CLEAN=True, vendor=Red Hat, Inc., io.openshift.expose-services=, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, release=1763362218, vcs-type=git, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=) Dec 6 05:09:59 localhost systemd[1]: 
libpod-conmon-630e4f393655121908b1b1f3e21b14d4d91ffef938b2c9e4235cc5b78cffb6df.scope: Deactivated successfully. Dec 6 05:09:59 localhost ceph-mon[298582]: Reconfiguring mgr.np0005548789.mzhmje (monmap changed)... Dec 6 05:09:59 localhost ceph-mon[298582]: Reconfiguring daemon mgr.np0005548789.mzhmje on np0005548789.localdomain Dec 6 05:09:59 localhost ceph-mon[298582]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' Dec 6 05:09:59 localhost ceph-mon[298582]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' Dec 6 05:09:59 localhost ceph-mon[298582]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548790.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Dec 6 05:09:59 localhost ceph-mon[298582]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' Dec 6 05:10:00 localhost systemd[1]: tmp-crun.UPjbGj.mount: Deactivated successfully. Dec 6 05:10:00 localhost systemd[1]: var-lib-containers-storage-overlay-fa0c81e5c034f4a4210e5bf4fb8134fb99265b846dd7a1aa120726d16b34aa90-merged.mount: Deactivated successfully. Dec 6 05:10:00 localhost ceph-mon[298582]: Reconfiguring crash.np0005548790 (monmap changed)... 
Dec 6 05:10:00 localhost ceph-mon[298582]: Reconfiguring daemon crash.np0005548790 on np0005548790.localdomain Dec 6 05:10:00 localhost ceph-mon[298582]: overall HEALTH_WARN 1 stray daemon(s) not managed by cephadm; 1 stray host(s) with 1 daemon(s) not managed by cephadm Dec 6 05:10:00 localhost ceph-mon[298582]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' Dec 6 05:10:00 localhost ceph-mon[298582]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' Dec 6 05:10:00 localhost ceph-mon[298582]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch Dec 6 05:10:01 localhost nova_compute[282193]: 2025-12-06 10:10:01.162 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:10:01 localhost nova_compute[282193]: 2025-12-06 10:10:01.165 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:10:01 localhost nova_compute[282193]: 2025-12-06 10:10:01.165 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 6 05:10:01 localhost nova_compute[282193]: 2025-12-06 10:10:01.165 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:10:01 localhost nova_compute[282193]: 2025-12-06 10:10:01.198 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:10:01 localhost nova_compute[282193]: 2025-12-06 10:10:01.199 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition 
/usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:10:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6. Dec 6 05:10:02 localhost ceph-mon[298582]: Reconfiguring osd.0 (monmap changed)... Dec 6 05:10:02 localhost ceph-mon[298582]: Reconfiguring daemon osd.0 on np0005548790.localdomain Dec 6 05:10:02 localhost ceph-mon[298582]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' Dec 6 05:10:02 localhost ceph-mon[298582]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' Dec 6 05:10:02 localhost ceph-mon[298582]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' Dec 6 05:10:02 localhost ceph-mon[298582]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' Dec 6 05:10:02 localhost ceph-mon[298582]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch Dec 6 05:10:02 localhost podman[306872]: 2025-12-06 10:10:02.223137533 +0000 UTC m=+0.076497876 container health_status 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:10:02 localhost podman[306872]: 2025-12-06 10:10:02.234100354 +0000 UTC m=+0.087460697 container exec_died 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd) Dec 6 05:10:02 localhost systemd[1]: 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6.service: Deactivated successfully. Dec 6 05:10:03 localhost ceph-mon[298582]: [06/Dec/2025:10:10:01] ENGINE Error in 'start' listener >#012Traceback (most recent call last):#012 File "/lib/python3.9/site-packages/cherrypy/process/wspbus.py", line 230, in publish#012 output.append(listener(*args, **kwargs))#012 File "/lib/python3.9/site-packages/cherrypy/_cpserver.py", line 180, in start#012 super(Server, self).start()#012 File "/lib/python3.9/site-packages/cherrypy/process/servers.py", line 184, in start#012 self.wait()#012 File "/lib/python3.9/site-packages/cherrypy/process/servers.py", line 260, in wait#012 portend.occupied(*self.bound_addr, timeout=Timeouts.occupied)#012 File "/lib/python3.9/site-packages/portend.py", line 162, in occupied#012 raise Timeout("Port {port} not bound on {host}.".format(**locals()))#012portend.Timeout: Port 7150 not bound on 172.18.0.103. 
Dec 6 05:10:03 localhost ceph-mon[298582]: [06/Dec/2025:10:10:01] ENGINE Shutting down due to error in start listener:#012Traceback (most recent call last):#012 File "/lib/python3.9/site-packages/cherrypy/process/wspbus.py", line 268, in start#012 self.publish('start')#012 File "/lib/python3.9/site-packages/cherrypy/process/wspbus.py", line 248, in publish#012 raise exc#012cherrypy.process.wspbus.ChannelFailures: Timeout('Port 8765 not bound on 172.18.0.103.')#012Timeout('Port 7150 not bound on 172.18.0.103.') Dec 6 05:10:03 localhost ceph-mon[298582]: [06/Dec/2025:10:10:01] ENGINE Bus STOPPING Dec 6 05:10:03 localhost ceph-mon[298582]: [06/Dec/2025:10:10:01] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('172.18.0.103', 8765)) already shut down Dec 6 05:10:03 localhost ceph-mon[298582]: [06/Dec/2025:10:10:01] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('172.18.0.103', 7150)) already shut down Dec 6 05:10:03 localhost ceph-mon[298582]: [06/Dec/2025:10:10:01] ENGINE Bus STOPPED Dec 6 05:10:03 localhost ceph-mon[298582]: [06/Dec/2025:10:10:01] ENGINE Bus EXITING Dec 6 05:10:03 localhost ceph-mon[298582]: [06/Dec/2025:10:10:01] ENGINE Bus EXITED Dec 6 05:10:03 localhost ceph-mon[298582]: Failed to run cephadm http server: Timeout('Port 8765 not bound on 172.18.0.103.')#012Timeout('Port 7150 not bound on 172.18.0.103.') Dec 6 05:10:03 localhost ceph-mon[298582]: Reconfiguring osd.3 (monmap changed)... 
Dec 6 05:10:03 localhost ceph-mon[298582]: Reconfiguring daemon osd.3 on np0005548790.localdomain Dec 6 05:10:03 localhost ceph-mon[298582]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' Dec 6 05:10:03 localhost ceph-mon[298582]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' Dec 6 05:10:03 localhost ceph-mon[298582]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' Dec 6 05:10:03 localhost ceph-mon[298582]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' Dec 6 05:10:03 localhost ceph-mon[298582]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 6 05:10:03 localhost ceph-mon[298582]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' Dec 6 05:10:03 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:10:03 localhost sshd[306909]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:10:05 localhost ceph-mon[298582]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' Dec 6 05:10:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538. 
Dec 6 05:10:05 localhost podman[306911]: 2025-12-06 10:10:05.92863217 +0000 UTC m=+0.091607532 container health_status d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Dec 6 05:10:05 localhost podman[306911]: 2025-12-06 10:10:05.939527249 +0000 UTC m=+0.102502571 container exec_died d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Dec 6 05:10:05 localhost systemd[1]: d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538.service: Deactivated successfully. 
Dec 6 05:10:06 localhost nova_compute[282193]: 2025-12-06 10:10:06.200 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:10:08 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:10:09 localhost ceph-osd[31726]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Dec 6 05:10:09 localhost ceph-osd[31726]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7800.1 total, 600.0 interval#012Cumulative writes: 5895 writes, 25K keys, 5895 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5895 writes, 817 syncs, 7.22 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 46 writes, 173 keys, 46 commit groups, 1.0 writes per commit group, ingest: 0.27 MB, 0.00 MB/s#012Interval WAL: 46 writes, 20 syncs, 2.30 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Dec 6 05:10:10 localhost ceph-mon[298582]: Health check update: 2 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON) Dec 6 05:10:10 localhost ceph-mon[298582]: Health check update: 2 stray host(s) with 2 daemon(s) not managed by cephadm (CEPHADM_STRAY_HOST) Dec 6 05:10:11 localhost nova_compute[282193]: 2025-12-06 10:10:11.203 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:10:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5. 
Dec 6 05:10:11 localhost podman[306933]: 2025-12-06 10:10:11.91502064 +0000 UTC m=+0.074635201 container health_status 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:10:11 localhost systemd[299726]: Starting Mark boot as successful... Dec 6 05:10:11 localhost systemd[299726]: Finished Mark boot as successful. 
Dec 6 05:10:11 localhost podman[306933]: 2025-12-06 10:10:11.95515914 +0000 UTC m=+0.114773701 container exec_died 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:10:11 localhost systemd[1]: 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.service: Deactivated successfully. 
Dec 6 05:10:12 localhost ceph-osd[32665]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Dec 6 05:10:12 localhost ceph-osd[32665]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7800.2 total, 600.0 interval#012Cumulative writes: 5115 writes, 22K keys, 5115 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5115 writes, 779 syncs, 6.57 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 201 writes, 475 keys, 201 commit groups, 1.0 writes per commit group, ingest: 0.43 MB, 0.00 MB/s#012Interval WAL: 201 writes, 93 syncs, 2.16 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Dec 6 05:10:13 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:10:16 localhost nova_compute[282193]: 2025-12-06 10:10:16.205 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:10:16 localhost nova_compute[282193]: 2025-12-06 10:10:16.206 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:10:16 localhost nova_compute[282193]: 2025-12-06 10:10:16.207 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 6 05:10:16 localhost nova_compute[282193]: 2025-12-06 10:10:16.207 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:10:16 localhost nova_compute[282193]: 2025-12-06 10:10:16.237 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:10:16 localhost nova_compute[282193]: 2025-12-06 10:10:16.237 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:10:16 localhost openstack_network_exporter[243110]: ERROR 10:10:16 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 6 05:10:16 localhost openstack_network_exporter[243110]: ERROR 10:10:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:10:16 localhost openstack_network_exporter[243110]: ERROR 10:10:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:10:16 localhost openstack_network_exporter[243110]: ERROR 10:10:16 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 6 05:10:16 localhost openstack_network_exporter[243110]: Dec 6 05:10:16 localhost openstack_network_exporter[243110]: ERROR 10:10:16 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 6 05:10:16 localhost openstack_network_exporter[243110]: Dec 6 05:10:18 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:10:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999. 
Dec 6 05:10:20 localhost podman[306959]: 2025-12-06 10:10:20.904227436 +0000 UTC m=+0.071131396 container health_status 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:10:20 localhost podman[306959]: 2025-12-06 10:10:20.911445513 +0000 UTC 
m=+0.078349473 container exec_died 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3) Dec 6 05:10:20 localhost systemd[1]: 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.service: Deactivated successfully. 
Dec 6 05:10:21 localhost nova_compute[282193]: 2025-12-06 10:10:21.238 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:10:21 localhost nova_compute[282193]: 2025-12-06 10:10:21.240 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:10:21 localhost nova_compute[282193]: 2025-12-06 10:10:21.240 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 6 05:10:21 localhost nova_compute[282193]: 2025-12-06 10:10:21.240 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:10:21 localhost nova_compute[282193]: 2025-12-06 10:10:21.271 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:10:21 localhost nova_compute[282193]: 2025-12-06 10:10:21.272 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:10:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941. 
Dec 6 05:10:21 localhost podman[306976]: 2025-12-06 10:10:21.915348065 +0000 UTC m=+0.075803137 container health_status b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Dec 6 05:10:21 localhost podman[306976]: 2025-12-06 10:10:21.929121399 +0000 UTC m=+0.089576471 container exec_died b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': 
['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Dec 6 05:10:21 localhost systemd[1]: b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941.service: Deactivated successfully. Dec 6 05:10:23 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:10:23 localhost podman[241090]: time="2025-12-06T10:10:23Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 6 05:10:23 localhost podman[241090]: @ - - [06/Dec/2025:10:10:23 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156104 "" "Go-http-client/1.1" Dec 6 05:10:23 localhost podman[241090]: @ - - [06/Dec/2025:10:10:23 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19240 "" "Go-http-client/1.1" Dec 6 05:10:26 localhost nova_compute[282193]: 2025-12-06 10:10:26.272 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:10:26 localhost nova_compute[282193]: 2025-12-06 10:10:26.274 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:10:26 localhost nova_compute[282193]: 2025-12-06 10:10:26.275 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 6 05:10:26 localhost nova_compute[282193]: 2025-12-06 10:10:26.275 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:10:26 localhost nova_compute[282193]: 
2025-12-06 10:10:26.314 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:10:26 localhost nova_compute[282193]: 2025-12-06 10:10:26.315 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:10:28 localhost nova_compute[282193]: 2025-12-06 10:10:28.181 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:10:28 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:10:28 localhost nova_compute[282193]: 2025-12-06 10:10:28.198 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 05:10:28 localhost nova_compute[282193]: 2025-12-06 10:10:28.199 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 05:10:28 localhost nova_compute[282193]: 2025-12-06 10:10:28.199 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 05:10:28 localhost nova_compute[282193]: 2025-12-06 10:10:28.200 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Auditing locally available compute resources for np0005548789.localdomain (node: np0005548789.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 6 05:10:28 localhost nova_compute[282193]: 2025-12-06 10:10:28.200 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 6 05:10:28 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 6 05:10:28 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.107:0/3108344624' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 6 05:10:28 localhost nova_compute[282193]: 2025-12-06 10:10:28.664 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 6 05:10:28 localhost nova_compute[282193]: 2025-12-06 10:10:28.741 282197 DEBUG nova.virt.libvirt.driver [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 6 05:10:28 localhost nova_compute[282193]: 2025-12-06 10:10:28.742 282197 DEBUG nova.virt.libvirt.driver [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 6 05:10:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e. Dec 6 05:10:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094. Dec 6 05:10:28 localhost systemd[1]: tmp-crun.LbGKmq.mount: Deactivated successfully. 
Dec 6 05:10:28 localhost podman[307022]: 2025-12-06 10:10:28.947233039 +0000 UTC m=+0.101817891 container health_status bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true) Dec 6 05:10:28 localhost podman[307022]: 2025-12-06 10:10:28.95921818 +0000 UTC m=+0.113803092 container exec_died 
bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true) Dec 6 05:10:28 localhost podman[307021]: 2025-12-06 10:10:28.921220124 +0000 UTC m=+0.083871829 container health_status 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e 
(image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, release=1755695350, config_id=edpm, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, io.openshift.expose-services=, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git) Dec 6 05:10:28 localhost systemd[1]: bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094.service: Deactivated successfully. Dec 6 05:10:28 localhost nova_compute[282193]: 2025-12-06 10:10:28.979 282197 WARNING nova.virt.libvirt.driver [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 6 05:10:28 localhost nova_compute[282193]: 2025-12-06 10:10:28.981 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Hypervisor/Node resource view: name=np0005548789.localdomain free_ram=11480MB free_disk=0.0GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", 
"vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 6 05:10:28 localhost nova_compute[282193]: 2025-12-06 10:10:28.981 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 05:10:28 localhost nova_compute[282193]: 2025-12-06 10:10:28.981 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 05:10:29 localhost podman[307021]: 2025-12-06 10:10:29.005159424 +0000 UTC m=+0.167811129 container exec_died 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vendor=Red Hat, Inc., vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, io.openshift.tags=minimal rhel9, distribution-scope=public, config_id=edpm, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers) Dec 6 05:10:29 localhost systemd[1]: 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e.service: Deactivated successfully. 
Dec 6 05:10:29 localhost nova_compute[282193]: 2025-12-06 10:10:29.093 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Instance b7ed0a2e-9350-4933-9334-4e5e08d3e6aa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 6 05:10:29 localhost nova_compute[282193]: 2025-12-06 10:10:29.093 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 6 05:10:29 localhost nova_compute[282193]: 2025-12-06 10:10:29.094 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Final resource view: name=np0005548789.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=0GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 6 05:10:29 localhost nova_compute[282193]: 2025-12-06 10:10:29.140 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 6 05:10:29 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 6 05:10:29 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.107:0/3639889960' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 6 05:10:29 localhost nova_compute[282193]: 2025-12-06 10:10:29.612 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 6 05:10:29 localhost nova_compute[282193]: 2025-12-06 10:10:29.619 282197 DEBUG nova.compute.provider_tree [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Updating inventory in ProviderTree for provider 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad with inventory: {'MEMORY_MB': {'total': 15738, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Dec 6 05:10:29 localhost nova_compute[282193]: 2025-12-06 10:10:29.655 282197 ERROR nova.scheduler.client.report [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] [req-9a7f33cf-8803-4290-a03d-4b69fc5456e1] Failed to update inventory to [{'MEMORY_MB': {'total': 15738, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0, 'reserved': 0}}] for resource provider with UUID 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad. Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n update conflict: Inventory for 'DISK_GB' on resource provider '0d33e88e-6335-4a94-8f21-32ba5b8bb7ad' in use. 
", "code": "placement.inventory.inuse", "request_id": "req-9a7f33cf-8803-4290-a03d-4b69fc5456e1"}]}#033[00m Dec 6 05:10:29 localhost nova_compute[282193]: 2025-12-06 10:10:29.655 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.674s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 05:10:29 localhost nova_compute[282193]: 2025-12-06 10:10:29.656 282197 ERROR nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Error updating PCI resources for node np0005548789.localdomain.: nova.exception.PlacementPciException: Failed to gather or report PCI resources to Placement: There was a conflict when trying to complete your request. Dec 6 05:10:29 localhost nova_compute[282193]: Dec 6 05:10:29 localhost nova_compute[282193]: update conflict: Inventory for 'DISK_GB' on resource provider '0d33e88e-6335-4a94-8f21-32ba5b8bb7ad' in use. 
Dec 6 05:10:29 localhost nova_compute[282193]: 2025-12-06 10:10:29.656 282197 ERROR nova.compute.manager Traceback (most recent call last): Dec 6 05:10:29 localhost nova_compute[282193]: 2025-12-06 10:10:29.656 282197 ERROR nova.compute.manager File "/usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py", line 1288, in _update_to_placement Dec 6 05:10:29 localhost nova_compute[282193]: 2025-12-06 10:10:29.656 282197 ERROR nova.compute.manager self.reportclient.update_from_provider_tree( Dec 6 05:10:29 localhost nova_compute[282193]: 2025-12-06 10:10:29.656 282197 ERROR nova.compute.manager File "/usr/lib/python3.9/site-packages/nova/scheduler/client/report.py", line 1484, in update_from_provider_tree Dec 6 05:10:29 localhost nova_compute[282193]: 2025-12-06 10:10:29.656 282197 ERROR nova.compute.manager self.set_inventory_for_provider( Dec 6 05:10:29 localhost nova_compute[282193]: 2025-12-06 10:10:29.656 282197 ERROR nova.compute.manager File "/usr/lib/python3.9/site-packages/nova/scheduler/client/report.py", line 987, in set_inventory_for_provider Dec 6 05:10:29 localhost nova_compute[282193]: 2025-12-06 10:10:29.656 282197 ERROR nova.compute.manager raise exception.InventoryInUse(err['detail']) Dec 6 05:10:29 localhost nova_compute[282193]: 2025-12-06 10:10:29.656 282197 ERROR nova.compute.manager nova.exception.InventoryInUse: There was a conflict when trying to complete your request. Dec 6 05:10:29 localhost nova_compute[282193]: 2025-12-06 10:10:29.656 282197 ERROR nova.compute.manager Dec 6 05:10:29 localhost nova_compute[282193]: 2025-12-06 10:10:29.656 282197 ERROR nova.compute.manager update conflict: Inventory for 'DISK_GB' on resource provider '0d33e88e-6335-4a94-8f21-32ba5b8bb7ad' in use. 
Dec 6 05:10:29 localhost nova_compute[282193]: 2025-12-06 10:10:29.656 282197 ERROR nova.compute.manager
Dec 6 05:10:29 localhost nova_compute[282193]: 2025-12-06 10:10:29.656 282197 ERROR nova.compute.manager During handling of the above exception, another exception occurred:
Dec 6 05:10:29 localhost nova_compute[282193]: 2025-12-06 10:10:29.656 282197 ERROR nova.compute.manager
Dec 6 05:10:29 localhost nova_compute[282193]: 2025-12-06 10:10:29.656 282197 ERROR nova.compute.manager Traceback (most recent call last):
Dec 6 05:10:29 localhost nova_compute[282193]: 2025-12-06 10:10:29.656 282197 ERROR nova.compute.manager File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 10513, in _update_available_resource_for_node
Dec 6 05:10:29 localhost nova_compute[282193]: 2025-12-06 10:10:29.656 282197 ERROR nova.compute.manager self.rt.update_available_resource(context, nodename,
Dec 6 05:10:29 localhost nova_compute[282193]: 2025-12-06 10:10:29.656 282197 ERROR nova.compute.manager File "/usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py", line 889, in update_available_resource
Dec 6 05:10:29 localhost nova_compute[282193]: 2025-12-06 10:10:29.656 282197 ERROR nova.compute.manager self._update_available_resource(context, resources, startup=startup)
Dec 6 05:10:29 localhost nova_compute[282193]: 2025-12-06 10:10:29.656 282197 ERROR nova.compute.manager File "/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py", line 414, in inner
Dec 6 05:10:29 localhost nova_compute[282193]: 2025-12-06 10:10:29.656 282197 ERROR nova.compute.manager return f(*args, **kwargs)
Dec 6 05:10:29 localhost nova_compute[282193]: 2025-12-06 10:10:29.656 282197 ERROR nova.compute.manager File "/usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py", line 994, in _update_available_resource
Dec 6 05:10:29 localhost nova_compute[282193]: 2025-12-06 10:10:29.656 282197 ERROR nova.compute.manager self._update(context, cn, startup=startup)
Dec 6 05:10:29 localhost nova_compute[282193]: 2025-12-06 10:10:29.656 282197 ERROR nova.compute.manager File "/usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py", line 1303, in _update
Dec 6 05:10:29 localhost nova_compute[282193]: 2025-12-06 10:10:29.656 282197 ERROR nova.compute.manager self._update_to_placement(context, compute_node, startup)
Dec 6 05:10:29 localhost nova_compute[282193]: 2025-12-06 10:10:29.656 282197 ERROR nova.compute.manager File "/usr/lib/python3.9/site-packages/retrying.py", line 49, in wrapped_f
Dec 6 05:10:29 localhost nova_compute[282193]: 2025-12-06 10:10:29.656 282197 ERROR nova.compute.manager return Retrying(*dargs, **dkw).call(f, *args, **kw)
Dec 6 05:10:29 localhost nova_compute[282193]: 2025-12-06 10:10:29.656 282197 ERROR nova.compute.manager File "/usr/lib/python3.9/site-packages/retrying.py", line 206, in call
Dec 6 05:10:29 localhost nova_compute[282193]: 2025-12-06 10:10:29.656 282197 ERROR nova.compute.manager return attempt.get(self._wrap_exception)
Dec 6 05:10:29 localhost nova_compute[282193]: 2025-12-06 10:10:29.656 282197 ERROR nova.compute.manager File "/usr/lib/python3.9/site-packages/retrying.py", line 247, in get
Dec 6 05:10:29 localhost nova_compute[282193]: 2025-12-06 10:10:29.656 282197 ERROR nova.compute.manager six.reraise(self.value[0], self.value[1], self.value[2])
Dec 6 05:10:29 localhost nova_compute[282193]: 2025-12-06 10:10:29.656 282197 ERROR nova.compute.manager File "/usr/lib/python3.9/site-packages/six.py", line 709, in reraise
Dec 6 05:10:29 localhost nova_compute[282193]: 2025-12-06 10:10:29.656 282197 ERROR nova.compute.manager raise value
Dec 6 05:10:29 localhost nova_compute[282193]: 2025-12-06 10:10:29.656 282197 ERROR nova.compute.manager File "/usr/lib/python3.9/site-packages/retrying.py", line 200, in call
Dec 6 05:10:29 localhost nova_compute[282193]: 2025-12-06 10:10:29.656 282197 ERROR nova.compute.manager attempt = Attempt(fn(*args, **kwargs), attempt_number, False)
Dec 6 05:10:29 localhost nova_compute[282193]: 2025-12-06 10:10:29.656 282197 ERROR nova.compute.manager File "/usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py", line 1298, in _update_to_placement
Dec 6 05:10:29 localhost nova_compute[282193]: 2025-12-06 10:10:29.656 282197 ERROR nova.compute.manager raise exception.PlacementPciException(error=str(e))
Dec 6 05:10:29 localhost nova_compute[282193]: 2025-12-06 10:10:29.656 282197 ERROR nova.compute.manager nova.exception.PlacementPciException: Failed to gather or report PCI resources to Placement: There was a conflict when trying to complete your request.
Dec 6 05:10:29 localhost nova_compute[282193]: 2025-12-06 10:10:29.656 282197 ERROR nova.compute.manager
Dec 6 05:10:29 localhost nova_compute[282193]: 2025-12-06 10:10:29.656 282197 ERROR nova.compute.manager update conflict: Inventory for 'DISK_GB' on resource provider '0d33e88e-6335-4a94-8f21-32ba5b8bb7ad' in use.
Dec 6 05:10:29 localhost nova_compute[282193]: 2025-12-06 10:10:29.656 282197 ERROR nova.compute.manager
Dec 6 05:10:30 localhost nova_compute[282193]: 2025-12-06 10:10:30.662 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 6 05:10:30 localhost nova_compute[282193]: 2025-12-06 10:10:30.662 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 6 05:10:30 localhost nova_compute[282193]: 2025-12-06 10:10:30.662 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 6 05:10:30 localhost nova_compute[282193]: 2025-12-06 10:10:30.906 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 6 05:10:30 localhost nova_compute[282193]: 2025-12-06 10:10:30.907 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquired lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 6 05:10:30 localhost nova_compute[282193]: 2025-12-06 10:10:30.908 282197 DEBUG nova.network.neutron [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 6 05:10:30 localhost nova_compute[282193]: 2025-12-06 10:10:30.908 282197 DEBUG nova.objects.instance [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b7ed0a2e-9350-4933-9334-4e5e08d3e6aa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 6 05:10:31 localhost nova_compute[282193]: 2025-12-06 10:10:31.316 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 6 05:10:31 localhost nova_compute[282193]: 2025-12-06 10:10:31.318 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 6 05:10:31 localhost nova_compute[282193]: 2025-12-06 10:10:31.319 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 6 05:10:31 localhost nova_compute[282193]: 2025-12-06 10:10:31.319 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 6 05:10:31 localhost nova_compute[282193]: 2025-12-06 10:10:31.321 282197 DEBUG nova.network.neutron [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Updating instance_info_cache with network_info: [{"id": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "address": "fa:16:3e:64:77:f3", "network": {"id": "652b6bdc-40ce-45b7-8aa5-3bca79987993", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.162", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "3d603431c0bb4967bafc7a0aa6108bfe", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86fc0b7a-fb", "ovs_interfaceid": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 6 05:10:31 localhost nova_compute[282193]: 2025-12-06 10:10:31.335 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Releasing lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 6 05:10:31 localhost nova_compute[282193]: 2025-12-06 10:10:31.336 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 6 05:10:31 localhost nova_compute[282193]: 2025-12-06 10:10:31.336 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 6 05:10:31 localhost nova_compute[282193]: 2025-12-06 10:10:31.337 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 6 05:10:31 localhost nova_compute[282193]: 2025-12-06 10:10:31.337 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 6 05:10:31 localhost nova_compute[282193]: 2025-12-06 10:10:31.338 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 6 05:10:31 localhost nova_compute[282193]: 2025-12-06 10:10:31.339 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 6 05:10:31 localhost nova_compute[282193]: 2025-12-06 10:10:31.339 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 6 05:10:31 localhost nova_compute[282193]: 2025-12-06 10:10:31.369 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 6 05:10:31 localhost nova_compute[282193]: 2025-12-06 10:10:31.371 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 6 05:10:32 localhost nova_compute[282193]: 2025-12-06 10:10:32.183 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 6 05:10:32 localhost nova_compute[282193]: 2025-12-06 10:10:32.184 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 6 05:10:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6.
Dec 6 05:10:32 localhost podman[307083]: 2025-12-06 10:10:32.900998669 +0000 UTC m=+0.066724172 container health_status 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Dec 6 05:10:32 localhost podman[307083]: 2025-12-06 10:10:32.916182116 +0000 UTC m=+0.081907639 container exec_died 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 6 05:10:32 localhost systemd[1]: 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6.service: Deactivated successfully.
Dec 6 05:10:33 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 6 05:10:36 localhost nova_compute[282193]: 2025-12-06 10:10:36.372 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4995-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 6 05:10:36 localhost nova_compute[282193]: 2025-12-06 10:10:36.373 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 6 05:10:36 localhost nova_compute[282193]: 2025-12-06 10:10:36.373 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 6 05:10:36 localhost nova_compute[282193]: 2025-12-06 10:10:36.374 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 6 05:10:36 localhost nova_compute[282193]: 2025-12-06 10:10:36.374 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 6 05:10:36 localhost nova_compute[282193]: 2025-12-06 10:10:36.377 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 6 05:10:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538.
Dec 6 05:10:36 localhost podman[307103]: 2025-12-06 10:10:36.893813166 +0000 UTC m=+0.058355079 container health_status d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible)
Dec 6 05:10:36 localhost podman[307103]: 2025-12-06 10:10:36.90820081 +0000 UTC m=+0.072742763 container exec_died d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors )
Dec 6 05:10:36 localhost systemd[1]: d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538.service: Deactivated successfully.
Dec 6 05:10:37 localhost nova_compute[282193]: 2025-12-06 10:10:37.177 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 6 05:10:37 localhost sshd[307127]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 05:10:37 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e94 e94: 6 total, 6 up, 6 in
Dec 6 05:10:37 localhost systemd[1]: session-72.scope: Deactivated successfully.
Dec 6 05:10:37 localhost systemd[1]: session-72.scope: Consumed 6.581s CPU time.
Dec 6 05:10:37 localhost systemd-logind[766]: Session 72 logged out. Waiting for processes to exit.
Dec 6 05:10:37 localhost systemd-logind[766]: Removed session 72.
Dec 6 05:10:38 localhost ceph-mon[298582]: from='client.? 172.18.0.200:0/3346912753' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch
Dec 6 05:10:38 localhost ceph-mon[298582]: Activating manager daemon np0005548788.yvwbqq
Dec 6 05:10:38 localhost ceph-mon[298582]: from='client.? 172.18.0.200:0/3346912753' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished
Dec 6 05:10:38 localhost ceph-mon[298582]: Manager daemon np0005548788.yvwbqq is now available
Dec 6 05:10:38 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e94 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 6 05:10:38 localhost sshd[307128]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 05:10:38 localhost systemd-logind[766]: New session 73 of user ceph-admin.
Dec 6 05:10:38 localhost systemd[1]: Started Session 73 of User ceph-admin.
Dec 6 05:10:39 localhost ceph-mon[298582]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005548788.yvwbqq/mirror_snapshot_schedule"} : dispatch
Dec 6 05:10:39 localhost ceph-mon[298582]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005548788.yvwbqq/trash_purge_schedule"} : dispatch
Dec 6 05:10:39 localhost podman[307241]: 2025-12-06 10:10:39.422442928 +0000 UTC m=+0.100565323 container exec ecefb99ff092b207f760d60e8fd97152c624df0e1016549024435ae641ee2e06 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548789, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, release=1763362218, vcs-type=git, distribution-scope=public, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, vendor=Red Hat, Inc., ceph=True, maintainer=Guillaume Abrioux , RELEASE=main, version=7, com.redhat.component=rhceph-container)
Dec 6 05:10:39 localhost podman[307241]: 2025-12-06 10:10:39.531235137 +0000 UTC m=+0.209357482 container exec_died ecefb99ff092b207f760d60e8fd97152c624df0e1016549024435ae641ee2e06 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548789, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, version=7, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, vcs-type=git, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, RELEASE=main)
Dec 6 05:10:39 localhost sshd[307289]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 05:10:40 localhost ceph-mon[298582]: [06/Dec/2025:10:10:39] ENGINE Bus STARTING
Dec 6 05:10:40 localhost ceph-mon[298582]: [06/Dec/2025:10:10:39] ENGINE Serving on http://172.18.0.106:8765
Dec 6 05:10:40 localhost ceph-mon[298582]: [06/Dec/2025:10:10:39] ENGINE Serving on https://172.18.0.106:7150
Dec 6 05:10:40 localhost ceph-mon[298582]: [06/Dec/2025:10:10:39] ENGINE Bus STARTED
Dec 6 05:10:40 localhost ceph-mon[298582]: [06/Dec/2025:10:10:39] ENGINE Client ('172.18.0.106', 48356) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 6 05:10:40 localhost ceph-mon[298582]: Health check cleared: CEPHADM_STRAY_DAEMON (was: 2 stray daemon(s) not managed by cephadm)
Dec 6 05:10:40 localhost ceph-mon[298582]: Health check cleared: CEPHADM_STRAY_HOST (was: 2 stray host(s) with 2 daemon(s) not managed by cephadm)
Dec 6 05:10:40 localhost ceph-mon[298582]: Cluster is now healthy
Dec 6 05:10:41 localhost ceph-mon[298582]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:10:41 localhost ceph-mon[298582]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:10:41 localhost ceph-mon[298582]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:10:41 localhost ceph-mon[298582]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:10:41 localhost ceph-mon[298582]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:10:41 localhost ceph-mon[298582]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:10:41 localhost nova_compute[282193]: 2025-12-06 10:10:41.375 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 6 05:10:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.
Dec 6 05:10:42 localhost podman[307503]: 2025-12-06 10:10:42.108025579 +0000 UTC m=+0.081462529 container health_status 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 6 05:10:42 localhost podman[307503]: 2025-12-06 10:10:42.152197457 +0000 UTC m=+0.125634397 container exec_died 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller)
Dec 6 05:10:42 localhost systemd[1]: 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.service: Deactivated successfully.
Dec 6 05:10:42 localhost ceph-mon[298582]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:10:42 localhost ceph-mon[298582]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:10:42 localhost ceph-mon[298582]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Dec 6 05:10:42 localhost ceph-mon[298582]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:10:42 localhost ceph-mon[298582]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Dec 6 05:10:42 localhost ceph-mon[298582]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:10:42 localhost ceph-mon[298582]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Dec 6 05:10:42 localhost ceph-mon[298582]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Dec 6 05:10:42 localhost ceph-mon[298582]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:10:42 localhost ceph-mon[298582]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:10:42 localhost ceph-mon[298582]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Dec 6 05:10:42 localhost ceph-mon[298582]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Dec 6 05:10:42 localhost ceph-mon[298582]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 6 05:10:42 localhost ceph-mon[298582]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:10:43 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e94 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 6 05:10:43 localhost ceph-mon[298582]: Adjusting osd_memory_target on np0005548789.localdomain to 836.6M
Dec 6 05:10:43 localhost ceph-mon[298582]: Unable to set osd_memory_target on np0005548789.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 6 05:10:43 localhost ceph-mon[298582]: Adjusting osd_memory_target on np0005548788.localdomain to 836.6M
Dec 6 05:10:43 localhost ceph-mon[298582]: Unable to set osd_memory_target on np0005548788.localdomain to 877243801: error parsing value: Value '877243801' is below minimum 939524096
Dec 6 05:10:43 localhost ceph-mon[298582]: Adjusting osd_memory_target on np0005548790.localdomain to 836.6M
Dec 6 05:10:43 localhost ceph-mon[298582]: Unable to set osd_memory_target on np0005548790.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 6 05:10:43 localhost ceph-mon[298582]: Updating np0005548788.localdomain:/etc/ceph/ceph.conf
Dec 6 05:10:43 localhost ceph-mon[298582]: Updating np0005548789.localdomain:/etc/ceph/ceph.conf
Dec 6 05:10:43 localhost ceph-mon[298582]: Updating np0005548790.localdomain:/etc/ceph/ceph.conf
Dec 6 05:10:43 localhost ceph-mon[298582]: Saving service mon spec with placement label:mon
Dec 6 05:10:44 localhost ceph-mon[298582]: Updating np0005548789.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 6 05:10:44 localhost ceph-mon[298582]: Updating np0005548788.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 6 05:10:44 localhost ceph-mon[298582]: Updating np0005548790.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 6 05:10:44 localhost ceph-mon[298582]: Updating np0005548789.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 6 05:10:44 localhost ceph-mon[298582]: Updating np0005548788.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 6 05:10:44 localhost ceph-mon[298582]: Updating np0005548790.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 6 05:10:45 localhost sshd[308168]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 05:10:45 localhost ceph-mon[298582]: Updating np0005548789.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring
Dec 6 05:10:45 localhost ceph-mon[298582]: Updating np0005548788.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring
Dec 6 05:10:45 localhost ceph-mon[298582]: Updating np0005548790.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring
Dec 6 05:10:45 localhost ceph-mon[298582]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:10:45 localhost ceph-mon[298582]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:10:45 localhost ceph-mon[298582]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:10:45 localhost ceph-mon[298582]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:10:45 localhost ceph-mon[298582]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:10:45 localhost ceph-mon[298582]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:10:45 localhost ceph-mon[298582]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:10:45 localhost ceph-mon[298582]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:10:45 localhost
ceph-mon[298582]: Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON) Dec 6 05:10:45 localhost ceph-mon[298582]: Health check failed: 1 stray host(s) with 1 daemon(s) not managed by cephadm (CEPHADM_STRAY_HOST) Dec 6 05:10:45 localhost ceph-mon[298582]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Dec 6 05:10:46 localhost nova_compute[282193]: 2025-12-06 10:10:46.379 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:10:46 localhost podman[308222]: Dec 6 05:10:46 localhost podman[308222]: 2025-12-06 10:10:46.514772759 +0000 UTC m=+0.070666833 container create bc433e600361f755858e9c8ddb792925505b20a4e8da8c435ef0b72f3d813569 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nostalgic_mendeleev, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, GIT_BRANCH=main, ceph=True, GIT_CLEAN=True, version=7, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, distribution-scope=public, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, description=Red Hat Ceph Storage 7, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, 
com.redhat.component=rhceph-container) Dec 6 05:10:46 localhost ceph-mon[298582]: Reconfiguring mon.np0005548788 (monmap changed)... Dec 6 05:10:46 localhost ceph-mon[298582]: Reconfiguring daemon mon.np0005548788 on np0005548788.localdomain Dec 6 05:10:46 localhost ceph-mon[298582]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq' Dec 6 05:10:46 localhost ceph-mon[298582]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq' Dec 6 05:10:46 localhost ceph-mon[298582]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Dec 6 05:10:46 localhost systemd[1]: Started libpod-conmon-bc433e600361f755858e9c8ddb792925505b20a4e8da8c435ef0b72f3d813569.scope. Dec 6 05:10:46 localhost systemd[1]: Started libcrun container. Dec 6 05:10:46 localhost podman[308222]: 2025-12-06 10:10:46.490902205 +0000 UTC m=+0.046796259 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Dec 6 05:10:46 localhost podman[308222]: 2025-12-06 10:10:46.593436413 +0000 UTC m=+0.149330467 container init bc433e600361f755858e9c8ddb792925505b20a4e8da8c435ef0b72f3d813569 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nostalgic_mendeleev, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, vendor=Red Hat, Inc., ceph=True, RELEASE=main, maintainer=Guillaume Abrioux , vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, version=7, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, 
build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, distribution-scope=public) Dec 6 05:10:46 localhost podman[308222]: 2025-12-06 10:10:46.605614041 +0000 UTC m=+0.161508135 container start bc433e600361f755858e9c8ddb792925505b20a4e8da8c435ef0b72f3d813569 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nostalgic_mendeleev, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, com.redhat.component=rhceph-container, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., RELEASE=main, io.openshift.expose-services=, CEPH_POINT_RELEASE=, version=7, description=Red Hat Ceph Storage 7, vcs-type=git, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, distribution-scope=public) Dec 6 05:10:46 localhost podman[308222]: 2025-12-06 10:10:46.606114766 +0000 UTC m=+0.162008860 container attach bc433e600361f755858e9c8ddb792925505b20a4e8da8c435ef0b72f3d813569 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nostalgic_mendeleev, 
CEPH_POINT_RELEASE=, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, name=rhceph, GIT_BRANCH=main, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7) Dec 6 05:10:46 localhost nostalgic_mendeleev[308237]: 167 167 Dec 6 05:10:46 localhost podman[308222]: 2025-12-06 10:10:46.612990145 +0000 UTC m=+0.168884269 container died bc433e600361f755858e9c8ddb792925505b20a4e8da8c435ef0b72f3d813569 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nostalgic_mendeleev, com.redhat.component=rhceph-container, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, version=7, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.buildah.version=1.41.4, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, ceph=True, GIT_CLEAN=True, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git) Dec 6 05:10:46 localhost systemd[1]: libpod-bc433e600361f755858e9c8ddb792925505b20a4e8da8c435ef0b72f3d813569.scope: Deactivated successfully. Dec 6 05:10:46 localhost openstack_network_exporter[243110]: ERROR 10:10:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:10:46 localhost openstack_network_exporter[243110]: ERROR 10:10:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:10:46 localhost openstack_network_exporter[243110]: ERROR 10:10:46 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 6 05:10:46 localhost openstack_network_exporter[243110]: ERROR 10:10:46 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 6 05:10:46 localhost openstack_network_exporter[243110]: Dec 6 05:10:46 localhost openstack_network_exporter[243110]: ERROR 10:10:46 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 6 05:10:46 localhost openstack_network_exporter[243110]: Dec 6 05:10:46 localhost systemd[1]: var-lib-containers-storage-overlay-14d99ef0677c403f82710e2aea2b890e704bbbb80c25b89182d4ba91110aaa2c-merged.mount: Deactivated successfully. 
Dec 6 05:10:46 localhost podman[308242]: 2025-12-06 10:10:46.711692386 +0000 UTC m=+0.083336227 container remove bc433e600361f755858e9c8ddb792925505b20a4e8da8c435ef0b72f3d813569 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nostalgic_mendeleev, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, ceph=True, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, CEPH_POINT_RELEASE=, release=1763362218, version=7, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, architecture=x86_64, com.redhat.component=rhceph-container, distribution-scope=public, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc.)
Dec 6 05:10:46 localhost systemd[1]: libpod-conmon-bc433e600361f755858e9c8ddb792925505b20a4e8da8c435ef0b72f3d813569.scope: Deactivated successfully.
Dec 6 05:10:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:10:47.301 160509 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 6 05:10:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:10:47.301 160509 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 6 05:10:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:10:47.303 160509 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 6 05:10:47 localhost ceph-mon[298582]: Reconfiguring mon.np0005548789 (monmap changed)...
Dec 6 05:10:47 localhost ceph-mon[298582]: Reconfiguring daemon mon.np0005548789 on np0005548789.localdomain
Dec 6 05:10:47 localhost ceph-mon[298582]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:10:47 localhost ceph-mon[298582]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:10:47 localhost ceph-mon[298582]: Reconfiguring mon.np0005548790 (monmap changed)...
Dec 6 05:10:47 localhost ceph-mon[298582]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 6 05:10:47 localhost ceph-mon[298582]: Reconfiguring daemon mon.np0005548790 on np0005548790.localdomain
Dec 6 05:10:48 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e94 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 6 05:10:48 localhost ceph-mon[298582]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:10:48 localhost ceph-mon[298582]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:10:48 localhost ceph-mon[298582]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 6 05:10:48 localhost ceph-mon[298582]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:10:48 localhost ceph-mon[298582]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq'
Dec 6 05:10:51 localhost nova_compute[282193]: 2025-12-06 10:10:51.381 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 6 05:10:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.
Dec 6 05:10:51 localhost podman[308276]: 2025-12-06 10:10:51.926087107 +0000 UTC m=+0.078839329 container health_status 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 6 05:10:51 localhost podman[308276]: 2025-12-06 10:10:51.931469061 +0000 UTC m=+0.084221333 container exec_died 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 6 05:10:51 localhost systemd[1]: 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.service: Deactivated successfully.
Dec 6 05:10:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941.
Dec 6 05:10:52 localhost podman[308294]: 2025-12-06 10:10:52.925370578 +0000 UTC m=+0.094719302 container health_status b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 6 05:10:52 localhost podman[308294]: 2025-12-06 10:10:52.929981368 +0000 UTC m=+0.099330092 container exec_died b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 6 05:10:52 localhost systemd[1]: b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941.service: Deactivated successfully.
Dec 6 05:10:53 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e94 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 6 05:10:53 localhost podman[241090]: time="2025-12-06T10:10:53Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 6 05:10:53 localhost podman[241090]: @ - - [06/Dec/2025:10:10:53 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156104 "" "Go-http-client/1.1"
Dec 6 05:10:53 localhost podman[241090]: @ - - [06/Dec/2025:10:10:53 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19244 "" "Go-http-client/1.1"
Dec 6 05:10:56 localhost nova_compute[282193]: 2025-12-06 10:10:56.384 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 6 05:10:58 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e94 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 6 05:10:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e.
Dec 6 05:10:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094.
Dec 6 05:10:59 localhost podman[308318]: 2025-12-06 10:10:59.926953924 +0000 UTC m=+0.082987214 container health_status bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_id=edpm, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Dec 6 05:10:59 localhost podman[308318]: 2025-12-06 10:10:59.935603117 +0000 UTC m=+0.091636417 container exec_died bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Dec 6 05:10:59 localhost systemd[1]: bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094.service: Deactivated successfully.
Dec 6 05:10:59 localhost podman[308317]: 2025-12-06 10:10:59.89707975 +0000 UTC m=+0.060303779 container health_status 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, release=1755695350, distribution-scope=public, name=ubi9-minimal, vendor=Red Hat, Inc., config_id=edpm, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, maintainer=Red Hat, Inc., vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec 6 05:10:59 localhost podman[308317]: 2025-12-06 10:10:59.982193119 +0000 UTC m=+0.145417228 container exec_died 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, maintainer=Red Hat, Inc., version=9.6, managed_by=edpm_ansible, distribution-scope=public, io.buildah.version=1.33.7, vendor=Red Hat, Inc., architecture=x86_64, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, container_name=openstack_network_exporter, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 6 05:10:59 localhost systemd[1]: 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e.service: Deactivated successfully.
Dec 6 05:11:01 localhost nova_compute[282193]: 2025-12-06 10:11:01.386 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:11:01 localhost nova_compute[282193]: 2025-12-06 10:11:01.390 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:11:03 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e94 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:11:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6. Dec 6 05:11:03 localhost systemd[1]: tmp-crun.ApMCtM.mount: Deactivated successfully. Dec 6 05:11:03 localhost podman[308359]: 2025-12-06 10:11:03.910796979 +0000 UTC m=+0.073333723 container health_status 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Dec 6 05:11:03 localhost podman[308359]: 2025-12-06 10:11:03.921948307 +0000 UTC m=+0.084485011 container exec_died 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Dec 6 05:11:03 localhost systemd[1]: 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6.service: Deactivated successfully. Dec 6 05:11:06 localhost nova_compute[282193]: 2025-12-06 10:11:06.391 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:11:06 localhost nova_compute[282193]: 2025-12-06 10:11:06.393 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:11:06 localhost nova_compute[282193]: 2025-12-06 10:11:06.393 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 6 05:11:06 localhost nova_compute[282193]: 2025-12-06 10:11:06.393 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:11:06 localhost nova_compute[282193]: 2025-12-06 10:11:06.420 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:11:06 localhost nova_compute[282193]: 2025-12-06 10:11:06.421 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 
6 05:11:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538. Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.915 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'name': 'test', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'np0005548789.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'hostId': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228 Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.916 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters Dec 6 05:11:07 localhost systemd[1]: tmp-crun.7rrq45.mount: Deactivated successfully. 
Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.925 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:11:07 localhost podman[308378]: 2025-12-06 10:11:07.927971845 +0000 UTC m=+0.087546803 container health_status d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.928 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '6a551a3c-5c4e-4690-8df9-c915229b41fc', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:11:07.916200', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': 'e1cfb804-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 12286.165523842, 'message_signature': '4ff9b19a2bbe21e0b342f04f5438c8369e2ed19fce3da36dc64bff1e7ac99fdd'}]}, 'timestamp': '2025-12-06 10:11:07.926183', '_unique_id': 'c4fe21413cdc4558a717cef866749f8f'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.928 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:11:07.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.928 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.928 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.928 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.928 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.928 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.928 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.928 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.928 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.928 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.928 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.928 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.928 12 ERROR oslo_messaging.notify.messaging Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.928 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.928 12 ERROR oslo_messaging.notify.messaging Dec 6 05:11:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.928 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.928 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.928 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.928 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.928 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 
05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.928 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.928 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.928 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.928 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.928 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:11:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.928 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.928 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.928 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.928 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.928 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.928 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.928 12 ERROR oslo_messaging.notify.messaging Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:11:07.929 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.944 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.945 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.946 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '3371413f-2436-4b3c-b422-f480a56c7bbf', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:11:07.929093', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 
'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'e1d2a91a-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 12286.178458474, 'message_signature': '579ddb45c1481bd5ff6e8d83bff62dd9d08a1277ffcb3219a091a128838ea3f9'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:11:07.929093', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'e1d2b32e-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 12286.178458474, 'message_signature': '87af5c97462660c6943020825028b6c58fd8515592385074ae68b904302edfd3'}]}, 'timestamp': '2025-12-06 10:11:07.945559', '_unique_id': 'a3377aa847ea4b3890ef68fb3135b7fe'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.946 12 ERROR oslo_messaging.notify.messaging Traceback (most recent 
call last): Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.946 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.946 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.946 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.946 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.946 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.946 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.946 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.946 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.946 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.946 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.946 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.946 12 ERROR oslo_messaging.notify.messaging Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.946 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.946 12 ERROR oslo_messaging.notify.messaging Dec 6 05:11:07 
localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.946 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.946 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.946 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.946 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.946 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.946 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.946 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.946 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.946 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.946 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.946 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.946 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.946 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.946 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.946 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.946 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.946 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.946 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.947 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 6 05:11:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:07.947 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec 6 05:11:07 localhost podman[308378]: 2025-12-06 10:11:07.999271276 +0000 UTC m=+0.158846214 container exec_died d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible)
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.009 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.bytes volume: 35560448 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.010 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.bytes volume: 2154496 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:11:08 localhost systemd[1]: d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538.service: Deactivated successfully.
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.011 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '47141ab3-0844-411d-8a2c-86c31a55f19d', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 35560448, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:11:07.947244', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'e1dc87fa-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 12286.196594964, 'message_signature': 'a03ee8df6507e66a9c5054914ee68622480bb16486de4536f98681e550910909'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 2154496, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:11:07.947244', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'e1dc968c-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 12286.196594964, 'message_signature': 'cb773b9a6e2258a72d25c49719c77c611123629c2a11e5add9a89add9374a6fe'}]}, 'timestamp': '2025-12-06 10:11:08.010401', '_unique_id': 'b981ba402786401aba6c11677f6e9ddb'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.011 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.011 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.011 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.011 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.011 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.011 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.011 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.011 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.011 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.011 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.011 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.011 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.011 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.011 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.011 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.011 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.011 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.011 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.011 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.011 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.011 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.011 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.011 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.011 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.011 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.011 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.011 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.011 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.011 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.011 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.011 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.012 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.012 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.outgoing.packets volume: 114 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.013 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'abb17b2f-d625-422d-bcf0-2b81c851ca47', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 114, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:11:08.012446', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': 'e1dcf2a8-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 12286.165523842, 'message_signature': '8d5cb15c54668a4f2a665a14a02421684d8844ee1c7f810327b160134b8faec8'}]}, 'timestamp': '2025-12-06 10:11:08.012785', '_unique_id': '4e178d8ae13d4701b104235527f49640'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.013 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.013 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.013 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.013 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.013 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.013 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.013 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.013 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.013 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.013 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.013 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.013 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.013 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.013 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.013 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.013 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.013 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.013 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.013 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.013 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.013 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.013 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.013 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.013 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.013 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.013 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.013 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.013 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.013 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.013 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.013 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.014 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.014 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.014 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.015 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'd8eb6bad-8909-4e6d-9109-58c964ed42a0', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:11:08.014269', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'e1dd39ac-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 12286.178458474, 'message_signature': 'def9db4621ad36b1cd440cdbead3b7a7cf575eb91cc36f7312912186efdbe6d7'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:11:08.014269', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'e1dd44ec-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 12286.178458474, 'message_signature': 'a6599481f00e49e156c75968a3ed332dcf2a6b1ab913f1508e2a19ff16b7bc2b'}]}, 'timestamp': '2025-12-06 10:11:08.014869', '_unique_id': '62c84d8b43d74cdc83b29e27b4332cc0'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.015 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.015 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.015 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.015 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.015 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.015 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.015 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.015 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 05:11:08 localhost
ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.015 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.015 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.015 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.015 12 ERROR oslo_messaging.notify.messaging Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.015 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.015 12 ERROR oslo_messaging.notify.messaging Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.015 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.015 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.015 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.015 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.015 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.015 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.015 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:11:08.015 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.015 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.015 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.015 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.015 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.015 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.015 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.015 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.015 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.015 12 ERROR oslo_messaging.notify.messaging Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.016 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.016 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.requests volume: 1272 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.016 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.017 12 ERROR oslo_messaging.notify.messaging [-] Could 
not send notification to notifications. Payload={'message_id': '3d5ecb1b-e75f-48ca-8a05-d4ef7b539682', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1272, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:11:08.016391', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'e1dd8c90-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 12286.196594964, 'message_signature': '498edbc9ab5dd7d053492497e6a4afe6cf4d5816883424ada0fe19fc8ed55949'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 124, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:11:08.016391', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 
'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'e1dd97ee-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 12286.196594964, 'message_signature': '6d905890c6e527350b732d7d7bffb073f05af027b42a8d2446cc3f5712b31a3c'}]}, 'timestamp': '2025-12-06 10:11:08.016972', '_unique_id': '3fcc958705cb4e2fa88980385e0b47d5'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.017 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.017 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.017 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.017 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.017 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.017 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.017 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.017 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.017 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:11:08 
localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.017 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.017 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.017 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.017 12 ERROR oslo_messaging.notify.messaging Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.017 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.017 12 ERROR oslo_messaging.notify.messaging Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.017 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.017 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 
10:11:08.017 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.017 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.017 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.017 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.017 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:11:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.017 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.017 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.017 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.017 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.017 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.017 12 
ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.017 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.017 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.017 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.017 12 ERROR oslo_messaging.notify.messaging Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.018 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.030 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/cpu volume: 14920000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.032 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'abdf334c-9f5d-491f-b136-6f24a3afcfe4', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 14920000000, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'timestamp': '2025-12-06T10:11:08.018373', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': 'e1dfc848-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 12286.280201277, 'message_signature': 'a393bb92b4ebdd70b5e3d8fc85438ccc2bc13d8688220f51dea336a82c8b7eeb'}]}, 'timestamp': '2025-12-06 10:11:08.031355', '_unique_id': 'ff8d8883aadd462dbd06deca35ce8714'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.032 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in 
_reraise_as_library_errors Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.032 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.032 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.032 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.032 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.032 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.032 12 ERROR 
oslo_messaging.notify.messaging conn.connect() Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.032 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.032 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.032 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.032 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.032 12 ERROR oslo_messaging.notify.messaging Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.032 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.032 12 ERROR oslo_messaging.notify.messaging Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.032 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.032 
12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.032 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.032 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.032 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.032 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.032 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:11:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.032 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.032 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.032 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.032 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.032 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 
10:11:08.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.032 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.032 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.032 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.032 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.032 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.032 12 ERROR oslo_messaging.notify.messaging Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.033 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.033 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.033 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.bytes volume: 389120 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.033 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.034 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'eb6ad6d5-af84-4065-9dd1-e35a94fb7300', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 389120, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:11:08.033509', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'e1e02928-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 12286.196594964, 'message_signature': '26f8c293abd67e70ecde2ef7ab1e48f880a1317395c227170c98f1ecd39c44a6'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:11:08.033509', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'e1e036d4-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 12286.196594964, 'message_signature': 'e2b6af6dfda0889900732ca3e6f58256b5cd0fd2e70efdf01754c7553b67ee79'}]}, 'timestamp': '2025-12-06 10:11:08.034156', '_unique_id': '5d1c6b007e554e8295ba036ee8906c66'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.034 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.034 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.034 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.034 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.034 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.034 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.034 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.034 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.034 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:11:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.034 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.034 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.034 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.034 12 ERROR oslo_messaging.notify.messaging Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.034 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.034 12 ERROR oslo_messaging.notify.messaging Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.034 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.034 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.034 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.034 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.034 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.034 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.034 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:11:08.034 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.034 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.034 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.034 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.034 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.034 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.034 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.034 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.034 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.034 12 ERROR oslo_messaging.notify.messaging Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.035 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.035 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.latency volume: 1252245154 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.036 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.latency volume: 27668224 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.036 12 ERROR oslo_messaging.notify.messaging 
[-] Could not send notification to notifications. Payload={'message_id': '576d1ba6-b4d4-4ff0-8a9a-cdf027b08f1a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1252245154, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:11:08.035690', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'e1e0808a-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 12286.196594964, 'message_signature': '3ade1e527dfea1d2a8f37ed72c6950c3c830cbb08df78550f5777065a5f34b02'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 27668224, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:11:08.035690', 'resource_metadata': {'display_name': 'test', 'name': 
'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'e1e08b70-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 12286.196594964, 'message_signature': '99a5fc80f66a0d9ffb67f204ca88e691645787a06762e6b3c70100fe1efec849'}]}, 'timestamp': '2025-12-06 10:11:08.036338', '_unique_id': '361d7c6b65ad49308926a18b9832b555'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.036 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.036 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.036 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.036 12 
ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.036 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.036 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.036 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.036 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.036 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 
05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.036 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.036 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.036 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.036 12 ERROR oslo_messaging.notify.messaging Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.036 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.036 12 ERROR oslo_messaging.notify.messaging Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.036 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.036 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:11:08.036 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.036 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.036 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.036 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.036 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:11:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.036 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.036 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.036 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.036 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.036 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.036 12 
ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.036 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.036 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.036 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.036 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.037 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.037 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.037 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/memory.usage volume: 51.80859375 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.039 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'f3edc4c1-1c7b-4194-ad67-77748fd0ca18', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 51.80859375, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'timestamp': '2025-12-06T10:11:08.037961', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': 'e1e0d788-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 12286.280201277, 'message_signature': 'd221d7ecf72babe72b7689c61db18ff7903176a001fe9f1c31de0549528e1d5b'}]}, 'timestamp': '2025-12-06 10:11:08.038326', '_unique_id': '6e4e88b9d4224921a9d91ef46eb1be12'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.039 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.039 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.039 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.039 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.039 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.039 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.039 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.039 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.039 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.039 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.039 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.039 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.039 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.039 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.039 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.039 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.039 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.039 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.039 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.039 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.039 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.039 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.039 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.039 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.039 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.039 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.039 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.039 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.039 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.039 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.039 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.040 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.040 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.outgoing.bytes volume: 9770 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.041 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'd5a9c4bb-6953-4aae-bf50-264632178113', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 9770, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:11:08.040428', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': 'e1e13854-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 12286.165523842, 'message_signature': 'b0584c8517b5aeb58e3ebf2ac0d5ac46c809cb6822f9ebf323553758ad6c3b77'}]}, 'timestamp': '2025-12-06 10:11:08.040779', '_unique_id': 'a89fda6a6e5d436e965aa8b7e59512b2'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.041 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.041 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.041 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.041 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.041 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.041 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.041 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.041 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.041 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.041 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.041 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.041 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.041 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.041 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.041 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.041 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.041 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.041 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.041 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.041 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.041 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.041 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.041 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.041 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.041 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.041 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.041 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.041 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.041 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.041 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.041 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.042 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.042 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.043 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'e42604a0-c74d-4e58-980b-d8d3c2aa85f6', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:11:08.042293', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': 'e1e1817e-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 12286.165523842, 'message_signature': '65c66e14a1a7f0ef84afb9efae57c92121649756b8eda912d1f3455ce61c67c5'}]}, 'timestamp': '2025-12-06 10:11:08.042632', '_unique_id': 'a54d12c419bb4dfaad6fc63e3fe284c6'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.043 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.043 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.043 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.043 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.043 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.043 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.043 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.043 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.043 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.043 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.043 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.043 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.043 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.043 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.043 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.043 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.043 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.043 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.043 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.043 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.043 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.043 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.043 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.043 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.043 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.043 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.043 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.043 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.043 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.043 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.043 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.043 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.044 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.044 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications.
Payload={'message_id': 'f8356347-dfd5-4df8-a35a-3bd83cd74f37', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:11:08.044006', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': 'e1e1c30a-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 12286.165523842, 'message_signature': '46e7392be5eed51750f7bfe4a4dab5fd8f5bc7ced6d6b74350d9f7a2543d5933'}]}, 'timestamp': '2025-12-06 10:11:08.044342', '_unique_id': '82a10df32f184630a946cdd1a8354de0'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.044 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:11:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.044 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.044 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.044 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.044 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.044 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.044 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.044 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.044 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.044 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.044 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.044 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.044 12 ERROR oslo_messaging.notify.messaging Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.044 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.044 12 ERROR oslo_messaging.notify.messaging Dec 6 05:11:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.044 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.044 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.044 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.044 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.044 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 
05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.044 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.044 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.044 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.044 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.044 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:11:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.044 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.044 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.044 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.044 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.044 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.044 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.044 12 ERROR oslo_messaging.notify.messaging Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:11:08.045 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.045 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.045 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.046 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'f0999b32-1164-4f82-9d4e-2a05337460e7', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:11:08.045864', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 
'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': 'e1e20b80-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 12286.165523842, 'message_signature': '28aea3f12c222ac332f1ac4d0f61b28526867a2312214a0fd75e050fd89b25a3'}]}, 'timestamp': '2025-12-06 10:11:08.046164', '_unique_id': '1fb05da130f1463fa247a082f19bc716'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.046 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.046 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.046 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.046 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:11:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.046 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.046 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.046 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.046 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.046 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.046 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.046 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.046 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.046 12 ERROR oslo_messaging.notify.messaging Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.046 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.046 12 ERROR oslo_messaging.notify.messaging Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.046 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.046 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.046 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.046 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.046 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.046 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.046 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.046 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.046 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.046 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.046 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.046 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.046 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.046 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.046 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 
05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.046 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.046 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.046 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.046 12 ERROR oslo_messaging.notify.messaging Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.047 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.047 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.048 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.049 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'e69bb4cb-4a41-49ec-a1b0-41702cb0ffee', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:11:08.047834', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'e1e25950-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 12286.178458474, 'message_signature': '8786f9dec6dd16d13160104ec043a47f0c8d38d3db5fb8c7d9c745d597b8f4f0'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:11:08.047834', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 
'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'e1e2654e-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 12286.178458474, 'message_signature': '976c22fc89ab1969002a93838d426f7a755df6063116f40a2157636ae232d594'}]}, 'timestamp': '2025-12-06 10:11:08.048453', '_unique_id': '24c34027905b4e04be8be031a02ab3f0'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.049 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.049 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.049 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.049 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.049 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.049 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.049 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.049 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.049 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:11:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.049 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.049 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.049 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.049 12 ERROR oslo_messaging.notify.messaging Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.049 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.049 12 ERROR oslo_messaging.notify.messaging Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.049 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.049 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.049 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.049 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.049 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.049 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.049 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:11:08.049 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.049 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.049 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.049 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.049 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.049 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.049 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.049 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.049 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.049 12 ERROR oslo_messaging.notify.messaging Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.050 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.050 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.requests volume: 47 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.050 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.051 12 ERROR oslo_messaging.notify.messaging [-] Could not 
send notification to notifications. Payload={'message_id': 'ab63e8c2-438a-42b3-9ce8-1cd43cd5f6ed', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 47, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:11:08.050151', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'e1e2b62a-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 12286.196594964, 'message_signature': 'a565ce6f79354b3b8912b8fcafba7819d2fded9874087f77cd23788ff547b4a2'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:11:08.050151', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 
'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'e1e2c5d4-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 12286.196594964, 'message_signature': 'aeb3f29ee8fa33a5535ecec1909853aa2929ee2f0a43a15f7eb624db973150a3'}]}, 'timestamp': '2025-12-06 10:11:08.050991', '_unique_id': '15eda63a75b043f7b922046d7208eeec'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.051 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.051 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.051 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.051 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.051 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.051 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.051 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.051 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.051 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.051 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.051 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.051 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.051 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.051 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.051 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.051 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:11:08 
localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.051 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.051 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.051 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.051 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.051 12 ERROR oslo_messaging.notify.messaging Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.051 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.051 12 ERROR oslo_messaging.notify.messaging Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.051 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.051 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.051 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.051 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 
10:11:08.051 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.051 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.051 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.051 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.051 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.051 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.051 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.051 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.051 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.051 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:11:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.051 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.051 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.051 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.051 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.051 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.051 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.051 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.051 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.051 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.051 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.051 12 
ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.051 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.051 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.051 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.051 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.051 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.051 12 ERROR oslo_messaging.notify.messaging Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.052 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.052 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.054 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '84a985dc-d2a7-4a88-aa69-f0afd4d48fec', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:11:08.052752', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': 'e1e337e4-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 12286.165523842, 'message_signature': 'd8d4f2e032942245d0a9aab4612ea629dd932383637302a8322e436a45485015'}]}, 'timestamp': '2025-12-06 10:11:08.053896', '_unique_id': '97178ea7420746909a913cf60291b5b1'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.054 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:11:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.054 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.054 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.054 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.054 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.054 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.054 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.054 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.054 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.054 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.054 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.054 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.054 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.054 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.054 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.054 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.054 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.054 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.054 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.054 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.054 12 ERROR oslo_messaging.notify.messaging Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.054 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.054 12 ERROR oslo_messaging.notify.messaging Dec 6 05:11:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.054 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.054 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.054 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.054 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.054 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.054 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.054 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.054 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.054 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.054 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.054 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.054 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.054 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.054 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.054 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.054 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.054 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.054 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.054 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.054 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.054 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.054 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.054 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.054 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.054 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.054 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.054 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.054 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.054 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.054 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.054 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.055 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.055 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.incoming.bytes volume: 6809 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.056 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'ef50095b-d11f-45ba-86ae-afc6e951eed0', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 6809, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:11:08.055476', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': 'e1e38320-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 12286.165523842, 'message_signature': '99c09749bcd2e78ff2a1206d61c0c2e45f25228da3b649b29db3a53e9d0264dc'}]}, 'timestamp': '2025-12-06 10:11:08.055801', '_unique_id': '2f03a6b9b72c435f8d296d718caa440c'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.056 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.056 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.056 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.056 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.056 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.056 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.056 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.056 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.056 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.056 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.056 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.056 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.056 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.056 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.056 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.056 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.056 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.056 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.056 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.056 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.056 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.056 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.056 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.056 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.056 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.056 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.056 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.056 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.056 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.056 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.056 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.056 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.056 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.056 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.056 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.056 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.056 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.056 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.056 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.056 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.056 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.056 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.056 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.056 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.056 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.056 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.056 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.056 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.056 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.056 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.056 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.056 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.056 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.056 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.057 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.057 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.058 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '2d64ae6f-26d4-4885-810d-73b2264ac25b', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:11:08.057529', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': 'e1e3d30c-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 12286.165523842, 'message_signature': 'aa206b7a57a0ac6652d10ad99f1799e51496395ff2814924f7c616cc26c2fcd6'}]}, 'timestamp': '2025-12-06 10:11:08.057849', '_unique_id': 'a0bda3d5e99e492bb51a056b1bb21d11'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.058 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.058 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.058 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.058 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.058 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.058 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.058 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.058 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.058 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.058 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.058 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.058 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.058 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.058 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.058 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.058 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.058 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.058 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.058 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.058 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.058 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.058 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.058 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.058 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.058 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.058 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.058 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.058 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.058 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.058 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.058 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.058 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.058 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.058 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.058 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.058 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.058 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.058 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.058 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.058 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.058 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.058 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.058 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.058 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.058 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.058 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.058 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.058 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.058 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.058 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.058 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.058 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.058 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.058 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.059 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.059 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.incoming.packets volume: 60 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.060 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '4297a311-f951-4139-8860-51b4aabc733b', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 60, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:11:08.059274', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': 'e1e4181c-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 12286.165523842, 'message_signature': 'adf0cb14bef6b369759cf48b554e0bdc81e96a33951538a0b6f54ac0de1052fa'}]}, 'timestamp': '2025-12-06 10:11:08.059629', '_unique_id': '9883808d087c44adac469b9bfc48e80d'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.060 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.060 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.060 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.060 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.060 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.060 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.060 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.060 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.060 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.060 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.060 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.060 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.060 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.060 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.060 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.060 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.060 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.060 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.060 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.060 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.060 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.060 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.060 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.060 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.060 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.060 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.060 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.060 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.060 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.060 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.060 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.060 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.060 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.060 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.060 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.060 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.060 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.060 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.060 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.060 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.060 12 ERROR oslo_messaging.notify.messaging
File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.060 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.060 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.060 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.060 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.060 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.060 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.060 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.060 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.060 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.060 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:11:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.060 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.060 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.060 12 ERROR oslo_messaging.notify.messaging Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.060 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.061 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.latency volume: 1525105336 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.061 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.latency volume: 106716064 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.062 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '4c1adc02-2068-4aa6-b7fc-86b3f3348cee', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1525105336, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:11:08.061013', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'e1e45ad4-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 12286.196594964, 'message_signature': 'd255580a3bcbf86621893f86124e7e0e056920e0c847322ef5f111e4a1f5af00'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 106716064, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:11:08.061013', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'e1e46736-d28b-11f0-aaf2-fa163e118844', 'monotonic_time': 12286.196594964, 'message_signature': 'bc19a6180a5ad0433d60a00b4757807e56f587affbf2ac06450f1a258edb190b'}]}, 'timestamp': '2025-12-06 10:11:08.061604', '_unique_id': '91e5e3cb2a0c4669901e349a58207373'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.062 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.062 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.062 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.062 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.062 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.062 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.062 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.062 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.062 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.062 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.062 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.062 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.062 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.062 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.062 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.062 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:11:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.062 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.062 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.062 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.062 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.062 12 ERROR oslo_messaging.notify.messaging Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.062 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.062 12 ERROR oslo_messaging.notify.messaging Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.062 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.062 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.062 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.062 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.062 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.062 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.062 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.062 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.062 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.062 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.062 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.062 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.062 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.062 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:11:08.062 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.062 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.062 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.062 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.062 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.062 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.062 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.062 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.062 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.062 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.062 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.062 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.062 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.062 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.062 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.062 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:11:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:11:08.062 12 ERROR oslo_messaging.notify.messaging Dec 6 05:11:08 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e94 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:11:11 localhost nova_compute[282193]: 2025-12-06 10:11:11.422 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:11:11 localhost nova_compute[282193]: 2025-12-06 10:11:11.423 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:11:11 localhost nova_compute[282193]: 2025-12-06 10:11:11.423 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 6 05:11:11 localhost nova_compute[282193]: 2025-12-06 10:11:11.424 282197 
DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:11:11 localhost nova_compute[282193]: 2025-12-06 10:11:11.463 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:11:11 localhost nova_compute[282193]: 2025-12-06 10:11:11.464 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:11:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5. Dec 6 05:11:12 localhost podman[308400]: 2025-12-06 10:11:12.922057483 +0000 UTC m=+0.082000796 container health_status 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, 
maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true) Dec 6 05:11:13 localhost podman[308400]: 2025-12-06 10:11:13.013078242 +0000 UTC m=+0.173021545 container exec_died 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller) Dec 6 05:11:13 localhost systemd[1]: 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.service: Deactivated successfully. 
Dec 6 05:11:13 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e94 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:11:15 localhost sshd[308425]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:11:16 localhost nova_compute[282193]: 2025-12-06 10:11:16.465 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4996-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:11:16 localhost nova_compute[282193]: 2025-12-06 10:11:16.467 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:11:16 localhost nova_compute[282193]: 2025-12-06 10:11:16.467 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 6 05:11:16 localhost nova_compute[282193]: 2025-12-06 10:11:16.468 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:11:16 localhost nova_compute[282193]: 2025-12-06 10:11:16.509 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:11:16 localhost nova_compute[282193]: 2025-12-06 10:11:16.510 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:11:16 localhost openstack_network_exporter[243110]: ERROR 10:11:16 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 6 05:11:16 localhost openstack_network_exporter[243110]: ERROR 10:11:16 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 6 05:11:16 localhost openstack_network_exporter[243110]: Dec 6 
05:11:16 localhost openstack_network_exporter[243110]: ERROR 10:11:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:11:16 localhost openstack_network_exporter[243110]: ERROR 10:11:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:11:16 localhost openstack_network_exporter[243110]: ERROR 10:11:16 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 6 05:11:16 localhost openstack_network_exporter[243110]: Dec 6 05:11:18 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e94 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:11:21 localhost nova_compute[282193]: 2025-12-06 10:11:21.511 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:11:21 localhost nova_compute[282193]: 2025-12-06 10:11:21.513 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:11:21 localhost nova_compute[282193]: 2025-12-06 10:11:21.513 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 6 05:11:21 localhost nova_compute[282193]: 2025-12-06 10:11:21.513 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:11:21 localhost nova_compute[282193]: 2025-12-06 10:11:21.552 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:11:21 localhost nova_compute[282193]: 2025-12-06 10:11:21.553 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition 
/usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:11:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999. Dec 6 05:11:22 localhost podman[308427]: 2025-12-06 10:11:22.933922745 +0000 UTC m=+0.089895795 container health_status 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, managed_by=edpm_ansible, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:11:22 localhost podman[308427]: 2025-12-06 10:11:22.968295786 +0000 UTC m=+0.124268876 container exec_died 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, 
managed_by=edpm_ansible) Dec 6 05:11:22 localhost systemd[1]: 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.service: Deactivated successfully. Dec 6 05:11:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941. Dec 6 05:11:23 localhost systemd[1]: tmp-crun.7QqjCW.mount: Deactivated successfully. Dec 6 05:11:23 localhost podman[308445]: 2025-12-06 10:11:23.095233843 +0000 UTC m=+0.088093250 container health_status b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Dec 6 05:11:23 localhost podman[308445]: 2025-12-06 10:11:23.133580205 +0000 UTC m=+0.126439612 container exec_died b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 
'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Dec 6 05:11:23 localhost systemd[1]: b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941.service: Deactivated successfully. Dec 6 05:11:23 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e94 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:11:23 localhost podman[241090]: time="2025-12-06T10:11:23Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 6 05:11:23 localhost podman[241090]: @ - - [06/Dec/2025:10:11:23 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156104 "" "Go-http-client/1.1" Dec 6 05:11:23 localhost podman[241090]: @ - - [06/Dec/2025:10:11:23 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19239 "" "Go-http-client/1.1" Dec 6 05:11:26 localhost nova_compute[282193]: 2025-12-06 10:11:26.555 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:11:26 localhost nova_compute[282193]: 2025-12-06 10:11:26.558 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:11:26 localhost nova_compute[282193]: 2025-12-06 10:11:26.558 282197 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 6 05:11:26 localhost nova_compute[282193]: 2025-12-06 10:11:26.558 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:11:26 localhost nova_compute[282193]: 2025-12-06 10:11:26.587 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:11:26 localhost nova_compute[282193]: 2025-12-06 10:11:26.588 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:11:28 localhost nova_compute[282193]: 2025-12-06 10:11:28.180 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:11:28 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e94 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:11:28 localhost nova_compute[282193]: 2025-12-06 10:11:28.275 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 05:11:28 localhost nova_compute[282193]: 2025-12-06 10:11:28.276 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s 
inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 05:11:28 localhost nova_compute[282193]: 2025-12-06 10:11:28.276 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 05:11:28 localhost nova_compute[282193]: 2025-12-06 10:11:28.276 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Auditing locally available compute resources for np0005548789.localdomain (node: np0005548789.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 6 05:11:28 localhost nova_compute[282193]: 2025-12-06 10:11:28.277 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 6 05:11:28 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 6 05:11:28 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.107:0/1823885354' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 6 05:11:28 localhost nova_compute[282193]: 2025-12-06 10:11:28.760 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 6 05:11:28 localhost nova_compute[282193]: 2025-12-06 10:11:28.822 282197 DEBUG nova.virt.libvirt.driver [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 6 05:11:28 localhost nova_compute[282193]: 2025-12-06 10:11:28.822 282197 DEBUG nova.virt.libvirt.driver [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 6 05:11:29 localhost nova_compute[282193]: 2025-12-06 10:11:29.036 282197 WARNING nova.virt.libvirt.driver [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 6 05:11:29 localhost nova_compute[282193]: 2025-12-06 10:11:29.038 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Hypervisor/Node resource view: name=np0005548789.localdomain free_ram=11452MB free_disk=41.83699035644531GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": 
"1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 6 05:11:29 localhost nova_compute[282193]: 2025-12-06 10:11:29.038 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 05:11:29 localhost nova_compute[282193]: 2025-12-06 10:11:29.039 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 05:11:29 localhost nova_compute[282193]: 2025-12-06 10:11:29.133 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Instance b7ed0a2e-9350-4933-9334-4e5e08d3e6aa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 6 05:11:29 localhost nova_compute[282193]: 2025-12-06 10:11:29.134 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 6 05:11:29 localhost nova_compute[282193]: 2025-12-06 10:11:29.134 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Final resource view: name=np0005548789.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 6 05:11:29 localhost nova_compute[282193]: 2025-12-06 10:11:29.187 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 6 05:11:29 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 6 05:11:29 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.107:0/1834841947' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 6 05:11:29 localhost nova_compute[282193]: 2025-12-06 10:11:29.600 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 6 05:11:29 localhost nova_compute[282193]: 2025-12-06 10:11:29.606 282197 DEBUG nova.compute.provider_tree [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Inventory has not changed in ProviderTree for provider: 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 6 05:11:29 localhost nova_compute[282193]: 2025-12-06 10:11:29.627 282197 DEBUG nova.scheduler.client.report [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Inventory has not changed for provider 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 6 05:11:29 localhost nova_compute[282193]: 2025-12-06 10:11:29.630 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Compute_service record updated for np0005548789.localdomain:np0005548789.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 6 05:11:29 localhost nova_compute[282193]: 2025-12-06 10:11:29.631 282197 DEBUG 
oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.592s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 05:11:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e. Dec 6 05:11:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094. Dec 6 05:11:30 localhost podman[308511]: 2025-12-06 10:11:30.927239702 +0000 UTC m=+0.086941875 container health_status 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, container_name=openstack_network_exporter, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc.) 
Dec 6 05:11:30 localhost podman[308511]: 2025-12-06 10:11:30.969033229 +0000 UTC m=+0.128735402 container exec_died 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, version=9.6, config_id=edpm, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, distribution-scope=public, release=1755695350, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., name=ubi9-minimal) Dec 6 05:11:30 localhost systemd[1]: tmp-crun.xOHtTr.mount: Deactivated successfully. Dec 6 05:11:30 localhost systemd[1]: 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e.service: Deactivated successfully. 
Dec 6 05:11:30 localhost podman[308512]: 2025-12-06 10:11:30.99050615 +0000 UTC m=+0.145407757 container health_status bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, tcib_managed=true) Dec 6 05:11:31 localhost podman[308512]: 2025-12-06 10:11:31.003164123 +0000 UTC m=+0.158065740 container exec_died 
bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Dec 6 05:11:31 localhost systemd[1]: bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094.service: Deactivated successfully. 
Dec 6 05:11:31 localhost nova_compute[282193]: 2025-12-06 10:11:31.589 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:11:31 localhost nova_compute[282193]: 2025-12-06 10:11:31.591 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:11:31 localhost nova_compute[282193]: 2025-12-06 10:11:31.591 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 6 05:11:31 localhost nova_compute[282193]: 2025-12-06 10:11:31.591 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:11:31 localhost nova_compute[282193]: 2025-12-06 10:11:31.623 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:11:31 localhost nova_compute[282193]: 2025-12-06 10:11:31.624 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:11:31 localhost nova_compute[282193]: 2025-12-06 10:11:31.632 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:11:31 localhost nova_compute[282193]: 2025-12-06 10:11:31.633 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 6 05:11:31 localhost nova_compute[282193]: 
2025-12-06 10:11:31.633 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 6 05:11:32 localhost nova_compute[282193]: 2025-12-06 10:11:32.147 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 6 05:11:32 localhost nova_compute[282193]: 2025-12-06 10:11:32.147 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquired lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 6 05:11:32 localhost nova_compute[282193]: 2025-12-06 10:11:32.148 282197 DEBUG nova.network.neutron [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 6 05:11:32 localhost nova_compute[282193]: 2025-12-06 10:11:32.148 282197 DEBUG nova.objects.instance [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b7ed0a2e-9350-4933-9334-4e5e08d3e6aa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 6 05:11:32 localhost nova_compute[282193]: 2025-12-06 10:11:32.542 282197 DEBUG nova.network.neutron [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Updating instance_info_cache with network_info: [{"id": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "address": "fa:16:3e:64:77:f3", "network": {"id": "652b6bdc-40ce-45b7-8aa5-3bca79987993", "bridge": 
"br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.162", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "3d603431c0bb4967bafc7a0aa6108bfe", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86fc0b7a-fb", "ovs_interfaceid": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 6 05:11:32 localhost nova_compute[282193]: 2025-12-06 10:11:32.563 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Releasing lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 6 05:11:32 localhost nova_compute[282193]: 2025-12-06 10:11:32.564 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 6 05:11:32 localhost nova_compute[282193]: 2025-12-06 10:11:32.564 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks 
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:11:32 localhost nova_compute[282193]: 2025-12-06 10:11:32.564 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:11:32 localhost nova_compute[282193]: 2025-12-06 10:11:32.565 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:11:32 localhost nova_compute[282193]: 2025-12-06 10:11:32.565 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:11:32 localhost nova_compute[282193]: 2025-12-06 10:11:32.565 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 6 05:11:33 localhost nova_compute[282193]: 2025-12-06 10:11:33.181 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:11:33 localhost nova_compute[282193]: 2025-12-06 10:11:33.181 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:11:33 localhost nova_compute[282193]: 2025-12-06 10:11:33.182 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:11:33 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e94 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:11:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6. 
Dec 6 05:11:34 localhost podman[308550]: 2025-12-06 10:11:34.917208273 +0000 UTC m=+0.077836220 container health_status 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3) Dec 6 05:11:34 localhost podman[308550]: 2025-12-06 10:11:34.927249697 +0000 UTC m=+0.087877634 container exec_died 
44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Dec 6 05:11:34 localhost systemd[1]: 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6.service: Deactivated successfully. 
Dec 6 05:11:36 localhost nova_compute[282193]: 2025-12-06 10:11:36.625 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:11:36 localhost nova_compute[282193]: 2025-12-06 10:11:36.627 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:11:36 localhost nova_compute[282193]: 2025-12-06 10:11:36.627 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 6 05:11:36 localhost nova_compute[282193]: 2025-12-06 10:11:36.627 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:11:36 localhost nova_compute[282193]: 2025-12-06 10:11:36.661 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:11:36 localhost nova_compute[282193]: 2025-12-06 10:11:36.662 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:11:36 localhost nova_compute[282193]: 2025-12-06 10:11:36.663 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:11:38 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e94 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:11:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538. 
Dec 6 05:11:38 localhost podman[308570]: 2025-12-06 10:11:38.92490465 +0000 UTC m=+0.083656596 container health_status d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 6 05:11:38 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Dec 6 05:11:38 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/416774902' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Dec 6 05:11:38 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Dec 6 05:11:38 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/416774902' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Dec 6 05:11:38 localhost podman[308570]: 2025-12-06 10:11:38.960248802 +0000 UTC m=+0.119000798 container exec_died d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Dec 
6 05:11:38 localhost systemd[1]: d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538.service: Deactivated successfully. Dec 6 05:11:41 localhost nova_compute[282193]: 2025-12-06 10:11:41.697 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:11:41 localhost nova_compute[282193]: 2025-12-06 10:11:41.699 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:11:41 localhost nova_compute[282193]: 2025-12-06 10:11:41.700 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5035 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 6 05:11:41 localhost nova_compute[282193]: 2025-12-06 10:11:41.700 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:11:41 localhost nova_compute[282193]: 2025-12-06 10:11:41.701 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:11:41 localhost nova_compute[282193]: 2025-12-06 10:11:41.704 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:11:41 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e95 e95: 6 total, 6 up, 6 in Dec 6 05:11:41 localhost systemd[1]: session-73.scope: Deactivated successfully. Dec 6 05:11:41 localhost systemd[1]: session-73.scope: Consumed 6.320s CPU time. Dec 6 05:11:41 localhost systemd-logind[766]: Session 73 logged out. Waiting for processes to exit. Dec 6 05:11:41 localhost systemd-logind[766]: Removed session 73. 
Dec 6 05:11:42 localhost sshd[308593]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:11:42 localhost systemd-logind[766]: New session 74 of user ceph-admin. Dec 6 05:11:42 localhost systemd[1]: Started Session 74 of User ceph-admin. Dec 6 05:11:42 localhost ceph-mon[298582]: from='client.? 172.18.0.200:0/3000437731' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Dec 6 05:11:42 localhost ceph-mon[298582]: Activating manager daemon np0005548790.kvkfyr Dec 6 05:11:42 localhost ceph-mon[298582]: from='client.? 172.18.0.200:0/3000437731' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished Dec 6 05:11:42 localhost ceph-mon[298582]: Manager daemon np0005548790.kvkfyr is now available Dec 6 05:11:42 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005548790.kvkfyr/mirror_snapshot_schedule"} : dispatch Dec 6 05:11:42 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005548790.kvkfyr/mirror_snapshot_schedule"} : dispatch Dec 6 05:11:42 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005548790.kvkfyr/trash_purge_schedule"} : dispatch Dec 6 05:11:42 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005548790.kvkfyr/trash_purge_schedule"} : dispatch Dec 6 05:11:43 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e95 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:11:43 localhost podman[308705]: 2025-12-06 10:11:43.288158903 +0000 UTC m=+0.145551711 container exec ecefb99ff092b207f760d60e8fd97152c624df0e1016549024435ae641ee2e06 
(image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548789, GIT_BRANCH=main, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, distribution-scope=public, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, maintainer=Guillaume Abrioux , release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, vcs-type=git) Dec 6 05:11:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5. 
Dec 6 05:11:43 localhost sshd[308732]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:11:43 localhost podman[308724]: 2025-12-06 10:11:43.407292792 +0000 UTC m=+0.096947858 container health_status 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Dec 6 05:11:43 localhost podman[308724]: 2025-12-06 10:11:43.443347655 +0000 UTC m=+0.133002701 container exec_died 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, 
container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Dec 6 05:11:43 localhost systemd[1]: 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.service: Deactivated successfully. 
Dec 6 05:11:43 localhost podman[308705]: 2025-12-06 10:11:43.498868327 +0000 UTC m=+0.356261075 container exec_died ecefb99ff092b207f760d60e8fd97152c624df0e1016549024435ae641ee2e06 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548789, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, architecture=x86_64, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, io.openshift.expose-services=, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, maintainer=Guillaume Abrioux , version=7, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public) Dec 6 05:11:43 localhost ceph-mon[298582]: [06/Dec/2025:10:11:43] ENGINE Bus STARTING Dec 6 05:11:43 localhost ceph-mon[298582]: [06/Dec/2025:10:11:43] ENGINE Serving on https://172.18.0.108:7150 Dec 6 05:11:43 localhost ceph-mon[298582]: [06/Dec/2025:10:11:43] ENGINE Client ('172.18.0.108', 49786) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') Dec 6 05:11:43 localhost ceph-mon[298582]: [06/Dec/2025:10:11:43] ENGINE Serving on http://172.18.0.108:8765 Dec 6 05:11:43 localhost ceph-mon[298582]: [06/Dec/2025:10:11:43] ENGINE 
Bus STARTED Dec 6 05:11:44 localhost ceph-mon[298582]: Health check cleared: CEPHADM_STRAY_DAEMON (was: 1 stray daemon(s) not managed by cephadm) Dec 6 05:11:44 localhost ceph-mon[298582]: Health check cleared: CEPHADM_STRAY_HOST (was: 1 stray host(s) with 1 daemon(s) not managed by cephadm) Dec 6 05:11:44 localhost ceph-mon[298582]: Cluster is now healthy Dec 6 05:11:44 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' Dec 6 05:11:44 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' Dec 6 05:11:44 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' Dec 6 05:11:44 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' Dec 6 05:11:44 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' Dec 6 05:11:44 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' Dec 6 05:11:46 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' Dec 6 05:11:46 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' Dec 6 05:11:46 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Dec 6 05:11:46 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Dec 6 05:11:46 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Dec 6 05:11:46 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Dec 6 05:11:46 localhost ceph-mon[298582]: Adjusting osd_memory_target on np0005548790.localdomain to 836.6M Dec 6 05:11:46 
localhost ceph-mon[298582]: Unable to set osd_memory_target on np0005548790.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Dec 6 05:11:46 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' Dec 6 05:11:46 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' Dec 6 05:11:46 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Dec 6 05:11:46 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Dec 6 05:11:46 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Dec 6 05:11:46 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Dec 6 05:11:46 localhost ceph-mon[298582]: Adjusting osd_memory_target on np0005548788.localdomain to 836.6M Dec 6 05:11:46 localhost ceph-mon[298582]: Unable to set osd_memory_target on np0005548788.localdomain to 877243801: error parsing value: Value '877243801' is below minimum 939524096 Dec 6 05:11:46 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' Dec 6 05:11:46 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' Dec 6 05:11:46 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Dec 6 05:11:46 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Dec 6 05:11:46 
localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Dec 6 05:11:46 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Dec 6 05:11:46 localhost ceph-mon[298582]: Adjusting osd_memory_target on np0005548789.localdomain to 836.6M Dec 6 05:11:46 localhost ceph-mon[298582]: Unable to set osd_memory_target on np0005548789.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Dec 6 05:11:46 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 6 05:11:46 localhost ceph-mon[298582]: Updating np0005548788.localdomain:/etc/ceph/ceph.conf Dec 6 05:11:46 localhost ceph-mon[298582]: Updating np0005548789.localdomain:/etc/ceph/ceph.conf Dec 6 05:11:46 localhost ceph-mon[298582]: Updating np0005548790.localdomain:/etc/ceph/ceph.conf Dec 6 05:11:46 localhost openstack_network_exporter[243110]: ERROR 10:11:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:11:46 localhost openstack_network_exporter[243110]: ERROR 10:11:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:11:46 localhost openstack_network_exporter[243110]: ERROR 10:11:46 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 6 05:11:46 localhost openstack_network_exporter[243110]: ERROR 10:11:46 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 6 05:11:46 localhost openstack_network_exporter[243110]: Dec 6 05:11:46 localhost openstack_network_exporter[243110]: ERROR 10:11:46 appctl.go:174: call(dpif-netdev/pmd-rxq-show): 
please specify an existing datapath Dec 6 05:11:46 localhost openstack_network_exporter[243110]: Dec 6 05:11:46 localhost nova_compute[282193]: 2025-12-06 10:11:46.700 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:11:46 localhost nova_compute[282193]: 2025-12-06 10:11:46.704 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:11:46 localhost ceph-mon[298582]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #25. Immutable memtables: 0. Dec 6 05:11:46 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:11:46.803870) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Dec 6 05:11:46 localhost ceph-mon[298582]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 25 Dec 6 05:11:46 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015906803901, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 2726, "num_deletes": 257, "total_data_size": 11909141, "memory_usage": 12424696, "flush_reason": "Manual Compaction"} Dec 6 05:11:46 localhost ceph-mon[298582]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #26: started Dec 6 05:11:46 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015906831401, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 26, "file_size": 7341396, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16186, "largest_seqno": 18907, "table_properties": {"data_size": 7330634, "index_size": 6691, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, 
"index_value_is_delta_encoded": 1, "filter_size": 3013, "raw_key_size": 26504, "raw_average_key_size": 22, "raw_value_size": 7307651, "raw_average_value_size": 6130, "num_data_blocks": 287, "num_entries": 1192, "num_filter_entries": 1192, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765015789, "oldest_key_time": 1765015789, "file_creation_time": 1765015906, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1ec2a3a6-c7d0-4f1e-9923-5a3b782725ae", "db_session_id": "JMBO5KX1IJCJ8FWC64EX", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}} Dec 6 05:11:46 localhost ceph-mon[298582]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 27572 microseconds, and 7306 cpu microseconds. Dec 6 05:11:46 localhost ceph-mon[298582]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Dec 6 05:11:46 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:11:46.831436) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #26: 7341396 bytes OK Dec 6 05:11:46 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:11:46.831458) [db/memtable_list.cc:519] [default] Level-0 commit table #26 started Dec 6 05:11:46 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:11:46.833356) [db/memtable_list.cc:722] [default] Level-0 commit table #26: memtable #1 done Dec 6 05:11:46 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:11:46.833370) EVENT_LOG_v1 {"time_micros": 1765015906833366, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Dec 6 05:11:46 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:11:46.833387) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Dec 6 05:11:46 localhost ceph-mon[298582]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 11896294, prev total WAL file size 11896294, number of live WAL files 2. Dec 6 05:11:46 localhost ceph-mon[298582]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548789/store.db/000022.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 6 05:11:46 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:11:46.834570) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003131323935' seq:72057594037927935, type:22 .. 
'7061786F73003131353437' seq:0, type:0; will stop at (end) Dec 6 05:11:46 localhost ceph-mon[298582]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00 Dec 6 05:11:46 localhost ceph-mon[298582]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [26(7169KB)], [24(16MB)] Dec 6 05:11:46 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015906834639, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [26], "files_L6": [24], "score": -1, "input_data_size": 24966399, "oldest_snapshot_seqno": -1} Dec 6 05:11:46 localhost ceph-mon[298582]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #27: 12264 keys, 20846487 bytes, temperature: kUnknown Dec 6 05:11:46 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015906952342, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 27, "file_size": 20846487, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 20774394, "index_size": 40312, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30725, "raw_key_size": 327295, "raw_average_key_size": 26, "raw_value_size": 20563644, "raw_average_value_size": 1676, "num_data_blocks": 1548, "num_entries": 12264, "num_filter_entries": 12264, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; 
max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765015623, "oldest_key_time": 0, "file_creation_time": 1765015906, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1ec2a3a6-c7d0-4f1e-9923-5a3b782725ae", "db_session_id": "JMBO5KX1IJCJ8FWC64EX", "orig_file_number": 27, "seqno_to_time_mapping": "N/A"}} Dec 6 05:11:46 localhost ceph-mon[298582]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Dec 6 05:11:46 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:11:46.952566) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 20846487 bytes Dec 6 05:11:46 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:11:46.954405) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 212.0 rd, 177.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(7.0, 16.8 +0.0 blob) out(19.9 +0.0 blob), read-write-amplify(6.2) write-amplify(2.8) OK, records in: 12805, records dropped: 541 output_compression: NoCompression Dec 6 05:11:46 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:11:46.954421) EVENT_LOG_v1 {"time_micros": 1765015906954414, "job": 12, "event": "compaction_finished", "compaction_time_micros": 117766, "compaction_time_cpu_micros": 52577, "output_level": 6, "num_output_files": 1, "total_output_size": 20846487, "num_input_records": 12805, "num_output_records": 12264, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Dec 6 05:11:46 localhost ceph-mon[298582]: rocksdb: [file/delete_scheduler.cc:74] Deleted file 
/var/lib/ceph/mon/ceph-np0005548789/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 6 05:11:46 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015906954981, "job": 12, "event": "table_file_deletion", "file_number": 26} Dec 6 05:11:46 localhost ceph-mon[298582]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548789/store.db/000024.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 6 05:11:46 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015906956388, "job": 12, "event": "table_file_deletion", "file_number": 24} Dec 6 05:11:46 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:11:46.834500) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:11:46 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:11:46.956413) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:11:46 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:11:46.956417) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:11:46 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:11:46.956418) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:11:46 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:11:46.956419) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:11:46 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:11:46.956421) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:11:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:11:47.302 160509 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by 
"neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 05:11:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:11:47.302 160509 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 05:11:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:11:47.303 160509 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 05:11:47 localhost ceph-mon[298582]: Updating np0005548788.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf Dec 6 05:11:47 localhost ceph-mon[298582]: Updating np0005548789.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf Dec 6 05:11:48 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e95 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:11:48 localhost ceph-mon[298582]: Updating np0005548789.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 6 05:11:48 localhost ceph-mon[298582]: Updating np0005548790.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf Dec 6 05:11:48 localhost ceph-mon[298582]: Updating np0005548788.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 6 05:11:48 localhost ceph-mon[298582]: Updating np0005548789.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring Dec 6 05:11:48 localhost ceph-mon[298582]: Updating np0005548790.localdomain:/etc/ceph/ceph.client.admin.keyring Dec 6 05:11:48 localhost ceph-mon[298582]: 
Updating np0005548788.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring Dec 6 05:11:48 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' Dec 6 05:11:48 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' Dec 6 05:11:48 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' Dec 6 05:11:48 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' Dec 6 05:11:49 localhost ceph-mon[298582]: Updating np0005548790.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring Dec 6 05:11:49 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' Dec 6 05:11:49 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' Dec 6 05:11:49 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' Dec 6 05:11:49 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 6 05:11:49 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' Dec 6 05:11:50 localhost ceph-mon[298582]: Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON) Dec 6 05:11:50 localhost ceph-mon[298582]: Health check failed: 1 stray host(s) with 1 daemon(s) not managed by cephadm (CEPHADM_STRAY_HOST) Dec 6 05:11:51 localhost nova_compute[282193]: 2025-12-06 10:11:51.704 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:11:52 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' Dec 6 05:11:53 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e95 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:11:53 localhost 
systemd[1]: Started /usr/bin/podman healthcheck run 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999. Dec 6 05:11:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941. Dec 6 05:11:53 localhost podman[241090]: time="2025-12-06T10:11:53Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 6 05:11:53 localhost systemd[1]: tmp-crun.8jdRDF.mount: Deactivated successfully. Dec 6 05:11:53 localhost podman[309648]: 2025-12-06 10:11:53.962747965 +0000 UTC m=+0.118999387 container health_status b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 6 05:11:53 localhost podman[309648]: 2025-12-06 10:11:53.969957464 +0000 UTC m=+0.126208886 container exec_died b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, 
config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Dec 6 05:11:53 localhost systemd[1]: b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941.service: Deactivated successfully. Dec 6 05:11:53 localhost podman[309647]: 2025-12-06 10:11:53.934937862 +0000 UTC m=+0.091481132 container health_status 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 
'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0) Dec 6 05:11:53 localhost podman[241090]: @ - - [06/Dec/2025:10:11:53 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156104 "" "Go-http-client/1.1" Dec 6 05:11:54 localhost sshd[309688]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:11:54 localhost podman[309647]: 2025-12-06 10:11:54.064025323 +0000 UTC m=+0.220568643 container exec_died 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', 
'/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Dec 6 05:11:54 localhost systemd[1]: 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.service: Deactivated successfully. 
Dec 6 05:11:54 localhost podman[241090]: @ - - [06/Dec/2025:10:11:54 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19250 "" "Go-http-client/1.1" Dec 6 05:11:56 localhost nova_compute[282193]: 2025-12-06 10:11:56.706 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:11:56 localhost nova_compute[282193]: 2025-12-06 10:11:56.706 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:11:56 localhost nova_compute[282193]: 2025-12-06 10:11:56.707 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5001 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 6 05:11:56 localhost nova_compute[282193]: 2025-12-06 10:11:56.707 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:11:56 localhost nova_compute[282193]: 2025-12-06 10:11:56.707 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:11:56 localhost nova_compute[282193]: 2025-12-06 10:11:56.711 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:11:58 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e95 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:12:01 localhost nova_compute[282193]: 2025-12-06 10:12:01.711 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:12:01 localhost nova_compute[282193]: 2025-12-06 10:12:01.716 282197 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:12:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e. Dec 6 05:12:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094. Dec 6 05:12:01 localhost podman[309691]: 2025-12-06 10:12:01.924428353 +0000 UTC m=+0.077893681 container health_status bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Dec 6 05:12:01 localhost podman[309691]: 2025-12-06 10:12:01.934267751 +0000 UTC m=+0.087733069 container exec_died bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, 
org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:12:01 localhost systemd[1]: bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094.service: Deactivated successfully. Dec 6 05:12:02 localhost podman[309690]: 2025-12-06 10:12:02.014530053 +0000 UTC m=+0.171221479 container health_status 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, architecture=x86_64) Dec 6 05:12:02 localhost podman[309690]: 2025-12-06 10:12:02.02698025 +0000 UTC m=+0.183671676 container exec_died 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vcs-type=git, architecture=x86_64, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, 
release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, name=ubi9-minimal, io.openshift.expose-services=, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., config_id=edpm, version=9.6) Dec 6 05:12:02 localhost systemd[1]: 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e.service: Deactivated successfully. Dec 6 05:12:03 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e95 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:12:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6. Dec 6 05:12:05 localhost systemd[1]: tmp-crun.gEpSPc.mount: Deactivated successfully. Dec 6 05:12:05 localhost podman[309730]: 2025-12-06 10:12:05.920026634 +0000 UTC m=+0.082001177 container health_status 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=multipathd) Dec 6 05:12:05 localhost podman[309730]: 2025-12-06 10:12:05.938272706 +0000 UTC m=+0.100247279 container exec_died 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', 
'/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:12:05 localhost systemd[1]: 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6.service: Deactivated successfully. Dec 6 05:12:06 localhost nova_compute[282193]: 2025-12-06 10:12:06.713 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:12:06 localhost nova_compute[282193]: 2025-12-06 10:12:06.719 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:12:08 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e95 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:12:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538. 
Dec 6 05:12:09 localhost podman[309747]: 2025-12-06 10:12:09.918227624 +0000 UTC m=+0.079829600 container health_status d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Dec 6 05:12:09 localhost podman[309747]: 2025-12-06 10:12:09.924382181 +0000 UTC m=+0.085984137 container exec_died d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Dec 6 05:12:09 localhost systemd[1]: d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538.service: Deactivated successfully. Dec 6 05:12:11 localhost nova_compute[282193]: 2025-12-06 10:12:11.714 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:12:11 localhost nova_compute[282193]: 2025-12-06 10:12:11.720 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:12:13 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e95 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:12:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5. 
Dec 6 05:12:13 localhost podman[309771]: 2025-12-06 10:12:13.901357528 +0000 UTC m=+0.066425034 container health_status 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller) Dec 6 05:12:13 localhost podman[309771]: 2025-12-06 10:12:13.94238547 +0000 UTC m=+0.107452966 container exec_died 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator 
team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS) Dec 6 05:12:13 localhost systemd[1]: 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.service: Deactivated successfully. 
Dec 6 05:12:16 localhost openstack_network_exporter[243110]: ERROR 10:12:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:12:16 localhost openstack_network_exporter[243110]: ERROR 10:12:16 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 6 05:12:16 localhost openstack_network_exporter[243110]: ERROR 10:12:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:12:16 localhost openstack_network_exporter[243110]: ERROR 10:12:16 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 6 05:12:16 localhost openstack_network_exporter[243110]: Dec 6 05:12:16 localhost openstack_network_exporter[243110]: ERROR 10:12:16 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 6 05:12:16 localhost openstack_network_exporter[243110]: Dec 6 05:12:16 localhost nova_compute[282193]: 2025-12-06 10:12:16.716 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:12:16 localhost nova_compute[282193]: 2025-12-06 10:12:16.722 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:12:18 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e95 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:12:21 localhost nova_compute[282193]: 2025-12-06 10:12:21.717 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:12:21 localhost nova_compute[282193]: 2025-12-06 10:12:21.725 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 
05:12:23 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e95 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:12:23 localhost podman[241090]: time="2025-12-06T10:12:23Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 6 05:12:23 localhost podman[241090]: @ - - [06/Dec/2025:10:12:23 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156104 "" "Go-http-client/1.1" Dec 6 05:12:23 localhost podman[241090]: @ - - [06/Dec/2025:10:12:23 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19233 "" "Go-http-client/1.1" Dec 6 05:12:24 localhost sshd[309797]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:12:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999. Dec 6 05:12:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941. 
Dec 6 05:12:24 localhost podman[309800]: 2025-12-06 10:12:24.909928831 +0000 UTC m=+0.068869907 container health_status b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Dec 6 05:12:24 localhost podman[309800]: 2025-12-06 10:12:24.914868131 +0000 UTC m=+0.073809217 container exec_died b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': 
['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Dec 6 05:12:24 localhost systemd[1]: b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941.service: Deactivated successfully. Dec 6 05:12:24 localhost podman[309799]: 2025-12-06 10:12:24.983603314 +0000 UTC m=+0.142187330 container health_status 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:12:24 localhost podman[309799]: 2025-12-06 10:12:24.988602415 +0000 UTC m=+0.147186501 container exec_died 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS 
Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:12:25 localhost systemd[1]: 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.service: Deactivated successfully. Dec 6 05:12:26 localhost nova_compute[282193]: 2025-12-06 10:12:26.719 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:12:26 localhost nova_compute[282193]: 2025-12-06 10:12:26.728 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:12:28 localhost nova_compute[282193]: 2025-12-06 10:12:28.180 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:12:28 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e95 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:12:29 localhost nova_compute[282193]: 2025-12-06 10:12:29.134 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 05:12:29 localhost nova_compute[282193]: 2025-12-06 10:12:29.135 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 05:12:29 localhost nova_compute[282193]: 
2025-12-06 10:12:29.135 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 05:12:29 localhost nova_compute[282193]: 2025-12-06 10:12:29.135 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Auditing locally available compute resources for np0005548789.localdomain (node: np0005548789.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 6 05:12:29 localhost nova_compute[282193]: 2025-12-06 10:12:29.136 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 6 05:12:29 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 6 05:12:29 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.107:0/825719281' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 6 05:12:29 localhost nova_compute[282193]: 2025-12-06 10:12:29.567 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 6 05:12:29 localhost nova_compute[282193]: 2025-12-06 10:12:29.623 282197 DEBUG nova.virt.libvirt.driver [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 6 05:12:29 localhost nova_compute[282193]: 2025-12-06 10:12:29.624 282197 DEBUG nova.virt.libvirt.driver [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 6 05:12:29 localhost nova_compute[282193]: 2025-12-06 10:12:29.766 282197 WARNING nova.virt.libvirt.driver [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 6 05:12:29 localhost nova_compute[282193]: 2025-12-06 10:12:29.767 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Hypervisor/Node resource view: name=np0005548789.localdomain free_ram=11439MB free_disk=41.83699035644531GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": 
"1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 6 05:12:29 localhost nova_compute[282193]: 2025-12-06 10:12:29.767 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 05:12:29 localhost nova_compute[282193]: 2025-12-06 10:12:29.768 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 05:12:29 localhost nova_compute[282193]: 2025-12-06 10:12:29.841 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Instance b7ed0a2e-9350-4933-9334-4e5e08d3e6aa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 6 05:12:29 localhost nova_compute[282193]: 2025-12-06 10:12:29.842 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 6 05:12:29 localhost nova_compute[282193]: 2025-12-06 10:12:29.842 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Final resource view: name=np0005548789.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 6 05:12:29 localhost nova_compute[282193]: 2025-12-06 10:12:29.898 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 6 05:12:30 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 6 05:12:30 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.107:0/2307179959' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 6 05:12:30 localhost nova_compute[282193]: 2025-12-06 10:12:30.351 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 6 05:12:30 localhost nova_compute[282193]: 2025-12-06 10:12:30.355 282197 DEBUG nova.compute.provider_tree [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Inventory has not changed in ProviderTree for provider: 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 6 05:12:31 localhost nova_compute[282193]: 2025-12-06 10:12:31.089 282197 DEBUG nova.scheduler.client.report [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Inventory has not changed for provider 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 6 05:12:31 localhost nova_compute[282193]: 2025-12-06 10:12:31.092 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Compute_service record updated for np0005548789.localdomain:np0005548789.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 6 05:12:31 localhost nova_compute[282193]: 2025-12-06 10:12:31.093 282197 DEBUG 
oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.325s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 05:12:31 localhost nova_compute[282193]: 2025-12-06 10:12:31.721 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:12:31 localhost nova_compute[282193]: 2025-12-06 10:12:31.730 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:12:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e. Dec 6 05:12:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094. Dec 6 05:12:32 localhost systemd[1]: tmp-crun.qplCn8.mount: Deactivated successfully. 
Dec 6 05:12:32 localhost podman[309885]: 2025-12-06 10:12:32.93959262 +0000 UTC m=+0.100195778 container health_status 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, release=1755695350, architecture=x86_64, name=ubi9-minimal, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, distribution-scope=public, build-date=2025-08-20T13:12:41, config_id=edpm) Dec 6 05:12:32 localhost systemd[1]: tmp-crun.rlKe8N.mount: Deactivated successfully. Dec 6 05:12:32 localhost podman[309886]: 2025-12-06 10:12:32.987097679 +0000 UTC m=+0.145028686 container health_status bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 6 05:12:33 localhost podman[309886]: 2025-12-06 10:12:33.003588198 +0000 UTC m=+0.161519315 container exec_died bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible) Dec 6 05:12:33 localhost systemd[1]: bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094.service: Deactivated successfully. Dec 6 05:12:33 localhost podman[309885]: 2025-12-06 10:12:33.054845122 +0000 UTC m=+0.215448300 container exec_died 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, release=1755695350, architecture=x86_64, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal) Dec 6 05:12:33 localhost systemd[1]: 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e.service: Deactivated successfully. 
Dec 6 05:12:33 localhost nova_compute[282193]: 2025-12-06 10:12:33.094 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:12:33 localhost nova_compute[282193]: 2025-12-06 10:12:33.095 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 6 05:12:33 localhost nova_compute[282193]: 2025-12-06 10:12:33.095 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 6 05:12:33 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e95 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:12:34 localhost sshd[309924]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:12:36 localhost nova_compute[282193]: 2025-12-06 10:12:36.301 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 6 05:12:36 localhost nova_compute[282193]: 2025-12-06 10:12:36.301 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquired lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 6 05:12:36 localhost nova_compute[282193]: 2025-12-06 10:12:36.302 282197 DEBUG nova.network.neutron [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] 
[instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 6 05:12:36 localhost nova_compute[282193]: 2025-12-06 10:12:36.302 282197 DEBUG nova.objects.instance [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b7ed0a2e-9350-4933-9334-4e5e08d3e6aa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 6 05:12:36 localhost nova_compute[282193]: 2025-12-06 10:12:36.722 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:12:36 localhost nova_compute[282193]: 2025-12-06 10:12:36.733 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:12:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6. 
Dec 6 05:12:36 localhost podman[309926]: 2025-12-06 10:12:36.926687033 +0000 UTC m=+0.083972105 container health_status 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd) Dec 6 05:12:36 localhost podman[309926]: 2025-12-06 10:12:36.941192913 +0000 UTC m=+0.098478015 container exec_died 
44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:12:36 localhost systemd[1]: 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6.service: Deactivated successfully. 
Dec 6 05:12:38 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e95 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:12:38 localhost nova_compute[282193]: 2025-12-06 10:12:38.927 282197 DEBUG nova.network.neutron [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Updating instance_info_cache with network_info: [{"id": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "address": "fa:16:3e:64:77:f3", "network": {"id": "652b6bdc-40ce-45b7-8aa5-3bca79987993", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.162", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "3d603431c0bb4967bafc7a0aa6108bfe", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86fc0b7a-fb", "ovs_interfaceid": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 6 05:12:40 localhost nova_compute[282193]: 2025-12-06 10:12:40.289 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Releasing lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 6 05:12:40 localhost 
nova_compute[282193]: 2025-12-06 10:12:40.290 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 6 05:12:40 localhost nova_compute[282193]: 2025-12-06 10:12:40.290 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:12:40 localhost nova_compute[282193]: 2025-12-06 10:12:40.291 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:12:40 localhost nova_compute[282193]: 2025-12-06 10:12:40.291 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:12:40 localhost nova_compute[282193]: 2025-12-06 10:12:40.291 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:12:40 localhost nova_compute[282193]: 2025-12-06 10:12:40.292 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:12:40 localhost 
nova_compute[282193]: 2025-12-06 10:12:40.292 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:12:40 localhost nova_compute[282193]: 2025-12-06 10:12:40.292 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 6 05:12:40 localhost nova_compute[282193]: 2025-12-06 10:12:40.375 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:12:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538. 
Dec 6 05:12:40 localhost podman[309945]: 2025-12-06 10:12:40.909123475 +0000 UTC m=+0.072346093 container health_status d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 6 05:12:40 localhost podman[309945]: 2025-12-06 10:12:40.943144937 +0000 UTC m=+0.106367545 container exec_died d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 6 05:12:40 localhost systemd[1]: d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538.service: Deactivated successfully. 
Dec 6 05:12:41 localhost nova_compute[282193]: 2025-12-06 10:12:41.724 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:12:41 localhost nova_compute[282193]: 2025-12-06 10:12:41.735 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:12:42 localhost nova_compute[282193]: 2025-12-06 10:12:42.176 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:12:43 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e95 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:12:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5. 
Dec 6 05:12:44 localhost podman[309969]: 2025-12-06 10:12:44.926180217 +0000 UTC m=+0.083982156 container health_status 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Dec 6 05:12:45 localhost podman[309969]: 2025-12-06 10:12:45.006738718 +0000 UTC m=+0.164540667 container exec_died 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, 
org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:12:45 localhost systemd[1]: 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.service: Deactivated successfully. 
Dec 6 05:12:46 localhost sshd[309994]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:12:46 localhost openstack_network_exporter[243110]: ERROR 10:12:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:12:46 localhost openstack_network_exporter[243110]: ERROR 10:12:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:12:46 localhost openstack_network_exporter[243110]: ERROR 10:12:46 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 6 05:12:46 localhost openstack_network_exporter[243110]: Dec 6 05:12:46 localhost openstack_network_exporter[243110]: ERROR 10:12:46 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 6 05:12:46 localhost openstack_network_exporter[243110]: ERROR 10:12:46 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 6 05:12:46 localhost openstack_network_exporter[243110]: Dec 6 05:12:46 localhost nova_compute[282193]: 2025-12-06 10:12:46.725 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:12:46 localhost nova_compute[282193]: 2025-12-06 10:12:46.739 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:12:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:12:47.303 160509 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 05:12:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:12:47.303 160509 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by 
"neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 05:12:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:12:47.304 160509 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 05:12:48 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e95 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:12:51 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 6 05:12:51 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' Dec 6 05:12:51 localhost nova_compute[282193]: 2025-12-06 10:12:51.727 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:12:51 localhost nova_compute[282193]: 2025-12-06 10:12:51.742 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:12:51 localhost sshd[310083]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:12:52 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' Dec 6 05:12:53 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e95 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:12:53 localhost podman[241090]: time="2025-12-06T10:12:53Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 6 05:12:53 localhost podman[241090]: @ - - [06/Dec/2025:10:12:53 +0000] "GET 
/v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156104 "" "Go-http-client/1.1" Dec 6 05:12:53 localhost podman[241090]: @ - - [06/Dec/2025:10:12:53 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19239 "" "Go-http-client/1.1" Dec 6 05:12:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999. Dec 6 05:12:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941. Dec 6 05:12:55 localhost podman[310085]: 2025-12-06 10:12:55.929422869 +0000 UTC m=+0.086683708 container health_status 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true) Dec 6 05:12:55 localhost podman[310085]: 2025-12-06 10:12:55.958975875 +0000 UTC m=+0.116236764 container exec_died 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:12:55 localhost systemd[1]: 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.service: Deactivated successfully. Dec 6 05:12:56 localhost podman[310086]: 2025-12-06 10:12:56.037151784 +0000 UTC m=+0.191666030 container health_status b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 6 05:12:56 localhost podman[310086]: 2025-12-06 10:12:56.048104096 +0000 UTC m=+0.202618352 container exec_died b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 
(image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 6 05:12:56 localhost systemd[1]: b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941.service: Deactivated successfully. 
Dec 6 05:12:56 localhost nova_compute[282193]: 2025-12-06 10:12:56.729 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:12:56 localhost nova_compute[282193]: 2025-12-06 10:12:56.746 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:12:58 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e95 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:12:59 localhost sshd[310126]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:13:01 localhost nova_compute[282193]: 2025-12-06 10:13:01.731 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:13:01 localhost nova_compute[282193]: 2025-12-06 10:13:01.748 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:13:03 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e95 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:13:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e. Dec 6 05:13:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094. 
Dec 6 05:13:03 localhost podman[310129]: 2025-12-06 10:13:03.345298948 +0000 UTC m=+0.092395721 container health_status bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_id=edpm, container_name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:13:03 localhost podman[310129]: 2025-12-06 10:13:03.356836109 +0000 UTC m=+0.103932872 container exec_died 
bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:13:03 localhost systemd[1]: bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094.service: Deactivated successfully. 
Dec 6 05:13:03 localhost podman[310128]: 2025-12-06 10:13:03.452140816 +0000 UTC m=+0.203487537 container health_status 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, name=ubi9-minimal, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, version=9.6, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git, vendor=Red Hat, Inc., config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter) Dec 6 05:13:03 localhost podman[310128]: 2025-12-06 10:13:03.465423408 +0000 UTC m=+0.216770189 container exec_died 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 
'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, config_id=edpm, distribution-scope=public, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, vcs-type=git, vendor=Red Hat, Inc., container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, io.openshift.expose-services=, release=1755695350) Dec 6 05:13:03 localhost systemd[1]: 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e.service: Deactivated successfully. 
Dec 6 05:13:06 localhost ovn_metadata_agent[160504]: 2025-12-06 10:13:06.287 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:6c:02', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:a8:2f:0c:cb:a1'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:13:06 localhost nova_compute[282193]: 2025-12-06 10:13:06.288 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:13:06 localhost ovn_metadata_agent[160504]: 2025-12-06 10:13:06.288 160509 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Dec 6 05:13:06 localhost nova_compute[282193]: 2025-12-06 10:13:06.733 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:13:06 localhost nova_compute[282193]: 2025-12-06 10:13:06.751 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:13:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6. Dec 6 05:13:07 localhost systemd[299726]: Created slice User Background Tasks Slice. Dec 6 05:13:07 localhost systemd[1]: tmp-crun.tAkA2i.mount: Deactivated successfully. 
Dec 6 05:13:07 localhost systemd[299726]: Starting Cleanup of User's Temporary Files and Directories... Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.921 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'name': 'test', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'np0005548789.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'hostId': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228 Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.922 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters Dec 6 05:13:07 localhost podman[310167]: 2025-12-06 10:13:07.926164253 +0000 UTC m=+0.087734970 container health_status 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.vendor=CentOS, container_name=multipathd, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Dec 6 05:13:07 localhost systemd[299726]: Finished Cleanup of User's Temporary Files and Directories. Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.960 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.bytes volume: 389120 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.961 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.964 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'd84eb649-5227-4cd7-9961-62eaf2e2a530', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 389120, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:13:07.922494', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '295ba7be-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12406.171877886, 'message_signature': 'f8c89a1b37b6045ff5c1c92552c7fa9e99e3ffb2ff17c10bdc4e6cdc9e07f153'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:13:07.922494', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '295bb998-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12406.171877886, 'message_signature': 'f0954ec12cd95355957bc131864a0e35028a7f28d9c2a6c582bd33b380571505'}]}, 'timestamp': '2025-12-06 10:13:07.961881', '_unique_id': 'ecb51f24b3884837a66c5cab4f58c35b'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.964 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.964 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.964 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.964 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.964 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.964 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.964 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.964 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.964 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:13:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.964 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.964 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.964 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.964 12 ERROR oslo_messaging.notify.messaging Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.964 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.964 12 ERROR oslo_messaging.notify.messaging Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.964 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.964 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.964 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.964 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.964 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.964 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.964 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:13:07.964 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.964 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.964 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.964 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.964 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.964 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.964 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.964 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.964 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.964 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.964 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.965 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 6 05:13:07 localhost podman[310167]: 2025-12-06 10:13:07.967895428 +0000 UTC m=+0.129466085 container exec_died 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_id=multipathd)
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.968 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.incoming.packets volume: 60 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.969 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '82767e30-e191-42d4-bb9a-f621c20250be', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 60, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:13:07.965205', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '295cc838-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12406.214585251, 'message_signature': '97f05359325c16d3e90e096fdbeb50d7eb61d9943c77e1efebc596c602670431'}]}, 'timestamp': '2025-12-06 10:13:07.968855', '_unique_id': 'bb7b6f00fad54ac8bf965980fad55020'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.969 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.969 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.969 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.969 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.969 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.969 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.969 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.969 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.969 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.969 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.969 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.969 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.969 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.969 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.969 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.969 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.969 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.969 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.969 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.969 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.969 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.969 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.969 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.969 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.969 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.969 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.969 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.969 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.969 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.969 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.969 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.969 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.969 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.969 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.969 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.969 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.969 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.969 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.969 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.969 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.969 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.969 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.969 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.969 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.969 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.969 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.969 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.969 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.969 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.969 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.969 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.969 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.969 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.969 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]:
2025-12-06 10:13:07.970 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.970 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.outgoing.packets volume: 114 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.971 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '6679a9f2-5cc1-41d1-8470-eee04794afc8', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 114, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:13:07.970654', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '295d1edc-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12406.214585251, 'message_signature': '7a226da46dff33aae00674eae49a1a9a4e6d35343ed110eebb8b03e7f470ac23'}]}, 'timestamp': '2025-12-06 10:13:07.971016', '_unique_id': '2464464a28da452d92b7f61ff24c172c'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.971 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.971 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.971 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.971 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.971 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.971 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.971 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.971 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.971 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.971 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.971 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.971 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.971 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.971 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.971 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.971 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.971 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.971 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.971 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.971 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.971 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.971 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.971 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.971 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.971 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.971 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.971 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.971 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.971 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.971 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.971 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.971 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.971 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.971 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.971 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.971 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.971 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.971 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.971 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.971 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.971 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.971 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.971 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.971 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.971 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.971 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.971 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.971 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.971 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.971 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.971 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.971 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.971 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.971 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.972 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.972 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.bytes volume: 35560448 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.972 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.bytes volume: 2154496 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.973 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications.
Payload={'message_id': '1cd8b4b9-7381-4ca6-a307-b7b2c03af76c', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 35560448, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:13:07.972587', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '295d6914-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12406.171877886, 'message_signature': 'c3c9cdf6f90716c69827bab286630150cb328f343b66571f7596c776b9bf9cdd'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 2154496, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:13:07.972587', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '295d74fe-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12406.171877886, 'message_signature': '3e1f761ce008e07649e19432c8576fc78f3d4b75bfdaada092dbc7516e87a0f4'}]}, 'timestamp': '2025-12-06 10:13:07.973229', '_unique_id': '310903ac26f74793a3bf08b625576f9c'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.973 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.973 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.973 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.973 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.973 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.973 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.973 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.973 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.973 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.973 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.973 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.973 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.973 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.973 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.973 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.973 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.973 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.973 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.973 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.973 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.973 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.973 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.973 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.973 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.973 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.973 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.973 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.973 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.973 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.973 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.973 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.973 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.973 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.973 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.973 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.973 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.973 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]:
2025-12-06 10:13:07.973 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.973 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.973 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.973 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.973 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.973 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.973 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.973 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.973 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.973 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.973 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.973 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.973 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.973 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.973 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.973 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.973 12 ERROR oslo_messaging.notify.messaging Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.974 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.974 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.975 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '0d22e28c-cd9a-42ba-89d4-57a802fabfe7', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:13:07.974779', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '295dbee6-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12406.214585251, 'message_signature': 'e0674405ee9494eac5f920f5e6f086a8924ce59dd9538a1ba2e6b8baf5371362'}]}, 'timestamp': '2025-12-06 10:13:07.975098', '_unique_id': '64b3c707908b4310ab93763918ded3e6'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.975 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:13:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.975 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.975 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.975 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.975 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.975 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.975 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.975 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.975 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.975 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.975 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.975 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.975 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.975 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.975 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.975 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.975 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.975 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.975 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.975 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.975 12 ERROR oslo_messaging.notify.messaging Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.975 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.975 12 ERROR oslo_messaging.notify.messaging Dec 6 05:13:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.975 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.975 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.975 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.975 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.975 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.975 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.975 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.975 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.975 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.975 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 
05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.975 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.975 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.975 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.975 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.975 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.975 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.975 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.975 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.975 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.975 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:13:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.975 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.975 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.975 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.975 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.975 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.975 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.975 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.975 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.975 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.975 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.975 12 ERROR oslo_messaging.notify.messaging Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:13:07.976 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.976 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters Dec 6 05:13:07 localhost systemd[1]: 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6.service: Deactivated successfully. Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.986 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.986 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.988 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '8b31efef-1dd8-468d-ba52-f3e2542fa5f5', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:13:07.977077', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '295f89d8-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12406.22645253, 'message_signature': '12ab0509efc7282082db6b1cc735ed5c8ddfb4e26012886731cfed2686063d45'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:13:07.977077', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 
'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '295f9838-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12406.22645253, 'message_signature': 'eef4be7e50b0dd05e65879eceaa9becaf25b6bd6d3fcfe3bfa1a6bcf59bd93db'}]}, 'timestamp': '2025-12-06 10:13:07.987199', '_unique_id': 'ab48a5b9c8d64bc0ac3525e1a696dcbf'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.988 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.988 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.988 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.988 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.988 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.988 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.988 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.988 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.988 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.988 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.988 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.988 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.988 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.988 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.988 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.988 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:13:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.988 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.988 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.988 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.988 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.988 12 ERROR oslo_messaging.notify.messaging Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.988 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.988 12 ERROR oslo_messaging.notify.messaging Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.988 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.988 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.988 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.988 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.988 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.988 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.988 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.988 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.988 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.988 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.988 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.988 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.988 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.988 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:13:07.988 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.988 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.988 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.988 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.988 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.988 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.988 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.988 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.988 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.988 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.988 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.988 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.988 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.988 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.988 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.988 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.988 12 ERROR oslo_messaging.notify.messaging Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.989 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.989 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.outgoing.bytes volume: 9770 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:13:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.990 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'd83d2fcf-826b-415e-9482-f5a19897f5f4', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 9770, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:13:07.989315', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '295ff6ac-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12406.214585251, 'message_signature': 'a32041bb09425331a3aed75144248c3ee77c3224b44540a13883e68432c810a4'}]}, 'timestamp': '2025-12-06 10:13:07.989618', '_unique_id': '9b6ad71cdc3a4d38a2b47fd5353eae07'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.990 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:13:07.990 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.990 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.990 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.990 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.990 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.990 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.990 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.990 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.990 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.990 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.990 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.990 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.990 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.990 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.990 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.990 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.990 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.990 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.990 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.990 12 ERROR oslo_messaging.notify.messaging Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.990 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.990 12 ERROR oslo_messaging.notify.messaging Dec 6 05:13:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.990 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.990 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.990 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.990 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.990 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.990 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.990 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.990 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.990 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.990 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 
05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.990 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.990 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.990 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.990 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.990 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.990 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.990 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.990 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.990 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.990 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:13:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.990 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.990 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.990 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.990 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.990 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.990 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.990 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.990 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.990 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.990 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.990 12 ERROR oslo_messaging.notify.messaging Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:13:07.990 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.990 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.991 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.latency volume: 1252245154 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.991 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.latency volume: 27668224 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.992 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '5934d9e9-f746-45d9-bb42-cf7c264f1a44', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1252245154, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:13:07.991042', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '2960396e-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12406.171877886, 'message_signature': '63e0603a5631e9ca373897be9a8adeb2f5d1c25b037713045d1ba25a3d69ca24'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 27668224, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:13:07.991042', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '2960435a-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12406.171877886, 'message_signature': '2d2cf43ed9d5d71b9d25102b04c02fd23544f966a36ba415acd0ce8b9751dc53'}]}, 'timestamp': '2025-12-06 10:13:07.991562', '_unique_id': '4bfee6b595a84a56bb672b2578198ac9'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.992 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.992 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.992 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.992 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.992 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.992 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.992 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.992 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.992 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:13:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.992 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.992 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.992 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.992 12 ERROR oslo_messaging.notify.messaging Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.992 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.992 12 ERROR oslo_messaging.notify.messaging Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.992 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.992 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.992 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.992 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.992 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.992 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.992 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:13:07.992 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.992 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.992 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.992 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.992 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.992 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.992 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.992 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.992 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.992 12 ERROR oslo_messaging.notify.messaging Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.992 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.992 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.993 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.incoming.bytes volume: 6809 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.993 12 ERROR oslo_messaging.notify.messaging [-] Could not send 
notification to notifications. Payload={'message_id': '6d3f11a5-3cbe-4604-a07f-0ac8326376a3', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 6809, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:13:07.993027', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '29608720-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12406.214585251, 'message_signature': '473fb06ba7104563ceccf6f30afea1a2e2ccd162ae65e5f2909009135b7b767c'}]}, 'timestamp': '2025-12-06 10:13:07.993311', '_unique_id': '89c3129a836547e3b515c0d6c64451bf'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.993 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:13:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.993 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.993 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.993 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.993 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.993 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.993 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.993 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.993 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.993 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.993 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.993 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.993 12 ERROR oslo_messaging.notify.messaging Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.993 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.993 12 ERROR oslo_messaging.notify.messaging Dec 6 05:13:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.993 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.993 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.993 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.993 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.993 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 
05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.993 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.993 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.993 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.993 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.993 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:13:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.993 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.993 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.993 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.993 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.993 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.993 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:07.993 12 ERROR oslo_messaging.notify.messaging Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:13:07.994 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.005 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/cpu volume: 15550000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.006 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '006dd3a3-913f-40ef-92e7-9a80f542a067', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 15550000000, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'timestamp': '2025-12-06T10:13:07.994616', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': '296276a2-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12406.254781029, 'message_signature': 
'd4f4e8db836963581e959dc5618d1ddcafaf0b0935bbefedcd6a15aea80564cb'}]}, 'timestamp': '2025-12-06 10:13:08.006028', '_unique_id': 'a07db29e67bc4d429e9787dc8627d08f'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.006 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.006 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.006 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.006 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.006 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.006 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.006 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.006 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.006 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.006 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.006 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.006 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.006 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.006 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.006 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.006 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.006 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.006 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.006 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.006 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.006 12 ERROR 
oslo_messaging.notify.messaging Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.006 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.006 12 ERROR oslo_messaging.notify.messaging Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.006 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.006 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.006 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.006 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.006 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.006 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.006 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.006 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:13:08.006 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.006 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.006 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.006 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.006 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.006 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.006 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.006 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.006 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.006 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:13:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.006 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.006 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.006 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.006 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.006 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.006 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.006 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.006 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.006 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.006 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.006 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) 
from exc Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.006 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.006 12 ERROR oslo_messaging.notify.messaging Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.007 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.008 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.008 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.009 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '527bcbb3-b029-4819-9d30-1f128ddb2f0a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:13:08.008041', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '2962d3e0-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12406.22645253, 'message_signature': 'd4ace50fa84f3bc20f3e6fa911f3545f6748ad76564051abf90b41c033540853'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:13:08.008041', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 
'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '2962dfe8-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12406.22645253, 'message_signature': '9fea6bf5f97789402f66c587a71f73f914162a2e50c81fdac57b086827ada9fa'}]}, 'timestamp': '2025-12-06 10:13:08.008684', '_unique_id': 'cdee290f4e2c4f7f86df8a5431600ef9'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.009 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.009 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.009 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.009 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.009 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.009 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.009 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.009 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.009 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.009 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.009 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.009 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.009 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.009 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.009 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.009 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:13:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.009 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.009 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.009 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.009 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.009 12 ERROR oslo_messaging.notify.messaging Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.009 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.009 12 ERROR oslo_messaging.notify.messaging Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.009 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.009 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.009 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.009 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.009 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.009 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.009 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.009 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.009 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.009 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.009 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.009 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.009 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.009 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:13:08.009 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.009 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.009 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.009 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.009 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.009 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.009 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.009 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.009 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.009 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.009 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.009 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.009 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.009 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.009 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.009 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.009 12 ERROR oslo_messaging.notify.messaging Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.010 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.010 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.011 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '5c41d694-bada-4e7c-ab5b-1565ae640f08', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:13:08.010396', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '29632fa2-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12406.214585251, 'message_signature': '27fc98996a0ae8ff7bd0eef1e7ff147efdb9daf7d1f20469b2f2bbd2f0c593d7'}]}, 'timestamp': '2025-12-06 10:13:08.010737', '_unique_id': 'eff96055e1324eb7956af407e00f7df1'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.011 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:13:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.011 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.011 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.011 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.011 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.011 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.011 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.011 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.011 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.011 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.011 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.011 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.011 12 ERROR oslo_messaging.notify.messaging Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.011 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.011 12 ERROR oslo_messaging.notify.messaging Dec 6 05:13:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.011 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.011 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.011 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.011 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.011 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 
05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.011 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.011 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.011 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.011 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.011 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:13:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.011 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.011 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.011 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.011 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.011 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.011 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.011 12 ERROR oslo_messaging.notify.messaging Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:13:08.012 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.012 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.013 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '909ebb4b-7fa0-46bc-8170-99c354182877', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:13:08.012426', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': 
{'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '29637eda-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12406.214585251, 'message_signature': '65c6c90b585af6d58f91abf4724c515f2728373700803598d9f583dd88a716aa'}]}, 'timestamp': '2025-12-06 10:13:08.012781', '_unique_id': '507bc4c131344d309dd196b6131a5bd7'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.013 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.013 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.013 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.013 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.013 12 ERROR 
oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.013 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.013 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.013 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.013 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.013 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 
10:13:08.013 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.013 12 ERROR oslo_messaging.notify.messaging Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.013 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.013 12 ERROR oslo_messaging.notify.messaging Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.013 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.013 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.013 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.013 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 
10:13:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.013 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.013 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.013 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.013 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.013 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:13:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.013 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.013 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.013 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.013 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.013 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 
450, in _reraise_as_library_errors Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.013 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.013 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.013 12 ERROR oslo_messaging.notify.messaging Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.014 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.014 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.requests volume: 47 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.014 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.015 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '3e50b859-6142-4653-bd27-a02f4a1d36f1', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 47, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:13:08.014451', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '2963cdd6-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12406.171877886, 'message_signature': 'ff1ccd3c03d3a0acec488e1e5ab0b18639c55133eee702545c480ec6811863de'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:13:08.014451', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '2963daa6-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12406.171877886, 'message_signature': '1cb03bc822b3126907e7559b660efc8d0b5f04919f2e0ff2a4aacc9d42aae646'}]}, 'timestamp': '2025-12-06 10:13:08.015100', '_unique_id': '4a8a4c19159b4057aad05a5acb2ae83c'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.015 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.015 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.015 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.015 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.015 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.015 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.015 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.015 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.015 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:13:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.015 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.015 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.015 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.015 12 ERROR oslo_messaging.notify.messaging Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.015 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.015 12 ERROR oslo_messaging.notify.messaging Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.015 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.015 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.015 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.015 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.015 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.015 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.015 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:13:08.015 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.015 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.015 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.015 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.015 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.015 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.015 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.015 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.015 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.015 12 ERROR oslo_messaging.notify.messaging Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.017 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.017 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.requests volume: 1272 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.017 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.018 12 ERROR oslo_messaging.notify.messaging [-] Could 
not send notification to notifications. Payload={'message_id': 'b807ea80-a38a-4253-957d-5e83ed2e1c99', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1272, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:13:08.017260', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '29643ba4-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12406.171877886, 'message_signature': 'a4bc7859cd5b1305d8211378b074a8495e59e2ccbb9099a752866f072b72aee1'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 124, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:13:08.017260', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 
'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '296448ec-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12406.171877886, 'message_signature': '2cc9cc4961d5055e92f41adb2d05c0a811fb22dd08f680f53e0ba70ece042fa6'}]}, 'timestamp': '2025-12-06 10:13:08.017934', '_unique_id': 'cecb6fd436f747e187ce6fed16d2344c'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.018 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.018 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.018 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.018 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.018 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.018 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.018 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.018 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.018 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:13:08 
localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.018 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.018 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.018 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.018 12 ERROR oslo_messaging.notify.messaging Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.018 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.018 12 ERROR oslo_messaging.notify.messaging Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.018 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.018 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 
10:13:08.018 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.018 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.018 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.018 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.018 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:13:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.018 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.018 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.018 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.018 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.018 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.018 12 
ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.018 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.018 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.018 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.018 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.018 12 ERROR oslo_messaging.notify.messaging Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.019 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.019 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.020 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '83f0051e-dd00-4759-98ff-8c7bb699e674', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:13:08.019863', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '2964a0c6-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12406.214585251, 'message_signature': 'a7443abf972f5c2b6dd30704b6d415c078241d16610cacd384666e1b0fe1fccc'}]}, 'timestamp': '2025-12-06 10:13:08.020152', '_unique_id': 'b771690ffc4042988081195b5841586e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.020 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:13:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.020 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.020 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.020 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.020 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.020 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.020 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.020 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.020 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.020 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.020 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.020 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.020 12 ERROR oslo_messaging.notify.messaging Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.020 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.020 12 ERROR oslo_messaging.notify.messaging Dec 6 05:13:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.020 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.020 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.020 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.020 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.020 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 
05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.020 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.020 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.020 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.020 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.020 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.020 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.020 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.020 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.020 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.020 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.020 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.020 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.021 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.021 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.022 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'b7f632f6-5c6c-46de-8c74-c4bd5f205ad1', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:13:08.021751', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '2964ea4a-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12406.214585251, 'message_signature': 'bb684431a5bb6be8875ccbe7772a61b30ac85af769de4aaf7752e988c90e8896'}]}, 'timestamp': '2025-12-06 10:13:08.022034', '_unique_id': '41524a40402c4dacb24738b9b3817894'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.022 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.022 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.022 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.022 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.022 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.022 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.022 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.022 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.022 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.022 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.022 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.022 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.022 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.022 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.022 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.022 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.022 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.022 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.022 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.022 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.022 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.022 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.022 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.022 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.022 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.022 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.022 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.022 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.022 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.022 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.022 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.023 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.023 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.023 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.024 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '7f5f4fa5-68cf-4a9f-a890-881e1659eb72', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:13:08.023416', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '29652a50-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12406.22645253, 'message_signature': 'c4d4321a7ecd942e015b2326352518d84fd0b8d17e4bfc6836377c71b5a445b3'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:13:08.023416', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '296536c6-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12406.22645253, 'message_signature': '5630a7b229904870f02f1246201dfcb48cd9a002b8cbcd2a53a5c394bca38ef1'}]}, 'timestamp': '2025-12-06 10:13:08.024011', '_unique_id': '24ae9ef4a58b4be394cd007b917ed9fc'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.024 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.024 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.024 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.024 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.024 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.024 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.024 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.024 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.024 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.024 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.024 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.024 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.024 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.024 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.024 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.024 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.024 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.024 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.024 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.024 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.024 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.024 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.024 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.024 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.024 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.024 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.024 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.024 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.024 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.024 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.024 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.025 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.025 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.025 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.026 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '518ef467-835d-4f25-8449-2aadd965aab0', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:13:08.025541', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '29657d84-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12406.214585251, 'message_signature': 'fa06268950bcc911d86ad6b4fc003762519c451ed733f364e48d94e32c13d380'}]}, 'timestamp': '2025-12-06 10:13:08.025817', '_unique_id': 'a9edda383e224f14923adf0622c91eac'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.026 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.026 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.026 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.026 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.026 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.026 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.026 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.026 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.026 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.026 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.026 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.026 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.026 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.026 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.026 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.026 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.026 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.026 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.026 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.026 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.026 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.026 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.026 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.026 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:13:08 localhost
ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.026 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.026 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.026 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.026 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.026 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.026 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.026 12 ERROR oslo_messaging.notify.messaging Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:13:08.027 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.027 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.latency volume: 1525105336 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.027 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.latency volume: 106716064 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.028 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'ea89dc9a-fcfc-47cb-b9f1-530f0d606f82', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1525105336, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:13:08.027099', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 
'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '2965ba38-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12406.171877886, 'message_signature': '4fd3a8f16b4a960e077a071bb78e43e61b1bbf7832780b1811258d1f2c550183'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 106716064, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:13:08.027099', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '2965c30c-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12406.171877886, 'message_signature': '0068336568ce3261075a4f9795d33df052ead2530746238e8a7d42b0228e7f61'}]}, 'timestamp': '2025-12-06 10:13:08.027566', '_unique_id': '08da8ccd66d8493ea5da49254c333e74'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.028 12 
ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.028 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.028 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.028 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.028 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.028 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.028 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.028 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.028 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.028 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.028 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:13:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.028 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.028 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.028 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.028 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.028 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.028 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.028 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.028 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.028 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.028 12 ERROR oslo_messaging.notify.messaging Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.028 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 
10:13:08.028 12 ERROR oslo_messaging.notify.messaging Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.028 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.028 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.028 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.028 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.028 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.028 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.028 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.028 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.028 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.028 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.028 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.028 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.028 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.028 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.028 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.028 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.028 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.028 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.028 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.028 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.028 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.028 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.028 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.028 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.028 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.028 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.028 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.028 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.028 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.028 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:13:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.028 12 ERROR oslo_messaging.notify.messaging Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.028 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.028 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/memory.usage volume: 51.80859375 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.029 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'd35ccbb6-17da-4b82-9148-188756f59058', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 51.80859375, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'timestamp': '2025-12-06T10:13:08.028899', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 
'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': '29660074-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12406.254781029, 'message_signature': 'cf4674c1337ec75d576e67d9489e51a893ef1c0751483947e4beab36bd318cee'}]}, 'timestamp': '2025-12-06 10:13:08.029142', '_unique_id': '074b28bc3cd04fe3b1cc9390b3d01bdf'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.029 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.029 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.029 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.029 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.029 12 ERROR oslo_messaging.notify.messaging self._connection = 
self._establish_connection() Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.029 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.029 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.029 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.029 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.029 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.029 12 ERROR oslo_messaging.notify.messaging 
ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.029 12 ERROR oslo_messaging.notify.messaging Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.029 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.029 12 ERROR oslo_messaging.notify.messaging Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.029 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.029 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.029 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.029 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.029 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.029 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.029 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.029 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.029 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.029 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.029 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.029 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.029 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.029 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.029 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.029 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:13:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.029 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.029 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:13:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:13:08.029 12 ERROR oslo_messaging.notify.messaging Dec 6 05:13:08 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e95 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:13:11 localhost nova_compute[282193]: 2025-12-06 10:13:11.735 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:13:11 localhost nova_compute[282193]: 2025-12-06 10:13:11.754 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:13:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538. Dec 6 05:13:11 localhost ceph-mon[298582]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #28. Immutable memtables: 0. 
Dec 6 05:13:11 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:13:11.872983) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Dec 6 05:13:11 localhost ceph-mon[298582]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 28 Dec 6 05:13:11 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015991873029, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 1299, "num_deletes": 255, "total_data_size": 1639360, "memory_usage": 1671184, "flush_reason": "Manual Compaction"} Dec 6 05:13:11 localhost ceph-mon[298582]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #29: started Dec 6 05:13:11 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015991881781, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 29, "file_size": 1044636, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18912, "largest_seqno": 20206, "table_properties": {"data_size": 1039395, "index_size": 2712, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 11725, "raw_average_key_size": 20, "raw_value_size": 1028560, "raw_average_value_size": 1764, "num_data_blocks": 115, "num_entries": 583, "num_filter_entries": 583, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; 
max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765015907, "oldest_key_time": 1765015907, "file_creation_time": 1765015991, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1ec2a3a6-c7d0-4f1e-9923-5a3b782725ae", "db_session_id": "JMBO5KX1IJCJ8FWC64EX", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}} Dec 6 05:13:11 localhost ceph-mon[298582]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 8874 microseconds, and 3540 cpu microseconds. Dec 6 05:13:11 localhost ceph-mon[298582]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Dec 6 05:13:11 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:13:11.881855) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #29: 1044636 bytes OK Dec 6 05:13:11 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:13:11.881880) [db/memtable_list.cc:519] [default] Level-0 commit table #29 started Dec 6 05:13:11 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:13:11.884123) [db/memtable_list.cc:722] [default] Level-0 commit table #29: memtable #1 done Dec 6 05:13:11 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:13:11.884142) EVENT_LOG_v1 {"time_micros": 1765015991884136, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Dec 6 05:13:11 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:13:11.884162) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Dec 6 05:13:11 localhost ceph-mon[298582]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 1633168, prev total WAL file size 
1633492, number of live WAL files 2. Dec 6 05:13:11 localhost ceph-mon[298582]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548789/store.db/000025.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 6 05:13:11 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:13:11.884713) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033373637' seq:72057594037927935, type:22 .. '6C6F676D0034303138' seq:0, type:0; will stop at (end) Dec 6 05:13:11 localhost ceph-mon[298582]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00 Dec 6 05:13:11 localhost ceph-mon[298582]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [29(1020KB)], [27(19MB)] Dec 6 05:13:11 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015991884790, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [29], "files_L6": [27], "score": -1, "input_data_size": 21891123, "oldest_snapshot_seqno": -1} Dec 6 05:13:11 localhost podman[310186]: 2025-12-06 10:13:11.912777242 +0000 UTC m=+0.079761129 container health_status d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', 
'--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Dec 6 05:13:11 localhost podman[310186]: 2025-12-06 10:13:11.919303349 +0000 UTC m=+0.086287256 container exec_died d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 
'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Dec 6 05:13:11 localhost systemd[1]: d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538.service: Deactivated successfully. Dec 6 05:13:12 localhost ceph-mon[298582]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #30: 12314 keys, 21753852 bytes, temperature: kUnknown Dec 6 05:13:12 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015992030224, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 30, "file_size": 21753852, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 21679511, "index_size": 42432, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30853, "raw_key_size": 329412, "raw_average_key_size": 26, "raw_value_size": 21465963, "raw_average_value_size": 1743, "num_data_blocks": 1635, "num_entries": 12314, "num_filter_entries": 12314, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765015623, "oldest_key_time": 0, "file_creation_time": 1765015991, "slow_compression_estimated_data_size": 0, 
"fast_compression_estimated_data_size": 0, "db_id": "1ec2a3a6-c7d0-4f1e-9923-5a3b782725ae", "db_session_id": "JMBO5KX1IJCJ8FWC64EX", "orig_file_number": 30, "seqno_to_time_mapping": "N/A"}} Dec 6 05:13:12 localhost ceph-mon[298582]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Dec 6 05:13:12 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:13:12.030557) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 21753852 bytes Dec 6 05:13:12 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:13:12.032809) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 150.4 rd, 149.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 19.9 +0.0 blob) out(20.7 +0.0 blob), read-write-amplify(41.8) write-amplify(20.8) OK, records in: 12847, records dropped: 533 output_compression: NoCompression Dec 6 05:13:12 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:13:12.032839) EVENT_LOG_v1 {"time_micros": 1765015992032826, "job": 14, "event": "compaction_finished", "compaction_time_micros": 145530, "compaction_time_cpu_micros": 54720, "output_level": 6, "num_output_files": 1, "total_output_size": 21753852, "num_input_records": 12847, "num_output_records": 12314, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Dec 6 05:13:12 localhost ceph-mon[298582]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548789/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 6 05:13:12 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015992033142, "job": 14, "event": 
"table_file_deletion", "file_number": 29} Dec 6 05:13:12 localhost ceph-mon[298582]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548789/store.db/000027.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 6 05:13:12 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015992035864, "job": 14, "event": "table_file_deletion", "file_number": 27} Dec 6 05:13:12 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:13:11.884602) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:13:12 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:13:12.035971) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:13:12 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:13:12.035979) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:13:12 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:13:12.035983) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:13:12 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:13:12.035986) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:13:12 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:13:12.035990) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:13:12 localhost ovn_metadata_agent[160504]: 2025-12-06 10:13:12.290 160509 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b142a5ef-fbed-4e92-aa78-e3ad080c6370, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 6 05:13:13 localhost 
ceph-mon[298582]: mon.np0005548789@1(peon).osd e96 e96: 6 total, 6 up, 6 in Dec 6 05:13:13 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e96 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:13:15 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e97 e97: 6 total, 6 up, 6 in Dec 6 05:13:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5. Dec 6 05:13:15 localhost podman[310208]: 2025-12-06 10:13:15.901546476 +0000 UTC m=+0.066064323 container health_status 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true) Dec 6 05:13:15 
localhost podman[310208]: 2025-12-06 10:13:15.938146965 +0000 UTC m=+0.102664842 container exec_died 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Dec 6 05:13:15 localhost systemd[1]: 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.service: Deactivated successfully. 
Dec 6 05:13:16 localhost openstack_network_exporter[243110]: ERROR 10:13:16 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 6 05:13:16 localhost openstack_network_exporter[243110]: ERROR 10:13:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:13:16 localhost openstack_network_exporter[243110]: ERROR 10:13:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:13:16 localhost openstack_network_exporter[243110]: ERROR 10:13:16 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 6 05:13:16 localhost openstack_network_exporter[243110]: Dec 6 05:13:16 localhost openstack_network_exporter[243110]: ERROR 10:13:16 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 6 05:13:16 localhost openstack_network_exporter[243110]: Dec 6 05:13:16 localhost nova_compute[282193]: 2025-12-06 10:13:16.739 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:13:16 localhost nova_compute[282193]: 2025-12-06 10:13:16.758 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:13:18 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e97 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:13:21 localhost nova_compute[282193]: 2025-12-06 10:13:21.181 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:13:21 localhost nova_compute[282193]: 2025-12-06 10:13:21.181 282197 DEBUG nova.compute.manager [None 
req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m Dec 6 05:13:21 localhost nova_compute[282193]: 2025-12-06 10:13:21.205 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m Dec 6 05:13:21 localhost nova_compute[282193]: 2025-12-06 10:13:21.741 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:13:21 localhost nova_compute[282193]: 2025-12-06 10:13:21.760 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:13:23 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e97 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:13:23 localhost podman[241090]: time="2025-12-06T10:13:23Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 6 05:13:23 localhost podman[241090]: @ - - [06/Dec/2025:10:13:23 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156104 "" "Go-http-client/1.1" Dec 6 05:13:23 localhost podman[241090]: @ - - [06/Dec/2025:10:13:23 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19232 "" "Go-http-client/1.1" Dec 6 05:13:25 localhost sshd[310234]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:13:26 localhost nova_compute[282193]: 2025-12-06 10:13:26.743 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:13:26 localhost nova_compute[282193]: 2025-12-06 10:13:26.763 
282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:13:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999. Dec 6 05:13:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941. Dec 6 05:13:26 localhost podman[310237]: 2025-12-06 10:13:26.937740227 +0000 UTC m=+0.088437930 container health_status b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Dec 6 05:13:26 localhost systemd[1]: tmp-crun.5pqDIj.mount: Deactivated successfully. 
Dec 6 05:13:26 localhost podman[310236]: 2025-12-06 10:13:26.997126886 +0000 UTC m=+0.148052307 container health_status 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:13:27 localhost podman[310237]: 2025-12-06 10:13:27.02332125 +0000 UTC 
m=+0.174018923 container exec_died b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 6 05:13:27 localhost podman[310236]: 2025-12-06 10:13:27.030541729 +0000 UTC m=+0.181467200 container exec_died 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 
'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true) Dec 6 05:13:27 localhost systemd[1]: b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941.service: Deactivated successfully. Dec 6 05:13:27 localhost systemd[1]: 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.service: Deactivated successfully. 
Dec 6 05:13:28 localhost nova_compute[282193]: 2025-12-06 10:13:28.181 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:13:28 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e97 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:13:30 localhost nova_compute[282193]: 2025-12-06 10:13:30.200 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:13:30 localhost nova_compute[282193]: 2025-12-06 10:13:30.225 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 05:13:30 localhost nova_compute[282193]: 2025-12-06 10:13:30.226 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 05:13:30 localhost nova_compute[282193]: 2025-12-06 10:13:30.226 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 05:13:30 localhost 
nova_compute[282193]: 2025-12-06 10:13:30.227 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Auditing locally available compute resources for np0005548789.localdomain (node: np0005548789.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 6 05:13:30 localhost nova_compute[282193]: 2025-12-06 10:13:30.227 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 6 05:13:30 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 6 05:13:30 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/2010010665' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 6 05:13:30 localhost nova_compute[282193]: 2025-12-06 10:13:30.693 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 6 05:13:31 localhost nova_compute[282193]: 2025-12-06 10:13:31.205 282197 DEBUG nova.virt.libvirt.driver [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 6 05:13:31 localhost nova_compute[282193]: 2025-12-06 10:13:31.206 282197 DEBUG nova.virt.libvirt.driver [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] skipping disk for instance-00000002 as it does not have a path 
_get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 6 05:13:31 localhost nova_compute[282193]: 2025-12-06 10:13:31.427 282197 WARNING nova.virt.libvirt.driver [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 6 05:13:31 localhost nova_compute[282193]: 2025-12-06 10:13:31.428 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Hypervisor/Node resource view: name=np0005548789.localdomain free_ram=11432MB free_disk=41.83699035644531GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": 
"1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 6 05:13:31 localhost nova_compute[282193]: 2025-12-06 10:13:31.429 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 05:13:31 localhost nova_compute[282193]: 2025-12-06 10:13:31.429 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 05:13:31 localhost nova_compute[282193]: 2025-12-06 10:13:31.745 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:13:31 localhost nova_compute[282193]: 2025-12-06 10:13:31.765 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:13:31 localhost nova_compute[282193]: 2025-12-06 10:13:31.889 282197 DEBUG 
nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Instance b7ed0a2e-9350-4933-9334-4e5e08d3e6aa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 6 05:13:31 localhost nova_compute[282193]: 2025-12-06 10:13:31.889 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 6 05:13:31 localhost nova_compute[282193]: 2025-12-06 10:13:31.890 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Final resource view: name=np0005548789.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 6 05:13:32 localhost nova_compute[282193]: 2025-12-06 10:13:32.192 282197 DEBUG nova.scheduler.client.report [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Refreshing inventories for resource provider 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Dec 6 05:13:32 localhost nova_compute[282193]: 2025-12-06 10:13:32.436 282197 DEBUG nova.scheduler.client.report [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Updating ProviderTree inventory for provider 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 
'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Dec 6 05:13:32 localhost nova_compute[282193]: 2025-12-06 10:13:32.437 282197 DEBUG nova.compute.provider_tree [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Updating inventory in ProviderTree for provider 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Dec 6 05:13:32 localhost nova_compute[282193]: 2025-12-06 10:13:32.463 282197 DEBUG nova.scheduler.client.report [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Refreshing aggregate associations for resource provider 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Dec 6 05:13:32 localhost nova_compute[282193]: 2025-12-06 10:13:32.507 282197 DEBUG nova.scheduler.client.report [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Refreshing trait associations for resource provider 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad, traits: 
HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_FDC,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_RESCUE_BFV,HW_CPU_X86_AVX2,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SHA,HW_CPU_X86_BMI2,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_CLMUL,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSSE3,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_AVX,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE4A,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AMD_SVM,HW_CPU_X86_FMA3,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_F16C,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_ABM,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_VIRTIO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Dec 6 05:13:32 localhost nova_compute[282193]: 2025-12-06 10:13:32.553 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 6 05:13:32 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix": "df", "format": 
"json"} v 0) Dec 6 05:13:32 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/113296038' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 6 05:13:32 localhost nova_compute[282193]: 2025-12-06 10:13:32.980 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 6 05:13:32 localhost nova_compute[282193]: 2025-12-06 10:13:32.987 282197 DEBUG nova.compute.provider_tree [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Inventory has not changed in ProviderTree for provider: 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 6 05:13:33 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e97 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:13:33 localhost nova_compute[282193]: 2025-12-06 10:13:33.341 282197 DEBUG nova.scheduler.client.report [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Inventory has not changed for provider 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 6 05:13:33 localhost nova_compute[282193]: 2025-12-06 10:13:33.344 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] 
Compute_service record updated for np0005548789.localdomain:np0005548789.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 6 05:13:33 localhost nova_compute[282193]: 2025-12-06 10:13:33.344 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.915s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 05:13:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e. Dec 6 05:13:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094. Dec 6 05:13:33 localhost podman[310323]: 2025-12-06 10:13:33.909088487 +0000 UTC m=+0.072163288 container health_status bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': 
['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:13:33 localhost podman[310323]: 2025-12-06 10:13:33.950198662 +0000 UTC m=+0.113273433 container exec_died bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', 
'/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Dec 6 05:13:33 localhost systemd[1]: bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094.service: Deactivated successfully. Dec 6 05:13:33 localhost podman[310322]: 2025-12-06 10:13:33.972211789 +0000 UTC m=+0.135461215 container health_status 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, distribution-scope=public, vendor=Red Hat, Inc., version=9.6, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.) 
Dec 6 05:13:33 localhost podman[310322]: 2025-12-06 10:13:33.988195124 +0000 UTC m=+0.151444590 container exec_died 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, name=ubi9-minimal, container_name=openstack_network_exporter, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, vcs-type=git, version=9.6, maintainer=Red Hat, Inc.) Dec 6 05:13:34 localhost systemd[1]: 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e.service: Deactivated successfully. 
Dec 6 05:13:35 localhost nova_compute[282193]: 2025-12-06 10:13:35.326 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:13:35 localhost nova_compute[282193]: 2025-12-06 10:13:35.327 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:13:35 localhost nova_compute[282193]: 2025-12-06 10:13:35.328 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 6 05:13:35 localhost nova_compute[282193]: 2025-12-06 10:13:35.328 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 6 05:13:35 localhost nova_compute[282193]: 2025-12-06 10:13:35.740 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 6 05:13:35 localhost nova_compute[282193]: 2025-12-06 10:13:35.741 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquired lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 6 05:13:35 localhost nova_compute[282193]: 2025-12-06 10:13:35.741 282197 DEBUG nova.network.neutron 
[None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 6 05:13:35 localhost nova_compute[282193]: 2025-12-06 10:13:35.741 282197 DEBUG nova.objects.instance [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b7ed0a2e-9350-4933-9334-4e5e08d3e6aa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 6 05:13:36 localhost nova_compute[282193]: 2025-12-06 10:13:36.269 282197 DEBUG nova.network.neutron [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Updating instance_info_cache with network_info: [{"id": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "address": "fa:16:3e:64:77:f3", "network": {"id": "652b6bdc-40ce-45b7-8aa5-3bca79987993", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.162", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "3d603431c0bb4967bafc7a0aa6108bfe", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86fc0b7a-fb", "ovs_interfaceid": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info 
/usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 6 05:13:36 localhost nova_compute[282193]: 2025-12-06 10:13:36.286 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Releasing lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 6 05:13:36 localhost nova_compute[282193]: 2025-12-06 10:13:36.287 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 6 05:13:36 localhost nova_compute[282193]: 2025-12-06 10:13:36.287 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:13:36 localhost nova_compute[282193]: 2025-12-06 10:13:36.287 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:13:36 localhost nova_compute[282193]: 2025-12-06 10:13:36.287 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:13:36 localhost nova_compute[282193]: 2025-12-06 10:13:36.288 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks 
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:13:36 localhost nova_compute[282193]: 2025-12-06 10:13:36.288 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:13:36 localhost nova_compute[282193]: 2025-12-06 10:13:36.288 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:13:36 localhost nova_compute[282193]: 2025-12-06 10:13:36.288 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 6 05:13:36 localhost nova_compute[282193]: 2025-12-06 10:13:36.747 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:13:36 localhost nova_compute[282193]: 2025-12-06 10:13:36.768 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:13:38 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e97 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:13:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6. Dec 6 05:13:38 localhost systemd[1]: tmp-crun.WgOfX3.mount: Deactivated successfully. 
Dec 6 05:13:38 localhost podman[310362]: 2025-12-06 10:13:38.923823359 +0000 UTC m=+0.085524462 container health_status 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:13:38 localhost podman[310362]: 2025-12-06 10:13:38.934972367 +0000 UTC m=+0.096673500 container exec_died 
44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS) Dec 6 05:13:38 localhost systemd[1]: 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6.service: Deactivated successfully. 
Dec 6 05:13:39 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e98 e98: 6 total, 6 up, 6 in
Dec 6 05:13:40 localhost nova_compute[282193]: 2025-12-06 10:13:40.181 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 6 05:13:40 localhost nova_compute[282193]: 2025-12-06 10:13:40.181 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec 6 05:13:40 localhost ovn_controller[154851]: 2025-12-06T10:13:40Z|00062|memory_trim|INFO|Detected inactivity (last active 30024 ms ago): trimming memory
Dec 6 05:13:41 localhost nova_compute[282193]: 2025-12-06 10:13:41.770 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 6 05:13:41 localhost nova_compute[282193]: 2025-12-06 10:13:41.772 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 6 05:13:41 localhost nova_compute[282193]: 2025-12-06 10:13:41.772 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Dec 6 05:13:41 localhost nova_compute[282193]: 2025-12-06 10:13:41.772 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec 6 05:13:41 localhost nova_compute[282193]: 2025-12-06 10:13:41.779 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 6 05:13:41 localhost nova_compute[282193]: 2025-12-06 10:13:41.779 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec 6 05:13:41 localhost ceph-mon[298582]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #31. Immutable memtables: 0.
Dec 6 05:13:41 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:13:41.900137) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 6 05:13:41 localhost ceph-mon[298582]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 31
Dec 6 05:13:41 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016021900300, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 628, "num_deletes": 250, "total_data_size": 1101923, "memory_usage": 1118280, "flush_reason": "Manual Compaction"}
Dec 6 05:13:41 localhost ceph-mon[298582]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #32: started
Dec 6 05:13:41 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016021909654, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 32, "file_size": 653774, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20211, "largest_seqno": 20834, "table_properties": {"data_size": 651051, "index_size": 706, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7627, "raw_average_key_size": 20, "raw_value_size": 645236, "raw_average_value_size": 1743, "num_data_blocks": 31, "num_entries": 370, "num_filter_entries": 370, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765015991, "oldest_key_time": 1765015991, "file_creation_time": 1765016021, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1ec2a3a6-c7d0-4f1e-9923-5a3b782725ae", "db_session_id": "JMBO5KX1IJCJ8FWC64EX", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Dec 6 05:13:41 localhost ceph-mon[298582]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 9602 microseconds, and 4370 cpu microseconds.
Dec 6 05:13:41 localhost ceph-mon[298582]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 6 05:13:41 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:13:41.909733) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #32: 653774 bytes OK
Dec 6 05:13:41 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:13:41.909797) [db/memtable_list.cc:519] [default] Level-0 commit table #32 started
Dec 6 05:13:41 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:13:41.911634) [db/memtable_list.cc:722] [default] Level-0 commit table #32: memtable #1 done
Dec 6 05:13:41 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:13:41.911657) EVENT_LOG_v1 {"time_micros": 1765016021911650, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 6 05:13:41 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:13:41.911683) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 6 05:13:41 localhost ceph-mon[298582]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 1098429, prev total WAL file size 1099178, number of live WAL files 2.
Dec 6 05:13:41 localhost ceph-mon[298582]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548789/store.db/000028.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 6 05:13:41 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:13:41.912366) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033373534' seq:72057594037927935, type:22 .. '6D6772737461740034303035' seq:0, type:0; will stop at (end)
Dec 6 05:13:41 localhost ceph-mon[298582]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 6 05:13:41 localhost ceph-mon[298582]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [32(638KB)], [30(20MB)]
Dec 6 05:13:41 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016021912410, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [32], "files_L6": [30], "score": -1, "input_data_size": 22407626, "oldest_snapshot_seqno": -1}
Dec 6 05:13:42 localhost ceph-mon[298582]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #33: 12172 keys, 20264592 bytes, temperature: kUnknown
Dec 6 05:13:42 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016022020721, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 33, "file_size": 20264592, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 20195696, "index_size": 37371, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30469, "raw_key_size": 326684, "raw_average_key_size": 26, "raw_value_size": 19989071, "raw_average_value_size": 1642, "num_data_blocks": 1423, "num_entries": 12172, "num_filter_entries": 12172, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765015623, "oldest_key_time": 0, "file_creation_time": 1765016021, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1ec2a3a6-c7d0-4f1e-9923-5a3b782725ae", "db_session_id": "JMBO5KX1IJCJ8FWC64EX", "orig_file_number": 33, "seqno_to_time_mapping": "N/A"}}
Dec 6 05:13:42 localhost ceph-mon[298582]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 6 05:13:42 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:13:42.021291) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 20264592 bytes
Dec 6 05:13:42 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:13:42.023427) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 206.4 rd, 186.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 20.7 +0.0 blob) out(19.3 +0.0 blob), read-write-amplify(65.3) write-amplify(31.0) OK, records in: 12684, records dropped: 512 output_compression: NoCompression
Dec 6 05:13:42 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:13:42.023469) EVENT_LOG_v1 {"time_micros": 1765016022023450, "job": 16, "event": "compaction_finished", "compaction_time_micros": 108547, "compaction_time_cpu_micros": 54425, "output_level": 6, "num_output_files": 1, "total_output_size": 20264592, "num_input_records": 12684, "num_output_records": 12172, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 6 05:13:42 localhost ceph-mon[298582]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548789/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 6 05:13:42 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016022023862, "job": 16, "event": "table_file_deletion", "file_number": 32}
Dec 6 05:13:42 localhost ceph-mon[298582]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548789/store.db/000030.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 6 05:13:42 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016022027984, "job": 16, "event": "table_file_deletion", "file_number": 30}
Dec 6 05:13:42 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:13:41.912297) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 6 05:13:42 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:13:42.028077) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 6 05:13:42 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:13:42.028084) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 6 05:13:42 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:13:42.028088) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 6 05:13:42 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:13:42.028092) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 6 05:13:42 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:13:42.028095) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 6 05:13:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538.
Dec 6 05:13:42 localhost podman[310381]: 2025-12-06 10:13:42.919622617 +0000 UTC m=+0.078076107 container health_status d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible)
Dec 6 05:13:42 localhost podman[310381]: 2025-12-06 10:13:42.960086533 +0000 UTC m=+0.118540063 container exec_died d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors )
Dec 6 05:13:42 localhost systemd[1]: d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538.service: Deactivated successfully.
Dec 6 05:13:43 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e99 e99: 6 total, 6 up, 6 in
Dec 6 05:13:43 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e99 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 6 05:13:44 localhost ovn_metadata_agent[160504]: 2025-12-06 10:13:44.605 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:6c:02', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:a8:2f:0c:cb:a1'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 6 05:13:44 localhost nova_compute[282193]: 2025-12-06 10:13:44.607 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 6 05:13:44 localhost ovn_metadata_agent[160504]: 2025-12-06 10:13:44.607 160509 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec 6 05:13:44 localhost ovn_metadata_agent[160504]: 2025-12-06 10:13:44.607 160509 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b142a5ef-fbed-4e92-aa78-e3ad080c6370, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 6 05:13:46 localhost openstack_network_exporter[243110]: ERROR 10:13:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 6 05:13:46 localhost openstack_network_exporter[243110]: ERROR 10:13:46 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 6 05:13:46 localhost openstack_network_exporter[243110]: ERROR 10:13:46 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 6 05:13:46 localhost openstack_network_exporter[243110]:
Dec 6 05:13:46 localhost openstack_network_exporter[243110]: ERROR 10:13:46 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 6 05:13:46 localhost openstack_network_exporter[243110]:
Dec 6 05:13:46 localhost openstack_network_exporter[243110]: ERROR 10:13:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 6 05:13:46 localhost nova_compute[282193]: 2025-12-06 10:13:46.807 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 6 05:13:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.
Dec 6 05:13:46 localhost podman[310404]: 2025-12-06 10:13:46.916995051 +0000 UTC m=+0.080147690 container health_status 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 6 05:13:46 localhost podman[310404]: 2025-12-06 10:13:46.98330018 +0000 UTC m=+0.146452779 container exec_died 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 6 05:13:46 localhost systemd[1]: 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.service: Deactivated successfully.
Dec 6 05:13:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:13:47.304 160509 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 6 05:13:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:13:47.305 160509 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 6 05:13:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:13:47.305 160509 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 6 05:13:48 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e99 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 6 05:13:51 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e100 e100: 6 total, 6 up, 6 in
Dec 6 05:13:51 localhost nova_compute[282193]: 2025-12-06 10:13:51.807 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 6 05:13:51 localhost nova_compute[282193]: 2025-12-06 10:13:51.812 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 6 05:13:51 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e101 e101: 6 total, 6 up, 6 in
Dec 6 05:13:52 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr'
Dec 6 05:13:52 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr'
Dec 6 05:13:52 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr'
Dec 6 05:13:52 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr'
Dec 6 05:13:52 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr'
Dec 6 05:13:52 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr'
Dec 6 05:13:53 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 6 05:13:53 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr'
Dec 6 05:13:53 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e101 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 6 05:13:53 localhost podman[241090]: time="2025-12-06T10:13:53Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 6 05:13:53 localhost podman[241090]: @ - - [06/Dec/2025:10:13:53 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156104 "" "Go-http-client/1.1"
Dec 6 05:13:53 localhost podman[241090]: @ - - [06/Dec/2025:10:13:53 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19245 "" "Go-http-client/1.1"
Dec 6 05:13:56 localhost nova_compute[282193]: 2025-12-06 10:13:56.810 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 6 05:13:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.
Dec 6 05:13:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941.
Dec 6 05:13:57 localhost podman[310571]: 2025-12-06 10:13:57.942391224 +0000 UTC m=+0.092901276 container health_status 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 6 05:13:57 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr'
Dec 6 05:13:57 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e102 e102: 6 total, 6 up, 6 in
Dec 6 05:13:57 localhost podman[310571]: 2025-12-06 10:13:57.979084095 +0000 UTC m=+0.129594087 container exec_died 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 6 05:13:57 localhost systemd[1]: tmp-crun.44WKJy.mount: Deactivated successfully.
Dec 6 05:13:58 localhost podman[310572]: 2025-12-06 10:13:58.00398483 +0000 UTC m=+0.151153431 container health_status b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 6 05:13:58 localhost systemd[1]: 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.service: Deactivated successfully.
Dec 6 05:13:58 localhost podman[310572]: 2025-12-06 10:13:58.04194774 +0000 UTC m=+0.189116331 container exec_died b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi )
Dec 6 05:13:58 localhost systemd[1]: b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941.service: Deactivated successfully.
Dec 6 05:13:58 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e102 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:14:01 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:14:01.771 263652 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:14:01Z, description=, device_id=0ab66a60-f76b-4775-891d-30b21387ddeb, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=16157674-15c9-4198-b993-24e8d7e375d4, ip_allocation=immediate, mac_address=fa:16:3e:fd:9f:e6, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T08:43:55Z, description=, dns_domain=, id=8e238f59-5792-4ff4-95af-f993c8e9e14f, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=3d603431c0bb4967bafc7a0aa6108bfe, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['be7ff830-a3e2-49cd-a05f-63bc528d6d1c'], tags=[], tenant_id=3d603431c0bb4967bafc7a0aa6108bfe, updated_at=2025-12-06T08:44:01Z, vlan_transparent=None, network_id=8e238f59-5792-4ff4-95af-f993c8e9e14f, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=261, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-06T10:14:01Z on network 8e238f59-5792-4ff4-95af-f993c8e9e14f#033[00m Dec 6 05:14:01 localhost nova_compute[282193]: 2025-12-06 10:14:01.813 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog 
[-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:14:01 localhost podman[310627]: 2025-12-06 10:14:01.990387862 +0000 UTC m=+0.057287297 container kill e4e5ac05442955f65040fee10f3aa342d80518f3e4a072b3d79fe4302f513cf7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:14:01 localhost dnsmasq[263859]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 2 addresses Dec 6 05:14:01 localhost dnsmasq-dhcp[263859]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:14:01 localhost dnsmasq-dhcp[263859]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:14:02 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:14:02.255 263652 INFO neutron.agent.dhcp.agent [None req-42da3db9-5f85-4496-b637-b29e49580b80 - - - - - -] DHCP configuration for ports {'16157674-15c9-4198-b993-24e8d7e375d4'} is completed#033[00m Dec 6 05:14:02 localhost nova_compute[282193]: 2025-12-06 10:14:02.695 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:14:02 localhost sshd[310647]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:14:03 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e102 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:14:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e. Dec 6 05:14:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094. Dec 6 05:14:04 localhost systemd[1]: tmp-crun.FkoD5Z.mount: Deactivated successfully. Dec 6 05:14:04 localhost podman[310649]: 2025-12-06 10:14:04.998480821 +0000 UTC m=+0.156298436 container health_status 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, vcs-type=git, architecture=x86_64, release=1755695350, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6) Dec 6 05:14:05 localhost podman[310649]: 2025-12-06 10:14:05.012244448 +0000 UTC m=+0.170062053 container exec_died 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, vcs-type=git, version=9.6, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, distribution-scope=public, config_id=edpm, architecture=x86_64, name=ubi9-minimal) Dec 6 05:14:05 localhost systemd[1]: 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e.service: Deactivated successfully. Dec 6 05:14:05 localhost podman[310650]: 2025-12-06 10:14:04.950813557 +0000 UTC m=+0.106744926 container health_status bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes 
Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, io.buildah.version=1.41.3) Dec 6 05:14:05 localhost podman[310650]: 2025-12-06 10:14:05.082068854 +0000 UTC m=+0.238000263 container exec_died bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS 
Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:14:05 localhost systemd[1]: bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094.service: Deactivated successfully. Dec 6 05:14:05 localhost systemd[1]: tmp-crun.JD9ODh.mount: Deactivated successfully. Dec 6 05:14:06 localhost nova_compute[282193]: 2025-12-06 10:14:06.816 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:14:06 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e103 e103: 6 total, 6 up, 6 in Dec 6 05:14:07 localhost nova_compute[282193]: 2025-12-06 10:14:07.206 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:14:07 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:14:07.435 263652 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:14:07Z, description=, device_id=2f787882-add0-4a41-8d77-7a5b4b61daf9, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=4357478b-5997-4b61-92d0-dc1f719e522a, ip_allocation=immediate, mac_address=fa:16:3e:66:1c:2e, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T08:43:55Z, description=, dns_domain=, id=8e238f59-5792-4ff4-95af-f993c8e9e14f, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=3d603431c0bb4967bafc7a0aa6108bfe, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, 
status=ACTIVE, subnets=['be7ff830-a3e2-49cd-a05f-63bc528d6d1c'], tags=[], tenant_id=3d603431c0bb4967bafc7a0aa6108bfe, updated_at=2025-12-06T08:44:01Z, vlan_transparent=None, network_id=8e238f59-5792-4ff4-95af-f993c8e9e14f, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=310, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-06T10:14:07Z on network 8e238f59-5792-4ff4-95af-f993c8e9e14f#033[00m Dec 6 05:14:07 localhost sshd[310691]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:14:07 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:14:07.497 263652 INFO neutron.agent.linux.ip_lib [None req-074d74f6-5db8-4c50-9a2c-39d6f0090f5e - - - - - -] Device tap0d21f8b1-1c cannot be used as it has no MAC address#033[00m Dec 6 05:14:07 localhost nova_compute[282193]: 2025-12-06 10:14:07.516 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:14:07 localhost kernel: device tap0d21f8b1-1c entered promiscuous mode Dec 6 05:14:07 localhost NetworkManager[5973]: [1765016047.5247] manager: (tap0d21f8b1-1c): new Generic device (/org/freedesktop/NetworkManager/Devices/18) Dec 6 05:14:07 localhost ovn_controller[154851]: 2025-12-06T10:14:07Z|00063|binding|INFO|Claiming lport 0d21f8b1-1c42-49c6-b23d-1bbbeb5764dc for this chassis. Dec 6 05:14:07 localhost ovn_controller[154851]: 2025-12-06T10:14:07Z|00064|binding|INFO|0d21f8b1-1c42-49c6-b23d-1bbbeb5764dc: Claiming unknown Dec 6 05:14:07 localhost nova_compute[282193]: 2025-12-06 10:14:07.529 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:14:07 localhost systemd-udevd[310717]: Network interface NamePolicy= disabled on kernel command line. 
Dec 6 05:14:07 localhost ovn_metadata_agent[160504]: 2025-12-06 10:14:07.537 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-36939d22-422f-458f-92f5-9d57586edeca', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-36939d22-422f-458f-92f5-9d57586edeca', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8b45ed0d762747b4a27ad78d879f59e8', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=44318dec-0297-43bb-9acd-7dd1c9b801f2, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=0d21f8b1-1c42-49c6-b23d-1bbbeb5764dc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:14:07 localhost ovn_metadata_agent[160504]: 2025-12-06 10:14:07.539 160509 INFO neutron.agent.ovn.metadata.agent [-] Port 0d21f8b1-1c42-49c6-b23d-1bbbeb5764dc in datapath 36939d22-422f-458f-92f5-9d57586edeca bound to our chassis#033[00m Dec 6 05:14:07 localhost ovn_metadata_agent[160504]: 2025-12-06 10:14:07.540 160509 DEBUG neutron.agent.ovn.metadata.agent [-] Port ca98273b-9dbe-42be-bcd2-d67d252201d4 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Dec 6 05:14:07 localhost ovn_metadata_agent[160504]: 2025-12-06 10:14:07.540 160509 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 36939d22-422f-458f-92f5-9d57586edeca, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 6 05:14:07 localhost ovn_metadata_agent[160504]: 2025-12-06 10:14:07.541 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[72e7a196-c13e-4af3-b9f2-eca1136ce55a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:14:07 localhost ovn_controller[154851]: 2025-12-06T10:14:07Z|00065|binding|INFO|Setting lport 0d21f8b1-1c42-49c6-b23d-1bbbeb5764dc ovn-installed in OVS Dec 6 05:14:07 localhost ovn_controller[154851]: 2025-12-06T10:14:07Z|00066|binding|INFO|Setting lport 0d21f8b1-1c42-49c6-b23d-1bbbeb5764dc up in Southbound Dec 6 05:14:07 localhost nova_compute[282193]: 2025-12-06 10:14:07.567 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:14:07 localhost nova_compute[282193]: 2025-12-06 10:14:07.602 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:14:07 localhost nova_compute[282193]: 2025-12-06 10:14:07.625 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:14:07 localhost dnsmasq[263859]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 3 addresses Dec 6 05:14:07 localhost dnsmasq-dhcp[263859]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:14:07 localhost dnsmasq-dhcp[263859]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts 
Dec 6 05:14:07 localhost podman[310721]: 2025-12-06 10:14:07.643884771 +0000 UTC m=+0.068594260 container kill e4e5ac05442955f65040fee10f3aa342d80518f3e4a072b3d79fe4302f513cf7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:14:08 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e103 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:14:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6. Dec 6 05:14:09 localhost podman[310770]: 2025-12-06 10:14:09.923359572 +0000 UTC m=+0.083937344 container health_status 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Dec 6 05:14:09 localhost podman[310770]: 2025-12-06 10:14:09.939249244 +0000 UTC m=+0.099827006 container exec_died 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125) Dec 6 05:14:09 localhost systemd[1]: 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6.service: Deactivated successfully. Dec 6 05:14:10 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:14:10.582 263652 INFO neutron.agent.dhcp.agent [None req-ab70d153-a60d-4172-b9de-59342f3ebf6f - - - - - -] DHCP configuration for ports {'4357478b-5997-4b61-92d0-dc1f719e522a'} is completed#033[00m Dec 6 05:14:10 localhost nova_compute[282193]: 2025-12-06 10:14:10.843 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:14:11 localhost podman[310812]: Dec 6 05:14:11 localhost podman[310812]: 2025-12-06 10:14:11.009322628 +0000 UTC m=+0.091230215 container create fd947986845710586151bc7ddabe4490ad234e4fd121303154ccf3ff85955923 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-36939d22-422f-458f-92f5-9d57586edeca, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 6 
05:14:11 localhost systemd[1]: Started libpod-conmon-fd947986845710586151bc7ddabe4490ad234e4fd121303154ccf3ff85955923.scope. Dec 6 05:14:11 localhost systemd[1]: Started libcrun container. Dec 6 05:14:11 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e3d569aeb509718e295757645bec02890965246579d7d2efa87e073e25a6102/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 6 05:14:11 localhost podman[310812]: 2025-12-06 10:14:10.966682176 +0000 UTC m=+0.048589813 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 6 05:14:11 localhost podman[310812]: 2025-12-06 10:14:11.066832741 +0000 UTC m=+0.148740358 container init fd947986845710586151bc7ddabe4490ad234e4fd121303154ccf3ff85955923 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-36939d22-422f-458f-92f5-9d57586edeca, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:14:11 localhost podman[310812]: 2025-12-06 10:14:11.074992099 +0000 UTC m=+0.156899716 container start fd947986845710586151bc7ddabe4490ad234e4fd121303154ccf3ff85955923 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-36939d22-422f-458f-92f5-9d57586edeca, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:14:11 localhost dnsmasq[310830]: started, version 2.85 
cachesize 150 Dec 6 05:14:11 localhost dnsmasq[310830]: DNS service limited to local subnets Dec 6 05:14:11 localhost dnsmasq[310830]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 6 05:14:11 localhost dnsmasq[310830]: warning: no upstream servers configured Dec 6 05:14:11 localhost dnsmasq-dhcp[310830]: DHCP, static leases only on 10.100.0.0, lease time 1d Dec 6 05:14:11 localhost dnsmasq[310830]: read /var/lib/neutron/dhcp/36939d22-422f-458f-92f5-9d57586edeca/addn_hosts - 0 addresses Dec 6 05:14:11 localhost dnsmasq-dhcp[310830]: read /var/lib/neutron/dhcp/36939d22-422f-458f-92f5-9d57586edeca/host Dec 6 05:14:11 localhost dnsmasq-dhcp[310830]: read /var/lib/neutron/dhcp/36939d22-422f-458f-92f5-9d57586edeca/opts Dec 6 05:14:11 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:14:11.251 263652 INFO neutron.agent.dhcp.agent [None req-ae14176a-f701-4204-b1b3-90b50faffd79 - - - - - -] DHCP configuration for ports {'f98701d7-f054-4731-b824-f62710fa355c'} is completed#033[00m Dec 6 05:14:11 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:14:11.717 263652 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:14:11Z, description=, device_id=2f787882-add0-4a41-8d77-7a5b4b61daf9, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=38bb6272-0a5f-4360-9a7d-15229cc8d9fb, ip_allocation=immediate, mac_address=fa:16:3e:e8:15:5b, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:14:03Z, description=, dns_domain=, id=36939d22-422f-458f-92f5-9d57586edeca, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, 
name=tempest-ImagesNegativeTestJSON-1337158581-network, port_security_enabled=True, project_id=8b45ed0d762747b4a27ad78d879f59e8, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=23147, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=303, status=ACTIVE, subnets=['cce3d12e-2f8e-42a3-aa58-2dc0ad5211f6'], tags=[], tenant_id=8b45ed0d762747b4a27ad78d879f59e8, updated_at=2025-12-06T10:14:04Z, vlan_transparent=None, network_id=36939d22-422f-458f-92f5-9d57586edeca, port_security_enabled=False, project_id=8b45ed0d762747b4a27ad78d879f59e8, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=321, status=DOWN, tags=[], tenant_id=8b45ed0d762747b4a27ad78d879f59e8, updated_at=2025-12-06T10:14:11Z on network 36939d22-422f-458f-92f5-9d57586edeca#033[00m Dec 6 05:14:11 localhost nova_compute[282193]: 2025-12-06 10:14:11.838 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:14:12 localhost podman[310848]: 2025-12-06 10:14:12.016898739 +0000 UTC m=+0.067150095 container kill fd947986845710586151bc7ddabe4490ad234e4fd121303154ccf3ff85955923 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-36939d22-422f-458f-92f5-9d57586edeca, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true) Dec 6 05:14:12 localhost dnsmasq[310830]: read /var/lib/neutron/dhcp/36939d22-422f-458f-92f5-9d57586edeca/addn_hosts - 1 addresses Dec 6 05:14:12 localhost dnsmasq-dhcp[310830]: read 
/var/lib/neutron/dhcp/36939d22-422f-458f-92f5-9d57586edeca/host Dec 6 05:14:12 localhost dnsmasq-dhcp[310830]: read /var/lib/neutron/dhcp/36939d22-422f-458f-92f5-9d57586edeca/opts Dec 6 05:14:12 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:14:12.308 263652 INFO neutron.agent.dhcp.agent [None req-8e73d45a-70c8-48ae-8c05-326d53f59fd3 - - - - - -] DHCP configuration for ports {'38bb6272-0a5f-4360-9a7d-15229cc8d9fb'} is completed#033[00m Dec 6 05:14:12 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:14:12.837 263652 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:14:11Z, description=, device_id=2f787882-add0-4a41-8d77-7a5b4b61daf9, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=38bb6272-0a5f-4360-9a7d-15229cc8d9fb, ip_allocation=immediate, mac_address=fa:16:3e:e8:15:5b, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:14:03Z, description=, dns_domain=, id=36939d22-422f-458f-92f5-9d57586edeca, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ImagesNegativeTestJSON-1337158581-network, port_security_enabled=True, project_id=8b45ed0d762747b4a27ad78d879f59e8, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=23147, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=303, status=ACTIVE, subnets=['cce3d12e-2f8e-42a3-aa58-2dc0ad5211f6'], tags=[], tenant_id=8b45ed0d762747b4a27ad78d879f59e8, updated_at=2025-12-06T10:14:04Z, vlan_transparent=None, network_id=36939d22-422f-458f-92f5-9d57586edeca, port_security_enabled=False, project_id=8b45ed0d762747b4a27ad78d879f59e8, qos_network_policy_id=None, qos_policy_id=None, 
resource_request=None, revision_number=1, security_groups=[], standard_attr_id=321, status=DOWN, tags=[], tenant_id=8b45ed0d762747b4a27ad78d879f59e8, updated_at=2025-12-06T10:14:11Z on network 36939d22-422f-458f-92f5-9d57586edeca#033[00m Dec 6 05:14:13 localhost dnsmasq[310830]: read /var/lib/neutron/dhcp/36939d22-422f-458f-92f5-9d57586edeca/addn_hosts - 1 addresses Dec 6 05:14:13 localhost dnsmasq-dhcp[310830]: read /var/lib/neutron/dhcp/36939d22-422f-458f-92f5-9d57586edeca/host Dec 6 05:14:13 localhost podman[310885]: 2025-12-06 10:14:13.067996849 +0000 UTC m=+0.064735413 container kill fd947986845710586151bc7ddabe4490ad234e4fd121303154ccf3ff85955923 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-36939d22-422f-458f-92f5-9d57586edeca, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:14:13 localhost dnsmasq-dhcp[310830]: read /var/lib/neutron/dhcp/36939d22-422f-458f-92f5-9d57586edeca/opts Dec 6 05:14:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538. Dec 6 05:14:13 localhost systemd[1]: tmp-crun.IggIgx.mount: Deactivated successfully. 
Dec 6 05:14:13 localhost podman[310901]: 2025-12-06 10:14:13.192539802 +0000 UTC m=+0.096744992 container health_status d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 6 05:14:13 localhost podman[310901]: 2025-12-06 10:14:13.226294075 +0000 UTC m=+0.130499305 container exec_died d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 
'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 6 05:14:13 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e103 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:14:13 localhost systemd[1]: d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538.service: Deactivated successfully. 
Dec 6 05:14:13 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:14:13.271 263652 INFO neutron.agent.dhcp.agent [None req-6ea15905-d29b-41f1-9f31-b6463b7fd42b - - - - - -] DHCP configuration for ports {'38bb6272-0a5f-4360-9a7d-15229cc8d9fb'} is completed#033[00m Dec 6 05:14:16 localhost podman[310944]: 2025-12-06 10:14:16.113712228 +0000 UTC m=+0.061742352 container kill fd947986845710586151bc7ddabe4490ad234e4fd121303154ccf3ff85955923 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-36939d22-422f-458f-92f5-9d57586edeca, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_managed=true) Dec 6 05:14:16 localhost dnsmasq[310830]: read /var/lib/neutron/dhcp/36939d22-422f-458f-92f5-9d57586edeca/addn_hosts - 0 addresses Dec 6 05:14:16 localhost dnsmasq-dhcp[310830]: read /var/lib/neutron/dhcp/36939d22-422f-458f-92f5-9d57586edeca/host Dec 6 05:14:16 localhost dnsmasq-dhcp[310830]: read /var/lib/neutron/dhcp/36939d22-422f-458f-92f5-9d57586edeca/opts Dec 6 05:14:16 localhost nova_compute[282193]: 2025-12-06 10:14:16.270 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:14:16 localhost kernel: device tap0d21f8b1-1c left promiscuous mode Dec 6 05:14:16 localhost ovn_controller[154851]: 2025-12-06T10:14:16Z|00067|binding|INFO|Releasing lport 0d21f8b1-1c42-49c6-b23d-1bbbeb5764dc from this chassis (sb_readonly=0) Dec 6 05:14:16 localhost ovn_controller[154851]: 2025-12-06T10:14:16Z|00068|binding|INFO|Setting lport 0d21f8b1-1c42-49c6-b23d-1bbbeb5764dc down in Southbound Dec 6 05:14:16 localhost ovn_metadata_agent[160504]: 2025-12-06 10:14:16.279 160509 DEBUG 
ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-36939d22-422f-458f-92f5-9d57586edeca', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-36939d22-422f-458f-92f5-9d57586edeca', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8b45ed0d762747b4a27ad78d879f59e8', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005548789.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=44318dec-0297-43bb-9acd-7dd1c9b801f2, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=0d21f8b1-1c42-49c6-b23d-1bbbeb5764dc) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:14:16 localhost ovn_metadata_agent[160504]: 2025-12-06 10:14:16.281 160509 INFO neutron.agent.ovn.metadata.agent [-] Port 0d21f8b1-1c42-49c6-b23d-1bbbeb5764dc in datapath 36939d22-422f-458f-92f5-9d57586edeca unbound from our chassis#033[00m Dec 6 05:14:16 localhost ovn_metadata_agent[160504]: 2025-12-06 10:14:16.284 160509 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 36939d22-422f-458f-92f5-9d57586edeca, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 
6 05:14:16 localhost ovn_metadata_agent[160504]: 2025-12-06 10:14:16.285 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[0512d92d-4c69-481d-afba-85c9ce51bb8f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:14:16 localhost nova_compute[282193]: 2025-12-06 10:14:16.292 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:14:16 localhost openstack_network_exporter[243110]: ERROR 10:14:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:14:16 localhost openstack_network_exporter[243110]: ERROR 10:14:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:14:16 localhost openstack_network_exporter[243110]: ERROR 10:14:16 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 6 05:14:16 localhost openstack_network_exporter[243110]: ERROR 10:14:16 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 6 05:14:16 localhost openstack_network_exporter[243110]: Dec 6 05:14:16 localhost openstack_network_exporter[243110]: ERROR 10:14:16 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 6 05:14:16 localhost openstack_network_exporter[243110]: Dec 6 05:14:16 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:14:16.829 263652 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:14:16Z, description=, device_id=9588c462-2236-4443-8871-1214f0871ce4, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=847a9809-6760-48bb-9d9c-06fbef3d7c32, 
ip_allocation=immediate, mac_address=fa:16:3e:a8:ca:5e, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T08:43:55Z, description=, dns_domain=, id=8e238f59-5792-4ff4-95af-f993c8e9e14f, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=3d603431c0bb4967bafc7a0aa6108bfe, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['be7ff830-a3e2-49cd-a05f-63bc528d6d1c'], tags=[], tenant_id=3d603431c0bb4967bafc7a0aa6108bfe, updated_at=2025-12-06T08:44:01Z, vlan_transparent=None, network_id=8e238f59-5792-4ff4-95af-f993c8e9e14f, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=373, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-06T10:14:16Z on network 8e238f59-5792-4ff4-95af-f993c8e9e14f#033[00m Dec 6 05:14:16 localhost nova_compute[282193]: 2025-12-06 10:14:16.841 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:14:16 localhost nova_compute[282193]: 2025-12-06 10:14:16.845 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:14:17 localhost systemd[1]: tmp-crun.JCEAoE.mount: Deactivated successfully. 
Dec 6 05:14:17 localhost podman[310984]: 2025-12-06 10:14:17.065708865 +0000 UTC m=+0.061753753 container kill e4e5ac05442955f65040fee10f3aa342d80518f3e4a072b3d79fe4302f513cf7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:14:17 localhost dnsmasq[263859]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 4 addresses Dec 6 05:14:17 localhost dnsmasq-dhcp[263859]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:14:17 localhost dnsmasq-dhcp[263859]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:14:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5. 
Dec 6 05:14:17 localhost podman[310999]: 2025-12-06 10:14:17.159003351 +0000 UTC m=+0.070264680 container health_status 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Dec 6 05:14:17 localhost podman[310999]: 2025-12-06 10:14:17.190251418 +0000 UTC m=+0.101512747 container exec_died 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': 
'/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:14:17 localhost systemd[1]: 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.service: Deactivated successfully. 
Dec 6 05:14:17 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:14:17.303 263652 INFO neutron.agent.dhcp.agent [None req-c3ea56a6-a536-42f9-82c5-a0ad1ac6f4ba - - - - - -] DHCP configuration for ports {'847a9809-6760-48bb-9d9c-06fbef3d7c32'} is completed#033[00m Dec 6 05:14:17 localhost nova_compute[282193]: 2025-12-06 10:14:17.525 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:14:17 localhost dnsmasq[263859]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 3 addresses Dec 6 05:14:17 localhost dnsmasq-dhcp[263859]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:14:17 localhost dnsmasq-dhcp[263859]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:14:17 localhost podman[311047]: 2025-12-06 10:14:17.870221452 +0000 UTC m=+0.059487453 container kill e4e5ac05442955f65040fee10f3aa342d80518f3e4a072b3d79fe4302f513cf7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:14:17 localhost ovn_controller[154851]: 2025-12-06T10:14:17Z|00069|binding|INFO|Releasing lport 4fb81ffd-e198-4628-9bd0-0c0f0c89c33a from this chassis (sb_readonly=0) Dec 6 05:14:17 localhost nova_compute[282193]: 2025-12-06 10:14:17.939 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:14:18 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e103 _set_new_cache_sizes 
cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:14:18 localhost sshd[311097]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:14:18 localhost podman[311087]: 2025-12-06 10:14:18.665313725 +0000 UTC m=+0.064688931 container kill fd947986845710586151bc7ddabe4490ad234e4fd121303154ccf3ff85955923 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-36939d22-422f-458f-92f5-9d57586edeca, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true) Dec 6 05:14:18 localhost systemd[1]: tmp-crun.50wNEk.mount: Deactivated successfully. Dec 6 05:14:18 localhost dnsmasq[310830]: exiting on receipt of SIGTERM Dec 6 05:14:18 localhost systemd[1]: libpod-fd947986845710586151bc7ddabe4490ad234e4fd121303154ccf3ff85955923.scope: Deactivated successfully. Dec 6 05:14:18 localhost sshd[311116]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:14:18 localhost podman[311104]: 2025-12-06 10:14:18.743290448 +0000 UTC m=+0.058328879 container died fd947986845710586151bc7ddabe4490ad234e4fd121303154ccf3ff85955923 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-36939d22-422f-458f-92f5-9d57586edeca, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Dec 6 05:14:18 localhost systemd[1]: tmp-crun.Lr6h01.mount: Deactivated successfully. 
Dec 6 05:14:18 localhost podman[311104]: 2025-12-06 10:14:18.78726927 +0000 UTC m=+0.102307661 container cleanup fd947986845710586151bc7ddabe4490ad234e4fd121303154ccf3ff85955923 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-36939d22-422f-458f-92f5-9d57586edeca, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:14:18 localhost systemd[1]: libpod-conmon-fd947986845710586151bc7ddabe4490ad234e4fd121303154ccf3ff85955923.scope: Deactivated successfully. Dec 6 05:14:18 localhost podman[311105]: 2025-12-06 10:14:18.828441737 +0000 UTC m=+0.141193709 container remove fd947986845710586151bc7ddabe4490ad234e4fd121303154ccf3ff85955923 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-36939d22-422f-458f-92f5-9d57586edeca, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Dec 6 05:14:19 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:14:19.069 263652 INFO neutron.agent.dhcp.agent [None req-00688aeb-9ecb-47a3-bc4a-d83a6ef58ca8 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:14:19 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:14:19.070 263652 INFO neutron.agent.dhcp.agent [None req-00688aeb-9ecb-47a3-bc4a-d83a6ef58ca8 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:14:19 localhost 
neutron_dhcp_agent[263648]: 2025-12-06 10:14:19.338 263652 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:14:19 localhost systemd[1]: var-lib-containers-storage-overlay-9e3d569aeb509718e295757645bec02890965246579d7d2efa87e073e25a6102-merged.mount: Deactivated successfully. Dec 6 05:14:19 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fd947986845710586151bc7ddabe4490ad234e4fd121303154ccf3ff85955923-userdata-shm.mount: Deactivated successfully. Dec 6 05:14:19 localhost systemd[1]: run-netns-qdhcp\x2d36939d22\x2d422f\x2d458f\x2d92f5\x2d9d57586edeca.mount: Deactivated successfully. Dec 6 05:14:21 localhost nova_compute[282193]: 2025-12-06 10:14:21.843 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:14:21 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:14:21.858 263652 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:14:21Z, description=, device_id=0e54fb37-e53e-4ada-9f5a-9b02f9c2b583, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=e351be0c-02ee-47aa-b870-fb989dd95d2f, ip_allocation=immediate, mac_address=fa:16:3e:49:b2:2b, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T08:43:55Z, description=, dns_domain=, id=8e238f59-5792-4ff4-95af-f993c8e9e14f, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=3d603431c0bb4967bafc7a0aa6108bfe, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, 
revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['be7ff830-a3e2-49cd-a05f-63bc528d6d1c'], tags=[], tenant_id=3d603431c0bb4967bafc7a0aa6108bfe, updated_at=2025-12-06T08:44:01Z, vlan_transparent=None, network_id=8e238f59-5792-4ff4-95af-f993c8e9e14f, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=421, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-06T10:14:21Z on network 8e238f59-5792-4ff4-95af-f993c8e9e14f#033[00m Dec 6 05:14:22 localhost dnsmasq[263859]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 4 addresses Dec 6 05:14:22 localhost dnsmasq-dhcp[263859]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:14:22 localhost dnsmasq-dhcp[263859]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:14:22 localhost podman[311151]: 2025-12-06 10:14:22.08814476 +0000 UTC m=+0.067576129 container kill e4e5ac05442955f65040fee10f3aa342d80518f3e4a072b3d79fe4302f513cf7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:14:22 localhost nova_compute[282193]: 2025-12-06 10:14:22.232 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:14:22 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:14:22.288 263652 INFO neutron.agent.dhcp.agent [None req-51e62f00-f029-41fd-9040-e8d3ceb145b6 - - - - - 
-] DHCP configuration for ports {'e351be0c-02ee-47aa-b870-fb989dd95d2f'} is completed#033[00m Dec 6 05:14:22 localhost nova_compute[282193]: 2025-12-06 10:14:22.604 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:14:23 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e103 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:14:23 localhost podman[241090]: time="2025-12-06T10:14:23Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 6 05:14:23 localhost podman[241090]: @ - - [06/Dec/2025:10:14:23 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156104 "" "Go-http-client/1.1" Dec 6 05:14:24 localhost podman[241090]: @ - - [06/Dec/2025:10:14:23 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19248 "" "Go-http-client/1.1" Dec 6 05:14:24 localhost neutron_sriov_agent[256690]: 2025-12-06 10:14:24.050 2 INFO neutron.agent.securitygroups_rpc [None req-713c535f-db70-452f-a97f-68d844244da8 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] Security group member updated ['bfad329a-0ea3-4b02-8e91-9d15749f8c9b']#033[00m Dec 6 05:14:26 localhost sshd[311171]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:14:26 localhost nova_compute[282193]: 2025-12-06 10:14:26.347 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:14:26 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:14:26.816 263652 INFO neutron.agent.linux.ip_lib [None req-c9b7ffbf-b077-45de-930d-9e646f528dda - - - - - -] Device tap674505ce-f8 cannot be used as it has no MAC address#033[00m Dec 6 05:14:26 localhost nova_compute[282193]: 2025-12-06 10:14:26.839 
282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:14:26 localhost nova_compute[282193]: 2025-12-06 10:14:26.847 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:14:26 localhost kernel: device tap674505ce-f8 entered promiscuous mode Dec 6 05:14:26 localhost NetworkManager[5973]: [1765016066.8507] manager: (tap674505ce-f8): new Generic device (/org/freedesktop/NetworkManager/Devices/19) Dec 6 05:14:26 localhost ovn_controller[154851]: 2025-12-06T10:14:26Z|00070|binding|INFO|Claiming lport 674505ce-f881-462b-a185-46ca8116f551 for this chassis. Dec 6 05:14:26 localhost ovn_controller[154851]: 2025-12-06T10:14:26Z|00071|binding|INFO|674505ce-f881-462b-a185-46ca8116f551: Claiming unknown Dec 6 05:14:26 localhost nova_compute[282193]: 2025-12-06 10:14:26.851 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:14:26 localhost systemd-udevd[311183]: Network interface NamePolicy= disabled on kernel command line. 
Dec 6 05:14:26 localhost ovn_metadata_agent[160504]: 2025-12-06 10:14:26.865 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-2716abb4-8339-437b-9952-fd22a3d3f838', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2716abb4-8339-437b-9952-fd22a3d3f838', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '550f4c3bf626406eac0d7f6f917d607c', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5a194418-47d8-46fb-95d0-765c18cf4dc9, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=674505ce-f881-462b-a185-46ca8116f551) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:14:26 localhost ovn_metadata_agent[160504]: 2025-12-06 10:14:26.866 160509 INFO neutron.agent.ovn.metadata.agent [-] Port 674505ce-f881-462b-a185-46ca8116f551 in datapath 2716abb4-8339-437b-9952-fd22a3d3f838 bound to our chassis#033[00m Dec 6 05:14:26 localhost ovn_metadata_agent[160504]: 2025-12-06 10:14:26.867 160509 DEBUG neutron.agent.ovn.metadata.agent [-] Port 4ce4da82-c04f-4bd0-8ba1-43ca2cb8db51 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Dec 6 05:14:26 localhost ovn_metadata_agent[160504]: 2025-12-06 10:14:26.868 160509 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2716abb4-8339-437b-9952-fd22a3d3f838, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 6 05:14:26 localhost ovn_metadata_agent[160504]: 2025-12-06 10:14:26.869 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[8bdb85ad-96a1-4e13-8ee7-183910db6d79]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:14:26 localhost journal[230404]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, ) Dec 6 05:14:26 localhost journal[230404]: hostname: np0005548789.localdomain Dec 6 05:14:26 localhost journal[230404]: ethtool ioctl error on tap674505ce-f8: No such device Dec 6 05:14:26 localhost journal[230404]: ethtool ioctl error on tap674505ce-f8: No such device Dec 6 05:14:26 localhost ovn_controller[154851]: 2025-12-06T10:14:26Z|00072|binding|INFO|Setting lport 674505ce-f881-462b-a185-46ca8116f551 ovn-installed in OVS Dec 6 05:14:26 localhost ovn_controller[154851]: 2025-12-06T10:14:26Z|00073|binding|INFO|Setting lport 674505ce-f881-462b-a185-46ca8116f551 up in Southbound Dec 6 05:14:26 localhost nova_compute[282193]: 2025-12-06 10:14:26.900 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:14:26 localhost journal[230404]: ethtool ioctl error on tap674505ce-f8: No such device Dec 6 05:14:26 localhost journal[230404]: ethtool ioctl error on tap674505ce-f8: No such device Dec 6 05:14:26 localhost journal[230404]: ethtool ioctl error on tap674505ce-f8: No such device Dec 6 05:14:26 localhost journal[230404]: ethtool ioctl error on tap674505ce-f8: No such device Dec 6 
05:14:26 localhost journal[230404]: ethtool ioctl error on tap674505ce-f8: No such device Dec 6 05:14:26 localhost journal[230404]: ethtool ioctl error on tap674505ce-f8: No such device Dec 6 05:14:26 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:14:26.945 263652 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:14:26Z, description=, device_id=a657ed9a-4b90-4e6b-92ce-73d23dc898ea, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=0aec7b47-5e43-49da-9f07-e6bebe4c2675, ip_allocation=immediate, mac_address=fa:16:3e:3f:86:8a, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T08:43:55Z, description=, dns_domain=, id=8e238f59-5792-4ff4-95af-f993c8e9e14f, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=3d603431c0bb4967bafc7a0aa6108bfe, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['be7ff830-a3e2-49cd-a05f-63bc528d6d1c'], tags=[], tenant_id=3d603431c0bb4967bafc7a0aa6108bfe, updated_at=2025-12-06T08:44:01Z, vlan_transparent=None, network_id=8e238f59-5792-4ff4-95af-f993c8e9e14f, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=459, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-06T10:14:26Z on network 8e238f59-5792-4ff4-95af-f993c8e9e14f#033[00m Dec 6 05:14:26 localhost nova_compute[282193]: 2025-12-06 10:14:26.997 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog 
[-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:14:27 localhost nova_compute[282193]: 2025-12-06 10:14:27.001 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:14:27 localhost dnsmasq[263859]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 5 addresses Dec 6 05:14:27 localhost dnsmasq-dhcp[263859]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:14:27 localhost dnsmasq-dhcp[263859]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:14:27 localhost podman[311231]: 2025-12-06 10:14:27.136582215 +0000 UTC m=+0.037020403 container kill e4e5ac05442955f65040fee10f3aa342d80518f3e4a072b3d79fe4302f513cf7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS) Dec 6 05:14:27 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:14:27.467 263652 INFO neutron.agent.dhcp.agent [None req-20cdf521-3e81-455f-b7fd-a3ad692b4482 - - - - - -] DHCP configuration for ports {'0aec7b47-5e43-49da-9f07-e6bebe4c2675'} is completed#033[00m Dec 6 05:14:27 localhost podman[311293]: Dec 6 05:14:27 localhost podman[311293]: 2025-12-06 10:14:27.886519139 +0000 UTC m=+0.082240564 container create 70d88d91b5e20c247f4a6f4c63d79ee38d3e85e687937e8f7f521dc46eaf340f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2716abb4-8339-437b-9952-fd22a3d3f838, 
org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true) Dec 6 05:14:27 localhost systemd[1]: Started libpod-conmon-70d88d91b5e20c247f4a6f4c63d79ee38d3e85e687937e8f7f521dc46eaf340f.scope. Dec 6 05:14:27 localhost podman[311293]: 2025-12-06 10:14:27.838927557 +0000 UTC m=+0.034649042 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 6 05:14:27 localhost systemd[1]: tmp-crun.eRyIB8.mount: Deactivated successfully. Dec 6 05:14:27 localhost systemd[1]: Started libcrun container. Dec 6 05:14:27 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b70efbcf1a478b69865e43308846dd732728820bb0a4447d04b09e0d0cf1219/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 6 05:14:27 localhost podman[311293]: 2025-12-06 10:14:27.970671159 +0000 UTC m=+0.166392634 container init 70d88d91b5e20c247f4a6f4c63d79ee38d3e85e687937e8f7f521dc46eaf340f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2716abb4-8339-437b-9952-fd22a3d3f838, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Dec 6 05:14:27 localhost podman[311293]: 2025-12-06 10:14:27.982855368 +0000 UTC m=+0.178576843 container start 70d88d91b5e20c247f4a6f4c63d79ee38d3e85e687937e8f7f521dc46eaf340f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, 
name=neutron-dnsmasq-qdhcp-2716abb4-8339-437b-9952-fd22a3d3f838, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Dec 6 05:14:27 localhost dnsmasq[311312]: started, version 2.85 cachesize 150 Dec 6 05:14:27 localhost dnsmasq[311312]: DNS service limited to local subnets Dec 6 05:14:27 localhost dnsmasq[311312]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 6 05:14:27 localhost dnsmasq[311312]: warning: no upstream servers configured Dec 6 05:14:27 localhost dnsmasq-dhcp[311312]: DHCP, static leases only on 10.100.0.0, lease time 1d Dec 6 05:14:27 localhost dnsmasq[311312]: read /var/lib/neutron/dhcp/2716abb4-8339-437b-9952-fd22a3d3f838/addn_hosts - 0 addresses Dec 6 05:14:27 localhost dnsmasq-dhcp[311312]: read /var/lib/neutron/dhcp/2716abb4-8339-437b-9952-fd22a3d3f838/host Dec 6 05:14:27 localhost dnsmasq-dhcp[311312]: read /var/lib/neutron/dhcp/2716abb4-8339-437b-9952-fd22a3d3f838/opts Dec 6 05:14:28 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:14:28.161 263652 INFO neutron.agent.dhcp.agent [None req-830a4aa6-d8ce-4027-9178-de848068f18c - - - - - -] DHCP configuration for ports {'5314d21d-0307-4cbc-ac4b-f181460f47a3'} is completed#033[00m Dec 6 05:14:28 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e103 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:14:28 localhost nova_compute[282193]: 2025-12-06 10:14:28.658 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:14:28 localhost systemd[1]: 
Started /usr/bin/podman healthcheck run 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999. Dec 6 05:14:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941. Dec 6 05:14:28 localhost systemd[1]: tmp-crun.5uIFN8.mount: Deactivated successfully. Dec 6 05:14:28 localhost podman[311313]: 2025-12-06 10:14:28.944635291 +0000 UTC m=+0.101777164 container health_status 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent) Dec 6 05:14:28 localhost podman[311314]: 2025-12-06 10:14:28.982598922 +0000 UTC m=+0.135859828 container health_status b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Dec 6 05:14:29 localhost podman[311313]: 2025-12-06 10:14:29.003812705 +0000 UTC m=+0.160954568 container exec_died 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, 
maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true) Dec 6 05:14:29 localhost systemd[1]: 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.service: Deactivated successfully. 
Dec 6 05:14:29 localhost podman[311314]: 2025-12-06 10:14:29.017411376 +0000 UTC m=+0.170672282 container exec_died b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Dec 6 05:14:29 localhost systemd[1]: b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941.service: Deactivated successfully. 
Dec 6 05:14:29 localhost neutron_sriov_agent[256690]: 2025-12-06 10:14:29.199 2 INFO neutron.agent.securitygroups_rpc [None req-42741e53-1189-4d3e-a617-18fc0438f9c5 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] Security group member updated ['bfad329a-0ea3-4b02-8e91-9d15749f8c9b']#033[00m Dec 6 05:14:29 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:14:29.584 263652 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:14:29Z, description=, device_id=a657ed9a-4b90-4e6b-92ce-73d23dc898ea, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=1de05677-105c-406f-9cd7-f01d2cbdcdcd, ip_allocation=immediate, mac_address=fa:16:3e:c4:04:30, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:14:23Z, description=, dns_domain=, id=2716abb4-8339-437b-9952-fd22a3d3f838, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-FloatingIPDetailsNegativeTestJSON-1521297530-network, port_security_enabled=True, project_id=550f4c3bf626406eac0d7f6f917d607c, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=35801, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=433, status=ACTIVE, subnets=['bacb9850-7f83-4210-a14f-8a65cd67ed70'], tags=[], tenant_id=550f4c3bf626406eac0d7f6f917d607c, updated_at=2025-12-06T10:14:24Z, vlan_transparent=None, network_id=2716abb4-8339-437b-9952-fd22a3d3f838, port_security_enabled=False, project_id=550f4c3bf626406eac0d7f6f917d607c, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=484, status=DOWN, tags=[], 
tenant_id=550f4c3bf626406eac0d7f6f917d607c, updated_at=2025-12-06T10:14:29Z on network 2716abb4-8339-437b-9952-fd22a3d3f838#033[00m Dec 6 05:14:29 localhost dnsmasq[311312]: read /var/lib/neutron/dhcp/2716abb4-8339-437b-9952-fd22a3d3f838/addn_hosts - 1 addresses Dec 6 05:14:29 localhost podman[311372]: 2025-12-06 10:14:29.799920217 +0000 UTC m=+0.058490533 container kill 70d88d91b5e20c247f4a6f4c63d79ee38d3e85e687937e8f7f521dc46eaf340f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2716abb4-8339-437b-9952-fd22a3d3f838, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:14:29 localhost dnsmasq-dhcp[311312]: read /var/lib/neutron/dhcp/2716abb4-8339-437b-9952-fd22a3d3f838/host Dec 6 05:14:29 localhost dnsmasq-dhcp[311312]: read /var/lib/neutron/dhcp/2716abb4-8339-437b-9952-fd22a3d3f838/opts Dec 6 05:14:30 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:14:30.017 263652 INFO neutron.agent.dhcp.agent [None req-f30b7f6b-2a16-4fc5-83a3-d9b073c2d4c5 - - - - - -] DHCP configuration for ports {'1de05677-105c-406f-9cd7-f01d2cbdcdcd'} is completed#033[00m Dec 6 05:14:30 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:14:30.211 263652 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:14:29Z, description=, device_id=fcb0956b-3e0b-42ed-82cc-dda3a3b5cf85, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=6e17a10f-dbbc-42b2-aeeb-b43e917b0e3c, 
ip_allocation=immediate, mac_address=fa:16:3e:80:e4:79, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T08:43:55Z, description=, dns_domain=, id=8e238f59-5792-4ff4-95af-f993c8e9e14f, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=3d603431c0bb4967bafc7a0aa6108bfe, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['be7ff830-a3e2-49cd-a05f-63bc528d6d1c'], tags=[], tenant_id=3d603431c0bb4967bafc7a0aa6108bfe, updated_at=2025-12-06T08:44:01Z, vlan_transparent=None, network_id=8e238f59-5792-4ff4-95af-f993c8e9e14f, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=485, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-06T10:14:29Z on network 8e238f59-5792-4ff4-95af-f993c8e9e14f#033[00m Dec 6 05:14:30 localhost dnsmasq[263859]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 6 addresses Dec 6 05:14:30 localhost dnsmasq-dhcp[263859]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:14:30 localhost dnsmasq-dhcp[263859]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:14:30 localhost podman[311409]: 2025-12-06 10:14:30.432386481 +0000 UTC m=+0.060859624 container kill e4e5ac05442955f65040fee10f3aa342d80518f3e4a072b3d79fe4302f513cf7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, 
maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Dec 6 05:14:30 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:14:30.561 263652 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:14:29Z, description=, device_id=a657ed9a-4b90-4e6b-92ce-73d23dc898ea, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=1de05677-105c-406f-9cd7-f01d2cbdcdcd, ip_allocation=immediate, mac_address=fa:16:3e:c4:04:30, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:14:23Z, description=, dns_domain=, id=2716abb4-8339-437b-9952-fd22a3d3f838, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-FloatingIPDetailsNegativeTestJSON-1521297530-network, port_security_enabled=True, project_id=550f4c3bf626406eac0d7f6f917d607c, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=35801, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=433, status=ACTIVE, subnets=['bacb9850-7f83-4210-a14f-8a65cd67ed70'], tags=[], tenant_id=550f4c3bf626406eac0d7f6f917d607c, updated_at=2025-12-06T10:14:24Z, vlan_transparent=None, network_id=2716abb4-8339-437b-9952-fd22a3d3f838, port_security_enabled=False, project_id=550f4c3bf626406eac0d7f6f917d607c, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=484, status=DOWN, tags=[], tenant_id=550f4c3bf626406eac0d7f6f917d607c, updated_at=2025-12-06T10:14:29Z on network 2716abb4-8339-437b-9952-fd22a3d3f838#033[00m Dec 6 05:14:30 localhost 
neutron_dhcp_agent[263648]: 2025-12-06 10:14:30.673 263652 INFO neutron.agent.dhcp.agent [None req-7c139c2c-6361-457b-82f3-9a8e206f2299 - - - - - -] DHCP configuration for ports {'6e17a10f-dbbc-42b2-aeeb-b43e917b0e3c'} is completed#033[00m Dec 6 05:14:30 localhost dnsmasq[311312]: read /var/lib/neutron/dhcp/2716abb4-8339-437b-9952-fd22a3d3f838/addn_hosts - 1 addresses Dec 6 05:14:30 localhost dnsmasq-dhcp[311312]: read /var/lib/neutron/dhcp/2716abb4-8339-437b-9952-fd22a3d3f838/host Dec 6 05:14:30 localhost dnsmasq-dhcp[311312]: read /var/lib/neutron/dhcp/2716abb4-8339-437b-9952-fd22a3d3f838/opts Dec 6 05:14:30 localhost podman[311448]: 2025-12-06 10:14:30.776912681 +0000 UTC m=+0.064177565 container kill 70d88d91b5e20c247f4a6f4c63d79ee38d3e85e687937e8f7f521dc46eaf340f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2716abb4-8339-437b-9952-fd22a3d3f838, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Dec 6 05:14:31 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:14:31.018 263652 INFO neutron.agent.dhcp.agent [None req-0640b2e6-57f5-4931-980d-f8db1e7d2171 - - - - - -] DHCP configuration for ports {'1de05677-105c-406f-9cd7-f01d2cbdcdcd'} is completed#033[00m Dec 6 05:14:31 localhost nova_compute[282193]: 2025-12-06 10:14:31.199 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:14:31 localhost nova_compute[282193]: 2025-12-06 10:14:31.226 282197 DEBUG oslo_concurrency.lockutils [None 
req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 05:14:31 localhost nova_compute[282193]: 2025-12-06 10:14:31.227 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 05:14:31 localhost nova_compute[282193]: 2025-12-06 10:14:31.227 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 05:14:31 localhost nova_compute[282193]: 2025-12-06 10:14:31.228 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Auditing locally available compute resources for np0005548789.localdomain (node: np0005548789.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 6 05:14:31 localhost nova_compute[282193]: 2025-12-06 10:14:31.228 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 6 05:14:31 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 6 05:14:31 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.107:0/3766673990' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 6 05:14:31 localhost nova_compute[282193]: 2025-12-06 10:14:31.716 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 6 05:14:31 localhost nova_compute[282193]: 2025-12-06 10:14:31.800 282197 DEBUG nova.virt.libvirt.driver [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 6 05:14:31 localhost nova_compute[282193]: 2025-12-06 10:14:31.801 282197 DEBUG nova.virt.libvirt.driver [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 6 05:14:31 localhost nova_compute[282193]: 2025-12-06 10:14:31.848 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:14:31 localhost nova_compute[282193]: 2025-12-06 10:14:31.883 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:14:31 localhost nova_compute[282193]: 2025-12-06 10:14:31.895 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:14:32 localhost nova_compute[282193]: 2025-12-06 10:14:32.053 282197 WARNING nova.virt.libvirt.driver [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 6 05:14:32 localhost nova_compute[282193]: 2025-12-06 10:14:32.055 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Hypervisor/Node resource view: name=np0005548789.localdomain free_ram=11389MB free_disk=41.77429962158203GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": 
"1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 6 05:14:32 localhost nova_compute[282193]: 2025-12-06 10:14:32.056 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 05:14:32 localhost nova_compute[282193]: 2025-12-06 10:14:32.056 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 05:14:32 localhost nova_compute[282193]: 2025-12-06 10:14:32.155 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Instance b7ed0a2e-9350-4933-9334-4e5e08d3e6aa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 6 05:14:32 localhost nova_compute[282193]: 2025-12-06 10:14:32.155 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 6 05:14:32 localhost nova_compute[282193]: 2025-12-06 10:14:32.155 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Final resource view: name=np0005548789.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 6 05:14:32 localhost nova_compute[282193]: 2025-12-06 10:14:32.258 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 6 05:14:32 localhost systemd[1]: tmp-crun.so2RQB.mount: Deactivated successfully. 
Dec 6 05:14:32 localhost podman[311519]: 2025-12-06 10:14:32.46357968 +0000 UTC m=+0.063089973 container kill e4e5ac05442955f65040fee10f3aa342d80518f3e4a072b3d79fe4302f513cf7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:14:32 localhost dnsmasq[263859]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 5 addresses Dec 6 05:14:32 localhost dnsmasq-dhcp[263859]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:14:32 localhost dnsmasq-dhcp[263859]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:14:32 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 6 05:14:32 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.107:0/4228215768' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 6 05:14:32 localhost nova_compute[282193]: 2025-12-06 10:14:32.725 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 6 05:14:32 localhost nova_compute[282193]: 2025-12-06 10:14:32.733 282197 DEBUG nova.compute.provider_tree [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Inventory has not changed in ProviderTree for provider: 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 6 05:14:32 localhost ovn_controller[154851]: 2025-12-06T10:14:32Z|00074|binding|INFO|Releasing lport 4fb81ffd-e198-4628-9bd0-0c0f0c89c33a from this chassis (sb_readonly=0) Dec 6 05:14:32 localhost nova_compute[282193]: 2025-12-06 10:14:32.748 282197 DEBUG nova.scheduler.client.report [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Inventory has not changed for provider 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 6 05:14:32 localhost nova_compute[282193]: 2025-12-06 10:14:32.768 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Compute_service record updated for np0005548789.localdomain:np0005548789.localdomain _update_available_resource 
/usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 6 05:14:32 localhost nova_compute[282193]: 2025-12-06 10:14:32.769 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.712s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 05:14:32 localhost nova_compute[282193]: 2025-12-06 10:14:32.778 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:14:33 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e103 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:14:33 localhost dnsmasq[311312]: read /var/lib/neutron/dhcp/2716abb4-8339-437b-9952-fd22a3d3f838/addn_hosts - 0 addresses Dec 6 05:14:33 localhost dnsmasq-dhcp[311312]: read /var/lib/neutron/dhcp/2716abb4-8339-437b-9952-fd22a3d3f838/host Dec 6 05:14:33 localhost podman[311569]: 2025-12-06 10:14:33.245189504 +0000 UTC m=+0.048388748 container kill 70d88d91b5e20c247f4a6f4c63d79ee38d3e85e687937e8f7f521dc46eaf340f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2716abb4-8339-437b-9952-fd22a3d3f838, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true) Dec 6 05:14:33 localhost dnsmasq-dhcp[311312]: read /var/lib/neutron/dhcp/2716abb4-8339-437b-9952-fd22a3d3f838/opts Dec 6 05:14:33 localhost nova_compute[282193]: 2025-12-06 10:14:33.473 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 
[POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:14:33 localhost kernel: device tap674505ce-f8 left promiscuous mode Dec 6 05:14:33 localhost ovn_controller[154851]: 2025-12-06T10:14:33Z|00075|binding|INFO|Releasing lport 674505ce-f881-462b-a185-46ca8116f551 from this chassis (sb_readonly=0) Dec 6 05:14:33 localhost ovn_controller[154851]: 2025-12-06T10:14:33Z|00076|binding|INFO|Setting lport 674505ce-f881-462b-a185-46ca8116f551 down in Southbound Dec 6 05:14:33 localhost ovn_metadata_agent[160504]: 2025-12-06 10:14:33.491 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-2716abb4-8339-437b-9952-fd22a3d3f838', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2716abb4-8339-437b-9952-fd22a3d3f838', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '550f4c3bf626406eac0d7f6f917d607c', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5a194418-47d8-46fb-95d0-765c18cf4dc9, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=674505ce-f881-462b-a185-46ca8116f551) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:14:33 localhost nova_compute[282193]: 
2025-12-06 10:14:33.493 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:14:33 localhost ovn_metadata_agent[160504]: 2025-12-06 10:14:33.495 160509 INFO neutron.agent.ovn.metadata.agent [-] Port 674505ce-f881-462b-a185-46ca8116f551 in datapath 2716abb4-8339-437b-9952-fd22a3d3f838 unbound from our chassis#033[00m Dec 6 05:14:33 localhost nova_compute[282193]: 2025-12-06 10:14:33.495 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:14:33 localhost ovn_metadata_agent[160504]: 2025-12-06 10:14:33.498 160509 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2716abb4-8339-437b-9952-fd22a3d3f838, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 6 05:14:33 localhost ovn_metadata_agent[160504]: 2025-12-06 10:14:33.499 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[022ea27f-9813-4b10-b73d-72166a23f636]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:14:34 localhost nova_compute[282193]: 2025-12-06 10:14:34.751 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:14:34 localhost nova_compute[282193]: 2025-12-06 10:14:34.752 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:14:34 localhost nova_compute[282193]: 2025-12-06 10:14:34.752 282197 DEBUG nova.compute.manager [None 
req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 6 05:14:34 localhost nova_compute[282193]: 2025-12-06 10:14:34.752 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 6 05:14:34 localhost ovn_controller[154851]: 2025-12-06T10:14:34Z|00077|binding|INFO|Releasing lport 4fb81ffd-e198-4628-9bd0-0c0f0c89c33a from this chassis (sb_readonly=0) Dec 6 05:14:35 localhost nova_compute[282193]: 2025-12-06 10:14:35.035 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:14:35 localhost dnsmasq[263859]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 4 addresses Dec 6 05:14:35 localhost dnsmasq-dhcp[263859]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:14:35 localhost dnsmasq-dhcp[263859]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:14:35 localhost podman[311607]: 2025-12-06 10:14:35.06582072 +0000 UTC m=+0.061174334 container kill e4e5ac05442955f65040fee10f3aa342d80518f3e4a072b3d79fe4302f513cf7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2) Dec 6 05:14:35 localhost nova_compute[282193]: 2025-12-06 10:14:35.071 282197 DEBUG 
oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 6 05:14:35 localhost nova_compute[282193]: 2025-12-06 10:14:35.072 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquired lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 6 05:14:35 localhost nova_compute[282193]: 2025-12-06 10:14:35.072 282197 DEBUG nova.network.neutron [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 6 05:14:35 localhost nova_compute[282193]: 2025-12-06 10:14:35.072 282197 DEBUG nova.objects.instance [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b7ed0a2e-9350-4933-9334-4e5e08d3e6aa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 6 05:14:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e. Dec 6 05:14:35 localhost systemd[1]: tmp-crun.RYWESP.mount: Deactivated successfully. 
Dec 6 05:14:35 localhost podman[311620]: 2025-12-06 10:14:35.183030011 +0000 UTC m=+0.093692719 container health_status 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, name=ubi9-minimal, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, vendor=Red Hat, Inc., distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.openshift.expose-services=, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, release=1755695350) Dec 6 05:14:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094. 
Dec 6 05:14:35 localhost podman[311620]: 2025-12-06 10:14:35.219843017 +0000 UTC m=+0.130505675 container exec_died 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, build-date=2025-08-20T13:12:41, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, name=ubi9-minimal, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., config_id=edpm, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., vcs-type=git, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.) Dec 6 05:14:35 localhost systemd[1]: 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e.service: Deactivated successfully. Dec 6 05:14:35 localhost podman[311647]: 2025-12-06 10:14:35.268518382 +0000 UTC m=+0.066186237 container health_status bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', 
'/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Dec 6 05:14:35 localhost podman[311647]: 2025-12-06 10:14:35.276803413 +0000 UTC m=+0.074471298 container exec_died bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:14:35 localhost systemd[1]: bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094.service: Deactivated successfully. Dec 6 05:14:35 localhost dnsmasq[311312]: exiting on receipt of SIGTERM Dec 6 05:14:35 localhost podman[311681]: 2025-12-06 10:14:35.389997323 +0000 UTC m=+0.039824998 container kill 70d88d91b5e20c247f4a6f4c63d79ee38d3e85e687937e8f7f521dc46eaf340f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2716abb4-8339-437b-9952-fd22a3d3f838, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2) Dec 6 05:14:35 localhost systemd[1]: libpod-70d88d91b5e20c247f4a6f4c63d79ee38d3e85e687937e8f7f521dc46eaf340f.scope: Deactivated successfully. 
Dec 6 05:14:35 localhost podman[311696]: 2025-12-06 10:14:35.450430564 +0000 UTC m=+0.043418077 container died 70d88d91b5e20c247f4a6f4c63d79ee38d3e85e687937e8f7f521dc46eaf340f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2716abb4-8339-437b-9952-fd22a3d3f838, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Dec 6 05:14:35 localhost podman[311696]: 2025-12-06 10:14:35.494656575 +0000 UTC m=+0.087644028 container remove 70d88d91b5e20c247f4a6f4c63d79ee38d3e85e687937e8f7f521dc46eaf340f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2716abb4-8339-437b-9952-fd22a3d3f838, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true) Dec 6 05:14:35 localhost systemd[1]: libpod-conmon-70d88d91b5e20c247f4a6f4c63d79ee38d3e85e687937e8f7f521dc46eaf340f.scope: Deactivated successfully. 
Dec 6 05:14:35 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:14:35.523 263652 INFO neutron.agent.dhcp.agent [None req-8060e107-0d2d-468d-9810-0d8609c8862f - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:14:35 localhost nova_compute[282193]: 2025-12-06 10:14:35.742 282197 DEBUG nova.network.neutron [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Updating instance_info_cache with network_info: [{"id": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "address": "fa:16:3e:64:77:f3", "network": {"id": "652b6bdc-40ce-45b7-8aa5-3bca79987993", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.162", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "3d603431c0bb4967bafc7a0aa6108bfe", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86fc0b7a-fb", "ovs_interfaceid": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 6 05:14:35 localhost nova_compute[282193]: 2025-12-06 10:14:35.758 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Releasing lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 6 05:14:35 localhost nova_compute[282193]: 2025-12-06 10:14:35.759 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 6 05:14:35 localhost nova_compute[282193]: 2025-12-06 10:14:35.759 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:14:35 localhost nova_compute[282193]: 2025-12-06 10:14:35.760 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:14:35 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:14:35.842 263652 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:14:36 localhost systemd[1]: tmp-crun.oCAuY6.mount: Deactivated successfully. Dec 6 05:14:36 localhost systemd[1]: var-lib-containers-storage-overlay-2b70efbcf1a478b69865e43308846dd732728820bb0a4447d04b09e0d0cf1219-merged.mount: Deactivated successfully. Dec 6 05:14:36 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-70d88d91b5e20c247f4a6f4c63d79ee38d3e85e687937e8f7f521dc46eaf340f-userdata-shm.mount: Deactivated successfully. Dec 6 05:14:36 localhost systemd[1]: run-netns-qdhcp\x2d2716abb4\x2d8339\x2d437b\x2d9952\x2dfd22a3d3f838.mount: Deactivated successfully. 
Dec 6 05:14:36 localhost nova_compute[282193]: 2025-12-06 10:14:36.182 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:14:36 localhost nova_compute[282193]: 2025-12-06 10:14:36.183 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:14:36 localhost nova_compute[282193]: 2025-12-06 10:14:36.851 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:14:36 localhost nova_compute[282193]: 2025-12-06 10:14:36.898 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:14:37 localhost nova_compute[282193]: 2025-12-06 10:14:37.180 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:14:37 localhost nova_compute[282193]: 2025-12-06 10:14:37.181 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:14:37 localhost nova_compute[282193]: 2025-12-06 10:14:37.182 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 6 05:14:38 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e103 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:14:38 localhost nova_compute[282193]: 2025-12-06 10:14:38.317 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:14:38 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Dec 6 05:14:38 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1657712167' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Dec 6 05:14:38 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e104 e104: 6 total, 6 up, 6 in Dec 6 05:14:38 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Dec 6 05:14:38 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1657712167' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Dec 6 05:14:38 localhost ceph-mon[298582]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #34. Immutable memtables: 0. 
Dec 6 05:14:38 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:14:38.991459) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Dec 6 05:14:38 localhost ceph-mon[298582]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 34 Dec 6 05:14:38 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016078991527, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 1060, "num_deletes": 252, "total_data_size": 1315745, "memory_usage": 1341648, "flush_reason": "Manual Compaction"} Dec 6 05:14:38 localhost ceph-mon[298582]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #35: started Dec 6 05:14:39 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016079000091, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 35, "file_size": 849236, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20839, "largest_seqno": 21894, "table_properties": {"data_size": 844735, "index_size": 2164, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 10723, "raw_average_key_size": 20, "raw_value_size": 835388, "raw_average_value_size": 1634, "num_data_blocks": 90, "num_entries": 511, "num_filter_entries": 511, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; 
max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765016021, "oldest_key_time": 1765016021, "file_creation_time": 1765016078, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1ec2a3a6-c7d0-4f1e-9923-5a3b782725ae", "db_session_id": "JMBO5KX1IJCJ8FWC64EX", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}} Dec 6 05:14:39 localhost ceph-mon[298582]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 8674 microseconds, and 3579 cpu microseconds. Dec 6 05:14:39 localhost ceph-mon[298582]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Dec 6 05:14:39 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:14:39.000135) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #35: 849236 bytes OK Dec 6 05:14:39 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:14:39.000155) [db/memtable_list.cc:519] [default] Level-0 commit table #35 started Dec 6 05:14:39 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:14:39.003382) [db/memtable_list.cc:722] [default] Level-0 commit table #35: memtable #1 done Dec 6 05:14:39 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:14:39.003404) EVENT_LOG_v1 {"time_micros": 1765016079003397, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Dec 6 05:14:39 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:14:39.003424) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Dec 6 05:14:39 localhost ceph-mon[298582]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 1310509, prev total WAL file size 
1310509, number of live WAL files 2. Dec 6 05:14:39 localhost ceph-mon[298582]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548789/store.db/000031.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 6 05:14:39 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:14:39.004077) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003131353436' seq:72057594037927935, type:22 .. '7061786F73003131373938' seq:0, type:0; will stop at (end) Dec 6 05:14:39 localhost ceph-mon[298582]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00 Dec 6 05:14:39 localhost ceph-mon[298582]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [35(829KB)], [33(19MB)] Dec 6 05:14:39 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016079004158, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [35], "files_L6": [33], "score": -1, "input_data_size": 21113828, "oldest_snapshot_seqno": -1} Dec 6 05:14:39 localhost ceph-mon[298582]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #36: 12153 keys, 19283165 bytes, temperature: kUnknown Dec 6 05:14:39 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016079115264, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 36, "file_size": 19283165, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 19215590, "index_size": 36114, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30405, "raw_key_size": 326825, "raw_average_key_size": 26, "raw_value_size": 19010254, 
"raw_average_value_size": 1564, "num_data_blocks": 1367, "num_entries": 12153, "num_filter_entries": 12153, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765015623, "oldest_key_time": 0, "file_creation_time": 1765016079, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1ec2a3a6-c7d0-4f1e-9923-5a3b782725ae", "db_session_id": "JMBO5KX1IJCJ8FWC64EX", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}} Dec 6 05:14:39 localhost ceph-mon[298582]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Dec 6 05:14:39 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:14:39.115690) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 19283165 bytes Dec 6 05:14:39 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:14:39.117635) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 189.7 rd, 173.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 19.3 +0.0 blob) out(18.4 +0.0 blob), read-write-amplify(47.6) write-amplify(22.7) OK, records in: 12683, records dropped: 530 output_compression: NoCompression Dec 6 05:14:39 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:14:39.117676) EVENT_LOG_v1 {"time_micros": 1765016079117657, "job": 18, "event": "compaction_finished", "compaction_time_micros": 111287, "compaction_time_cpu_micros": 53740, "output_level": 6, "num_output_files": 1, "total_output_size": 19283165, "num_input_records": 12683, "num_output_records": 12153, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Dec 6 05:14:39 localhost ceph-mon[298582]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548789/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 6 05:14:39 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016079118007, "job": 18, "event": "table_file_deletion", "file_number": 35} Dec 6 05:14:39 localhost ceph-mon[298582]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548789/store.db/000033.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 6 05:14:39 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016079121004, "job": 
18, "event": "table_file_deletion", "file_number": 33} Dec 6 05:14:39 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:14:39.003931) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:14:39 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:14:39.121130) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:14:39 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:14:39.121138) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:14:39 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:14:39.121142) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:14:39 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:14:39.121144) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:14:39 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:14:39.121147) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:14:39 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:14:39.542 263652 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:14:39Z, description=, device_id=2e797ff2-91d9-444a-be81-92fadd15ae3f, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=623f4d15-d3e9-4201-b94a-f57e73649098, ip_allocation=immediate, mac_address=fa:16:3e:c1:a3:09, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T08:43:55Z, description=, dns_domain=, id=8e238f59-5792-4ff4-95af-f993c8e9e14f, ipv4_address_scope=None, 
ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=3d603431c0bb4967bafc7a0aa6108bfe, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['be7ff830-a3e2-49cd-a05f-63bc528d6d1c'], tags=[], tenant_id=3d603431c0bb4967bafc7a0aa6108bfe, updated_at=2025-12-06T08:44:01Z, vlan_transparent=None, network_id=8e238f59-5792-4ff4-95af-f993c8e9e14f, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=525, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-06T10:14:39Z on network 8e238f59-5792-4ff4-95af-f993c8e9e14f#033[00m Dec 6 05:14:39 localhost dnsmasq[263859]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 5 addresses Dec 6 05:14:39 localhost dnsmasq-dhcp[263859]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:14:39 localhost podman[311737]: 2025-12-06 10:14:39.798099574 +0000 UTC m=+0.078743357 container kill e4e5ac05442955f65040fee10f3aa342d80518f3e4a072b3d79fe4302f513cf7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125) Dec 6 05:14:39 localhost dnsmasq-dhcp[263859]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:14:40 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:14:40.009 263652 INFO 
neutron.agent.dhcp.agent [None req-cd843266-ec6a-4bb4-90b4-2e35f1e940c5 - - - - - -] DHCP configuration for ports {'623f4d15-d3e9-4201-b94a-f57e73649098'} is completed#033[00m Dec 6 05:14:40 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:14:40.011 263652 INFO neutron.agent.linux.ip_lib [None req-70ea0e9b-44da-4f7e-8e7f-a82adf4090f9 - - - - - -] Device tap9f077348-ed cannot be used as it has no MAC address#033[00m Dec 6 05:14:40 localhost nova_compute[282193]: 2025-12-06 10:14:40.039 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:14:40 localhost kernel: device tap9f077348-ed entered promiscuous mode Dec 6 05:14:40 localhost NetworkManager[5973]: [1765016080.0511] manager: (tap9f077348-ed): new Generic device (/org/freedesktop/NetworkManager/Devices/20) Dec 6 05:14:40 localhost ovn_controller[154851]: 2025-12-06T10:14:40Z|00078|binding|INFO|Claiming lport 9f077348-ed05-4cf4-8524-593431fbafaf for this chassis. Dec 6 05:14:40 localhost ovn_controller[154851]: 2025-12-06T10:14:40Z|00079|binding|INFO|9f077348-ed05-4cf4-8524-593431fbafaf: Claiming unknown Dec 6 05:14:40 localhost nova_compute[282193]: 2025-12-06 10:14:40.052 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:14:40 localhost systemd-udevd[311766]: Network interface NamePolicy= disabled on kernel command line. 
Dec 6 05:14:40 localhost ovn_metadata_agent[160504]: 2025-12-06 10:14:40.071 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-df3c5fcc-9cd4-4d33-9970-a165c712aad3', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-df3c5fcc-9cd4-4d33-9970-a165c712aad3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '024b6fbc052c4ed7a93c855bd2ae77da', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dd535f07-7612-46c6-87c9-bf69c15a9a5d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=9f077348-ed05-4cf4-8524-593431fbafaf) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:14:40 localhost ovn_metadata_agent[160504]: 2025-12-06 10:14:40.073 160509 INFO neutron.agent.ovn.metadata.agent [-] Port 9f077348-ed05-4cf4-8524-593431fbafaf in datapath df3c5fcc-9cd4-4d33-9970-a165c712aad3 bound to our chassis#033[00m Dec 6 05:14:40 localhost ovn_metadata_agent[160504]: 2025-12-06 10:14:40.075 160509 DEBUG neutron.agent.ovn.metadata.agent [-] Port 64f8fc93-0da5-442a-a910-eb65f721f2a4 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Dec 6 05:14:40 localhost ovn_metadata_agent[160504]: 2025-12-06 10:14:40.076 160509 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network df3c5fcc-9cd4-4d33-9970-a165c712aad3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 6 05:14:40 localhost ovn_metadata_agent[160504]: 2025-12-06 10:14:40.076 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[eab9cd6f-02d4-4261-a5c7-16dcfaeec1a3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:14:40 localhost nova_compute[282193]: 2025-12-06 10:14:40.087 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:14:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6. 
Dec 6 05:14:40 localhost ovn_controller[154851]: 2025-12-06T10:14:40Z|00080|binding|INFO|Setting lport 9f077348-ed05-4cf4-8524-593431fbafaf ovn-installed in OVS Dec 6 05:14:40 localhost ovn_controller[154851]: 2025-12-06T10:14:40Z|00081|binding|INFO|Setting lport 9f077348-ed05-4cf4-8524-593431fbafaf up in Southbound Dec 6 05:14:40 localhost nova_compute[282193]: 2025-12-06 10:14:40.094 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:14:40 localhost nova_compute[282193]: 2025-12-06 10:14:40.099 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:14:40 localhost nova_compute[282193]: 2025-12-06 10:14:40.122 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:14:40 localhost nova_compute[282193]: 2025-12-06 10:14:40.149 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:14:40 localhost podman[311770]: 2025-12-06 10:14:40.202511647 +0000 UTC m=+0.100227498 container health_status 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251125) Dec 6 05:14:40 localhost nova_compute[282193]: 2025-12-06 10:14:40.228 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:14:40 localhost podman[311770]: 2025-12-06 10:14:40.243238092 +0000 UTC m=+0.140953953 container exec_died 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3) Dec 6 05:14:40 localhost systemd[1]: 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6.service: Deactivated successfully. Dec 6 05:14:40 localhost systemd[1]: tmp-crun.d3IUwJ.mount: Deactivated successfully. 
Dec 6 05:14:40 localhost podman[311841]: Dec 6 05:14:40 localhost podman[311841]: 2025-12-06 10:14:40.996069833 +0000 UTC m=+0.088188413 container create dbf1d79cf37c9a95da8c3fbfd17ed1db394008f8d27c162fbd5d4fc5de50fe81 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-df3c5fcc-9cd4-4d33-9970-a165c712aad3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2) Dec 6 05:14:41 localhost systemd[1]: Started libpod-conmon-dbf1d79cf37c9a95da8c3fbfd17ed1db394008f8d27c162fbd5d4fc5de50fe81.scope. Dec 6 05:14:41 localhost podman[311841]: 2025-12-06 10:14:40.953604036 +0000 UTC m=+0.045722636 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 6 05:14:41 localhost systemd[1]: Started libcrun container. 
Dec 6 05:14:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a826d515f5bdfea048cffebccb4edfc28363d9139b831b1071c42234067ec609/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 6 05:14:41 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e105 e105: 6 total, 6 up, 6 in Dec 6 05:14:41 localhost podman[311841]: 2025-12-06 10:14:41.076918093 +0000 UTC m=+0.169036663 container init dbf1d79cf37c9a95da8c3fbfd17ed1db394008f8d27c162fbd5d4fc5de50fe81 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-df3c5fcc-9cd4-4d33-9970-a165c712aad3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125) Dec 6 05:14:41 localhost podman[311841]: 2025-12-06 10:14:41.089743222 +0000 UTC m=+0.181861792 container start dbf1d79cf37c9a95da8c3fbfd17ed1db394008f8d27c162fbd5d4fc5de50fe81 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-df3c5fcc-9cd4-4d33-9970-a165c712aad3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:14:41 localhost dnsmasq[311859]: started, version 2.85 cachesize 150 Dec 6 05:14:41 localhost dnsmasq[311859]: DNS service limited to local subnets Dec 6 05:14:41 localhost dnsmasq[311859]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect 
inotify dumpfile Dec 6 05:14:41 localhost dnsmasq[311859]: warning: no upstream servers configured Dec 6 05:14:41 localhost dnsmasq-dhcp[311859]: DHCP, static leases only on 10.100.0.0, lease time 1d Dec 6 05:14:41 localhost dnsmasq[311859]: read /var/lib/neutron/dhcp/df3c5fcc-9cd4-4d33-9970-a165c712aad3/addn_hosts - 0 addresses Dec 6 05:14:41 localhost dnsmasq-dhcp[311859]: read /var/lib/neutron/dhcp/df3c5fcc-9cd4-4d33-9970-a165c712aad3/host Dec 6 05:14:41 localhost dnsmasq-dhcp[311859]: read /var/lib/neutron/dhcp/df3c5fcc-9cd4-4d33-9970-a165c712aad3/opts Dec 6 05:14:41 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:14:41.161 263652 INFO neutron.agent.dhcp.agent [None req-5a4831f2-83b0-42c5-9287-fea2f04a31d9 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:14:40Z, description=, device_id=2e797ff2-91d9-444a-be81-92fadd15ae3f, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=a17f9d15-299c-474e-9834-9e63c98a6a26, ip_allocation=immediate, mac_address=fa:16:3e:44:4d:23, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:14:37Z, description=, dns_domain=, id=df3c5fcc-9cd4-4d33-9970-a165c712aad3, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-SecurityGroupRulesNegativeTestJSON-1583596222-network, port_security_enabled=True, project_id=024b6fbc052c4ed7a93c855bd2ae77da, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=43497, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=518, status=ACTIVE, subnets=['eaa29c6a-37af-4221-a05f-34273ec978f2'], tags=[], tenant_id=024b6fbc052c4ed7a93c855bd2ae77da, updated_at=2025-12-06T10:14:38Z, 
vlan_transparent=None, network_id=df3c5fcc-9cd4-4d33-9970-a165c712aad3, port_security_enabled=False, project_id=024b6fbc052c4ed7a93c855bd2ae77da, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=544, status=DOWN, tags=[], tenant_id=024b6fbc052c4ed7a93c855bd2ae77da, updated_at=2025-12-06T10:14:40Z on network df3c5fcc-9cd4-4d33-9970-a165c712aad3#033[00m Dec 6 05:14:41 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:14:41.260 263652 INFO neutron.agent.dhcp.agent [None req-6610962e-6428-4ab0-b76b-53db7804607c - - - - - -] DHCP configuration for ports {'a6b83e42-613c-482c-9542-5aae284b7256'} is completed#033[00m Dec 6 05:14:41 localhost podman[311878]: 2025-12-06 10:14:41.380359008 +0000 UTC m=+0.071738085 container kill dbf1d79cf37c9a95da8c3fbfd17ed1db394008f8d27c162fbd5d4fc5de50fe81 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-df3c5fcc-9cd4-4d33-9970-a165c712aad3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:14:41 localhost dnsmasq[311859]: read /var/lib/neutron/dhcp/df3c5fcc-9cd4-4d33-9970-a165c712aad3/addn_hosts - 1 addresses Dec 6 05:14:41 localhost dnsmasq-dhcp[311859]: read /var/lib/neutron/dhcp/df3c5fcc-9cd4-4d33-9970-a165c712aad3/host Dec 6 05:14:41 localhost dnsmasq-dhcp[311859]: read /var/lib/neutron/dhcp/df3c5fcc-9cd4-4d33-9970-a165c712aad3/opts Dec 6 05:14:41 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:14:41.720 263652 INFO neutron.agent.dhcp.agent [None req-935df81a-a49c-473d-a6f3-b996a76f6fea - - - - - -] DHCP configuration for ports {'a17f9d15-299c-474e-9834-9e63c98a6a26'} is completed#033[00m Dec 6 05:14:41 
localhost nova_compute[282193]: 2025-12-06 10:14:41.854 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:14:41 localhost nova_compute[282193]: 2025-12-06 10:14:41.901 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:14:42 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e106 e106: 6 total, 6 up, 6 in Dec 6 05:14:42 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:14:42.177 263652 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:14:40Z, description=, device_id=2e797ff2-91d9-444a-be81-92fadd15ae3f, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=a17f9d15-299c-474e-9834-9e63c98a6a26, ip_allocation=immediate, mac_address=fa:16:3e:44:4d:23, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:14:37Z, description=, dns_domain=, id=df3c5fcc-9cd4-4d33-9970-a165c712aad3, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-SecurityGroupRulesNegativeTestJSON-1583596222-network, port_security_enabled=True, project_id=024b6fbc052c4ed7a93c855bd2ae77da, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=43497, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=518, status=ACTIVE, subnets=['eaa29c6a-37af-4221-a05f-34273ec978f2'], tags=[], tenant_id=024b6fbc052c4ed7a93c855bd2ae77da, updated_at=2025-12-06T10:14:38Z, vlan_transparent=None, network_id=df3c5fcc-9cd4-4d33-9970-a165c712aad3, port_security_enabled=False, 
project_id=024b6fbc052c4ed7a93c855bd2ae77da, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=544, status=DOWN, tags=[], tenant_id=024b6fbc052c4ed7a93c855bd2ae77da, updated_at=2025-12-06T10:14:40Z on network df3c5fcc-9cd4-4d33-9970-a165c712aad3#033[00m Dec 6 05:14:42 localhost dnsmasq[311859]: read /var/lib/neutron/dhcp/df3c5fcc-9cd4-4d33-9970-a165c712aad3/addn_hosts - 1 addresses Dec 6 05:14:42 localhost dnsmasq-dhcp[311859]: read /var/lib/neutron/dhcp/df3c5fcc-9cd4-4d33-9970-a165c712aad3/host Dec 6 05:14:42 localhost dnsmasq-dhcp[311859]: read /var/lib/neutron/dhcp/df3c5fcc-9cd4-4d33-9970-a165c712aad3/opts Dec 6 05:14:42 localhost systemd[1]: tmp-crun.DWkabP.mount: Deactivated successfully. Dec 6 05:14:42 localhost podman[311917]: 2025-12-06 10:14:42.410901694 +0000 UTC m=+0.066933180 container kill dbf1d79cf37c9a95da8c3fbfd17ed1db394008f8d27c162fbd5d4fc5de50fe81 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-df3c5fcc-9cd4-4d33-9970-a165c712aad3, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true) Dec 6 05:14:42 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:14:42.701 263652 INFO neutron.agent.dhcp.agent [None req-9048a336-b13b-4d30-822e-ae84c0da904d - - - - - -] DHCP configuration for ports {'a17f9d15-299c-474e-9834-9e63c98a6a26'} is completed#033[00m Dec 6 05:14:43 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:14:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538. Dec 6 05:14:43 localhost podman[311937]: 2025-12-06 10:14:43.900719077 +0000 UTC m=+0.065523296 container health_status d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Dec 6 05:14:43 localhost podman[311937]: 2025-12-06 10:14:43.913128563 +0000 UTC m=+0.077932862 container exec_died d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 6 05:14:43 localhost systemd[1]: d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538.service: Deactivated successfully. 
Dec 6 05:14:44 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:14:44.344 263652 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:14:42Z, description=, device_id=8ac18363-2c8c-4254-a57a-690b1714b140, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=3f202222-16a8-4488-bcc9-0691af80a9ba, ip_allocation=immediate, mac_address=fa:16:3e:6f:70:a0, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T08:43:55Z, description=, dns_domain=, id=8e238f59-5792-4ff4-95af-f993c8e9e14f, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=3d603431c0bb4967bafc7a0aa6108bfe, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['be7ff830-a3e2-49cd-a05f-63bc528d6d1c'], tags=[], tenant_id=3d603431c0bb4967bafc7a0aa6108bfe, updated_at=2025-12-06T08:44:01Z, vlan_transparent=None, network_id=8e238f59-5792-4ff4-95af-f993c8e9e14f, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=554, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-06T10:14:43Z on network 8e238f59-5792-4ff4-95af-f993c8e9e14f#033[00m Dec 6 05:14:44 localhost systemd[1]: tmp-crun.Gw48Jw.mount: Deactivated successfully. 
Dec 6 05:14:44 localhost dnsmasq[263859]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 6 addresses Dec 6 05:14:44 localhost dnsmasq-dhcp[263859]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:14:44 localhost podman[311975]: 2025-12-06 10:14:44.591257761 +0000 UTC m=+0.072313882 container kill e4e5ac05442955f65040fee10f3aa342d80518f3e4a072b3d79fe4302f513cf7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true) Dec 6 05:14:44 localhost dnsmasq-dhcp[263859]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:14:44 localhost ovn_metadata_agent[160504]: 2025-12-06 10:14:44.821 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:6c:02', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:a8:2f:0c:cb:a1'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:14:44 localhost ovn_metadata_agent[160504]: 2025-12-06 10:14:44.823 160509 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Dec 6 05:14:44 
localhost nova_compute[282193]: 2025-12-06 10:14:44.855 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:14:44 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:14:44.906 263652 INFO neutron.agent.dhcp.agent [None req-583292b8-a6c5-4bc8-bb21-4ee822382ed1 - - - - - -] DHCP configuration for ports {'3f202222-16a8-4488-bcc9-0691af80a9ba'} is completed#033[00m Dec 6 05:14:45 localhost nova_compute[282193]: 2025-12-06 10:14:45.177 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:14:46 localhost neutron_sriov_agent[256690]: 2025-12-06 10:14:46.015 2 INFO neutron.agent.securitygroups_rpc [req-842dc70c-4c90-4d04-97b8-ca0a150f47f3 req-694d7e2d-322f-485d-ac12-0a632bb0d8f8 3a50fae64027482ba5b10005ed97189e 024b6fbc052c4ed7a93c855bd2ae77da - - default default] Security group rule updated ['e6cef3ed-f2f1-4e9f-8bb7-b8303074aa1b']#033[00m Dec 6 05:14:46 localhost nova_compute[282193]: 2025-12-06 10:14:46.368 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:14:46 localhost openstack_network_exporter[243110]: ERROR 10:14:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:14:46 localhost openstack_network_exporter[243110]: ERROR 10:14:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:14:46 localhost openstack_network_exporter[243110]: ERROR 10:14:46 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 6 05:14:46 localhost openstack_network_exporter[243110]: ERROR 10:14:46 appctl.go:174: 
call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 6 05:14:46 localhost openstack_network_exporter[243110]: Dec 6 05:14:46 localhost openstack_network_exporter[243110]: ERROR 10:14:46 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 6 05:14:46 localhost openstack_network_exporter[243110]: Dec 6 05:14:46 localhost neutron_sriov_agent[256690]: 2025-12-06 10:14:46.815 2 INFO neutron.agent.securitygroups_rpc [req-3aac6f95-4738-40dc-9407-49685a717c88 req-a8e1311a-c6c3-4f2f-8fef-2b7b3e5084e1 3a50fae64027482ba5b10005ed97189e 024b6fbc052c4ed7a93c855bd2ae77da - - default default] Security group rule updated ['e6cef3ed-f2f1-4e9f-8bb7-b8303074aa1b']#033[00m Dec 6 05:14:46 localhost nova_compute[282193]: 2025-12-06 10:14:46.856 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:14:46 localhost nova_compute[282193]: 2025-12-06 10:14:46.904 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:14:46 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e107 e107: 6 total, 6 up, 6 in Dec 6 05:14:47 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:14:47.134 263652 INFO neutron.agent.linux.ip_lib [None req-e15adfb8-c949-47d1-b9c5-a07dd781f185 - - - - - -] Device tap2f6c7dc0-af cannot be used as it has no MAC address#033[00m Dec 6 05:14:47 localhost nova_compute[282193]: 2025-12-06 10:14:47.166 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:14:47 localhost kernel: device tap2f6c7dc0-af entered promiscuous mode Dec 6 05:14:47 localhost NetworkManager[5973]: [1765016087.1738] manager: (tap2f6c7dc0-af): new Generic device (/org/freedesktop/NetworkManager/Devices/21) Dec 6 05:14:47 localhost ovn_controller[154851]: 
2025-12-06T10:14:47Z|00082|binding|INFO|Claiming lport 2f6c7dc0-af46-4cc2-99f3-f46a11be455c for this chassis. Dec 6 05:14:47 localhost ovn_controller[154851]: 2025-12-06T10:14:47Z|00083|binding|INFO|2f6c7dc0-af46-4cc2-99f3-f46a11be455c: Claiming unknown Dec 6 05:14:47 localhost nova_compute[282193]: 2025-12-06 10:14:47.177 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:14:47 localhost systemd-udevd[312007]: Network interface NamePolicy= disabled on kernel command line. Dec 6 05:14:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:14:47.186 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-feb354e1-97d5-4c74-804a-eeb06e5bb155', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-feb354e1-97d5-4c74-804a-eeb06e5bb155', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4185da56d12649bc8653dd9db208c0a0', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fd335efc-b05b-4aaa-a30a-c891a594ccf4, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=2f6c7dc0-af46-4cc2-99f3-f46a11be455c) old=Port_Binding(chassis=[]) matches 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:14:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:14:47.188 160509 INFO neutron.agent.ovn.metadata.agent [-] Port 2f6c7dc0-af46-4cc2-99f3-f46a11be455c in datapath feb354e1-97d5-4c74-804a-eeb06e5bb155 bound to our chassis#033[00m Dec 6 05:14:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:14:47.190 160509 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network feb354e1-97d5-4c74-804a-eeb06e5bb155 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 6 05:14:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:14:47.191 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[8a48b358-7bf7-4270-8250-2c4856cd7d34]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:14:47 localhost journal[230404]: ethtool ioctl error on tap2f6c7dc0-af: No such device Dec 6 05:14:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5. 
Dec 6 05:14:47 localhost journal[230404]: ethtool ioctl error on tap2f6c7dc0-af: No such device Dec 6 05:14:47 localhost ovn_controller[154851]: 2025-12-06T10:14:47Z|00084|binding|INFO|Setting lport 2f6c7dc0-af46-4cc2-99f3-f46a11be455c ovn-installed in OVS Dec 6 05:14:47 localhost ovn_controller[154851]: 2025-12-06T10:14:47Z|00085|binding|INFO|Setting lport 2f6c7dc0-af46-4cc2-99f3-f46a11be455c up in Southbound Dec 6 05:14:47 localhost nova_compute[282193]: 2025-12-06 10:14:47.217 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:14:47 localhost journal[230404]: ethtool ioctl error on tap2f6c7dc0-af: No such device Dec 6 05:14:47 localhost journal[230404]: ethtool ioctl error on tap2f6c7dc0-af: No such device Dec 6 05:14:47 localhost journal[230404]: ethtool ioctl error on tap2f6c7dc0-af: No such device Dec 6 05:14:47 localhost journal[230404]: ethtool ioctl error on tap2f6c7dc0-af: No such device Dec 6 05:14:47 localhost journal[230404]: ethtool ioctl error on tap2f6c7dc0-af: No such device Dec 6 05:14:47 localhost journal[230404]: ethtool ioctl error on tap2f6c7dc0-af: No such device Dec 6 05:14:47 localhost nova_compute[282193]: 2025-12-06 10:14:47.252 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:14:47 localhost nova_compute[282193]: 2025-12-06 10:14:47.279 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:14:47 localhost systemd[1]: tmp-crun.ppU5LU.mount: Deactivated successfully. 
Dec 6 05:14:47 localhost podman[312015]: 2025-12-06 10:14:47.297809774 +0000 UTC m=+0.070792436 container health_status 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Dec 6 05:14:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:14:47.304 160509 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 05:14:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:14:47.305 160509 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by 
"neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 05:14:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:14:47.305 160509 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 05:14:47 localhost podman[312015]: 2025-12-06 10:14:47.35216194 +0000 UTC m=+0.125144632 container exec_died 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller) Dec 6 05:14:47 localhost systemd[1]: 
0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.service: Deactivated successfully. Dec 6 05:14:48 localhost podman[312105]: Dec 6 05:14:48 localhost podman[312105]: 2025-12-06 10:14:48.209181859 +0000 UTC m=+0.092996849 container create bf1840969a38cdc03ff2f446425ad666bbcdfe5ae29c7d80e81b374fc1a9dff9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-feb354e1-97d5-4c74-804a-eeb06e5bb155, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125) Dec 6 05:14:48 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:14:48 localhost systemd[1]: Started libpod-conmon-bf1840969a38cdc03ff2f446425ad666bbcdfe5ae29c7d80e81b374fc1a9dff9.scope. Dec 6 05:14:48 localhost podman[312105]: 2025-12-06 10:14:48.165806525 +0000 UTC m=+0.049621545 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 6 05:14:48 localhost systemd[1]: tmp-crun.Ks6Hug.mount: Deactivated successfully. Dec 6 05:14:48 localhost systemd[1]: Started libcrun container. 
Dec 6 05:14:48 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17d5803262906bbfc10a2359d065da806a9d9644144fa240df1cddd30ea542d6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 6 05:14:48 localhost podman[312105]: 2025-12-06 10:14:48.295890096 +0000 UTC m=+0.179705096 container init bf1840969a38cdc03ff2f446425ad666bbcdfe5ae29c7d80e81b374fc1a9dff9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-feb354e1-97d5-4c74-804a-eeb06e5bb155, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Dec 6 05:14:48 localhost podman[312105]: 2025-12-06 10:14:48.302790876 +0000 UTC m=+0.186605866 container start bf1840969a38cdc03ff2f446425ad666bbcdfe5ae29c7d80e81b374fc1a9dff9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-feb354e1-97d5-4c74-804a-eeb06e5bb155, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true) Dec 6 05:14:48 localhost dnsmasq[312123]: started, version 2.85 cachesize 150 Dec 6 05:14:48 localhost dnsmasq[312123]: DNS service limited to local subnets Dec 6 05:14:48 localhost dnsmasq[312123]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 6 05:14:48 localhost dnsmasq[312123]: warning: no upstream servers configured Dec 
6 05:14:48 localhost dnsmasq-dhcp[312123]: DHCP, static leases only on 10.100.0.0, lease time 1d Dec 6 05:14:48 localhost dnsmasq[312123]: read /var/lib/neutron/dhcp/feb354e1-97d5-4c74-804a-eeb06e5bb155/addn_hosts - 0 addresses Dec 6 05:14:48 localhost dnsmasq-dhcp[312123]: read /var/lib/neutron/dhcp/feb354e1-97d5-4c74-804a-eeb06e5bb155/host Dec 6 05:14:48 localhost dnsmasq-dhcp[312123]: read /var/lib/neutron/dhcp/feb354e1-97d5-4c74-804a-eeb06e5bb155/opts Dec 6 05:14:48 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:14:48.564 263652 INFO neutron.agent.dhcp.agent [None req-98aa3ae0-29fa-4189-8228-5041e80f18dc - - - - - -] DHCP configuration for ports {'fe6507e2-e590-4d81-bf58-28ec08e1216a'} is completed#033[00m Dec 6 05:14:48 localhost nova_compute[282193]: 2025-12-06 10:14:48.654 282197 DEBUG nova.virt.libvirt.driver [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Creating tmpfile /var/lib/nova/instances/tmpe77a5ohg to notify to other compute nodes that they should mount the same storage. 
_create_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10041#033[00m Dec 6 05:14:48 localhost nova_compute[282193]: 2025-12-06 10:14:48.690 282197 DEBUG nova.compute.manager [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] destination check data is LibvirtLiveMigrateData(bdms=,block_migration=,disk_available_mb=12288,disk_over_commit=,dst_numa_info=,dst_supports_numa_live_migration=,dst_wants_file_backed_memory=False,file_backed_memory_discard=,filename='tmpe77a5ohg',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path=,is_shared_block_storage=,is_shared_instance_path=,is_volume_backed=,migration=,old_vol_attachment_ids=,serial_listen_addr=None,serial_listen_ports=,src_supports_native_luks=,src_supports_numa_live_migration=,supported_perf_events=,target_connect_addr=,vifs=[VIFMigrateData],wait_for_vif_plugged=) check_can_live_migrate_destination /usr/lib/python3.9/site-packages/nova/compute/manager.py:8476#033[00m Dec 6 05:14:48 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:14:48.699 263652 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:14:48Z, description=, device_id=7e399e99-e656-47a1-9fb2-96abab06c114, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=09b837f1-40ed-4eeb-8b33-2fe63cdb818e, ip_allocation=immediate, mac_address=fa:16:3e:28:6c:da, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T08:43:55Z, description=, dns_domain=, id=8e238f59-5792-4ff4-95af-f993c8e9e14f, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, 
mtu=1350, name=public, port_security_enabled=True, project_id=3d603431c0bb4967bafc7a0aa6108bfe, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['be7ff830-a3e2-49cd-a05f-63bc528d6d1c'], tags=[], tenant_id=3d603431c0bb4967bafc7a0aa6108bfe, updated_at=2025-12-06T08:44:01Z, vlan_transparent=None, network_id=8e238f59-5792-4ff4-95af-f993c8e9e14f, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=605, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-06T10:14:48Z on network 8e238f59-5792-4ff4-95af-f993c8e9e14f#033[00m Dec 6 05:14:48 localhost nova_compute[282193]: 2025-12-06 10:14:48.713 282197 DEBUG oslo_concurrency.lockutils [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] Acquiring lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 6 05:14:48 localhost nova_compute[282193]: 2025-12-06 10:14:48.714 282197 DEBUG oslo_concurrency.lockutils [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] Acquired lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 6 05:14:48 localhost nova_compute[282193]: 2025-12-06 10:14:48.723 282197 INFO nova.compute.rpcapi [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] Automatically selected compute RPC version 6.2 from minimum service version 66#033[00m Dec 6 05:14:48 localhost nova_compute[282193]: 2025-12-06 10:14:48.723 282197 DEBUG oslo_concurrency.lockutils [None 
req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] Releasing lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 6 05:14:48 localhost podman[312140]: 2025-12-06 10:14:48.936731705 +0000 UTC m=+0.073444606 container kill e4e5ac05442955f65040fee10f3aa342d80518f3e4a072b3d79fe4302f513cf7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:14:48 localhost dnsmasq[263859]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 7 addresses Dec 6 05:14:48 localhost dnsmasq-dhcp[263859]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:14:48 localhost dnsmasq-dhcp[263859]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:14:49 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:14:49.238 263652 INFO neutron.agent.dhcp.agent [None req-ccb857ae-a091-4b34-b048-483c3f1e56f0 - - - - - -] DHCP configuration for ports {'09b837f1-40ed-4eeb-8b33-2fe63cdb818e'} is completed#033[00m Dec 6 05:14:49 localhost nova_compute[282193]: 2025-12-06 10:14:49.748 282197 DEBUG nova.compute.manager [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] pre_live_migration data is 
LibvirtLiveMigrateData(bdms=,block_migration=False,disk_available_mb=12288,disk_over_commit=,dst_numa_info=,dst_supports_numa_live_migration=,dst_wants_file_backed_memory=False,file_backed_memory_discard=,filename='tmpe77a5ohg',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='87dc2ce3-2b16-4764-9803-711c2d12c20f',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=,old_vol_attachment_ids=,serial_listen_addr=None,serial_listen_ports=,src_supports_native_luks=,src_supports_numa_live_migration=,supported_perf_events=,target_connect_addr=,vifs=[VIFMigrateData],wait_for_vif_plugged=) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8604#033[00m Dec 6 05:14:49 localhost nova_compute[282193]: 2025-12-06 10:14:49.789 282197 DEBUG oslo_concurrency.lockutils [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] Acquiring lock "refresh_cache-87dc2ce3-2b16-4764-9803-711c2d12c20f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 6 05:14:49 localhost nova_compute[282193]: 2025-12-06 10:14:49.789 282197 DEBUG oslo_concurrency.lockutils [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] Acquired lock "refresh_cache-87dc2ce3-2b16-4764-9803-711c2d12c20f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 6 05:14:49 localhost nova_compute[282193]: 2025-12-06 10:14:49.790 282197 DEBUG nova.network.neutron [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m Dec 6 05:14:50 
localhost nova_compute[282193]: 2025-12-06 10:14:50.121 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:14:50 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:14:50.282 263652 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:14:49Z, description=, device_id=110bfe4d-8dd3-4386-b8da-4c950d9b90e9, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=bfbbd672-ac59-4d4f-97b0-0bfce9d5e0c5, ip_allocation=immediate, mac_address=fa:16:3e:22:72:81, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T08:43:55Z, description=, dns_domain=, id=8e238f59-5792-4ff4-95af-f993c8e9e14f, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=3d603431c0bb4967bafc7a0aa6108bfe, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['be7ff830-a3e2-49cd-a05f-63bc528d6d1c'], tags=[], tenant_id=3d603431c0bb4967bafc7a0aa6108bfe, updated_at=2025-12-06T08:44:01Z, vlan_transparent=None, network_id=8e238f59-5792-4ff4-95af-f993c8e9e14f, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=610, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-06T10:14:49Z on network 8e238f59-5792-4ff4-95af-f993c8e9e14f#033[00m Dec 6 05:14:50 localhost podman[312178]: 2025-12-06 10:14:50.487966969 +0000 UTC m=+0.059860335 
container kill e4e5ac05442955f65040fee10f3aa342d80518f3e4a072b3d79fe4302f513cf7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Dec 6 05:14:50 localhost dnsmasq[263859]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 8 addresses Dec 6 05:14:50 localhost dnsmasq-dhcp[263859]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:14:50 localhost dnsmasq-dhcp[263859]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:14:50 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:14:50.692 263652 INFO neutron.agent.dhcp.agent [None req-ba1fc9df-6730-465f-870d-4869d0b8fe05 - - - - - -] DHCP configuration for ports {'bfbbd672-ac59-4d4f-97b0-0bfce9d5e0c5'} is completed#033[00m Dec 6 05:14:50 localhost nova_compute[282193]: 2025-12-06 10:14:50.699 282197 DEBUG nova.network.neutron [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Updating instance_info_cache with network_info: [{"id": "e87832d3-ffc3-44e0-9f77-cd2eb6073d62", "address": "fa:16:3e:0e:f5:37", "network": {"id": "47d636a7-c520-4320-aa94-bfb41f418584", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1313845827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": 
[], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "7897d6398eb64eb29c66df8db792e581", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape87832d3-ff", "ovs_interfaceid": "e87832d3-ffc3-44e0-9f77-cd2eb6073d62", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 6 05:14:50 localhost nova_compute[282193]: 2025-12-06 10:14:50.759 282197 DEBUG oslo_concurrency.lockutils [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] Releasing lock "refresh_cache-87dc2ce3-2b16-4764-9803-711c2d12c20f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 6 05:14:50 localhost nova_compute[282193]: 2025-12-06 10:14:50.762 282197 DEBUG nova.virt.libvirt.driver [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] migrate_data in pre_live_migration: 
LibvirtLiveMigrateData(bdms=,block_migration=False,disk_available_mb=12288,disk_over_commit=,dst_numa_info=,dst_supports_numa_live_migration=,dst_wants_file_backed_memory=False,file_backed_memory_discard=,filename='tmpe77a5ohg',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='87dc2ce3-2b16-4764-9803-711c2d12c20f',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=,src_supports_native_luks=,src_supports_numa_live_migration=,supported_perf_events=,target_connect_addr=,vifs=[VIFMigrateData],wait_for_vif_plugged=) pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10827#033[00m Dec 6 05:14:50 localhost nova_compute[282193]: 2025-12-06 10:14:50.763 282197 DEBUG nova.virt.libvirt.driver [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Creating instance directory: /var/lib/nova/instances/87dc2ce3-2b16-4764-9803-711c2d12c20f pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10840#033[00m Dec 6 05:14:50 localhost nova_compute[282193]: 2025-12-06 10:14:50.764 282197 DEBUG nova.virt.libvirt.driver [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Ensure instance console log exists: /var/lib/nova/instances/87dc2ce3-2b16-4764-9803-711c2d12c20f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m Dec 6 05:14:50 localhost nova_compute[282193]: 2025-12-06 10:14:50.765 282197 DEBUG nova.virt.libvirt.driver [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default 
default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10794#033[00m Dec 6 05:14:50 localhost nova_compute[282193]: 2025-12-06 10:14:50.766 282197 DEBUG nova.virt.libvirt.vif [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T10:14:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-1999616987',ec2_ids=,ephemeral_gb=0,ephemeral_key_uuid=None,fault=,flavor=Flavor(5),hidden=False,host='np0005548790.localdomain',hostname='tempest-liveautoblockmigrationv225test-server-1999616987',id=7,image_ref='6a944ab6-8965-4055-b7fc-af6e395005ea',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=,launch_index=0,launched_at=2025-12-06T10:14:45Z,launched_on='np0005548790.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=,new_flavor=None,node='np0005548790.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='7897d6398eb64eb29c66df8db792e581',ramdisk_id='',reservation_id='r-tcv45ne4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6a944ab6-8965-4055-b7fc-af6e395005ea',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio
',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-265776820',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-265776820-project-member'},tags=,task_state='migrating',terminated_at=None,trusted_certs=,updated_at=2025-12-06T10:14:45Z,user_data=None,user_id='ac2e85103fd14829ad4e6df2357da95b',uuid=87dc2ce3-2b16-4764-9803-711c2d12c20f,vcpu_model=,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e87832d3-ffc3-44e0-9f77-cd2eb6073d62", "address": "fa:16:3e:0e:f5:37", "network": {"id": "47d636a7-c520-4320-aa94-bfb41f418584", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1313845827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "7897d6398eb64eb29c66df8db792e581", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tape87832d3-ff", "ovs_interfaceid": "e87832d3-ffc3-44e0-9f77-cd2eb6073d62", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m Dec 6 05:14:50 localhost nova_compute[282193]: 2025-12-06 10:14:50.767 282197 DEBUG nova.network.os_vif_util [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - 
default default] Converting VIF {"id": "e87832d3-ffc3-44e0-9f77-cd2eb6073d62", "address": "fa:16:3e:0e:f5:37", "network": {"id": "47d636a7-c520-4320-aa94-bfb41f418584", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1313845827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "7897d6398eb64eb29c66df8db792e581", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tape87832d3-ff", "ovs_interfaceid": "e87832d3-ffc3-44e0-9f77-cd2eb6073d62", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Dec 6 05:14:50 localhost nova_compute[282193]: 2025-12-06 10:14:50.769 282197 DEBUG nova.network.os_vif_util [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0e:f5:37,bridge_name='br-int',has_traffic_filtering=True,id=e87832d3-ffc3-44e0-9f77-cd2eb6073d62,network=Network(47d636a7-c520-4320-aa94-bfb41f418584),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape87832d3-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Dec 6 05:14:50 localhost nova_compute[282193]: 2025-12-06 10:14:50.769 282197 DEBUG os_vif [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 
310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0e:f5:37,bridge_name='br-int',has_traffic_filtering=True,id=e87832d3-ffc3-44e0-9f77-cd2eb6073d62,network=Network(47d636a7-c520-4320-aa94-bfb41f418584),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape87832d3-ff') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m Dec 6 05:14:50 localhost nova_compute[282193]: 2025-12-06 10:14:50.771 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 18 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:14:50 localhost nova_compute[282193]: 2025-12-06 10:14:50.772 282197 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 6 05:14:50 localhost nova_compute[282193]: 2025-12-06 10:14:50.772 282197 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Dec 6 05:14:50 localhost nova_compute[282193]: 2025-12-06 10:14:50.777 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 18 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:14:50 localhost nova_compute[282193]: 2025-12-06 10:14:50.777 282197 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape87832d3-ff, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 6 05:14:50 localhost nova_compute[282193]: 2025-12-06 10:14:50.778 282197 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, 
table=Interface, record=tape87832d3-ff, col_values=(('external_ids', {'iface-id': 'e87832d3-ffc3-44e0-9f77-cd2eb6073d62', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0e:f5:37', 'vm-uuid': '87dc2ce3-2b16-4764-9803-711c2d12c20f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 6 05:14:50 localhost nova_compute[282193]: 2025-12-06 10:14:50.816 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:14:50 localhost nova_compute[282193]: 2025-12-06 10:14:50.819 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:14:50 localhost nova_compute[282193]: 2025-12-06 10:14:50.822 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:14:50 localhost nova_compute[282193]: 2025-12-06 10:14:50.823 282197 INFO os_vif [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0e:f5:37,bridge_name='br-int',has_traffic_filtering=True,id=e87832d3-ffc3-44e0-9f77-cd2eb6073d62,network=Network(47d636a7-c520-4320-aa94-bfb41f418584),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape87832d3-ff')#033[00m Dec 6 05:14:50 localhost nova_compute[282193]: 2025-12-06 10:14:50.823 282197 DEBUG nova.virt.libvirt.driver [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. 
pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10954#033[00m Dec 6 05:14:50 localhost nova_compute[282193]: 2025-12-06 10:14:50.824 282197 DEBUG nova.compute.manager [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=12288,disk_over_commit=,dst_numa_info=,dst_supports_numa_live_migration=,dst_wants_file_backed_memory=False,file_backed_memory_discard=,filename='tmpe77a5ohg',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='87dc2ce3-2b16-4764-9803-711c2d12c20f',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=,src_supports_numa_live_migration=,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8668#033[00m Dec 6 05:14:50 localhost nova_compute[282193]: 2025-12-06 10:14:50.915 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:14:51 localhost nova_compute[282193]: 2025-12-06 10:14:51.908 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:14:52 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:14:52.295 263652 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:14:51Z, description=, device_id=7e399e99-e656-47a1-9fb2-96abab06c114, 
device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=3632540c-2981-4cf2-a512-17df5b6faa8d, ip_allocation=immediate, mac_address=fa:16:3e:ac:38:dd, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:14:44Z, description=, dns_domain=, id=feb354e1-97d5-4c74-804a-eeb06e5bb155, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ServersTestJSON-1665562525-network, port_security_enabled=True, project_id=4185da56d12649bc8653dd9db208c0a0, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=23466, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=570, status=ACTIVE, subnets=['0d3c1a86-b134-4467-9f13-385eed16e944'], tags=[], tenant_id=4185da56d12649bc8653dd9db208c0a0, updated_at=2025-12-06T10:14:45Z, vlan_transparent=None, network_id=feb354e1-97d5-4c74-804a-eeb06e5bb155, port_security_enabled=False, project_id=4185da56d12649bc8653dd9db208c0a0, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=616, status=DOWN, tags=[], tenant_id=4185da56d12649bc8653dd9db208c0a0, updated_at=2025-12-06T10:14:51Z on network feb354e1-97d5-4c74-804a-eeb06e5bb155#033[00m Dec 6 05:14:52 localhost dnsmasq[312123]: read /var/lib/neutron/dhcp/feb354e1-97d5-4c74-804a-eeb06e5bb155/addn_hosts - 1 addresses Dec 6 05:14:52 localhost dnsmasq-dhcp[312123]: read /var/lib/neutron/dhcp/feb354e1-97d5-4c74-804a-eeb06e5bb155/host Dec 6 05:14:52 localhost dnsmasq-dhcp[312123]: read /var/lib/neutron/dhcp/feb354e1-97d5-4c74-804a-eeb06e5bb155/opts Dec 6 05:14:52 localhost podman[312220]: 2025-12-06 10:14:52.510162083 +0000 UTC m=+0.065860136 container kill bf1840969a38cdc03ff2f446425ad666bbcdfe5ae29c7d80e81b374fc1a9dff9 
(image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-feb354e1-97d5-4c74-804a-eeb06e5bb155, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0) Dec 6 05:14:52 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:14:52.767 263652 INFO neutron.agent.dhcp.agent [None req-498da11d-8da3-44d9-9d74-d37ab421f5eb - - - - - -] DHCP configuration for ports {'3632540c-2981-4cf2-a512-17df5b6faa8d'} is completed#033[00m Dec 6 05:14:53 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e108 e108: 6 total, 6 up, 6 in Dec 6 05:14:53 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:14:53 localhost neutron_sriov_agent[256690]: 2025-12-06 10:14:53.702 2 INFO neutron.agent.securitygroups_rpc [None req-2bc0f0e9-228c-4272-bb0d-cc31a9019510 b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] Security group member updated ['4c82b56e-0fc5-4c7f-8922-ceb8236815fd']#033[00m Dec 6 05:14:53 localhost nova_compute[282193]: 2025-12-06 10:14:53.718 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:14:53 localhost ovn_metadata_agent[160504]: 2025-12-06 10:14:53.825 160509 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b142a5ef-fbed-4e92-aa78-e3ad080c6370, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 6 05:14:53 localhost podman[241090]: time="2025-12-06T10:14:53Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 6 05:14:53 localhost podman[241090]: @ - - [06/Dec/2025:10:14:53 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 159752 "" "Go-http-client/1.1" Dec 6 05:14:53 localhost podman[241090]: @ - - [06/Dec/2025:10:14:53 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 20207 "" "Go-http-client/1.1" Dec 6 05:14:54 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 6 05:14:54 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' Dec 6 05:14:55 localhost nova_compute[282193]: 2025-12-06 10:14:55.817 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:14:56 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:14:56.686 263652 INFO neutron.agent.linux.ip_lib [None req-5b3e8c07-fe2b-42c8-95dc-9e47a5c87336 - - - - - -] Device tapff588d77-fd cannot be used as it has no MAC address#033[00m Dec 6 05:14:56 localhost nova_compute[282193]: 2025-12-06 10:14:56.708 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:14:56 localhost kernel: device tapff588d77-fd entered promiscuous mode Dec 6 05:14:56 localhost NetworkManager[5973]: [1765016096.7176] manager: (tapff588d77-fd): new Generic device (/org/freedesktop/NetworkManager/Devices/22) Dec 6 05:14:56 localhost systemd-udevd[312340]: Network interface NamePolicy= disabled on kernel command line. 
Dec 6 05:14:56 localhost nova_compute[282193]: 2025-12-06 10:14:56.723 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:14:56 localhost ovn_controller[154851]: 2025-12-06T10:14:56Z|00086|binding|INFO|Claiming lport ff588d77-fd65-43a9-bd18-9402d0aef61a for this chassis. Dec 6 05:14:56 localhost ovn_controller[154851]: 2025-12-06T10:14:56Z|00087|binding|INFO|ff588d77-fd65-43a9-bd18-9402d0aef61a: Claiming unknown Dec 6 05:14:56 localhost ovn_metadata_agent[160504]: 2025-12-06 10:14:56.742 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-deb7774c-e96b-4e7f-88d7-ed9d740915f4', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-deb7774c-e96b-4e7f-88d7-ed9d740915f4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'da995d8e002548889747013c0eeca935', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9cc41455-e125-49b5-8c35-a9f7e38c8e70, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=ff588d77-fd65-43a9-bd18-9402d0aef61a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:14:56 localhost ovn_metadata_agent[160504]: 
2025-12-06 10:14:56.745 160509 INFO neutron.agent.ovn.metadata.agent [-] Port ff588d77-fd65-43a9-bd18-9402d0aef61a in datapath deb7774c-e96b-4e7f-88d7-ed9d740915f4 bound to our chassis#033[00m Dec 6 05:14:56 localhost ovn_metadata_agent[160504]: 2025-12-06 10:14:56.747 160509 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network deb7774c-e96b-4e7f-88d7-ed9d740915f4 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 6 05:14:56 localhost ovn_metadata_agent[160504]: 2025-12-06 10:14:56.748 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[237947a5-5a99-423b-8004-d49acec8760b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:14:56 localhost journal[230404]: ethtool ioctl error on tapff588d77-fd: No such device Dec 6 05:14:56 localhost journal[230404]: ethtool ioctl error on tapff588d77-fd: No such device Dec 6 05:14:56 localhost journal[230404]: ethtool ioctl error on tapff588d77-fd: No such device Dec 6 05:14:56 localhost ovn_controller[154851]: 2025-12-06T10:14:56Z|00088|binding|INFO|Setting lport ff588d77-fd65-43a9-bd18-9402d0aef61a ovn-installed in OVS Dec 6 05:14:56 localhost ovn_controller[154851]: 2025-12-06T10:14:56Z|00089|binding|INFO|Setting lport ff588d77-fd65-43a9-bd18-9402d0aef61a up in Southbound Dec 6 05:14:56 localhost nova_compute[282193]: 2025-12-06 10:14:56.769 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:14:56 localhost journal[230404]: ethtool ioctl error on tapff588d77-fd: No such device Dec 6 05:14:56 localhost journal[230404]: ethtool ioctl error on tapff588d77-fd: No such device Dec 6 05:14:56 localhost journal[230404]: ethtool ioctl error on tapff588d77-fd: No such device Dec 6 05:14:56 localhost journal[230404]: ethtool ioctl 
error on tapff588d77-fd: No such device Dec 6 05:14:56 localhost journal[230404]: ethtool ioctl error on tapff588d77-fd: No such device Dec 6 05:14:56 localhost nova_compute[282193]: 2025-12-06 10:14:56.807 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:14:56 localhost nova_compute[282193]: 2025-12-06 10:14:56.829 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:14:56 localhost nova_compute[282193]: 2025-12-06 10:14:56.911 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:14:56 localhost nova_compute[282193]: 2025-12-06 10:14:56.959 282197 DEBUG nova.network.neutron [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Port e87832d3-ffc3-44e0-9f77-cd2eb6073d62 updated with migration profile {'migrating_to': 'np0005548789.localdomain'} successfully _setup_migration_port_profile /usr/lib/python3.9/site-packages/nova/network/neutron.py:354#033[00m Dec 6 05:14:56 localhost nova_compute[282193]: 2025-12-06 10:14:56.961 282197 DEBUG nova.compute.manager [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] pre_live_migration result data is 
LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=12288,disk_over_commit=,dst_numa_info=,dst_supports_numa_live_migration=,dst_wants_file_backed_memory=False,file_backed_memory_discard=,filename='tmpe77a5ohg',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='87dc2ce3-2b16-4764-9803-711c2d12c20f',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=,src_supports_numa_live_migration=,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8723#033[00m Dec 6 05:14:57 localhost dnsmasq[311859]: read /var/lib/neutron/dhcp/df3c5fcc-9cd4-4d33-9970-a165c712aad3/addn_hosts - 0 addresses Dec 6 05:14:57 localhost podman[312400]: 2025-12-06 10:14:57.159974879 +0000 UTC m=+0.064110314 container kill dbf1d79cf37c9a95da8c3fbfd17ed1db394008f8d27c162fbd5d4fc5de50fe81 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-df3c5fcc-9cd4-4d33-9970-a165c712aad3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3) Dec 6 05:14:57 localhost dnsmasq-dhcp[311859]: read /var/lib/neutron/dhcp/df3c5fcc-9cd4-4d33-9970-a165c712aad3/host Dec 6 05:14:57 localhost dnsmasq-dhcp[311859]: read /var/lib/neutron/dhcp/df3c5fcc-9cd4-4d33-9970-a165c712aad3/opts Dec 6 05:14:57 localhost sshd[312417]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:14:57 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:14:57.179 263652 INFO 
neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:14:51Z, description=, device_id=7e399e99-e656-47a1-9fb2-96abab06c114, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=3632540c-2981-4cf2-a512-17df5b6faa8d, ip_allocation=immediate, mac_address=fa:16:3e:ac:38:dd, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:14:44Z, description=, dns_domain=, id=feb354e1-97d5-4c74-804a-eeb06e5bb155, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ServersTestJSON-1665562525-network, port_security_enabled=True, project_id=4185da56d12649bc8653dd9db208c0a0, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=23466, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=570, status=ACTIVE, subnets=['0d3c1a86-b134-4467-9f13-385eed16e944'], tags=[], tenant_id=4185da56d12649bc8653dd9db208c0a0, updated_at=2025-12-06T10:14:45Z, vlan_transparent=None, network_id=feb354e1-97d5-4c74-804a-eeb06e5bb155, port_security_enabled=False, project_id=4185da56d12649bc8653dd9db208c0a0, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=616, status=DOWN, tags=[], tenant_id=4185da56d12649bc8653dd9db208c0a0, updated_at=2025-12-06T10:14:51Z on network feb354e1-97d5-4c74-804a-eeb06e5bb155#033[00m Dec 6 05:14:57 localhost systemd[1]: Created slice User Slice of UID 42436. Dec 6 05:14:57 localhost systemd[1]: Starting User Runtime Directory /run/user/42436... Dec 6 05:14:57 localhost systemd-logind[766]: New session 75 of user nova. 
Dec 6 05:14:57 localhost systemd[1]: Finished User Runtime Directory /run/user/42436. Dec 6 05:14:57 localhost systemd[1]: Starting User Manager for UID 42436... Dec 6 05:14:57 localhost podman[312444]: 2025-12-06 10:14:57.387257125 +0000 UTC m=+0.057560684 container kill bf1840969a38cdc03ff2f446425ad666bbcdfe5ae29c7d80e81b374fc1a9dff9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-feb354e1-97d5-4c74-804a-eeb06e5bb155, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Dec 6 05:14:57 localhost dnsmasq[312123]: read /var/lib/neutron/dhcp/feb354e1-97d5-4c74-804a-eeb06e5bb155/addn_hosts - 1 addresses Dec 6 05:14:57 localhost dnsmasq-dhcp[312123]: read /var/lib/neutron/dhcp/feb354e1-97d5-4c74-804a-eeb06e5bb155/host Dec 6 05:14:57 localhost dnsmasq-dhcp[312123]: read /var/lib/neutron/dhcp/feb354e1-97d5-4c74-804a-eeb06e5bb155/opts Dec 6 05:14:57 localhost systemd[312446]: Queued start job for default target Main User Target. Dec 6 05:14:57 localhost systemd[312446]: Created slice User Application Slice. Dec 6 05:14:57 localhost systemd[312446]: Started Mark boot as successful after the user session has run 2 minutes. Dec 6 05:14:57 localhost systemd[312446]: Started Daily Cleanup of User's Temporary Directories. Dec 6 05:14:57 localhost systemd[312446]: Reached target Paths. Dec 6 05:14:57 localhost systemd[312446]: Reached target Timers. Dec 6 05:14:57 localhost systemd[312446]: Starting D-Bus User Message Bus Socket... Dec 6 05:14:57 localhost systemd[312446]: Starting Create User's Volatile Files and Directories... Dec 6 05:14:57 localhost systemd[312446]: Finished Create User's Volatile Files and Directories. 
Dec 6 05:14:57 localhost systemd[312446]: Listening on D-Bus User Message Bus Socket. Dec 6 05:14:57 localhost systemd[312446]: Reached target Sockets. Dec 6 05:14:57 localhost systemd[312446]: Reached target Basic System. Dec 6 05:14:57 localhost systemd[312446]: Reached target Main User Target. Dec 6 05:14:57 localhost systemd[312446]: Startup finished in 152ms. Dec 6 05:14:57 localhost systemd[1]: Started User Manager for UID 42436. Dec 6 05:14:57 localhost systemd[1]: Started Session 75 of User nova. Dec 6 05:14:57 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:14:57.554 263652 INFO neutron.agent.dhcp.agent [None req-29c685c6-210d-489d-8583-2f4ccab49cc6 - - - - - -] DHCP configuration for ports {'3632540c-2981-4cf2-a512-17df5b6faa8d'} is completed#033[00m Dec 6 05:14:57 localhost systemd[1]: Started libvirt secret daemon. Dec 6 05:14:57 localhost podman[312512]: Dec 6 05:14:57 localhost podman[312512]: 2025-12-06 10:14:57.779266874 +0000 UTC m=+0.083653626 container create 8a8b7a6a9724101bff1398ade8c854164d1816271ca6c4f86a12732f70229362 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-deb7774c-e96b-4e7f-88d7-ed9d740915f4, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:14:57 localhost NetworkManager[5973]: [1765016097.7921] manager: (tape87832d3-ff): new Tun device (/org/freedesktop/NetworkManager/Devices/23) Dec 6 05:14:57 localhost kernel: device tape87832d3-ff entered promiscuous mode Dec 6 05:14:57 localhost systemd-udevd[312344]: Network interface NamePolicy= disabled on kernel command line. 
Dec 6 05:14:57 localhost NetworkManager[5973]: [1765016097.8105] device (tape87832d3-ff): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external') Dec 6 05:14:57 localhost NetworkManager[5973]: [1765016097.8110] device (tape87832d3-ff): state change: unavailable -> disconnected (reason 'none', sys-iface-state: 'external') Dec 6 05:14:57 localhost ovn_controller[154851]: 2025-12-06T10:14:57Z|00090|binding|INFO|Claiming lport e87832d3-ffc3-44e0-9f77-cd2eb6073d62 for this additional chassis. Dec 6 05:14:57 localhost ovn_controller[154851]: 2025-12-06T10:14:57Z|00091|binding|INFO|e87832d3-ffc3-44e0-9f77-cd2eb6073d62: Claiming fa:16:3e:0e:f5:37 10.100.0.14 Dec 6 05:14:57 localhost ovn_controller[154851]: 2025-12-06T10:14:57Z|00092|binding|INFO|Claiming lport 3b69daca-b91a-4923-9795-2e6a02ee3d59 for this additional chassis. Dec 6 05:14:57 localhost ovn_controller[154851]: 2025-12-06T10:14:57Z|00093|binding|INFO|3b69daca-b91a-4923-9795-2e6a02ee3d59: Claiming fa:16:3e:a8:e1:a6 19.80.0.214 Dec 6 05:14:57 localhost nova_compute[282193]: 2025-12-06 10:14:57.831 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:14:57 localhost nova_compute[282193]: 2025-12-06 10:14:57.835 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:14:57 localhost podman[312512]: 2025-12-06 10:14:57.740636334 +0000 UTC m=+0.045023106 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 6 05:14:57 localhost systemd[1]: Started libpod-conmon-8a8b7a6a9724101bff1398ade8c854164d1816271ca6c4f86a12732f70229362.scope. 
Dec 6 05:14:57 localhost ovn_controller[154851]: 2025-12-06T10:14:57Z|00094|binding|INFO|Setting lport e87832d3-ffc3-44e0-9f77-cd2eb6073d62 ovn-installed in OVS Dec 6 05:14:57 localhost nova_compute[282193]: 2025-12-06 10:14:57.858 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:14:57 localhost systemd[1]: Started libcrun container. Dec 6 05:14:57 localhost systemd[1]: Started Virtual Machine qemu-3-instance-00000007. Dec 6 05:14:57 localhost systemd-machined[84444]: New machine qemu-3-instance-00000007. Dec 6 05:14:57 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06549e5dbf4ea1c819a27ad89b0090c0fd564fb1fbcc2e1eabbb66d37085811c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 6 05:14:57 localhost podman[312512]: 2025-12-06 10:14:57.876324755 +0000 UTC m=+0.180711507 container init 8a8b7a6a9724101bff1398ade8c854164d1816271ca6c4f86a12732f70229362 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-deb7774c-e96b-4e7f-88d7-ed9d740915f4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:14:57 localhost podman[312512]: 2025-12-06 10:14:57.885540964 +0000 UTC m=+0.189927716 container start 8a8b7a6a9724101bff1398ade8c854164d1816271ca6c4f86a12732f70229362 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-deb7774c-e96b-4e7f-88d7-ed9d740915f4, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, 
org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3) Dec 6 05:14:57 localhost dnsmasq[312566]: started, version 2.85 cachesize 150 Dec 6 05:14:57 localhost dnsmasq[312566]: DNS service limited to local subnets Dec 6 05:14:57 localhost dnsmasq[312566]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 6 05:14:57 localhost dnsmasq[312566]: warning: no upstream servers configured Dec 6 05:14:57 localhost dnsmasq-dhcp[312566]: DHCP, static leases only on 10.100.0.0, lease time 1d Dec 6 05:14:57 localhost dnsmasq[312566]: read /var/lib/neutron/dhcp/deb7774c-e96b-4e7f-88d7-ed9d740915f4/addn_hosts - 0 addresses Dec 6 05:14:57 localhost dnsmasq-dhcp[312566]: read /var/lib/neutron/dhcp/deb7774c-e96b-4e7f-88d7-ed9d740915f4/host Dec 6 05:14:57 localhost dnsmasq-dhcp[312566]: read /var/lib/neutron/dhcp/deb7774c-e96b-4e7f-88d7-ed9d740915f4/opts Dec 6 05:14:57 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' Dec 6 05:14:58 localhost nova_compute[282193]: 2025-12-06 10:14:58.137 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:14:58 localhost ovn_controller[154851]: 2025-12-06T10:14:58Z|00095|binding|INFO|Releasing lport 9f077348-ed05-4cf4-8524-593431fbafaf from this chassis (sb_readonly=1) Dec 6 05:14:58 localhost kernel: device tap9f077348-ed left promiscuous mode Dec 6 05:14:58 localhost ovn_controller[154851]: 2025-12-06T10:14:58Z|00096|if_status|INFO|Not setting lport 9f077348-ed05-4cf4-8524-593431fbafaf down as sb is readonly Dec 6 05:14:58 localhost nova_compute[282193]: 2025-12-06 10:14:58.161 282197 DEBUG nova.virt.driver [None req-c2cff4e5-3d1d-47f9-bef4-e7abce6c5152 - - - - - -] Emitting event Started> emit_event 
/usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Dec 6 05:14:58 localhost nova_compute[282193]: 2025-12-06 10:14:58.162 282197 INFO nova.compute.manager [None req-c2cff4e5-3d1d-47f9-bef4-e7abce6c5152 - - - - - -] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] VM Started (Lifecycle Event)#033[00m Dec 6 05:14:58 localhost nova_compute[282193]: 2025-12-06 10:14:58.165 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:14:58 localhost ovn_controller[154851]: 2025-12-06T10:14:58Z|00097|binding|INFO|Setting lport 9f077348-ed05-4cf4-8524-593431fbafaf down in Southbound Dec 6 05:14:58 localhost ovn_metadata_agent[160504]: 2025-12-06 10:14:58.203 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-df3c5fcc-9cd4-4d33-9970-a165c712aad3', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-df3c5fcc-9cd4-4d33-9970-a165c712aad3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '024b6fbc052c4ed7a93c855bd2ae77da', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005548789.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dd535f07-7612-46c6-87c9-bf69c15a9a5d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], 
logical_port=9f077348-ed05-4cf4-8524-593431fbafaf) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:14:58 localhost ovn_metadata_agent[160504]: 2025-12-06 10:14:58.205 160509 INFO neutron.agent.ovn.metadata.agent [-] Port 9f077348-ed05-4cf4-8524-593431fbafaf in datapath df3c5fcc-9cd4-4d33-9970-a165c712aad3 unbound from our chassis#033[00m Dec 6 05:14:58 localhost nova_compute[282193]: 2025-12-06 10:14:58.208 282197 DEBUG nova.compute.manager [None req-c2cff4e5-3d1d-47f9-bef4-e7abce6c5152 - - - - - -] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 6 05:14:58 localhost ovn_metadata_agent[160504]: 2025-12-06 10:14:58.210 160509 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network df3c5fcc-9cd4-4d33-9970-a165c712aad3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 6 05:14:58 localhost ovn_metadata_agent[160504]: 2025-12-06 10:14:58.212 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[8b02852c-79cc-4146-ac84-72f4d4e51c42]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:14:58 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:14:58 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:14:58.257 263652 INFO neutron.agent.dhcp.agent [None req-6f08096c-8cef-4708-b239-5eda78c66e32 - - - - - -] DHCP configuration for ports {'431aeba8-5962-4449-b69d-46c4360741a7'} is completed#033[00m Dec 6 05:14:58 localhost nova_compute[282193]: 2025-12-06 10:14:58.951 282197 DEBUG nova.virt.driver [None req-c2cff4e5-3d1d-47f9-bef4-e7abce6c5152 - - - - - -] Emitting event Resumed> emit_event 
/usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Dec 6 05:14:58 localhost nova_compute[282193]: 2025-12-06 10:14:58.952 282197 INFO nova.compute.manager [None req-c2cff4e5-3d1d-47f9-bef4-e7abce6c5152 - - - - - -] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] VM Resumed (Lifecycle Event)#033[00m Dec 6 05:14:58 localhost nova_compute[282193]: 2025-12-06 10:14:58.979 282197 DEBUG nova.compute.manager [None req-c2cff4e5-3d1d-47f9-bef4-e7abce6c5152 - - - - - -] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 6 05:14:58 localhost nova_compute[282193]: 2025-12-06 10:14:58.984 282197 DEBUG nova.compute.manager [None req-c2cff4e5-3d1d-47f9-bef4-e7abce6c5152 - - - - - -] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m Dec 6 05:14:59 localhost nova_compute[282193]: 2025-12-06 10:14:59.012 282197 INFO nova.compute.manager [None req-c2cff4e5-3d1d-47f9-bef4-e7abce6c5152 - - - - - -] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] During the sync_power process the instance has moved from host np0005548790.localdomain to host np0005548789.localdomain#033[00m Dec 6 05:14:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999. Dec 6 05:14:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941. Dec 6 05:14:59 localhost systemd[1]: session-75.scope: Deactivated successfully. Dec 6 05:14:59 localhost systemd-logind[766]: Session 75 logged out. Waiting for processes to exit. Dec 6 05:14:59 localhost systemd-logind[766]: Removed session 75. 
Dec 6 05:14:59 localhost systemd[1]: tmp-crun.BueLQA.mount: Deactivated successfully. Dec 6 05:14:59 localhost podman[312616]: 2025-12-06 10:14:59.348720642 +0000 UTC m=+0.091053180 container health_status b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Dec 6 05:14:59 localhost podman[312616]: 2025-12-06 10:14:59.358737895 +0000 UTC m=+0.101070413 container exec_died b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': 
'/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Dec 6 05:14:59 localhost systemd[1]: b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941.service: Deactivated successfully. Dec 6 05:14:59 localhost podman[312615]: 2025-12-06 10:14:59.454357613 +0000 UTC m=+0.200200657 container health_status 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, 
config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:14:59 localhost podman[312615]: 2025-12-06 10:14:59.487378453 +0000 UTC m=+0.233221427 container exec_died 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, 
managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:14:59 localhost systemd[1]: 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.service: Deactivated successfully. Dec 6 05:15:00 localhost systemd[1]: tmp-crun.IyrE0m.mount: Deactivated successfully. Dec 6 05:15:00 localhost ovn_controller[154851]: 2025-12-06T10:15:00Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:0e:f5:37 10.100.0.14 Dec 6 05:15:00 localhost ovn_controller[154851]: 2025-12-06T10:15:00Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:0e:f5:37 10.100.0.14 Dec 6 05:15:00 localhost nova_compute[282193]: 2025-12-06 10:15:00.846 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:15:01 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:15:01.871 263652 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:15:00Z, description=, device_id=f44ab79a-f9b8-4237-b1dc-a24e7d22c236, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=5f9b5a36-6f9d-4432-a50f-3ba7cd01f2c4, ip_allocation=immediate, mac_address=fa:16:3e:ad:2b:28, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T08:43:55Z, description=, dns_domain=, id=8e238f59-5792-4ff4-95af-f993c8e9e14f, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=3d603431c0bb4967bafc7a0aa6108bfe, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, 
revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['be7ff830-a3e2-49cd-a05f-63bc528d6d1c'], tags=[], tenant_id=3d603431c0bb4967bafc7a0aa6108bfe, updated_at=2025-12-06T08:44:01Z, vlan_transparent=None, network_id=8e238f59-5792-4ff4-95af-f993c8e9e14f, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=650, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-06T10:15:01Z on network 8e238f59-5792-4ff4-95af-f993c8e9e14f#033[00m Dec 6 05:15:01 localhost nova_compute[282193]: 2025-12-06 10:15:01.945 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:15:01 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e109 e109: 6 total, 6 up, 6 in Dec 6 05:15:02 localhost podman[312673]: 2025-12-06 10:15:02.074556229 +0000 UTC m=+0.049155663 container kill e4e5ac05442955f65040fee10f3aa342d80518f3e4a072b3d79fe4302f513cf7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3) Dec 6 05:15:02 localhost dnsmasq[263859]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 9 addresses Dec 6 05:15:02 localhost dnsmasq-dhcp[263859]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:15:02 localhost dnsmasq-dhcp[263859]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:15:02 localhost systemd[1]: tmp-crun.J0jLJw.mount: 
Deactivated successfully. Dec 6 05:15:02 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:15:02.343 263652 INFO neutron.agent.dhcp.agent [None req-ce92131b-5312-498a-a700-98ab0b647cb8 - - - - - -] DHCP configuration for ports {'5f9b5a36-6f9d-4432-a50f-3ba7cd01f2c4'} is completed#033[00m Dec 6 05:15:03 localhost sshd[312694]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:15:03 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:15:04 localhost ovn_controller[154851]: 2025-12-06T10:15:04Z|00098|binding|INFO|Claiming lport e87832d3-ffc3-44e0-9f77-cd2eb6073d62 for this chassis. Dec 6 05:15:04 localhost ovn_controller[154851]: 2025-12-06T10:15:04Z|00099|binding|INFO|e87832d3-ffc3-44e0-9f77-cd2eb6073d62: Claiming fa:16:3e:0e:f5:37 10.100.0.14 Dec 6 05:15:04 localhost ovn_controller[154851]: 2025-12-06T10:15:04Z|00100|binding|INFO|Claiming lport 3b69daca-b91a-4923-9795-2e6a02ee3d59 for this chassis. 
Dec 6 05:15:04 localhost ovn_controller[154851]: 2025-12-06T10:15:04Z|00101|binding|INFO|3b69daca-b91a-4923-9795-2e6a02ee3d59: Claiming fa:16:3e:a8:e1:a6 19.80.0.214 Dec 6 05:15:04 localhost ovn_controller[154851]: 2025-12-06T10:15:04Z|00102|binding|INFO|Setting lport e87832d3-ffc3-44e0-9f77-cd2eb6073d62 up in Southbound Dec 6 05:15:04 localhost ovn_controller[154851]: 2025-12-06T10:15:04Z|00103|binding|INFO|Setting lport 3b69daca-b91a-4923-9795-2e6a02ee3d59 up in Southbound Dec 6 05:15:04 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:04.055 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0e:f5:37 10.100.0.14'], port_security=['fa:16:3e:0e:f5:37 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-876689022', 'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '87dc2ce3-2b16-4764-9803-711c2d12c20f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-47d636a7-c520-4320-aa94-bfb41f418584', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-876689022', 'neutron:project_id': '7897d6398eb64eb29c66df8db792e581', 'neutron:revision_number': '9', 'neutron:security_group_ids': 'bfad329a-0ea3-4b02-8e91-9d15749f8c9b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005548790.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6898c302-0153-460c-9cb1-4c62ebc9ff31, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[], logical_port=e87832d3-ffc3-44e0-9f77-cd2eb6073d62) old=Port_Binding(up=[False], 
additional_chassis=[], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:15:04 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:04.058 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a8:e1:a6 19.80.0.214'], port_security=['fa:16:3e:a8:e1:a6 19.80.0.214'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': ''}, parent_port=['e87832d3-ffc3-44e0-9f77-cd2eb6073d62'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-546955816', 'neutron:cidrs': '19.80.0.214/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-932e7489-8895-41d4-92c6-0d944505e7e6', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-546955816', 'neutron:project_id': '7897d6398eb64eb29c66df8db792e581', 'neutron:revision_number': '3', 'neutron:security_group_ids': 'bfad329a-0ea3-4b02-8e91-9d15749f8c9b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=f9bb405c-aea0-4a81-a300-475f8e1e8050, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=3b69daca-b91a-4923-9795-2e6a02ee3d59) old=Port_Binding(up=[False], additional_chassis=[], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:15:04 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:04.060 160509 INFO neutron.agent.ovn.metadata.agent [-] Port e87832d3-ffc3-44e0-9f77-cd2eb6073d62 in datapath 47d636a7-c520-4320-aa94-bfb41f418584 bound to our chassis#033[00m Dec 6 05:15:04 localhost ovn_metadata_agent[160504]: 2025-12-06 
10:15:04.065 160509 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 47d636a7-c520-4320-aa94-bfb41f418584#033[00m Dec 6 05:15:04 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:04.076 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[388dc684-1a5b-4f19-8015-ecdc7b8c8026]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:15:04 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:04.077 160509 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap47d636a7-c1 in ovnmeta-47d636a7-c520-4320-aa94-bfb41f418584 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m Dec 6 05:15:04 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:04.080 160674 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap47d636a7-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m Dec 6 05:15:04 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:04.080 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[524b8678-7ced-47df-ac30-8655196df868]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:15:04 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:04.083 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[123f2226-5a5d-46c5-8a4e-a74d9cae2366]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:15:04 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:04.103 160720 DEBUG oslo.privsep.daemon [-] privsep: reply[8f5cc2a8-8ee0-4a8b-b08a-5cf39cdf51c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:15:04 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:04.120 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[6d086798-55b1-4514-b2a2-246701cf9bcb]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) 
_call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:15:04 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:04.157 160700 DEBUG oslo.privsep.daemon [-] privsep: reply[00904b77-ae9f-4428-9e88-0d9c975f713a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:15:04 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:04.162 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[4ce041db-b30c-46ec-b4dd-6845d50495e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:15:04 localhost NetworkManager[5973]: [1765016104.1646] manager: (tap47d636a7-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/24) Dec 6 05:15:04 localhost systemd-udevd[312701]: Network interface NamePolicy= disabled on kernel command line. Dec 6 05:15:04 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:04.199 160700 DEBUG oslo.privsep.daemon [-] privsep: reply[7d23e7aa-b7f2-4a80-81e4-feb5213d499b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:15:04 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:04.203 160700 DEBUG oslo.privsep.daemon [-] privsep: reply[a376bb66-b496-4285-a754-b7ed72d3da94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:15:04 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap47d636a7-c1: link becomes ready Dec 6 05:15:04 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap47d636a7-c0: link becomes ready Dec 6 05:15:04 localhost NetworkManager[5973]: [1765016104.2230] device (tap47d636a7-c0): carrier: link connected Dec 6 05:15:04 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:04.230 160700 DEBUG oslo.privsep.daemon [-] privsep: reply[46b7f9f4-3658-43e4-b623-45049db537d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:15:04 localhost ovn_metadata_agent[160504]: 
2025-12-06 10:15:04.251 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[bd629d4d-37d7-4633-b88d-f02d59adc186]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap47d636a7-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:94:11:87'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', 
{'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1252240, 'reachable_time': 42492, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 
'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312722, 'error': None, 'target': 'ovnmeta-47d636a7-c520-4320-aa94-bfb41f418584', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:15:04 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:04.270 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[d24c0559-3966-4145-b612-1608bd642564]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe94:1187'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1252240, 'tstamp': 1252240}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 312723, 'error': None, 'target': 'ovnmeta-47d636a7-c520-4320-aa94-bfb41f418584', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:15:04 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:04.291 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[5c42f402-3216-43c6-a5d9-cc52cdc23348]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap47d636a7-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], 
['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:94:11:87'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 
'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1252240, 'reachable_time': 42492, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 
'header': {'length': 1400, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 312724, 'error': None, 'target': 'ovnmeta-47d636a7-c520-4320-aa94-bfb41f418584', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:15:04 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:04.329 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[7895e3c0-ab53-430c-a14e-3f98205bdd5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:15:04 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:04.398 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[4501a9c4-a64e-4ebe-9177-bb2fbff76977]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:15:04 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:04.400 160509 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap47d636a7-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 6 05:15:04 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:04.401 160509 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Dec 6 05:15:04 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:04.401 160509 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap47d636a7-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 6 05:15:04 localhost kernel: device tap47d636a7-c0 entered promiscuous mode Dec 6 05:15:04 localhost nova_compute[282193]: 2025-12-06 10:15:04.404 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:15:04 localhost nova_compute[282193]: 2025-12-06 10:15:04.407 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:15:04 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:04.409 160509 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap47d636a7-c0, col_values=(('external_ids', {'iface-id': '8839eeed-ff6b-46d9-b40d-610788617728'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 6 05:15:04 localhost ovn_controller[154851]: 2025-12-06T10:15:04Z|00104|binding|INFO|Releasing lport 8839eeed-ff6b-46d9-b40d-610788617728 from this chassis (sb_readonly=0) Dec 6 05:15:04 localhost nova_compute[282193]: 2025-12-06 10:15:04.410 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:15:04 localhost nova_compute[282193]: 2025-12-06 10:15:04.421 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:15:04 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:04.423 160509 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/47d636a7-c520-4320-aa94-bfb41f418584.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/47d636a7-c520-4320-aa94-bfb41f418584.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m Dec 6 05:15:04 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:04.424 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[5bae47bb-7f6f-4e44-824b-9fe29269dda7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:15:04 localhost 
ovn_metadata_agent[160504]: 2025-12-06 10:15:04.425 160509 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = Dec 6 05:15:04 localhost ovn_metadata_agent[160504]: global Dec 6 05:15:04 localhost ovn_metadata_agent[160504]: log /dev/log local0 debug Dec 6 05:15:04 localhost ovn_metadata_agent[160504]: log-tag haproxy-metadata-proxy-47d636a7-c520-4320-aa94-bfb41f418584 Dec 6 05:15:04 localhost ovn_metadata_agent[160504]: user root Dec 6 05:15:04 localhost ovn_metadata_agent[160504]: group root Dec 6 05:15:04 localhost ovn_metadata_agent[160504]: maxconn 1024 Dec 6 05:15:04 localhost ovn_metadata_agent[160504]: pidfile /var/lib/neutron/external/pids/47d636a7-c520-4320-aa94-bfb41f418584.pid.haproxy Dec 6 05:15:04 localhost ovn_metadata_agent[160504]: daemon Dec 6 05:15:04 localhost ovn_metadata_agent[160504]: Dec 6 05:15:04 localhost ovn_metadata_agent[160504]: defaults Dec 6 05:15:04 localhost ovn_metadata_agent[160504]: log global Dec 6 05:15:04 localhost ovn_metadata_agent[160504]: mode http Dec 6 05:15:04 localhost ovn_metadata_agent[160504]: option httplog Dec 6 05:15:04 localhost ovn_metadata_agent[160504]: option dontlognull Dec 6 05:15:04 localhost ovn_metadata_agent[160504]: option http-server-close Dec 6 05:15:04 localhost ovn_metadata_agent[160504]: option forwardfor Dec 6 05:15:04 localhost ovn_metadata_agent[160504]: retries 3 Dec 6 05:15:04 localhost ovn_metadata_agent[160504]: timeout http-request 30s Dec 6 05:15:04 localhost ovn_metadata_agent[160504]: timeout connect 30s Dec 6 05:15:04 localhost ovn_metadata_agent[160504]: timeout client 32s Dec 6 05:15:04 localhost ovn_metadata_agent[160504]: timeout server 32s Dec 6 05:15:04 localhost ovn_metadata_agent[160504]: timeout http-keep-alive 30s Dec 6 05:15:04 localhost ovn_metadata_agent[160504]: Dec 6 05:15:04 localhost ovn_metadata_agent[160504]: Dec 6 05:15:04 localhost ovn_metadata_agent[160504]: listen listener Dec 6 05:15:04 localhost ovn_metadata_agent[160504]: bind 169.254.169.254:80 Dec 6 
05:15:04 localhost ovn_metadata_agent[160504]: server metadata /var/lib/neutron/metadata_proxy Dec 6 05:15:04 localhost ovn_metadata_agent[160504]: http-request add-header X-OVN-Network-ID 47d636a7-c520-4320-aa94-bfb41f418584 Dec 6 05:15:04 localhost ovn_metadata_agent[160504]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m Dec 6 05:15:04 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:04.426 160509 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-47d636a7-c520-4320-aa94-bfb41f418584', 'env', 'PROCESS_TAG=haproxy-47d636a7-c520-4320-aa94-bfb41f418584', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/47d636a7-c520-4320-aa94-bfb41f418584.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m Dec 6 05:15:04 localhost neutron_sriov_agent[256690]: 2025-12-06 10:15:04.670 2 WARNING neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [req-72030b45-f187-4b30-a9d9-59e0042b4b0f req-b4b9ad16-a4fa-4cc7-b4bb-1c52c2b8f48b f52779cce5374723ad2618b5c2916973 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] This port is not SRIOV, skip binding for port e87832d3-ffc3-44e0-9f77-cd2eb6073d62.#033[00m Dec 6 05:15:04 localhost nova_compute[282193]: 2025-12-06 10:15:04.823 282197 INFO nova.compute.manager [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Post operation of migration started#033[00m Dec 6 05:15:04 localhost podman[312757]: Dec 6 05:15:04 localhost podman[312757]: 2025-12-06 10:15:04.856279572 +0000 UTC m=+0.061749154 container create 30f8df0ce350363c4f7f35e7678b3d71ec43583d72e5615def6c2519fa7edac0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, 
name=neutron-haproxy-ovnmeta-47d636a7-c520-4320-aa94-bfb41f418584, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Dec 6 05:15:04 localhost systemd[1]: Started libpod-conmon-30f8df0ce350363c4f7f35e7678b3d71ec43583d72e5615def6c2519fa7edac0.scope. Dec 6 05:15:04 localhost systemd[1]: Started libcrun container. Dec 6 05:15:04 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c58caa5621f3279794f7dc107a894db9a252904b5522821832a2bf549b22bd7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 6 05:15:04 localhost podman[312757]: 2025-12-06 10:15:04.821058803 +0000 UTC m=+0.026528545 image pull quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified Dec 6 05:15:04 localhost podman[312757]: 2025-12-06 10:15:04.922309754 +0000 UTC m=+0.127779346 container init 30f8df0ce350363c4f7f35e7678b3d71ec43583d72e5615def6c2519fa7edac0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-47d636a7-c520-4320-aa94-bfb41f418584, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125) Dec 6 05:15:04 localhost podman[312757]: 2025-12-06 10:15:04.931617886 +0000 UTC m=+0.137087458 container start 30f8df0ce350363c4f7f35e7678b3d71ec43583d72e5615def6c2519fa7edac0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, 
name=neutron-haproxy-ovnmeta-47d636a7-c520-4320-aa94-bfb41f418584, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Dec 6 05:15:04 localhost neutron-haproxy-ovnmeta-47d636a7-c520-4320-aa94-bfb41f418584[312771]: [NOTICE] (312775) : New worker (312777) forked Dec 6 05:15:04 localhost neutron-haproxy-ovnmeta-47d636a7-c520-4320-aa94-bfb41f418584[312771]: [NOTICE] (312775) : Loading success. Dec 6 05:15:04 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:04.975 160509 INFO neutron.agent.ovn.metadata.agent [-] Port 3b69daca-b91a-4923-9795-2e6a02ee3d59 in datapath 932e7489-8895-41d4-92c6-0d944505e7e6 bound to our chassis#033[00m Dec 6 05:15:04 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:04.978 160509 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 932e7489-8895-41d4-92c6-0d944505e7e6#033[00m Dec 6 05:15:04 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:04.985 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[cc15a672-c359-4620-bcf9-77985b2a7beb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:15:04 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:04.986 160509 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap932e7489-81 in ovnmeta-932e7489-8895-41d4-92c6-0d944505e7e6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m Dec 6 05:15:04 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:04.988 160674 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap932e7489-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m Dec 6 05:15:04 
localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:04.988 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[d724c277-2738-4be8-8f1f-752e0f61ff96]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:15:04 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:04.989 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[04d16f33-2159-4859-9d40-f3d87f10ad3b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:15:04 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:04.996 160720 DEBUG oslo.privsep.daemon [-] privsep: reply[56cbbb71-627c-46a2-9a7c-4fba11f33927]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:15:05 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:05.006 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[0c1c62fe-33fd-43af-952b-a212b3028732]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:15:05 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:05.019 160700 DEBUG oslo.privsep.daemon [-] privsep: reply[9e5a30b9-ecdc-4b5b-b268-0c907aeba40c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:15:05 localhost NetworkManager[5973]: [1765016105.0248] manager: (tap932e7489-80): new Veth device (/org/freedesktop/NetworkManager/Devices/25) Dec 6 05:15:05 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:05.023 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[5ffb6f34-2bba-4318-8ff3-9078ef0e5d80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:15:05 localhost systemd-udevd[312716]: Network interface NamePolicy= disabled on kernel command line. 
Dec 6 05:15:05 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:05.051 160700 DEBUG oslo.privsep.daemon [-] privsep: reply[bc850976-62fd-437c-857e-5c57560fd499]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 6 05:15:05 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:05.053 160700 DEBUG oslo.privsep.daemon [-] privsep: reply[baeda593-b222-417e-b859-68412ebe617f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 6 05:15:05 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap932e7489-80: link becomes ready
Dec 6 05:15:05 localhost NetworkManager[5973]: [1765016105.0719] device (tap932e7489-80): carrier: link connected
Dec 6 05:15:05 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:05.076 160700 DEBUG oslo.privsep.daemon [-] privsep: reply[f01cb6b8-0e9b-465b-9d8c-427b63e7346d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 6 05:15:05 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:05.094 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[3b417f9c-63f1-43af-adb3-ea6f6d5bd4be]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap932e7489-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:4b:f3:ac'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 26], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1252325, 'reachable_time': 24039, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312796, 'error': None, 'target': 'ovnmeta-932e7489-8895-41d4-92c6-0d944505e7e6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 6 05:15:05 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:05.108 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[3333ba94-05a8-4529-8897-003387f4eeb7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4b:f3ac'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1252325, 'tstamp': 1252325}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 312797, 'error': None, 'target': 'ovnmeta-932e7489-8895-41d4-92c6-0d944505e7e6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 6 05:15:05 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:05.125 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[f0df1adc-1544-4968-a782-2f26ac813b82]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap932e7489-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:4b:f3:ac'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 176, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 176, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 26], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1252325, 'reachable_time': 24039, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 312798, 'error': None, 'target': 'ovnmeta-932e7489-8895-41d4-92c6-0d944505e7e6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 6 05:15:05 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:05.152 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[25e86e29-1f10-4ae4-8289-1a1865d7e561]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 6 05:15:05 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:05.217 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[89af31fc-e075-44b7-b743-596a10c31e90]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 6 05:15:05 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:05.219 160509 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap932e7489-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 6 05:15:05 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:05.220 160509 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec 6 05:15:05 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:05.221 160509 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap932e7489-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 6 05:15:05 localhost nova_compute[282193]: 2025-12-06 10:15:05.224 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 6 05:15:05 localhost kernel: device tap932e7489-80 entered promiscuous mode
Dec 6 05:15:05 localhost nova_compute[282193]: 2025-12-06 10:15:05.227 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 6 05:15:05 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:05.230 160509 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap932e7489-80, col_values=(('external_ids', {'iface-id': '9a87eef5-19db-4fcf-a021-4f61b153af33'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 6 05:15:05 localhost nova_compute[282193]: 2025-12-06 10:15:05.232 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 6 05:15:05 localhost ovn_controller[154851]: 2025-12-06T10:15:05Z|00105|binding|INFO|Releasing lport 9a87eef5-19db-4fcf-a021-4f61b153af33 from this chassis (sb_readonly=0)
Dec 6 05:15:05 localhost nova_compute[282193]: 2025-12-06 10:15:05.233 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 6 05:15:05 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:05.234 160509 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/932e7489-8895-41d4-92c6-0d944505e7e6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/932e7489-8895-41d4-92c6-0d944505e7e6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec 6 05:15:05 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:05.235 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[86f15ef4-8181-4798-8298-04cfe630b156]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 6 05:15:05 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:05.236 160509 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg =
Dec 6 05:15:05 localhost ovn_metadata_agent[160504]: global
Dec 6 05:15:05 localhost ovn_metadata_agent[160504]: log /dev/log local0 debug
Dec 6 05:15:05 localhost ovn_metadata_agent[160504]: log-tag haproxy-metadata-proxy-932e7489-8895-41d4-92c6-0d944505e7e6
Dec 6 05:15:05 localhost ovn_metadata_agent[160504]: user root
Dec 6 05:15:05 localhost ovn_metadata_agent[160504]: group root
Dec 6 05:15:05 localhost ovn_metadata_agent[160504]: maxconn 1024
Dec 6 05:15:05 localhost ovn_metadata_agent[160504]: pidfile /var/lib/neutron/external/pids/932e7489-8895-41d4-92c6-0d944505e7e6.pid.haproxy
Dec 6 05:15:05 localhost ovn_metadata_agent[160504]: daemon
Dec 6 05:15:05 localhost ovn_metadata_agent[160504]:
Dec 6 05:15:05 localhost ovn_metadata_agent[160504]: defaults
Dec 6 05:15:05 localhost ovn_metadata_agent[160504]: log global
Dec 6 05:15:05 localhost ovn_metadata_agent[160504]: mode http
Dec 6 05:15:05 localhost ovn_metadata_agent[160504]: option httplog
Dec 6 05:15:05 localhost ovn_metadata_agent[160504]: option dontlognull
Dec 6 05:15:05 localhost ovn_metadata_agent[160504]: option http-server-close
Dec 6 05:15:05 localhost ovn_metadata_agent[160504]: option forwardfor
Dec 6 05:15:05 localhost ovn_metadata_agent[160504]: retries 3
Dec 6 05:15:05 localhost ovn_metadata_agent[160504]: timeout http-request 30s
Dec 6 05:15:05 localhost ovn_metadata_agent[160504]: timeout connect 30s
Dec 6 05:15:05 localhost ovn_metadata_agent[160504]: timeout client 32s
Dec 6 05:15:05 localhost ovn_metadata_agent[160504]: timeout server 32s
Dec 6 05:15:05 localhost ovn_metadata_agent[160504]: timeout http-keep-alive 30s
Dec 6 05:15:05 localhost ovn_metadata_agent[160504]:
Dec 6 05:15:05 localhost ovn_metadata_agent[160504]:
Dec 6 05:15:05 localhost ovn_metadata_agent[160504]: listen listener
Dec 6 05:15:05 localhost ovn_metadata_agent[160504]: bind 169.254.169.254:80
Dec 6 05:15:05 localhost ovn_metadata_agent[160504]: server metadata /var/lib/neutron/metadata_proxy
Dec 6 05:15:05 localhost ovn_metadata_agent[160504]: http-request add-header X-OVN-Network-ID 932e7489-8895-41d4-92c6-0d944505e7e6
Dec 6 05:15:05 localhost ovn_metadata_agent[160504]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec 6 05:15:05 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:05.237 160509 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-932e7489-8895-41d4-92c6-0d944505e7e6', 'env', 'PROCESS_TAG=haproxy-932e7489-8895-41d4-92c6-0d944505e7e6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/932e7489-8895-41d4-92c6-0d944505e7e6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec 6 05:15:05 localhost nova_compute[282193]: 2025-12-06 10:15:05.238 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 6 05:15:05 localhost podman[312830]:
Dec 6 05:15:05 localhost podman[312830]: 2025-12-06 10:15:05.700025084 +0000 UTC m=+0.104644466 container create 612bd559f2204770efc500c44b1ad1302ada83654a900876cb5eb3da5fa5d50c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-932e7489-8895-41d4-92c6-0d944505e7e6, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec 6 05:15:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e.
Dec 6 05:15:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094.
Dec 6 05:15:05 localhost systemd[1]: Started libpod-conmon-612bd559f2204770efc500c44b1ad1302ada83654a900876cb5eb3da5fa5d50c.scope.
Dec 6 05:15:05 localhost podman[312830]: 2025-12-06 10:15:05.641525209 +0000 UTC m=+0.046144611 image pull quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 6 05:15:05 localhost systemd[1]: tmp-crun.ebm9JE.mount: Deactivated successfully.
Dec 6 05:15:05 localhost nova_compute[282193]: 2025-12-06 10:15:05.760 282197 DEBUG oslo_concurrency.lockutils [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] Acquiring lock "refresh_cache-87dc2ce3-2b16-4764-9803-711c2d12c20f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec 6 05:15:05 localhost nova_compute[282193]: 2025-12-06 10:15:05.761 282197 DEBUG oslo_concurrency.lockutils [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] Acquired lock "refresh_cache-87dc2ce3-2b16-4764-9803-711c2d12c20f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec 6 05:15:05 localhost nova_compute[282193]: 2025-12-06 10:15:05.761 282197 DEBUG nova.network.neutron [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec 6 05:15:05 localhost systemd[1]: Started libcrun container.
Dec 6 05:15:05 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cd54fc14950974dc4be0ab802b5b1b70a4a6778ec88aa2829505a69bc3f2898/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 6 05:15:05 localhost podman[312830]: 2025-12-06 10:15:05.780647159 +0000 UTC m=+0.185266541 container init 612bd559f2204770efc500c44b1ad1302ada83654a900876cb5eb3da5fa5d50c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-932e7489-8895-41d4-92c6-0d944505e7e6, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Dec 6 05:15:05 localhost neutron-haproxy-ovnmeta-932e7489-8895-41d4-92c6-0d944505e7e6[312846]: [NOTICE] (312868) : New worker (312870) forked
Dec 6 05:15:05 localhost neutron-haproxy-ovnmeta-932e7489-8895-41d4-92c6-0d944505e7e6[312846]: [NOTICE] (312868) : Loading success.
Dec 6 05:15:05 localhost podman[312830]: 2025-12-06 10:15:05.841388181 +0000 UTC m=+0.246007553 container start 612bd559f2204770efc500c44b1ad1302ada83654a900876cb5eb3da5fa5d50c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-932e7489-8895-41d4-92c6-0d944505e7e6, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 6 05:15:05 localhost nova_compute[282193]: 2025-12-06 10:15:05.887 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 6 05:15:05 localhost podman[312844]: 2025-12-06 10:15:05.930845254 +0000 UTC m=+0.183837516 container health_status 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, architecture=x86_64, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, io.openshift.expose-services=)
Dec 6 05:15:05 localhost podman[312845]: 2025-12-06 10:15:05.840945928 +0000 UTC m=+0.095436066 container health_status bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible)
Dec 6 05:15:05 localhost podman[312844]: 2025-12-06 10:15:05.968497647 +0000 UTC m=+0.221489939 container exec_died 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vcs-type=git, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, io.openshift.expose-services=, name=ubi9-minimal, architecture=x86_64, distribution-scope=public, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible)
Dec 6 05:15:05 localhost systemd[1]: 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e.service: Deactivated successfully.
Dec 6 05:15:06 localhost neutron_sriov_agent[256690]: 2025-12-06 10:15:06.020 2 INFO neutron.agent.securitygroups_rpc [None req-32f5fe5b-2f75-4cad-9292-d5acba05dc94 b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] Security group member updated ['4c82b56e-0fc5-4c7f-8922-ceb8236815fd']#033[00m
Dec 6 05:15:06 localhost podman[312845]: 2025-12-06 10:15:06.024588568 +0000 UTC m=+0.279078686 container exec_died bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute)
Dec 6 05:15:06 localhost systemd[1]: bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094.service: Deactivated successfully.
Dec 6 05:15:06 localhost nova_compute[282193]: 2025-12-06 10:15:06.434 282197 DEBUG nova.network.neutron [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Updating instance_info_cache with network_info: [{"id": "e87832d3-ffc3-44e0-9f77-cd2eb6073d62", "address": "fa:16:3e:0e:f5:37", "network": {"id": "47d636a7-c520-4320-aa94-bfb41f418584", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1313845827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "7897d6398eb64eb29c66df8db792e581", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape87832d3-ff", "ovs_interfaceid": "e87832d3-ffc3-44e0-9f77-cd2eb6073d62", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec 6 05:15:06 localhost nova_compute[282193]: 2025-12-06 10:15:06.773 282197 DEBUG oslo_concurrency.lockutils [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] Releasing lock "refresh_cache-87dc2ce3-2b16-4764-9803-711c2d12c20f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec 6 05:15:06 localhost nova_compute[282193]: 2025-12-06 10:15:06.804 282197 DEBUG oslo_concurrency.lockutils [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 6 05:15:06 localhost nova_compute[282193]: 2025-12-06 10:15:06.804 282197 DEBUG oslo_concurrency.lockutils [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 6 05:15:06 localhost nova_compute[282193]: 2025-12-06 10:15:06.805 282197 DEBUG oslo_concurrency.lockutils [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 6 05:15:06 localhost nova_compute[282193]: 2025-12-06 10:15:06.812 282197 INFO nova.virt.libvirt.driver [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Sending announce-self command to QEMU monitor. Attempt 1 of 3#033[00m
Dec 6 05:15:06 localhost journal[203911]: Domain id=3 name='instance-00000007' uuid=87dc2ce3-2b16-4764-9803-711c2d12c20f is tainted: custom-monitor
Dec 6 05:15:06 localhost nova_compute[282193]: 2025-12-06 10:15:06.974 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 6 05:15:07 localhost nova_compute[282193]: 2025-12-06 10:15:07.823 282197 INFO nova.virt.libvirt.driver [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Sending announce-self command to QEMU monitor. Attempt 2 of 3#033[00m
Dec 6 05:15:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:07.915 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'name': 'test', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'np0005548789.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'hostId': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228
Dec 6 05:15:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:07.919 12 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET http://nova-internal.openstack.svc:8774/v2.1/flavors?is_public=None -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}172727ab3d6cbaaa6568bf97b3413b148d1c73ec77830424529759a582bd30ea" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.9/site-packages/keystoneauth1/session.py:519
Dec 6 05:15:08 localhost nova_compute[282193]: 2025-12-06 10:15:08.062 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.116 12 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 954 Content-Type: application/json Date: Sat, 06 Dec 2025 10:15:07 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-a6bde3db-2216-49af-8ec4-1150061ba601 x-openstack-request-id: req-a6bde3db-2216-49af-8ec4-1150061ba601 _http_log_response /usr/lib/python3.9/site-packages/keystoneauth1/session.py:550
Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.117 12 DEBUG novaclient.v2.client [-] RESP BODY: {"flavors": [{"id": "3b9dcd46-fa1b-4714-ba2b-665da2f67af6", "name": "m1.small", "links": [{"rel": "self", "href": "http://nova-internal.openstack.svc:8774/v2.1/flavors/3b9dcd46-fa1b-4714-ba2b-665da2f67af6"}, {"rel": "bookmark", "href": "http://nova-internal.openstack.svc:8774/flavors/3b9dcd46-fa1b-4714-ba2b-665da2f67af6"}]}, {"id": "72bdd1eb-059b-401d-8f8a-ec7c66937f24", "name": "m1.micro", "links": [{"rel": "self", "href": "http://nova-internal.openstack.svc:8774/v2.1/flavors/72bdd1eb-059b-401d-8f8a-ec7c66937f24"}, {"rel": "bookmark", "href": "http://nova-internal.openstack.svc:8774/flavors/72bdd1eb-059b-401d-8f8a-ec7c66937f24"}]}, {"id": "a0a7498e-22eb-495c-a2e3-89ba9e483bf6", "name": "m1.nano", "links": [{"rel": "self", "href": "http://nova-internal.openstack.svc:8774/v2.1/flavors/a0a7498e-22eb-495c-a2e3-89ba9e483bf6"}, {"rel": "bookmark", "href":
"http://nova-internal.openstack.svc:8774/flavors/a0a7498e-22eb-495c-a2e3-89ba9e483bf6"}]}]} _http_log_response /usr/lib/python3.9/site-packages/keystoneauth1/session.py:582 Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.117 12 DEBUG novaclient.v2.client [-] GET call to compute for http://nova-internal.openstack.svc:8774/v2.1/flavors?is_public=None used request id req-a6bde3db-2216-49af-8ec4-1150061ba601 request /usr/lib/python3.9/site-packages/keystoneauth1/session.py:954 Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.119 12 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET http://nova-internal.openstack.svc:8774/v2.1/flavors/a0a7498e-22eb-495c-a2e3-89ba9e483bf6 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}172727ab3d6cbaaa6568bf97b3413b148d1c73ec77830424529759a582bd30ea" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.9/site-packages/keystoneauth1/session.py:519 Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.130 12 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 493 Content-Type: application/json Date: Sat, 06 Dec 2025 10:15:08 GMT Keep-Alive: timeout=5, max=99 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-5a9ae02a-ac3f-43b5-ab54-a8a480dd3fdd x-openstack-request-id: req-5a9ae02a-ac3f-43b5-ab54-a8a480dd3fdd _http_log_response /usr/lib/python3.9/site-packages/keystoneauth1/session.py:550 Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.131 12 DEBUG novaclient.v2.client [-] RESP BODY: {"flavor": {"id": "a0a7498e-22eb-495c-a2e3-89ba9e483bf6", "name": "m1.nano", "ram": 128, "disk": 1, "swap": "", "OS-FLV-EXT-DATA:ephemeral": 0, "OS-FLV-DISABLED:disabled": false, "vcpus": 1, "os-flavor-access:is_public": true, 
"rxtx_factor": 1.0, "links": [{"rel": "self", "href": "http://nova-internal.openstack.svc:8774/v2.1/flavors/a0a7498e-22eb-495c-a2e3-89ba9e483bf6"}, {"rel": "bookmark", "href": "http://nova-internal.openstack.svc:8774/flavors/a0a7498e-22eb-495c-a2e3-89ba9e483bf6"}]}} _http_log_response /usr/lib/python3.9/site-packages/keystoneauth1/session.py:582 Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.131 12 DEBUG novaclient.v2.client [-] GET call to compute for http://nova-internal.openstack.svc:8774/v2.1/flavors/a0a7498e-22eb-495c-a2e3-89ba9e483bf6 used request id req-5a9ae02a-ac3f-43b5-ab54-a8a480dd3fdd request /usr/lib/python3.9/site-packages/keystoneauth1/session.py:954 Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.132 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '87dc2ce3-2b16-4764-9803-711c2d12c20f', 'name': 'tempest-LiveAutoBlockMigrationV225Test-server-1999616987', 'flavor': {'id': 'a0a7498e-22eb-495c-a2e3-89ba9e483bf6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '6a944ab6-8965-4055-b7fc-af6e395005ea'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000007', 'OS-EXT-SRV-ATTR:host': 'np0005548789.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '7897d6398eb64eb29c66df8db792e581', 'user_id': 'ac2e85103fd14829ad4e6df2357da95b', 'hostId': 'b262cbaba97a5f617b7f4005a27ec79e86949cad326c374cbf942dbe', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228 Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.132 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.138 12 DEBUG ceilometer.compute.pollsters [-] 
b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.incoming.bytes volume: 6809 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.142 12 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 87dc2ce3-2b16-4764-9803-711c2d12c20f / tape87832d3-ff inspect_vnics /usr/lib/python3.9/site-packages/ceilometer/compute/virt/libvirt/inspector.py:136 Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.143 12 DEBUG ceilometer.compute.pollsters [-] 87dc2ce3-2b16-4764-9803-711c2d12c20f/network.incoming.bytes volume: 1446 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.146 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '9315d243-e01c-426d-b74d-f3afbde4f29f', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 6809, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:15:08.133230', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 
'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '70fd4942-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12526.382597781, 'message_signature': 'cc946e0847f9e7aadbe1de75754ac0aff23f1828a1fb8812275d1d98e02d84a9'}, {'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 1446, 'user_id': 'ac2e85103fd14829ad4e6df2357da95b', 'user_name': None, 'project_id': '7897d6398eb64eb29c66df8db792e581', 'project_name': None, 'resource_id': 'instance-00000007-87dc2ce3-2b16-4764-9803-711c2d12c20f-tape87832d3-ff', 'timestamp': '2025-12-06T10:15:08.133230', 'resource_metadata': {'display_name': 'tempest-LiveAutoBlockMigrationV225Test-server-1999616987', 'name': 'tape87832d3-ff', 'instance_id': '87dc2ce3-2b16-4764-9803-711c2d12c20f', 'instance_type': 'm1.nano', 'host': 'b262cbaba97a5f617b7f4005a27ec79e86949cad326c374cbf942dbe', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': 'a0a7498e-22eb-495c-a2e3-89ba9e483bf6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6a944ab6-8965-4055-b7fc-af6e395005ea'}, 'image_ref': '6a944ab6-8965-4055-b7fc-af6e395005ea', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'mac': 'fa:16:3e:0e:f5:37', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tape87832d3-ff'}, 'message_id': '70fdfdd8-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12526.388408647, 'message_signature': 
'bc06b5d66562964ce558b9d94417fdd9a320b4e2828184bc55983d41c06f634f'}]}, 'timestamp': '2025-12-06 10:15:08.143628', '_unique_id': 'a15e3d5b659b496880e613ae5d4ba3d0'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.146 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.146 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.146 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.146 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.146 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.146 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.146 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.146 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.146 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.146 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.146 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.146 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.146 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.146 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.146 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.146 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.146 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.146 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.146 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.146 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.146 12 ERROR 
oslo_messaging.notify.messaging Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.146 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.146 12 ERROR oslo_messaging.notify.messaging Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.146 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.146 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.146 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.146 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.146 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.146 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.146 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.146 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:15:08.146 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.146 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.146 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.146 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.146 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.146 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.146 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.146 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.146 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.146 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:15:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.146 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.146 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.146 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.146 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.146 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.146 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.146 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.146 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.146 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.146 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.146 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) 
from exc Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.146 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.146 12 ERROR oslo_messaging.notify.messaging Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.148 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.148 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.outgoing.packets volume: 114 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.149 12 DEBUG ceilometer.compute.pollsters [-] 87dc2ce3-2b16-4764-9803-711c2d12c20f/network.outgoing.packets volume: 53 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.151 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'e235f2d3-3146-49bd-a6ab-64e5a4d3d333', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 114, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:15:08.148723', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '70feddde-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12526.382597781, 'message_signature': 'd5bf00a4d5ed30cf8675e6ba3598597be760a28ae230e2824f496f4cd09a33f1'}, {'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 53, 'user_id': 'ac2e85103fd14829ad4e6df2357da95b', 'user_name': None, 'project_id': '7897d6398eb64eb29c66df8db792e581', 'project_name': None, 'resource_id': 'instance-00000007-87dc2ce3-2b16-4764-9803-711c2d12c20f-tape87832d3-ff', 
'timestamp': '2025-12-06T10:15:08.148723', 'resource_metadata': {'display_name': 'tempest-LiveAutoBlockMigrationV225Test-server-1999616987', 'name': 'tape87832d3-ff', 'instance_id': '87dc2ce3-2b16-4764-9803-711c2d12c20f', 'instance_type': 'm1.nano', 'host': 'b262cbaba97a5f617b7f4005a27ec79e86949cad326c374cbf942dbe', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': 'a0a7498e-22eb-495c-a2e3-89ba9e483bf6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6a944ab6-8965-4055-b7fc-af6e395005ea'}, 'image_ref': '6a944ab6-8965-4055-b7fc-af6e395005ea', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'mac': 'fa:16:3e:0e:f5:37', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tape87832d3-ff'}, 'message_id': '70fef260-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12526.388408647, 'message_signature': '72189fca8c756eb2fb2f8caff23463cac63577c887bd9404ddf2bd40a5e7efff'}]}, 'timestamp': '2025-12-06 10:15:08.149910', '_unique_id': 'cc584b8f71c441fc83f038f3431676a2'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.151 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.151 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.151 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.151 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in 
_ensure_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.151 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.151 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.151 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.151 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.151 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.151 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.151 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.151 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.151 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.151 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.151 12 ERROR 
oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.151 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.151 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.151 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.151 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.151 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.151 12 ERROR oslo_messaging.notify.messaging Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.151 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.151 12 ERROR oslo_messaging.notify.messaging Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.151 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.151 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.151 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:15:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.151 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.151 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.151 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.151 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.151 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.151 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.151 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.151 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.151 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.151 12 ERROR oslo_messaging.notify.messaging self.connection = 
connection_pool.get(retry=retry) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.151 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.151 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.151 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.151 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.151 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.151 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.151 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.151 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.151 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.151 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:15:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.151 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.151 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.151 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.151 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.151 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.151 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.151 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.151 12 ERROR oslo_messaging.notify.messaging Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.152 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.166 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.167 12 DEBUG 
ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.179 12 DEBUG ceilometer.compute.pollsters [-] 87dc2ce3-2b16-4764-9803-711c2d12c20f/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.180 12 DEBUG ceilometer.compute.pollsters [-] 87dc2ce3-2b16-4764-9803-711c2d12c20f/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.182 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '9da37fbe-f106-493b-a85c-1ab57331fd26', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:15:08.152854', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 
'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '7101a2ee-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12526.402242477, 'message_signature': '7e5f5ca5a066a068387ed2e9a09971aca6c53ef25750ee81602694f90e3f1251'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:15:08.152854', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '7101ba18-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12526.402242477, 'message_signature': '4c31d75a9902ea875f718dda05f66370656500a54df419019cb0b342ce8f00d1'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ac2e85103fd14829ad4e6df2357da95b', 'user_name': None, 'project_id': 
'7897d6398eb64eb29c66df8db792e581', 'project_name': None, 'resource_id': '87dc2ce3-2b16-4764-9803-711c2d12c20f-vda', 'timestamp': '2025-12-06T10:15:08.152854', 'resource_metadata': {'display_name': 'tempest-LiveAutoBlockMigrationV225Test-server-1999616987', 'name': 'instance-00000007', 'instance_id': '87dc2ce3-2b16-4764-9803-711c2d12c20f', 'instance_type': 'm1.nano', 'host': 'b262cbaba97a5f617b7f4005a27ec79e86949cad326c374cbf942dbe', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': 'a0a7498e-22eb-495c-a2e3-89ba9e483bf6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6a944ab6-8965-4055-b7fc-af6e395005ea'}, 'image_ref': '6a944ab6-8965-4055-b7fc-af6e395005ea', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'vda'}, 'message_id': '7103a59e-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12526.417401107, 'message_signature': 'c5d3c4a422d1f217ca7601ba057f34108977c712f7d8877320db312933b75da3'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 485376, 'user_id': 'ac2e85103fd14829ad4e6df2357da95b', 'user_name': None, 'project_id': '7897d6398eb64eb29c66df8db792e581', 'project_name': None, 'resource_id': '87dc2ce3-2b16-4764-9803-711c2d12c20f-sda', 'timestamp': '2025-12-06T10:15:08.152854', 'resource_metadata': {'display_name': 'tempest-LiveAutoBlockMigrationV225Test-server-1999616987', 'name': 'instance-00000007', 'instance_id': '87dc2ce3-2b16-4764-9803-711c2d12c20f', 'instance_type': 'm1.nano', 'host': 'b262cbaba97a5f617b7f4005a27ec79e86949cad326c374cbf942dbe', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': 'a0a7498e-22eb-495c-a2e3-89ba9e483bf6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 
'state': 'running', 'task_state': '', 'image': {'id': '6a944ab6-8965-4055-b7fc-af6e395005ea'}, 'image_ref': '6a944ab6-8965-4055-b7fc-af6e395005ea', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'sda'}, 'message_id': '7103be80-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12526.417401107, 'message_signature': '13801bd793946732d62b554f3b4fce1dd4dc00c71fc4a63b018f864c7699f766'}]}, 'timestamp': '2025-12-06 10:15:08.181284', '_unique_id': 'df59266bc76d41c69ee43c7bedeace4e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.182 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.182 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.182 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.182 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.182 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.182 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.182 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.182 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.182 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.182 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.182 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.182 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.182 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.182 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.182 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.182 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.182 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.182 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 
6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.182 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.182 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.182 12 ERROR oslo_messaging.notify.messaging Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.182 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.182 12 ERROR oslo_messaging.notify.messaging Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.182 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.182 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.182 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.182 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.182 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.182 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:15:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.182 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.182 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.182 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.182 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.182 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.182 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.182 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.182 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.182 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.182 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:15:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.182 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.182 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.182 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.182 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.182 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.182 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.182 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.182 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.182 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.182 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.182 12 ERROR oslo_messaging.notify.messaging 
self.gen.throw(typ, value, traceback) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.182 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.182 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.182 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.182 12 ERROR oslo_messaging.notify.messaging Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.184 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.201 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/memory.usage volume: 51.80859375 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.219 12 DEBUG ceilometer.compute.pollsters [-] 87dc2ce3-2b16-4764-9803-711c2d12c20f/memory.usage volume: 40.44921875 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.220 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'cfb78e92-41d0-41bf-8819-0580553dbf3f', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 51.80859375, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'timestamp': '2025-12-06T10:15:08.184918', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': '7106ecae-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12526.450827631, 'message_signature': '4203aeb816eecdfafc5ff9db501d5537016e3c73b7784d7c7e32efdce78764b5'}, {'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 40.44921875, 'user_id': 'ac2e85103fd14829ad4e6df2357da95b', 'user_name': None, 'project_id': '7897d6398eb64eb29c66df8db792e581', 'project_name': None, 'resource_id': '87dc2ce3-2b16-4764-9803-711c2d12c20f', 'timestamp': '2025-12-06T10:15:08.184918', 'resource_metadata': {'display_name': 'tempest-LiveAutoBlockMigrationV225Test-server-1999616987', 'name': 'instance-00000007', 'instance_id': 
'87dc2ce3-2b16-4764-9803-711c2d12c20f', 'instance_type': 'm1.nano', 'host': 'b262cbaba97a5f617b7f4005a27ec79e86949cad326c374cbf942dbe', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': 'a0a7498e-22eb-495c-a2e3-89ba9e483bf6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6a944ab6-8965-4055-b7fc-af6e395005ea'}, 'image_ref': '6a944ab6-8965-4055-b7fc-af6e395005ea', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1}, 'message_id': '7109a480-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12526.468585349, 'message_signature': '63a070114e5f6f65398563e2f38688354b3967c8632da928dce05714e5ddedd9'}]}, 'timestamp': '2025-12-06 10:15:08.219898', '_unique_id': '7b4d6e90b6364e3f81feef1c9428e12b'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.220 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.220 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.220 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.220 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.220 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.220 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.220 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.220 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.220 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.220 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.220 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.220 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.220 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.220 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.220 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.220 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:15:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.220 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.220 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.220 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.220 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.220 12 ERROR oslo_messaging.notify.messaging Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.220 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.220 12 ERROR oslo_messaging.notify.messaging Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.220 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.220 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.220 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.220 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.220 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.220 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.220 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.220 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.220 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.220 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.220 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.220 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.220 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.220 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:15:08.220 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.220 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.220 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.220 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.220 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.220 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.220 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.220 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.220 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.220 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.220 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.220 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.220 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.220 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.220 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.220 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.220 12 ERROR oslo_messaging.notify.messaging Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.221 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.221 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.222 12 DEBUG ceilometer.compute.pollsters [-] 87dc2ce3-2b16-4764-9803-711c2d12c20f/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.222 12 ERROR oslo_messaging.notify.messaging [-] 
Could not send notification to notifications. Payload={'message_id': '3323cd67-84ba-47df-9456-1a8f4fc87307', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:15:08.221809', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '7109fdc2-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12526.382597781, 'message_signature': 'a9781eb69a6084327ad7229bec70b4961616d7e7e49811fc94fcba72a2d6b183'}, {'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': 'ac2e85103fd14829ad4e6df2357da95b', 'user_name': None, 'project_id': '7897d6398eb64eb29c66df8db792e581', 'project_name': None, 'resource_id': 
'instance-00000007-87dc2ce3-2b16-4764-9803-711c2d12c20f-tape87832d3-ff', 'timestamp': '2025-12-06T10:15:08.221809', 'resource_metadata': {'display_name': 'tempest-LiveAutoBlockMigrationV225Test-server-1999616987', 'name': 'tape87832d3-ff', 'instance_id': '87dc2ce3-2b16-4764-9803-711c2d12c20f', 'instance_type': 'm1.nano', 'host': 'b262cbaba97a5f617b7f4005a27ec79e86949cad326c374cbf942dbe', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': 'a0a7498e-22eb-495c-a2e3-89ba9e483bf6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6a944ab6-8965-4055-b7fc-af6e395005ea'}, 'image_ref': '6a944ab6-8965-4055-b7fc-af6e395005ea', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'mac': 'fa:16:3e:0e:f5:37', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tape87832d3-ff'}, 'message_id': '710a0704-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12526.388408647, 'message_signature': 'eea6561975db61dbe0d3c6b8bfed1c659d80d2b1b9870b90772b48bcc9672c57'}]}, 'timestamp': '2025-12-06 10:15:08.222335', '_unique_id': '82de2371e5e94774a52b9e169d11f23f'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.222 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.222 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.222 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.222 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.222 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.222 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.222 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.222 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:15:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.222 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.222 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.222 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.222 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.222 12 ERROR oslo_messaging.notify.messaging Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.222 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.222 12 ERROR oslo_messaging.notify.messaging Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.222 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.222 12 ERROR oslo_messaging.notify.messaging 
self.transport._send_notification(target, ctxt, message, Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.222 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.222 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.222 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.222 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.222 
12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.222 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.222 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.222 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.222 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.222 12 ERROR oslo_messaging.notify.messaging 
self._ensure_connection(*args, **kwargs) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.222 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.222 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.222 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.222 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.222 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.222 12 ERROR oslo_messaging.notify.messaging Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.223 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.iops in the context of pollsters Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.223 12 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for PerDeviceDiskIOPSPollster get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:163 Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 
10:15:08.223 12 ERROR ceilometer.polling.manager [-] Prevent pollster disk.device.iops from polling [] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [] Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.223 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.224 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.224 12 DEBUG ceilometer.compute.pollsters [-] 87dc2ce3-2b16-4764-9803-711c2d12c20f/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.224 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '1d8e2951-d645-4f62-9d92-d7a765341cee', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:15:08.224016', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '710a5204-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12526.382597781, 'message_signature': '469eff839e74cfd84ed2a3d0038cb2882dfb49b1e260818cd0cda433600052bc'}, {'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'ac2e85103fd14829ad4e6df2357da95b', 'user_name': None, 'project_id': '7897d6398eb64eb29c66df8db792e581', 'project_name': None, 'resource_id': 
'instance-00000007-87dc2ce3-2b16-4764-9803-711c2d12c20f-tape87832d3-ff', 'timestamp': '2025-12-06T10:15:08.224016', 'resource_metadata': {'display_name': 'tempest-LiveAutoBlockMigrationV225Test-server-1999616987', 'name': 'tape87832d3-ff', 'instance_id': '87dc2ce3-2b16-4764-9803-711c2d12c20f', 'instance_type': 'm1.nano', 'host': 'b262cbaba97a5f617b7f4005a27ec79e86949cad326c374cbf942dbe', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': 'a0a7498e-22eb-495c-a2e3-89ba9e483bf6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6a944ab6-8965-4055-b7fc-af6e395005ea'}, 'image_ref': '6a944ab6-8965-4055-b7fc-af6e395005ea', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'mac': 'fa:16:3e:0e:f5:37', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tape87832d3-ff'}, 'message_id': '710a5b28-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12526.388408647, 'message_signature': 'ae63fffd87a3b18156ab8b0ae2fd35e4ee7c54841d4b501efb65d2ecbf7160dc'}]}, 'timestamp': '2025-12-06 10:15:08.224490', '_unique_id': '9430ce47d8f547d09b2c3bc932942cd7'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.224 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.224 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.224 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.224 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.224 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.224 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.224 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.224 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:15:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.224 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.224 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.224 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.224 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.224 12 ERROR oslo_messaging.notify.messaging Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.224 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.224 12 ERROR oslo_messaging.notify.messaging Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.224 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.224 12 ERROR oslo_messaging.notify.messaging 
self.transport._send_notification(target, ctxt, message, Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.224 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.224 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.224 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.224 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.224 
12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.224 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.224 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.224 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.224 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.224 12 ERROR oslo_messaging.notify.messaging 
self._ensure_connection(*args, **kwargs) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.224 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.224 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.224 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.224 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.224 12 ERROR oslo_messaging.notify.messaging Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.225 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.225 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:15:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.226 12 DEBUG ceilometer.compute.pollsters [-] 87dc2ce3-2b16-4764-9803-711c2d12c20f/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.226 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '7bb42570-80af-470b-b641-78fb3b1f5d67', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:15:08.225763', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '710a971e-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12526.382597781, 
'message_signature': 'd530f48e146c40d2c712d9138705b3129d75387b03c2997a1bbf4ff4d650de02'}, {'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'ac2e85103fd14829ad4e6df2357da95b', 'user_name': None, 'project_id': '7897d6398eb64eb29c66df8db792e581', 'project_name': None, 'resource_id': 'instance-00000007-87dc2ce3-2b16-4764-9803-711c2d12c20f-tape87832d3-ff', 'timestamp': '2025-12-06T10:15:08.225763', 'resource_metadata': {'display_name': 'tempest-LiveAutoBlockMigrationV225Test-server-1999616987', 'name': 'tape87832d3-ff', 'instance_id': '87dc2ce3-2b16-4764-9803-711c2d12c20f', 'instance_type': 'm1.nano', 'host': 'b262cbaba97a5f617b7f4005a27ec79e86949cad326c374cbf942dbe', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': 'a0a7498e-22eb-495c-a2e3-89ba9e483bf6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6a944ab6-8965-4055-b7fc-af6e395005ea'}, 'image_ref': '6a944ab6-8965-4055-b7fc-af6e395005ea', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'mac': 'fa:16:3e:0e:f5:37', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tape87832d3-ff'}, 'message_id': '710aa04c-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12526.388408647, 'message_signature': '7f8acd60ee5ca43e18b9f312be60a864a5739465c5570db82251315e7a2c2103'}]}, 'timestamp': '2025-12-06 10:15:08.226260', '_unique_id': '9104b59ca9614e038477d01f350e6673'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.226 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.226 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.226 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.226 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.226 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.226 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.226 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in 
establish_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.226 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.226 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.226 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.226 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.226 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.226 12 ERROR oslo_messaging.notify.messaging Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.226 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.226 12 ERROR oslo_messaging.notify.messaging Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.226 12 ERROR oslo_messaging.notify.messaging 
Traceback (most recent call last): Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.226 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.226 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.226 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.226 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.226 12 ERROR 
oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.226 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.226 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.226 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.226 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.226 12 ERROR 
oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.226 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.226 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.226 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.226 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.226 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.226 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.226 12 ERROR oslo_messaging.notify.messaging Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.227 12 INFO ceilometer.polling.manager [-] Polling 
pollster disk.device.read.requests in the context of pollsters Dec 6 05:15:08 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.247 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.requests volume: 1272 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.247 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.274 12 DEBUG ceilometer.compute.pollsters [-] 87dc2ce3-2b16-4764-9803-711c2d12c20f/disk.device.read.requests volume: 182 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.275 12 DEBUG ceilometer.compute.pollsters [-] 87dc2ce3-2b16-4764-9803-711c2d12c20f/disk.device.read.requests volume: 69 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.277 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'afe6664f-0f8a-4374-a6d7-d1a96df987e4', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1272, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:15:08.227498', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '710dd9e2-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12526.47683263, 'message_signature': '410ff09559e30ab159230594068f678c35156e39d12d353a1d727c0e24e4626b'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 124, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:15:08.227498', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '710de392-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12526.47683263, 'message_signature': '01eaa1b51e52a836e96e2f868be6af62463a1c364fd233ae1828660476670f26'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 182, 'user_id': 'ac2e85103fd14829ad4e6df2357da95b', 'user_name': None, 'project_id': '7897d6398eb64eb29c66df8db792e581', 'project_name': None, 'resource_id': '87dc2ce3-2b16-4764-9803-711c2d12c20f-vda', 'timestamp': '2025-12-06T10:15:08.227498', 'resource_metadata': {'display_name': 'tempest-LiveAutoBlockMigrationV225Test-server-1999616987', 'name': 'instance-00000007', 'instance_id': '87dc2ce3-2b16-4764-9803-711c2d12c20f', 'instance_type': 'm1.nano', 'host': 'b262cbaba97a5f617b7f4005a27ec79e86949cad326c374cbf942dbe', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': 'a0a7498e-22eb-495c-a2e3-89ba9e483bf6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6a944ab6-8965-4055-b7fc-af6e395005ea'}, 'image_ref': '6a944ab6-8965-4055-b7fc-af6e395005ea', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 
'disk_name': 'vda'}, 'message_id': '711221f0-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12526.49694731, 'message_signature': '89b2f6a9c2717c87a3c53a781d6a4b08437c0c79c88b2b5cb4fb8168f70ac80e'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 69, 'user_id': 'ac2e85103fd14829ad4e6df2357da95b', 'user_name': None, 'project_id': '7897d6398eb64eb29c66df8db792e581', 'project_name': None, 'resource_id': '87dc2ce3-2b16-4764-9803-711c2d12c20f-sda', 'timestamp': '2025-12-06T10:15:08.227498', 'resource_metadata': {'display_name': 'tempest-LiveAutoBlockMigrationV225Test-server-1999616987', 'name': 'instance-00000007', 'instance_id': '87dc2ce3-2b16-4764-9803-711c2d12c20f', 'instance_type': 'm1.nano', 'host': 'b262cbaba97a5f617b7f4005a27ec79e86949cad326c374cbf942dbe', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': 'a0a7498e-22eb-495c-a2e3-89ba9e483bf6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6a944ab6-8965-4055-b7fc-af6e395005ea'}, 'image_ref': '6a944ab6-8965-4055-b7fc-af6e395005ea', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'sda'}, 'message_id': '711239e2-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12526.49694731, 'message_signature': 'a74bea9b9ca9298a26a308a6095767a4a2610b92786fc27fd4b72c155b87e912'}]}, 'timestamp': '2025-12-06 10:15:08.276191', '_unique_id': '0a68e23a1c98467a82a2340f6fdc615f'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.277 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.277 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.277 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.277 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.277 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.277 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.277 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.277 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.277 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.277 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.277 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.277 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in 
establish_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.277 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.277 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.277 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.277 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.277 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.277 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.277 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.277 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.277 12 ERROR oslo_messaging.notify.messaging Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.277 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.277 12 ERROR oslo_messaging.notify.messaging Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.277 12 ERROR oslo_messaging.notify.messaging 
Traceback (most recent call last): Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.277 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.277 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.277 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.277 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.277 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.277 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.277 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.277 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.277 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.277 12 ERROR 
oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.277 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.277 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.277 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.277 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.277 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.277 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.277 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.277 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.277 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.277 12 ERROR 
oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.277 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.277 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.277 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.277 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.277 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.277 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.277 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.277 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.277 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.277 12 ERROR oslo_messaging.notify.messaging Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.279 12 INFO ceilometer.polling.manager [-] Polling 
pollster network.incoming.packets.drop in the context of pollsters Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.279 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.280 12 DEBUG ceilometer.compute.pollsters [-] 87dc2ce3-2b16-4764-9803-711c2d12c20f/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.282 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '4758e61b-e52e-4c9c-aa46-0f5b0d0af980', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:15:08.279693', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 
'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '7112dc62-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12526.382597781, 'message_signature': '66fc6831dff658a2f0412640718292509fd041f99e1dec06dd18cf1924080ef1'}, {'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'ac2e85103fd14829ad4e6df2357da95b', 'user_name': None, 'project_id': '7897d6398eb64eb29c66df8db792e581', 'project_name': None, 'resource_id': 'instance-00000007-87dc2ce3-2b16-4764-9803-711c2d12c20f-tape87832d3-ff', 'timestamp': '2025-12-06T10:15:08.279693', 'resource_metadata': {'display_name': 'tempest-LiveAutoBlockMigrationV225Test-server-1999616987', 'name': 'tape87832d3-ff', 'instance_id': '87dc2ce3-2b16-4764-9803-711c2d12c20f', 'instance_type': 'm1.nano', 'host': 'b262cbaba97a5f617b7f4005a27ec79e86949cad326c374cbf942dbe', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': 'a0a7498e-22eb-495c-a2e3-89ba9e483bf6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6a944ab6-8965-4055-b7fc-af6e395005ea'}, 'image_ref': '6a944ab6-8965-4055-b7fc-af6e395005ea', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'mac': 'fa:16:3e:0e:f5:37', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tape87832d3-ff'}, 'message_id': '7112f1ac-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12526.388408647, 'message_signature': 'd6dd9e29e9e663bfa4ce6bed0e47db2a6285d37d8e4394d86093b4d683dd7654'}]}, 
'timestamp': '2025-12-06 10:15:08.280996', '_unique_id': 'a1fd7b67ff864908892b841d3a567a3f'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.282 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.282 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.282 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.282 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.282 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.282 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.282 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.282 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.282 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.282 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in 
_establish_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.282 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.282 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.282 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.282 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.282 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.282 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.282 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.282 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.282 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.282 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.282 12 ERROR oslo_messaging.notify.messaging Dec 6 05:15:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.282 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.282 12 ERROR oslo_messaging.notify.messaging Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.282 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.282 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.282 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.282 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.282 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.282 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.282 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.282 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.282 12 ERROR 
oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.282 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.282 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.282 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.282 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.282 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.282 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.282 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.282 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.282 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.282 
12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.282 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.282 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.282 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.282 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.282 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.282 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.282 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.282 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.282 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.282 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:15:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.282 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.282 12 ERROR oslo_messaging.notify.messaging Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.284 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.latency in the context of pollsters Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.284 12 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for PerDeviceDiskLatencyPollster get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:163 Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.284 12 ERROR ceilometer.polling.manager [-] Prevent pollster disk.device.latency from polling [] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [] Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.284 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.285 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.requests volume: 47 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.285 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.286 12 DEBUG ceilometer.compute.pollsters [-] 87dc2ce3-2b16-4764-9803-711c2d12c20f/disk.device.write.requests 
volume: 74 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.286 12 DEBUG ceilometer.compute.pollsters [-] 87dc2ce3-2b16-4764-9803-711c2d12c20f/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.288 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '81341788-2953-4375-b5c5-d19ab094ef73', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 47, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:15:08.285114', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '7113aab6-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12526.47683263, 'message_signature': 
'c0ff32f61af8991c423deec5da540f3cc2b446eba59cff1307751a99cb631e16'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:15:08.285114', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '7113c064-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12526.47683263, 'message_signature': '050be5b277164fda021c81fac924fd081504e3286a4ea8355209cb31de79c491'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 74, 'user_id': 'ac2e85103fd14829ad4e6df2357da95b', 'user_name': None, 'project_id': '7897d6398eb64eb29c66df8db792e581', 'project_name': None, 'resource_id': '87dc2ce3-2b16-4764-9803-711c2d12c20f-vda', 'timestamp': '2025-12-06T10:15:08.285114', 'resource_metadata': {'display_name': 'tempest-LiveAutoBlockMigrationV225Test-server-1999616987', 'name': 'instance-00000007', 'instance_id': '87dc2ce3-2b16-4764-9803-711c2d12c20f', 'instance_type': 'm1.nano', 'host': 
'b262cbaba97a5f617b7f4005a27ec79e86949cad326c374cbf942dbe', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': 'a0a7498e-22eb-495c-a2e3-89ba9e483bf6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6a944ab6-8965-4055-b7fc-af6e395005ea'}, 'image_ref': '6a944ab6-8965-4055-b7fc-af6e395005ea', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'vda'}, 'message_id': '7113d432-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12526.49694731, 'message_signature': 'd0d0d556247a60dab6e5d9fec9bdeec5185d00c8e81caaedb6c667473c524534'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 0, 'user_id': 'ac2e85103fd14829ad4e6df2357da95b', 'user_name': None, 'project_id': '7897d6398eb64eb29c66df8db792e581', 'project_name': None, 'resource_id': '87dc2ce3-2b16-4764-9803-711c2d12c20f-sda', 'timestamp': '2025-12-06T10:15:08.285114', 'resource_metadata': {'display_name': 'tempest-LiveAutoBlockMigrationV225Test-server-1999616987', 'name': 'instance-00000007', 'instance_id': '87dc2ce3-2b16-4764-9803-711c2d12c20f', 'instance_type': 'm1.nano', 'host': 'b262cbaba97a5f617b7f4005a27ec79e86949cad326c374cbf942dbe', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': 'a0a7498e-22eb-495c-a2e3-89ba9e483bf6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6a944ab6-8965-4055-b7fc-af6e395005ea'}, 'image_ref': '6a944ab6-8965-4055-b7fc-af6e395005ea', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'sda'}, 'message_id': '7113e8fa-d28c-11f0-aaf2-fa163e118844', 
'monotonic_time': 12526.49694731, 'message_signature': '78af484019149e84fa5ee11bf9387ec573dcf5365aa11ad2bbb71d04440279db'}]}, 'timestamp': '2025-12-06 10:15:08.287215', '_unique_id': 'e59a3ed4628146c6934742881cd47282'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.288 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.288 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.288 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.288 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.288 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.288 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.288 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.288 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.288 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 
10:15:08.288 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.288 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.288 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.288 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.288 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.288 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.288 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.288 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.288 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.288 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.288 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:15:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.288 12 ERROR oslo_messaging.notify.messaging Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.288 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.288 12 ERROR oslo_messaging.notify.messaging Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.288 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.288 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.288 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.288 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.288 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.288 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.288 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.288 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, 
in _send Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.288 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.288 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.288 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.288 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.288 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.288 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.288 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.288 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.288 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.288 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 
826, in __init__ Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.288 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.288 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.288 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.288 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.288 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.288 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.288 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.288 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.288 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.288 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.288 12 ERROR 
oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.288 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.288 12 ERROR oslo_messaging.notify.messaging Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.289 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.290 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.bytes volume: 35560448 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.290 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.bytes volume: 2154496 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.291 12 DEBUG ceilometer.compute.pollsters [-] 87dc2ce3-2b16-4764-9803-711c2d12c20f/disk.device.read.bytes volume: 3067392 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.291 12 DEBUG ceilometer.compute.pollsters [-] 87dc2ce3-2b16-4764-9803-711c2d12c20f/disk.device.read.bytes volume: 159824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.293 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '044624b5-a6b7-4fb6-bd06-8f564bc6ca83', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 35560448, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:15:08.290153', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '711473ec-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12526.47683263, 'message_signature': '397cbba4be60d3ba5e4ffd3edea0a8b9f510a1a0a3ffd0d0753bbc03059327d5'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 2154496, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:15:08.290153', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '7114881e-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12526.47683263, 'message_signature': '05f7ff1fbda52417a322c45ad97bef56cb78c9d849b0a0665c59c9be2f06bc36'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 3067392, 'user_id': 'ac2e85103fd14829ad4e6df2357da95b', 'user_name': None, 'project_id': '7897d6398eb64eb29c66df8db792e581', 'project_name': None, 'resource_id': '87dc2ce3-2b16-4764-9803-711c2d12c20f-vda', 'timestamp': '2025-12-06T10:15:08.290153', 'resource_metadata': {'display_name': 'tempest-LiveAutoBlockMigrationV225Test-server-1999616987', 'name': 'instance-00000007', 'instance_id': '87dc2ce3-2b16-4764-9803-711c2d12c20f', 'instance_type': 'm1.nano', 'host': 'b262cbaba97a5f617b7f4005a27ec79e86949cad326c374cbf942dbe', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': 'a0a7498e-22eb-495c-a2e3-89ba9e483bf6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6a944ab6-8965-4055-b7fc-af6e395005ea'}, 'image_ref': '6a944ab6-8965-4055-b7fc-af6e395005ea', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 
'disk_name': 'vda'}, 'message_id': '71149a20-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12526.49694731, 'message_signature': 'aa77ca9e9dddacfd291193814f0724f000c7b949e9ba8735a5214a04afb989c6'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 159824, 'user_id': 'ac2e85103fd14829ad4e6df2357da95b', 'user_name': None, 'project_id': '7897d6398eb64eb29c66df8db792e581', 'project_name': None, 'resource_id': '87dc2ce3-2b16-4764-9803-711c2d12c20f-sda', 'timestamp': '2025-12-06T10:15:08.290153', 'resource_metadata': {'display_name': 'tempest-LiveAutoBlockMigrationV225Test-server-1999616987', 'name': 'instance-00000007', 'instance_id': '87dc2ce3-2b16-4764-9803-711c2d12c20f', 'instance_type': 'm1.nano', 'host': 'b262cbaba97a5f617b7f4005a27ec79e86949cad326c374cbf942dbe', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': 'a0a7498e-22eb-495c-a2e3-89ba9e483bf6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6a944ab6-8965-4055-b7fc-af6e395005ea'}, 'image_ref': '6a944ab6-8965-4055-b7fc-af6e395005ea', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'sda'}, 'message_id': '7114ad4e-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12526.49694731, 'message_signature': 'e49a87f60bdb3401ac344536440e39a8ac81b49ee42f64e972aa224ff4e642cd'}]}, 'timestamp': '2025-12-06 10:15:08.292233', '_unique_id': '2a36dd7e05644eaba67efaad6770f1cf'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.293 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.293 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.293 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.293 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.293 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.293 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.293 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.293 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.293 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.293 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.293 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.293 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:15:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.293 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.293 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.293 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.293 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.293 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.293 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.293 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.293 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.293 12 ERROR oslo_messaging.notify.messaging Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.293 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.293 12 ERROR oslo_messaging.notify.messaging Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.293 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:15:08 
localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.293 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.293 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.293 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.293 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.293 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.293 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.293 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.293 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.293 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.293 12 ERROR oslo_messaging.notify.messaging return 
rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.293 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.293 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.293 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.293 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.293 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.293 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.293 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.293 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.293 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.293 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( 
Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.293 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.293 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.293 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.293 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.293 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.293 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.293 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.293 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.293 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.293 12 ERROR oslo_messaging.notify.messaging Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.295 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters 
Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.295 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.296 12 DEBUG ceilometer.compute.pollsters [-] 87dc2ce3-2b16-4764-9803-711c2d12c20f/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.298 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'eb5f10c6-2d12-47b8-93db-70735501d786', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:15:08.295426', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 
'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '711540ba-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12526.382597781, 'message_signature': '607600144dd9ea380036ad1a64643b2df9d041bbf8bd9f96e97ebd34684f65fb'}, {'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'ac2e85103fd14829ad4e6df2357da95b', 'user_name': None, 'project_id': '7897d6398eb64eb29c66df8db792e581', 'project_name': None, 'resource_id': 'instance-00000007-87dc2ce3-2b16-4764-9803-711c2d12c20f-tape87832d3-ff', 'timestamp': '2025-12-06T10:15:08.295426', 'resource_metadata': {'display_name': 'tempest-LiveAutoBlockMigrationV225Test-server-1999616987', 'name': 'tape87832d3-ff', 'instance_id': '87dc2ce3-2b16-4764-9803-711c2d12c20f', 'instance_type': 'm1.nano', 'host': 'b262cbaba97a5f617b7f4005a27ec79e86949cad326c374cbf942dbe', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': 'a0a7498e-22eb-495c-a2e3-89ba9e483bf6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6a944ab6-8965-4055-b7fc-af6e395005ea'}, 'image_ref': '6a944ab6-8965-4055-b7fc-af6e395005ea', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'mac': 'fa:16:3e:0e:f5:37', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tape87832d3-ff'}, 'message_id': '7115547e-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12526.388408647, 'message_signature': '1e06d4a8ddc77acbe232ff87c9e25a3e803986140a242808609822d1e9ed4226'}]}, 'timestamp': '2025-12-06 10:15:08.296538', '_unique_id': '2ca9f37365dd4be78125e1ae553be0a7'}: 
kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.298 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.298 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.298 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.298 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.298 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.298 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.298 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.298 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.298 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.298 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 
10:15:08.298 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.298 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.298 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.298 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.298 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.298 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.298 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.298 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.298 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.298 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.298 12 ERROR oslo_messaging.notify.messaging Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.298 12 ERROR oslo_messaging.notify.messaging The above 
exception was the direct cause of the following exception: Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.298 12 ERROR oslo_messaging.notify.messaging Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.298 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.298 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.298 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.298 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.298 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.298 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.298 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.298 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.298 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:15:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.298 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.298 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.298 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.298 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.298 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.298 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.298 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.298 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.298 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.298 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:15:08.298 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.298 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.298 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.298 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.298 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.298 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.298 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.298 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.298 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.298 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.298 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] 
Connection refused Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.298 12 ERROR oslo_messaging.notify.messaging Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.299 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.299 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.300 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.300 12 DEBUG ceilometer.compute.pollsters [-] 87dc2ce3-2b16-4764-9803-711c2d12c20f/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.301 12 DEBUG ceilometer.compute.pollsters [-] 87dc2ce3-2b16-4764-9803-711c2d12c20f/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.302 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '45f3cd03-8231-46e7-a5ec-fec9395ee5e8', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:15:08.299798', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '7115e68c-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12526.402242477, 'message_signature': 'edc7b8e7c95effb208a8a372a773109f01441f7f14ba2ce1b917e6a36bd30b73'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:15:08.299798', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 
'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '7115f6b8-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12526.402242477, 'message_signature': 'ca0fb2b43c0380292f42d2ec0e2b9fd63cf7d6545cf426385c1b750934cf457b'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ac2e85103fd14829ad4e6df2357da95b', 'user_name': None, 'project_id': '7897d6398eb64eb29c66df8db792e581', 'project_name': None, 'resource_id': '87dc2ce3-2b16-4764-9803-711c2d12c20f-vda', 'timestamp': '2025-12-06T10:15:08.299798', 'resource_metadata': {'display_name': 'tempest-LiveAutoBlockMigrationV225Test-server-1999616987', 'name': 'instance-00000007', 'instance_id': '87dc2ce3-2b16-4764-9803-711c2d12c20f', 'instance_type': 'm1.nano', 'host': 'b262cbaba97a5f617b7f4005a27ec79e86949cad326c374cbf942dbe', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': 'a0a7498e-22eb-495c-a2e3-89ba9e483bf6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6a944ab6-8965-4055-b7fc-af6e395005ea'}, 'image_ref': '6a944ab6-8965-4055-b7fc-af6e395005ea', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'vda'}, 'message_id': 
'71160748-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12526.417401107, 'message_signature': 'e75ae73c50f5a5d7ea9ec88138be42f854645815df948605e47a35b78eba1c59'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 485376, 'user_id': 'ac2e85103fd14829ad4e6df2357da95b', 'user_name': None, 'project_id': '7897d6398eb64eb29c66df8db792e581', 'project_name': None, 'resource_id': '87dc2ce3-2b16-4764-9803-711c2d12c20f-sda', 'timestamp': '2025-12-06T10:15:08.299798', 'resource_metadata': {'display_name': 'tempest-LiveAutoBlockMigrationV225Test-server-1999616987', 'name': 'instance-00000007', 'instance_id': '87dc2ce3-2b16-4764-9803-711c2d12c20f', 'instance_type': 'm1.nano', 'host': 'b262cbaba97a5f617b7f4005a27ec79e86949cad326c374cbf942dbe', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': 'a0a7498e-22eb-495c-a2e3-89ba9e483bf6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6a944ab6-8965-4055-b7fc-af6e395005ea'}, 'image_ref': '6a944ab6-8965-4055-b7fc-af6e395005ea', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'sda'}, 'message_id': '7116165c-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12526.417401107, 'message_signature': 'c5394f7cc9758b8d127827b51e4b3d991e5bbcc6d94bb4b8638a593c5eacf50f'}]}, 'timestamp': '2025-12-06 10:15:08.301469', '_unique_id': '9aa8ae4f45d04d4795f8dcbcac4d3a48'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.302 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.302 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.302 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.302 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.302 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.302 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.302 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.302 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.302 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.302 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.302 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.302 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:15:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.302 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.302 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.302 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.302 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.302 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.302 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.302 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.302 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.302 12 ERROR oslo_messaging.notify.messaging Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.302 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.302 12 ERROR oslo_messaging.notify.messaging Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.302 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:15:08 
localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.302 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.302 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.302 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.302 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.302 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.302 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.302 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.302 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.302 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.302 12 ERROR oslo_messaging.notify.messaging return 
rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.302 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.302 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.302 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.302 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.302 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.302 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.302 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.302 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.302 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.302 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( 
Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.302 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.302 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.302 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.302 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.302 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.302 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.302 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.302 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.302 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.302 12 ERROR oslo_messaging.notify.messaging Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.303 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters Dec 
6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.303 12 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:163 Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.304 12 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [] Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.304 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.304 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.latency volume: 1525105336 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.304 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.latency volume: 106716064 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.305 12 DEBUG ceilometer.compute.pollsters [-] 87dc2ce3-2b16-4764-9803-711c2d12c20f/disk.device.read.latency volume: 184282236 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.305 12 DEBUG ceilometer.compute.pollsters [-] 87dc2ce3-2b16-4764-9803-711c2d12c20f/disk.device.read.latency volume: 70306913 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:15:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.307 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '77bc7a07-8c18-45f7-9771-0b978edd233c', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1525105336, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:15:08.304497', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '71169d20-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12526.47683263, 'message_signature': 'b4a57d8879192631c983a93254c914b5ccc02338c69cafba38b42e48dac71c7a'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 106716064, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 
'timestamp': '2025-12-06T10:15:08.304497', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '7116ae64-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12526.47683263, 'message_signature': '5dfd2cb318834b211954c8962c9a2e676f5a7ba3b9728bb022a446437fa7dc65'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 184282236, 'user_id': 'ac2e85103fd14829ad4e6df2357da95b', 'user_name': None, 'project_id': '7897d6398eb64eb29c66df8db792e581', 'project_name': None, 'resource_id': '87dc2ce3-2b16-4764-9803-711c2d12c20f-vda', 'timestamp': '2025-12-06T10:15:08.304497', 'resource_metadata': {'display_name': 'tempest-LiveAutoBlockMigrationV225Test-server-1999616987', 'name': 'instance-00000007', 'instance_id': '87dc2ce3-2b16-4764-9803-711c2d12c20f', 'instance_type': 'm1.nano', 'host': 'b262cbaba97a5f617b7f4005a27ec79e86949cad326c374cbf942dbe', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': 'a0a7498e-22eb-495c-a2e3-89ba9e483bf6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6a944ab6-8965-4055-b7fc-af6e395005ea'}, 'image_ref': '6a944ab6-8965-4055-b7fc-af6e395005ea', 
'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'vda'}, 'message_id': '7116bd64-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12526.49694731, 'message_signature': '635585a8c38812342566f1476e91008b5d4137e5e73fdc6ce8160f626ecd6d17'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 70306913, 'user_id': 'ac2e85103fd14829ad4e6df2357da95b', 'user_name': None, 'project_id': '7897d6398eb64eb29c66df8db792e581', 'project_name': None, 'resource_id': '87dc2ce3-2b16-4764-9803-711c2d12c20f-sda', 'timestamp': '2025-12-06T10:15:08.304497', 'resource_metadata': {'display_name': 'tempest-LiveAutoBlockMigrationV225Test-server-1999616987', 'name': 'instance-00000007', 'instance_id': '87dc2ce3-2b16-4764-9803-711c2d12c20f', 'instance_type': 'm1.nano', 'host': 'b262cbaba97a5f617b7f4005a27ec79e86949cad326c374cbf942dbe', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': 'a0a7498e-22eb-495c-a2e3-89ba9e483bf6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6a944ab6-8965-4055-b7fc-af6e395005ea'}, 'image_ref': '6a944ab6-8965-4055-b7fc-af6e395005ea', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'sda'}, 'message_id': '7116d3d0-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12526.49694731, 'message_signature': 'fdbe3804adf5df1c270b79fa491c67795e3920eef5407b16f269745892f42855'}]}, 'timestamp': '2025-12-06 10:15:08.306319', '_unique_id': 'b9f73347add6410892a9e6440288d5b7'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.307 12 ERROR oslo_messaging.notify.messaging Traceback 
(most recent call last): Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.307 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.307 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.307 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.307 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.307 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.307 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.307 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.307 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.307 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.307 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.307 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.307 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.307 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.307 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.307 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.307 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.307 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.307 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.307 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.307 12 ERROR oslo_messaging.notify.messaging Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.307 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.307 12 ERROR oslo_messaging.notify.messaging Dec 6 05:15:08 
localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.307 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.307 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.307 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.307 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.307 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.307 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.307 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.307 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.307 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.307 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection 
Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.307 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.307 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.307 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.307 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.307 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.307 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.307 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.307 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.307 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.307 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:15:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.307 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.307 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.307 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.307 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.307 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.307 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.307 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.307 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.307 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.307 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.307 12 ERROR oslo_messaging.notify.messaging Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:15:08.308 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.308 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/cpu volume: 16190000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.309 12 DEBUG ceilometer.compute.pollsters [-] 87dc2ce3-2b16-4764-9803-711c2d12c20f/cpu volume: 1030000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.310 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'a4eebff3-fe5d-45ee-942a-c49c00243eb3', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 16190000000, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'timestamp': '2025-12-06T10:15:08.308573', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 
'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': '71173d98-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12526.450827631, 'message_signature': 'eacc644887d356b897de5bc11f3e7e334aaca3b816364fef57bbe01df022565c'}, {'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1030000000, 'user_id': 'ac2e85103fd14829ad4e6df2357da95b', 'user_name': None, 'project_id': '7897d6398eb64eb29c66df8db792e581', 'project_name': None, 'resource_id': '87dc2ce3-2b16-4764-9803-711c2d12c20f', 'timestamp': '2025-12-06T10:15:08.308573', 'resource_metadata': {'display_name': 'tempest-LiveAutoBlockMigrationV225Test-server-1999616987', 'name': 'instance-00000007', 'instance_id': '87dc2ce3-2b16-4764-9803-711c2d12c20f', 'instance_type': 'm1.nano', 'host': 'b262cbaba97a5f617b7f4005a27ec79e86949cad326c374cbf942dbe', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': 'a0a7498e-22eb-495c-a2e3-89ba9e483bf6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6a944ab6-8965-4055-b7fc-af6e395005ea'}, 'image_ref': '6a944ab6-8965-4055-b7fc-af6e395005ea', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'cpu_number': 1}, 'message_id': '71174d92-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12526.468585349, 'message_signature': 'e337cc736ab49f5a11aa565d5d1a0805a1b6d00d666e5b7f20d6663431ee4702'}]}, 'timestamp': '2025-12-06 10:15:08.309437', '_unique_id': 'ea4ff056d0c34fe4859393ea51aa78aa'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.310 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:15:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.310 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.310 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.310 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.310 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.310 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.310 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.310 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.310 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.310 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.310 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.310 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.310 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.310 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.310 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.310 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.310 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.310 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.310 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.310 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.310 12 ERROR oslo_messaging.notify.messaging Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.310 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.310 12 ERROR oslo_messaging.notify.messaging Dec 6 05:15:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.310 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.310 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.310 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.310 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.310 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.310 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.310 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.310 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.310 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.310 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 
05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.310 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.310 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.310 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.310 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.310 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.310 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.310 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.310 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.310 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.310 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:15:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.310 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.310 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.310 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.310 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.310 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.310 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.310 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.310 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.310 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.310 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.310 12 ERROR oslo_messaging.notify.messaging Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:15:08.311 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.311 12 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:163 Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.311 12 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [] Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.312 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.312 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.latency volume: 1252245154 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.312 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.latency volume: 27668224 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.313 12 DEBUG ceilometer.compute.pollsters [-] 87dc2ce3-2b16-4764-9803-711c2d12c20f/disk.device.write.latency volume: 1537084443 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.313 12 DEBUG ceilometer.compute.pollsters [-] 87dc2ce3-2b16-4764-9803-711c2d12c20f/disk.device.write.latency volume: 0 
_stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.314 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '9f9c9df0-575d-4780-a879-efaaab615507', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1252245154, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:15:08.312223', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '7117cb0a-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12526.47683263, 'message_signature': '7d86399a810a506a63809593ee51e86f0a179db7ebab0b52f1f0e615f036e30c'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 27668224, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': 
'3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:15:08.312223', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '7117dbfe-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12526.47683263, 'message_signature': '8149b7ce2939975bc8f18e0350d5072768fb64904df3cfcc40fc30811852086a'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1537084443, 'user_id': 'ac2e85103fd14829ad4e6df2357da95b', 'user_name': None, 'project_id': '7897d6398eb64eb29c66df8db792e581', 'project_name': None, 'resource_id': '87dc2ce3-2b16-4764-9803-711c2d12c20f-vda', 'timestamp': '2025-12-06T10:15:08.312223', 'resource_metadata': {'display_name': 'tempest-LiveAutoBlockMigrationV225Test-server-1999616987', 'name': 'instance-00000007', 'instance_id': '87dc2ce3-2b16-4764-9803-711c2d12c20f', 'instance_type': 'm1.nano', 'host': 'b262cbaba97a5f617b7f4005a27ec79e86949cad326c374cbf942dbe', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': 'a0a7498e-22eb-495c-a2e3-89ba9e483bf6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 
'image': {'id': '6a944ab6-8965-4055-b7fc-af6e395005ea'}, 'image_ref': '6a944ab6-8965-4055-b7fc-af6e395005ea', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'vda'}, 'message_id': '7117eb58-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12526.49694731, 'message_signature': 'c9d62d50bf9fad8362c8f5795d398893dbc2c493865dea55e725e55ebc75d6be'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 0, 'user_id': 'ac2e85103fd14829ad4e6df2357da95b', 'user_name': None, 'project_id': '7897d6398eb64eb29c66df8db792e581', 'project_name': None, 'resource_id': '87dc2ce3-2b16-4764-9803-711c2d12c20f-sda', 'timestamp': '2025-12-06T10:15:08.312223', 'resource_metadata': {'display_name': 'tempest-LiveAutoBlockMigrationV225Test-server-1999616987', 'name': 'instance-00000007', 'instance_id': '87dc2ce3-2b16-4764-9803-711c2d12c20f', 'instance_type': 'm1.nano', 'host': 'b262cbaba97a5f617b7f4005a27ec79e86949cad326c374cbf942dbe', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': 'a0a7498e-22eb-495c-a2e3-89ba9e483bf6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6a944ab6-8965-4055-b7fc-af6e395005ea'}, 'image_ref': '6a944ab6-8965-4055-b7fc-af6e395005ea', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'sda'}, 'message_id': '7117fa4e-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12526.49694731, 'message_signature': '6a451eae16bfb543514ab01c756c60546dabc0ed1b9ee44bf23a29dd97612609'}]}, 'timestamp': '2025-12-06 10:15:08.313909', '_unique_id': 'b761968316174d2ea71f85d9201c01c8'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:15:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.314 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.314 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.314 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.314 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.314 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.314 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.314 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.314 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.314 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.314 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.314 12 ERROR oslo_messaging.notify.messaging conn = 
self.transport.establish_connection() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.314 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.314 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.314 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.314 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.314 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.314 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.314 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.314 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.314 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.314 12 ERROR oslo_messaging.notify.messaging Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.314 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 
05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.314 12 ERROR oslo_messaging.notify.messaging Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.314 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.314 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.314 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.314 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.314 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.314 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.314 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.314 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.314 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.314 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.314 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.314 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.314 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.314 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.314 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.314 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.314 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.314 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.314 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.314 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.314 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.314 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.314 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.314 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.314 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.314 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.314 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.314 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.314 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.314 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:15:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.314 12 ERROR oslo_messaging.notify.messaging Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.316 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.316 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.incoming.packets volume: 60 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.316 12 DEBUG ceilometer.compute.pollsters [-] 87dc2ce3-2b16-4764-9803-711c2d12c20f/network.incoming.packets volume: 10 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.318 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '6600945e-2358-4a7e-81b1-dff9000027be', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 60, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:15:08.316146', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '711864a2-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12526.382597781, 'message_signature': 'd6d214b9c7df8b87712477a2d762c7eab8fb2084ac2230bf7432a14d98005057'}, {'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 10, 'user_id': 'ac2e85103fd14829ad4e6df2357da95b', 'user_name': None, 'project_id': '7897d6398eb64eb29c66df8db792e581', 'project_name': None, 'resource_id': 'instance-00000007-87dc2ce3-2b16-4764-9803-711c2d12c20f-tape87832d3-ff', 
'timestamp': '2025-12-06T10:15:08.316146', 'resource_metadata': {'display_name': 'tempest-LiveAutoBlockMigrationV225Test-server-1999616987', 'name': 'tape87832d3-ff', 'instance_id': '87dc2ce3-2b16-4764-9803-711c2d12c20f', 'instance_type': 'm1.nano', 'host': 'b262cbaba97a5f617b7f4005a27ec79e86949cad326c374cbf942dbe', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': 'a0a7498e-22eb-495c-a2e3-89ba9e483bf6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6a944ab6-8965-4055-b7fc-af6e395005ea'}, 'image_ref': '6a944ab6-8965-4055-b7fc-af6e395005ea', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'mac': 'fa:16:3e:0e:f5:37', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tape87832d3-ff'}, 'message_id': '71187640-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12526.388408647, 'message_signature': '527303367fb2cd028db543f07bf3e21d34e0fbc405c44f81ffcab0d0efb76601'}]}, 'timestamp': '2025-12-06 10:15:08.317049', '_unique_id': '93c775480fcc4364aaee7a734df30f2f'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.318 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.318 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.318 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.318 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in 
_ensure_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.318 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.318 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.318 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.318 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.318 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.318 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.318 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.318 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.318 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.318 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.318 12 ERROR 
oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.318 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.318 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.318 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.318 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.318 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.318 12 ERROR oslo_messaging.notify.messaging Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.318 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.318 12 ERROR oslo_messaging.notify.messaging Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.318 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.318 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.318 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:15:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.318 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.318 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.318 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.318 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.318 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.318 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.318 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.318 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.318 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.318 12 ERROR oslo_messaging.notify.messaging self.connection = 
connection_pool.get(retry=retry) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.318 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.318 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.318 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.318 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.318 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.318 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.318 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.318 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.318 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.318 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:15:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.318 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.318 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.318 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.318 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.318 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.318 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.318 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.318 12 ERROR oslo_messaging.notify.messaging Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.319 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.319 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.319 12 DEBUG 
ceilometer.compute.pollsters [-] 87dc2ce3-2b16-4764-9803-711c2d12c20f/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.321 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '537aeeac-b0c8-47ca-9823-30f578e533e0', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:15:08.319205', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '7118dc20-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12526.382597781, 'message_signature': '7e4308da4775b48edf9b84cbebb2d4ec08187efbd9212f84298afd7366b7fbcd'}, 
{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': 'ac2e85103fd14829ad4e6df2357da95b', 'user_name': None, 'project_id': '7897d6398eb64eb29c66df8db792e581', 'project_name': None, 'resource_id': 'instance-00000007-87dc2ce3-2b16-4764-9803-711c2d12c20f-tape87832d3-ff', 'timestamp': '2025-12-06T10:15:08.319205', 'resource_metadata': {'display_name': 'tempest-LiveAutoBlockMigrationV225Test-server-1999616987', 'name': 'tape87832d3-ff', 'instance_id': '87dc2ce3-2b16-4764-9803-711c2d12c20f', 'instance_type': 'm1.nano', 'host': 'b262cbaba97a5f617b7f4005a27ec79e86949cad326c374cbf942dbe', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': 'a0a7498e-22eb-495c-a2e3-89ba9e483bf6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6a944ab6-8965-4055-b7fc-af6e395005ea'}, 'image_ref': '6a944ab6-8965-4055-b7fc-af6e395005ea', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'mac': 'fa:16:3e:0e:f5:37', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tape87832d3-ff'}, 'message_id': '7118ee0e-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12526.388408647, 'message_signature': '25ae2aecaa7a57b1404a52f5ad7c267fed461c04fcee022a79c32c53e3dab68c'}]}, 'timestamp': '2025-12-06 10:15:08.320116', '_unique_id': '05e074acb52a45bd8a497301c05bd4db'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.321 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.321 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 
446, in _reraise_as_library_errors Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.321 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.321 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.321 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.321 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.321 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.321 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.321 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.321 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.321 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.321 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.321 12 ERROR 
oslo_messaging.notify.messaging conn.connect() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.321 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.321 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.321 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.321 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.321 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.321 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.321 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.321 12 ERROR oslo_messaging.notify.messaging Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.321 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.321 12 ERROR oslo_messaging.notify.messaging Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.321 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.321 
12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.321 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.321 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.321 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.321 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.321 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.321 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.321 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.321 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.321 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:15:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.321 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.321 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.321 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.321 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.321 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.321 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.321 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.321 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.321 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.321 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 
10:15:08.321 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.321 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.321 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.321 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.321 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.321 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.321 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.321 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.321 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.321 12 ERROR oslo_messaging.notify.messaging Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.322 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 
10:15:08.322 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.bytes volume: 389120 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.322 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.323 12 DEBUG ceilometer.compute.pollsters [-] 87dc2ce3-2b16-4764-9803-711c2d12c20f/disk.device.write.bytes volume: 47214592 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.323 12 DEBUG ceilometer.compute.pollsters [-] 87dc2ce3-2b16-4764-9803-711c2d12c20f/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.324 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '1c364bc2-34f0-40b2-9a81-f2f452ec305e', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 389120, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:15:08.322292', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '71195484-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12526.47683263, 'message_signature': '4a889da976a072ec6c31110d25fa45d7ae75dcaebee570b6d17b36fc11ea8e1c'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:15:08.322292', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 
'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '711965aa-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12526.47683263, 'message_signature': '6b27aedfea11accc07eb7c33d3cc53f5f9d071094a2b53e7a79678370ef6de15'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 47214592, 'user_id': 'ac2e85103fd14829ad4e6df2357da95b', 'user_name': None, 'project_id': '7897d6398eb64eb29c66df8db792e581', 'project_name': None, 'resource_id': '87dc2ce3-2b16-4764-9803-711c2d12c20f-vda', 'timestamp': '2025-12-06T10:15:08.322292', 'resource_metadata': {'display_name': 'tempest-LiveAutoBlockMigrationV225Test-server-1999616987', 'name': 'instance-00000007', 'instance_id': '87dc2ce3-2b16-4764-9803-711c2d12c20f', 'instance_type': 'm1.nano', 'host': 'b262cbaba97a5f617b7f4005a27ec79e86949cad326c374cbf942dbe', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': 'a0a7498e-22eb-495c-a2e3-89ba9e483bf6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6a944ab6-8965-4055-b7fc-af6e395005ea'}, 'image_ref': '6a944ab6-8965-4055-b7fc-af6e395005ea', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'vda'}, 'message_id': 
'711974d2-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12526.49694731, 'message_signature': '936c19bdf7c266706711edc9a0a0298fbb558d6f26b9cd3131af22f1ecbab8c9'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': 'ac2e85103fd14829ad4e6df2357da95b', 'user_name': None, 'project_id': '7897d6398eb64eb29c66df8db792e581', 'project_name': None, 'resource_id': '87dc2ce3-2b16-4764-9803-711c2d12c20f-sda', 'timestamp': '2025-12-06T10:15:08.322292', 'resource_metadata': {'display_name': 'tempest-LiveAutoBlockMigrationV225Test-server-1999616987', 'name': 'instance-00000007', 'instance_id': '87dc2ce3-2b16-4764-9803-711c2d12c20f', 'instance_type': 'm1.nano', 'host': 'b262cbaba97a5f617b7f4005a27ec79e86949cad326c374cbf942dbe', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': 'a0a7498e-22eb-495c-a2e3-89ba9e483bf6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6a944ab6-8965-4055-b7fc-af6e395005ea'}, 'image_ref': '6a944ab6-8965-4055-b7fc-af6e395005ea', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'sda'}, 'message_id': '7119838c-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12526.49694731, 'message_signature': '4ac4e380bf03d6535b07d844a0f77e14cf5307e996fb66a720a845f79bf0d3a6'}]}, 'timestamp': '2025-12-06 10:15:08.323954', '_unique_id': 'a308647c51484bf4833b5fe1cf63b410'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.324 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.324 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.324 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.324 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.324 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.324 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.324 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.324 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.324 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.324 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.324 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.324 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:15:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.324 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.324 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.324 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.324 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.324 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.324 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.324 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.324 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.324 12 ERROR oslo_messaging.notify.messaging Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.324 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.324 12 ERROR oslo_messaging.notify.messaging Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.324 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:15:08 
localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.324 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.324 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.324 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.324 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.324 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.324 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.324 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.324 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.324 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.324 12 ERROR oslo_messaging.notify.messaging return 
rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.324 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.324 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.324 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.324 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.324 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.324 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.324 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.324 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.324 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.324 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( 
Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.324 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.324 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.324 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.324 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.324 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.324 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.324 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.324 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.324 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.324 12 ERROR oslo_messaging.notify.messaging Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.326 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters Dec 6 
05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.326 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.326 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.327 12 DEBUG ceilometer.compute.pollsters [-] 87dc2ce3-2b16-4764-9803-711c2d12c20f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.327 12 DEBUG ceilometer.compute.pollsters [-] 87dc2ce3-2b16-4764-9803-711c2d12c20f/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.329 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '71c79d6d-34c4-4d52-8b0f-c829ee5a7216', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:15:08.326223', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '7119ee12-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12526.402242477, 'message_signature': '9520c7f485948740e98a8d71b7b1fc9dd632d4e67b9919f5d2df6f9b775c0466'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:15:08.326223', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 
'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '7119fee8-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12526.402242477, 'message_signature': '92b05b294b3910dc36251d80bd23347d9deae27b68968131bb5145e5d85b196d'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ac2e85103fd14829ad4e6df2357da95b', 'user_name': None, 'project_id': '7897d6398eb64eb29c66df8db792e581', 'project_name': None, 'resource_id': '87dc2ce3-2b16-4764-9803-711c2d12c20f-vda', 'timestamp': '2025-12-06T10:15:08.326223', 'resource_metadata': {'display_name': 'tempest-LiveAutoBlockMigrationV225Test-server-1999616987', 'name': 'instance-00000007', 'instance_id': '87dc2ce3-2b16-4764-9803-711c2d12c20f', 'instance_type': 'm1.nano', 'host': 'b262cbaba97a5f617b7f4005a27ec79e86949cad326c374cbf942dbe', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': 'a0a7498e-22eb-495c-a2e3-89ba9e483bf6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6a944ab6-8965-4055-b7fc-af6e395005ea'}, 'image_ref': '6a944ab6-8965-4055-b7fc-af6e395005ea', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'vda'}, 'message_id': 
'711a11e4-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12526.417401107, 'message_signature': '6249dc4dd098304bb3b13f910d7edf61acbd1411d5c0b594c5d11509857f9592'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 485376, 'user_id': 'ac2e85103fd14829ad4e6df2357da95b', 'user_name': None, 'project_id': '7897d6398eb64eb29c66df8db792e581', 'project_name': None, 'resource_id': '87dc2ce3-2b16-4764-9803-711c2d12c20f-sda', 'timestamp': '2025-12-06T10:15:08.326223', 'resource_metadata': {'display_name': 'tempest-LiveAutoBlockMigrationV225Test-server-1999616987', 'name': 'instance-00000007', 'instance_id': '87dc2ce3-2b16-4764-9803-711c2d12c20f', 'instance_type': 'm1.nano', 'host': 'b262cbaba97a5f617b7f4005a27ec79e86949cad326c374cbf942dbe', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': 'a0a7498e-22eb-495c-a2e3-89ba9e483bf6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6a944ab6-8965-4055-b7fc-af6e395005ea'}, 'image_ref': '6a944ab6-8965-4055-b7fc-af6e395005ea', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'sda'}, 'message_id': '711a276a-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12526.417401107, 'message_signature': 'eb43741d9a1a0854be86d5eaf8edc208bd64faf1d8c1f29b03785952f7f00438'}]}, 'timestamp': '2025-12-06 10:15:08.328172', '_unique_id': 'e49dc9c2691b4a409d80644cbb2ea7c2'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.329 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.329 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.329 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.329 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.329 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.329 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.329 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.329 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.329 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.329 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.329 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.329 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:15:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.329 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.329 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.329 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.329 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.329 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.329 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.329 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.329 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.329 12 ERROR oslo_messaging.notify.messaging Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.329 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.329 12 ERROR oslo_messaging.notify.messaging Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.329 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:15:08 
localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.329 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.329 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.329 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.329 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.329 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.329 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.329 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.329 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.329 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.329 12 ERROR oslo_messaging.notify.messaging return 
rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.329 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.329 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.329 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.329 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.329 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.329 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.329 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.329 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.329 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.329 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( 
Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.329 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.329 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.329 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.329 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.329 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.329 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.329 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.329 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.329 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.329 12 ERROR oslo_messaging.notify.messaging Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.331 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters Dec 6 
05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.331 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.outgoing.bytes volume: 9770 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.331 12 DEBUG ceilometer.compute.pollsters [-] 87dc2ce3-2b16-4764-9803-711c2d12c20f/network.outgoing.bytes volume: 4184 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.333 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '41ba7b7d-e7d2-4bcf-a522-5f8404e8114b', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 9770, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:15:08.331263', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 
1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '711ab4f0-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12526.382597781, 'message_signature': '2fe461631223c528f110226489a5a5f0b2790fa1fb547a54009816ffc69c3652'}, {'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 4184, 'user_id': 'ac2e85103fd14829ad4e6df2357da95b', 'user_name': None, 'project_id': '7897d6398eb64eb29c66df8db792e581', 'project_name': None, 'resource_id': 'instance-00000007-87dc2ce3-2b16-4764-9803-711c2d12c20f-tape87832d3-ff', 'timestamp': '2025-12-06T10:15:08.331263', 'resource_metadata': {'display_name': 'tempest-LiveAutoBlockMigrationV225Test-server-1999616987', 'name': 'tape87832d3-ff', 'instance_id': '87dc2ce3-2b16-4764-9803-711c2d12c20f', 'instance_type': 'm1.nano', 'host': 'b262cbaba97a5f617b7f4005a27ec79e86949cad326c374cbf942dbe', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': 'a0a7498e-22eb-495c-a2e3-89ba9e483bf6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6a944ab6-8965-4055-b7fc-af6e395005ea'}, 'image_ref': '6a944ab6-8965-4055-b7fc-af6e395005ea', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'mac': 'fa:16:3e:0e:f5:37', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tape87832d3-ff'}, 'message_id': '711acb3e-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12526.388408647, 'message_signature': '2cdcde42de35c980e246c30010bd20989c7f1c92426a6eac83a75c4571a1f352'}]}, 'timestamp': '2025-12-06 10:15:08.332350', '_unique_id': 'e822da23d2b14f73846cb5e2b3058938'}: kombu.exceptions.OperationalError: [Errno 
111] Connection refused Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.333 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.333 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.333 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.333 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.333 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.333 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.333 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.333 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.333 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.333 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.333 12 ERROR 
oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.333 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.333 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.333 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.333 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.333 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.333 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.333 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.333 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.333 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.333 12 ERROR oslo_messaging.notify.messaging Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.333 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause 
of the following exception: Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.333 12 ERROR oslo_messaging.notify.messaging Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.333 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.333 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.333 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.333 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.333 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.333 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.333 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.333 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.333 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:15:08.333 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.333 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.333 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.333 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.333 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.333 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.333 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.333 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.333 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.333 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.333 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.333 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.333 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.333 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.333 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.333 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.333 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.333 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.333 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.333 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:15:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.333 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:15:08 
localhost ceilometer_agent_compute[238351]: 2025-12-06 10:15:08.333 12 ERROR oslo_messaging.notify.messaging Dec 6 05:15:08 localhost nova_compute[282193]: 2025-12-06 10:15:08.833 282197 INFO nova.virt.libvirt.driver [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Sending announce-self command to QEMU monitor. Attempt 3 of 3#033[00m Dec 6 05:15:08 localhost nova_compute[282193]: 2025-12-06 10:15:08.841 282197 DEBUG nova.compute.manager [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 6 05:15:08 localhost nova_compute[282193]: 2025-12-06 10:15:08.861 282197 DEBUG nova.objects.instance [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m Dec 6 05:15:08 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e110 e110: 6 total, 6 up, 6 in Dec 6 05:15:09 localhost systemd[1]: Stopping User Manager for UID 42436... Dec 6 05:15:09 localhost systemd[312446]: Activating special unit Exit the Session... Dec 6 05:15:09 localhost systemd[312446]: Stopped target Main User Target. Dec 6 05:15:09 localhost systemd[312446]: Stopped target Basic System. Dec 6 05:15:09 localhost systemd[312446]: Stopped target Paths. Dec 6 05:15:09 localhost systemd[312446]: Stopped target Sockets. Dec 6 05:15:09 localhost systemd[312446]: Stopped target Timers. 
Dec 6 05:15:09 localhost systemd[312446]: Stopped Mark boot as successful after the user session has run 2 minutes. Dec 6 05:15:09 localhost systemd[312446]: Stopped Daily Cleanup of User's Temporary Directories. Dec 6 05:15:09 localhost systemd[312446]: Closed D-Bus User Message Bus Socket. Dec 6 05:15:09 localhost systemd[312446]: Stopped Create User's Volatile Files and Directories. Dec 6 05:15:09 localhost systemd[312446]: Removed slice User Application Slice. Dec 6 05:15:09 localhost systemd[312446]: Reached target Shutdown. Dec 6 05:15:09 localhost systemd[312446]: Finished Exit the Session. Dec 6 05:15:09 localhost systemd[312446]: Reached target Exit the Session. Dec 6 05:15:09 localhost systemd[1]: user@42436.service: Deactivated successfully. Dec 6 05:15:09 localhost systemd[1]: Stopped User Manager for UID 42436. Dec 6 05:15:09 localhost systemd[1]: Stopping User Runtime Directory /run/user/42436... Dec 6 05:15:09 localhost systemd[1]: run-user-42436.mount: Deactivated successfully. Dec 6 05:15:09 localhost systemd[1]: user-runtime-dir@42436.service: Deactivated successfully. Dec 6 05:15:09 localhost systemd[1]: Stopped User Runtime Directory /run/user/42436. Dec 6 05:15:09 localhost systemd[1]: Removed slice User Slice of UID 42436. Dec 6 05:15:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6. 
Dec 6 05:15:10 localhost sshd[312898]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:15:10 localhost nova_compute[282193]: 2025-12-06 10:15:10.889 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:15:10 localhost podman[312897]: 2025-12-06 10:15:10.95581272 +0000 UTC m=+0.106565444 container health_status 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 
Base Image, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Dec 6 05:15:10 localhost podman[312897]: 2025-12-06 10:15:10.975355273 +0000 UTC m=+0.126107996 container exec_died 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:15:10 localhost systemd[1]: 
44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6.service: Deactivated successfully. Dec 6 05:15:11 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e111 e111: 6 total, 6 up, 6 in Dec 6 05:15:11 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:15:11.139 263652 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:15:10Z, description=, device_id=f44ab79a-f9b8-4237-b1dc-a24e7d22c236, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=cb8f0dad-295c-4ff7-a2e3-6c05095b4764, ip_allocation=immediate, mac_address=fa:16:3e:da:77:b2, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:14:54Z, description=, dns_domain=, id=deb7774c-e96b-4e7f-88d7-ed9d740915f4, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ServersV294TestFqdnHostnames-1078460514-network, port_security_enabled=True, project_id=da995d8e002548889747013c0eeca935, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=20432, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=633, status=ACTIVE, subnets=['21dfbaea-2209-4e97-94d1-e29a4f3ba83b'], tags=[], tenant_id=da995d8e002548889747013c0eeca935, updated_at=2025-12-06T10:14:55Z, vlan_transparent=None, network_id=deb7774c-e96b-4e7f-88d7-ed9d740915f4, port_security_enabled=False, project_id=da995d8e002548889747013c0eeca935, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=663, status=DOWN, tags=[], tenant_id=da995d8e002548889747013c0eeca935, updated_at=2025-12-06T10:15:10Z on network deb7774c-e96b-4e7f-88d7-ed9d740915f4#033[00m 
Dec 6 05:15:11 localhost systemd[1]: tmp-crun.cnmPXV.mount: Deactivated successfully. Dec 6 05:15:11 localhost dnsmasq[263859]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 8 addresses Dec 6 05:15:11 localhost dnsmasq-dhcp[263859]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:15:11 localhost podman[312934]: 2025-12-06 10:15:11.154194888 +0000 UTC m=+0.075098139 container kill e4e5ac05442955f65040fee10f3aa342d80518f3e4a072b3d79fe4302f513cf7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:15:11 localhost dnsmasq-dhcp[263859]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:15:11 localhost dnsmasq[312566]: read /var/lib/neutron/dhcp/deb7774c-e96b-4e7f-88d7-ed9d740915f4/addn_hosts - 1 addresses Dec 6 05:15:11 localhost dnsmasq-dhcp[312566]: read /var/lib/neutron/dhcp/deb7774c-e96b-4e7f-88d7-ed9d740915f4/host Dec 6 05:15:11 localhost dnsmasq-dhcp[312566]: read /var/lib/neutron/dhcp/deb7774c-e96b-4e7f-88d7-ed9d740915f4/opts Dec 6 05:15:11 localhost podman[312970]: 2025-12-06 10:15:11.448186995 +0000 UTC m=+0.063739604 container kill 8a8b7a6a9724101bff1398ade8c854164d1816271ca6c4f86a12732f70229362 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-deb7774c-e96b-4e7f-88d7-ed9d740915f4, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, 
tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3) Dec 6 05:15:11 localhost ovn_controller[154851]: 2025-12-06T10:15:11Z|00106|binding|INFO|Releasing lport 9a87eef5-19db-4fcf-a021-4f61b153af33 from this chassis (sb_readonly=0) Dec 6 05:15:11 localhost ovn_controller[154851]: 2025-12-06T10:15:11Z|00107|binding|INFO|Releasing lport 8839eeed-ff6b-46d9-b40d-610788617728 from this chassis (sb_readonly=0) Dec 6 05:15:11 localhost ovn_controller[154851]: 2025-12-06T10:15:11Z|00108|binding|INFO|Releasing lport 4fb81ffd-e198-4628-9bd0-0c0f0c89c33a from this chassis (sb_readonly=0) Dec 6 05:15:11 localhost nova_compute[282193]: 2025-12-06 10:15:11.557 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:15:11 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:15:11.768 263652 INFO neutron.agent.dhcp.agent [None req-eacbe306-a8d4-4e64-a6e2-7cd18882a09b - - - - - -] DHCP configuration for ports {'cb8f0dad-295c-4ff7-a2e3-6c05095b4764'} is completed#033[00m Dec 6 05:15:12 localhost nova_compute[282193]: 2025-12-06 10:15:12.012 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:15:12 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e112 e112: 6 total, 6 up, 6 in Dec 6 05:15:12 localhost nova_compute[282193]: 2025-12-06 10:15:12.235 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:15:12 localhost dnsmasq[311859]: exiting on receipt of SIGTERM Dec 6 05:15:12 localhost podman[313009]: 2025-12-06 10:15:12.254939506 +0000 UTC m=+0.060338352 container kill dbf1d79cf37c9a95da8c3fbfd17ed1db394008f8d27c162fbd5d4fc5de50fe81 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, 
name=neutron-dnsmasq-qdhcp-df3c5fcc-9cd4-4d33-9970-a165c712aad3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2) Dec 6 05:15:12 localhost systemd[1]: libpod-dbf1d79cf37c9a95da8c3fbfd17ed1db394008f8d27c162fbd5d4fc5de50fe81.scope: Deactivated successfully. Dec 6 05:15:12 localhost podman[313023]: 2025-12-06 10:15:12.329979511 +0000 UTC m=+0.055106042 container died dbf1d79cf37c9a95da8c3fbfd17ed1db394008f8d27c162fbd5d4fc5de50fe81 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-df3c5fcc-9cd4-4d33-9970-a165c712aad3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:15:12 localhost systemd[1]: tmp-crun.MbTTl6.mount: Deactivated successfully. Dec 6 05:15:12 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-dbf1d79cf37c9a95da8c3fbfd17ed1db394008f8d27c162fbd5d4fc5de50fe81-userdata-shm.mount: Deactivated successfully. Dec 6 05:15:12 localhost systemd[1]: var-lib-containers-storage-overlay-a826d515f5bdfea048cffebccb4edfc28363d9139b831b1071c42234067ec609-merged.mount: Deactivated successfully. 
Dec 6 05:15:12 localhost podman[313023]: 2025-12-06 10:15:12.381959398 +0000 UTC m=+0.107085899 container remove dbf1d79cf37c9a95da8c3fbfd17ed1db394008f8d27c162fbd5d4fc5de50fe81 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-df3c5fcc-9cd4-4d33-9970-a165c712aad3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:15:12 localhost systemd[1]: libpod-conmon-dbf1d79cf37c9a95da8c3fbfd17ed1db394008f8d27c162fbd5d4fc5de50fe81.scope: Deactivated successfully. Dec 6 05:15:12 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:15:12.429 263652 INFO neutron.agent.dhcp.agent [None req-73cb6db9-d341-4c31-a329-722c7dea2032 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:15:12 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:15:12.591 263652 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:15:12 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:15:12.996 263652 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:15:10Z, description=, device_id=f44ab79a-f9b8-4237-b1dc-a24e7d22c236, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=cb8f0dad-295c-4ff7-a2e3-6c05095b4764, ip_allocation=immediate, mac_address=fa:16:3e:da:77:b2, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:14:54Z, description=, 
dns_domain=, id=deb7774c-e96b-4e7f-88d7-ed9d740915f4, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ServersV294TestFqdnHostnames-1078460514-network, port_security_enabled=True, project_id=da995d8e002548889747013c0eeca935, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=20432, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=633, status=ACTIVE, subnets=['21dfbaea-2209-4e97-94d1-e29a4f3ba83b'], tags=[], tenant_id=da995d8e002548889747013c0eeca935, updated_at=2025-12-06T10:14:55Z, vlan_transparent=None, network_id=deb7774c-e96b-4e7f-88d7-ed9d740915f4, port_security_enabled=False, project_id=da995d8e002548889747013c0eeca935, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=663, status=DOWN, tags=[], tenant_id=da995d8e002548889747013c0eeca935, updated_at=2025-12-06T10:15:10Z on network deb7774c-e96b-4e7f-88d7-ed9d740915f4#033[00m Dec 6 05:15:13 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:15:13 localhost systemd[1]: run-netns-qdhcp\x2ddf3c5fcc\x2d9cd4\x2d4d33\x2d9970\x2da165c712aad3.mount: Deactivated successfully. 
Dec 6 05:15:13 localhost dnsmasq[312566]: read /var/lib/neutron/dhcp/deb7774c-e96b-4e7f-88d7-ed9d740915f4/addn_hosts - 1 addresses Dec 6 05:15:13 localhost dnsmasq-dhcp[312566]: read /var/lib/neutron/dhcp/deb7774c-e96b-4e7f-88d7-ed9d740915f4/host Dec 6 05:15:13 localhost podman[313067]: 2025-12-06 10:15:13.319168596 +0000 UTC m=+0.054625598 container kill 8a8b7a6a9724101bff1398ade8c854164d1816271ca6c4f86a12732f70229362 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-deb7774c-e96b-4e7f-88d7-ed9d740915f4, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3) Dec 6 05:15:13 localhost dnsmasq-dhcp[312566]: read /var/lib/neutron/dhcp/deb7774c-e96b-4e7f-88d7-ed9d740915f4/opts Dec 6 05:15:13 localhost systemd[1]: tmp-crun.zc8DCq.mount: Deactivated successfully. 
Dec 6 05:15:13 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:15:13.618 263652 INFO neutron.agent.dhcp.agent [None req-957ed8d5-3292-44c7-9943-9e84fb378784 - - - - - -] DHCP configuration for ports {'cb8f0dad-295c-4ff7-a2e3-6c05095b4764'} is completed#033[00m Dec 6 05:15:14 localhost dnsmasq[312123]: read /var/lib/neutron/dhcp/feb354e1-97d5-4c74-804a-eeb06e5bb155/addn_hosts - 0 addresses Dec 6 05:15:14 localhost dnsmasq-dhcp[312123]: read /var/lib/neutron/dhcp/feb354e1-97d5-4c74-804a-eeb06e5bb155/host Dec 6 05:15:14 localhost podman[313106]: 2025-12-06 10:15:14.107345962 +0000 UTC m=+0.069940132 container kill bf1840969a38cdc03ff2f446425ad666bbcdfe5ae29c7d80e81b374fc1a9dff9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-feb354e1-97d5-4c74-804a-eeb06e5bb155, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 6 05:15:14 localhost dnsmasq-dhcp[312123]: read /var/lib/neutron/dhcp/feb354e1-97d5-4c74-804a-eeb06e5bb155/opts Dec 6 05:15:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538. 
Dec 6 05:15:14 localhost podman[313119]: 2025-12-06 10:15:14.22200444 +0000 UTC m=+0.085195716 container health_status d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 6 05:15:14 localhost podman[313119]: 2025-12-06 10:15:14.257522378 +0000 UTC m=+0.120713704 container exec_died d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 
'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 6 05:15:14 localhost snmpd[67279]: empty variable list in _query Dec 6 05:15:14 localhost snmpd[67279]: empty variable list in _query Dec 6 05:15:14 localhost snmpd[67279]: empty variable list in _query Dec 6 05:15:14 localhost systemd[1]: d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538.service: Deactivated successfully. 
Dec 6 05:15:14 localhost nova_compute[282193]: 2025-12-06 10:15:14.376 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:15:14 localhost ovn_controller[154851]: 2025-12-06T10:15:14Z|00109|binding|INFO|Releasing lport 2f6c7dc0-af46-4cc2-99f3-f46a11be455c from this chassis (sb_readonly=0) Dec 6 05:15:14 localhost ovn_controller[154851]: 2025-12-06T10:15:14Z|00110|binding|INFO|Setting lport 2f6c7dc0-af46-4cc2-99f3-f46a11be455c down in Southbound Dec 6 05:15:14 localhost kernel: device tap2f6c7dc0-af left promiscuous mode Dec 6 05:15:14 localhost nova_compute[282193]: 2025-12-06 10:15:14.396 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:15:14 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:14.695 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-feb354e1-97d5-4c74-804a-eeb06e5bb155', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-feb354e1-97d5-4c74-804a-eeb06e5bb155', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4185da56d12649bc8653dd9db208c0a0', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005548789.localdomain'}, additional_chassis=[], tag=[], 
additional_encap=[], encap=[], mirror_rules=[], datapath=fd335efc-b05b-4aaa-a30a-c891a594ccf4, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=2f6c7dc0-af46-4cc2-99f3-f46a11be455c) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:15:14 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:14.698 160509 INFO neutron.agent.ovn.metadata.agent [-] Port 2f6c7dc0-af46-4cc2-99f3-f46a11be455c in datapath feb354e1-97d5-4c74-804a-eeb06e5bb155 unbound from our chassis#033[00m Dec 6 05:15:14 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:14.703 160509 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network feb354e1-97d5-4c74-804a-eeb06e5bb155, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 6 05:15:14 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:14.704 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[0a304e23-9df4-497b-a015-4d51954fc2e2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:15:15 localhost nova_compute[282193]: 2025-12-06 10:15:15.894 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:15:16 localhost openstack_network_exporter[243110]: ERROR 10:15:16 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 6 05:15:16 localhost openstack_network_exporter[243110]: ERROR 10:15:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:15:16 localhost openstack_network_exporter[243110]: ERROR 10:15:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:15:16 localhost openstack_network_exporter[243110]: ERROR 
10:15:16 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 6 05:15:16 localhost openstack_network_exporter[243110]: Dec 6 05:15:16 localhost openstack_network_exporter[243110]: ERROR 10:15:16 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 6 05:15:16 localhost openstack_network_exporter[243110]: Dec 6 05:15:16 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e113 e113: 6 total, 6 up, 6 in Dec 6 05:15:17 localhost nova_compute[282193]: 2025-12-06 10:15:17.015 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:15:17 localhost neutron_sriov_agent[256690]: 2025-12-06 10:15:17.545 2 INFO neutron.agent.securitygroups_rpc [None req-5e443fd1-82aa-48be-b4ff-976554ebf448 da7bbd24eb95438897585b10577ea2e0 da995d8e002548889747013c0eeca935 - - default default] Security group rule updated ['581a4637-eff2-45f4-92f3-d575b736a840']#033[00m Dec 6 05:15:17 localhost ovn_controller[154851]: 2025-12-06T10:15:17Z|00111|binding|INFO|Releasing lport 9a87eef5-19db-4fcf-a021-4f61b153af33 from this chassis (sb_readonly=0) Dec 6 05:15:17 localhost ovn_controller[154851]: 2025-12-06T10:15:17Z|00112|binding|INFO|Releasing lport 8839eeed-ff6b-46d9-b40d-610788617728 from this chassis (sb_readonly=0) Dec 6 05:15:17 localhost ovn_controller[154851]: 2025-12-06T10:15:17Z|00113|binding|INFO|Releasing lport 4fb81ffd-e198-4628-9bd0-0c0f0c89c33a from this chassis (sb_readonly=0) Dec 6 05:15:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5. 
Dec 6 05:15:17 localhost podman[313169]: 2025-12-06 10:15:17.741618015 +0000 UTC m=+0.088019190 container kill e4e5ac05442955f65040fee10f3aa342d80518f3e4a072b3d79fe4302f513cf7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3) Dec 6 05:15:17 localhost dnsmasq[263859]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 7 addresses Dec 6 05:15:17 localhost dnsmasq-dhcp[263859]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:15:17 localhost dnsmasq-dhcp[263859]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:15:17 localhost systemd[1]: tmp-crun.NGrjuz.mount: Deactivated successfully. Dec 6 05:15:17 localhost neutron_sriov_agent[256690]: 2025-12-06 10:15:17.771 2 INFO neutron.agent.securitygroups_rpc [None req-54187745-6fe9-48d8-bbb3-7e399880134e da7bbd24eb95438897585b10577ea2e0 da995d8e002548889747013c0eeca935 - - default default] Security group rule updated ['581a4637-eff2-45f4-92f3-d575b736a840']#033[00m Dec 6 05:15:17 localhost systemd[1]: tmp-crun.EgQVOy.mount: Deactivated successfully. 
Dec 6 05:15:17 localhost nova_compute[282193]: 2025-12-06 10:15:17.797 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:15:17 localhost podman[313177]: 2025-12-06 10:15:17.801619566 +0000 UTC m=+0.112951938 container health_status 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Dec 6 05:15:17 localhost podman[313177]: 2025-12-06 10:15:17.861825412 +0000 UTC m=+0.173157804 container exec_died 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': 
['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:15:17 localhost systemd[1]: 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.service: Deactivated successfully. 
Dec 6 05:15:18 localhost dnsmasq[312123]: exiting on receipt of SIGTERM Dec 6 05:15:18 localhost podman[313228]: 2025-12-06 10:15:18.137450322 +0000 UTC m=+0.040787258 container kill bf1840969a38cdc03ff2f446425ad666bbcdfe5ae29c7d80e81b374fc1a9dff9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-feb354e1-97d5-4c74-804a-eeb06e5bb155, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125) Dec 6 05:15:18 localhost systemd[1]: libpod-bf1840969a38cdc03ff2f446425ad666bbcdfe5ae29c7d80e81b374fc1a9dff9.scope: Deactivated successfully. Dec 6 05:15:18 localhost podman[313242]: 2025-12-06 10:15:18.205179607 +0000 UTC m=+0.051959828 container died bf1840969a38cdc03ff2f446425ad666bbcdfe5ae29c7d80e81b374fc1a9dff9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-feb354e1-97d5-4c74-804a-eeb06e5bb155, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:15:18 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:15:18 localhost podman[313242]: 2025-12-06 10:15:18.248807519 +0000 UTC m=+0.095587750 container cleanup bf1840969a38cdc03ff2f446425ad666bbcdfe5ae29c7d80e81b374fc1a9dff9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, 
name=neutron-dnsmasq-qdhcp-feb354e1-97d5-4c74-804a-eeb06e5bb155, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:15:18 localhost systemd[1]: libpod-conmon-bf1840969a38cdc03ff2f446425ad666bbcdfe5ae29c7d80e81b374fc1a9dff9.scope: Deactivated successfully. Dec 6 05:15:18 localhost podman[313243]: 2025-12-06 10:15:18.287329138 +0000 UTC m=+0.129757526 container remove bf1840969a38cdc03ff2f446425ad666bbcdfe5ae29c7d80e81b374fc1a9dff9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-feb354e1-97d5-4c74-804a-eeb06e5bb155, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3) Dec 6 05:15:18 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:15:18.313 263652 INFO neutron.agent.dhcp.agent [None req-b0c6e341-e194-4e94-87e0-7a3d2080e559 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:15:18 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:15:18.439 263652 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:15:18 localhost nova_compute[282193]: 2025-12-06 10:15:18.655 282197 DEBUG nova.compute.manager [req-75910a2f-80eb-488b-b75f-d1d834665e19 req-136732f4-6e5c-4f56-8969-2d1ed0f41471 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Received event 
network-vif-plugged-e87832d3-ffc3-44e0-9f77-cd2eb6073d62 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Dec 6 05:15:18 localhost nova_compute[282193]: 2025-12-06 10:15:18.655 282197 DEBUG oslo_concurrency.lockutils [req-75910a2f-80eb-488b-b75f-d1d834665e19 req-136732f4-6e5c-4f56-8969-2d1ed0f41471 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Acquiring lock "87dc2ce3-2b16-4764-9803-711c2d12c20f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 05:15:18 localhost nova_compute[282193]: 2025-12-06 10:15:18.656 282197 DEBUG oslo_concurrency.lockutils [req-75910a2f-80eb-488b-b75f-d1d834665e19 req-136732f4-6e5c-4f56-8969-2d1ed0f41471 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Lock "87dc2ce3-2b16-4764-9803-711c2d12c20f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 05:15:18 localhost nova_compute[282193]: 2025-12-06 10:15:18.656 282197 DEBUG oslo_concurrency.lockutils [req-75910a2f-80eb-488b-b75f-d1d834665e19 req-136732f4-6e5c-4f56-8969-2d1ed0f41471 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Lock "87dc2ce3-2b16-4764-9803-711c2d12c20f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 05:15:18 localhost nova_compute[282193]: 2025-12-06 10:15:18.656 282197 DEBUG nova.compute.manager [req-75910a2f-80eb-488b-b75f-d1d834665e19 req-136732f4-6e5c-4f56-8969-2d1ed0f41471 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] No 
waiting events found dispatching network-vif-plugged-e87832d3-ffc3-44e0-9f77-cd2eb6073d62 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Dec 6 05:15:18 localhost nova_compute[282193]: 2025-12-06 10:15:18.657 282197 WARNING nova.compute.manager [req-75910a2f-80eb-488b-b75f-d1d834665e19 req-136732f4-6e5c-4f56-8969-2d1ed0f41471 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Received unexpected event network-vif-plugged-e87832d3-ffc3-44e0-9f77-cd2eb6073d62 for instance with vm_state active and task_state None.#033[00m Dec 6 05:15:18 localhost nova_compute[282193]: 2025-12-06 10:15:18.657 282197 DEBUG nova.compute.manager [req-75910a2f-80eb-488b-b75f-d1d834665e19 req-136732f4-6e5c-4f56-8969-2d1ed0f41471 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Received event network-vif-plugged-e87832d3-ffc3-44e0-9f77-cd2eb6073d62 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Dec 6 05:15:18 localhost nova_compute[282193]: 2025-12-06 10:15:18.657 282197 DEBUG oslo_concurrency.lockutils [req-75910a2f-80eb-488b-b75f-d1d834665e19 req-136732f4-6e5c-4f56-8969-2d1ed0f41471 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Acquiring lock "87dc2ce3-2b16-4764-9803-711c2d12c20f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 05:15:18 localhost nova_compute[282193]: 2025-12-06 10:15:18.658 282197 DEBUG oslo_concurrency.lockutils [req-75910a2f-80eb-488b-b75f-d1d834665e19 req-136732f4-6e5c-4f56-8969-2d1ed0f41471 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Lock "87dc2ce3-2b16-4764-9803-711c2d12c20f-events" acquired by 
"nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 05:15:18 localhost nova_compute[282193]: 2025-12-06 10:15:18.658 282197 DEBUG oslo_concurrency.lockutils [req-75910a2f-80eb-488b-b75f-d1d834665e19 req-136732f4-6e5c-4f56-8969-2d1ed0f41471 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Lock "87dc2ce3-2b16-4764-9803-711c2d12c20f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 05:15:18 localhost nova_compute[282193]: 2025-12-06 10:15:18.658 282197 DEBUG nova.compute.manager [req-75910a2f-80eb-488b-b75f-d1d834665e19 req-136732f4-6e5c-4f56-8969-2d1ed0f41471 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] No waiting events found dispatching network-vif-plugged-e87832d3-ffc3-44e0-9f77-cd2eb6073d62 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Dec 6 05:15:18 localhost nova_compute[282193]: 2025-12-06 10:15:18.659 282197 WARNING nova.compute.manager [req-75910a2f-80eb-488b-b75f-d1d834665e19 req-136732f4-6e5c-4f56-8969-2d1ed0f41471 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Received unexpected event network-vif-plugged-e87832d3-ffc3-44e0-9f77-cd2eb6073d62 for instance with vm_state active and task_state None.#033[00m Dec 6 05:15:18 localhost systemd[1]: var-lib-containers-storage-overlay-17d5803262906bbfc10a2359d065da806a9d9644144fa240df1cddd30ea542d6-merged.mount: Deactivated successfully. 
Dec 6 05:15:18 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bf1840969a38cdc03ff2f446425ad666bbcdfe5ae29c7d80e81b374fc1a9dff9-userdata-shm.mount: Deactivated successfully. Dec 6 05:15:18 localhost systemd[1]: run-netns-qdhcp\x2dfeb354e1\x2d97d5\x2d4c74\x2d804a\x2deeb06e5bb155.mount: Deactivated successfully. Dec 6 05:15:20 localhost nova_compute[282193]: 2025-12-06 10:15:20.703 282197 DEBUG nova.compute.manager [req-05a2636c-fccb-4682-9d32-0ff3b66d77a7 req-11adf25e-0fd3-4002-8f7d-42187a1c57d2 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Received event network-vif-plugged-e87832d3-ffc3-44e0-9f77-cd2eb6073d62 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Dec 6 05:15:20 localhost nova_compute[282193]: 2025-12-06 10:15:20.703 282197 DEBUG oslo_concurrency.lockutils [req-05a2636c-fccb-4682-9d32-0ff3b66d77a7 req-11adf25e-0fd3-4002-8f7d-42187a1c57d2 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Acquiring lock "87dc2ce3-2b16-4764-9803-711c2d12c20f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 05:15:20 localhost nova_compute[282193]: 2025-12-06 10:15:20.704 282197 DEBUG oslo_concurrency.lockutils [req-05a2636c-fccb-4682-9d32-0ff3b66d77a7 req-11adf25e-0fd3-4002-8f7d-42187a1c57d2 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Lock "87dc2ce3-2b16-4764-9803-711c2d12c20f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 05:15:20 localhost nova_compute[282193]: 2025-12-06 10:15:20.705 282197 DEBUG oslo_concurrency.lockutils [req-05a2636c-fccb-4682-9d32-0ff3b66d77a7 
req-11adf25e-0fd3-4002-8f7d-42187a1c57d2 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Lock "87dc2ce3-2b16-4764-9803-711c2d12c20f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 05:15:20 localhost nova_compute[282193]: 2025-12-06 10:15:20.705 282197 DEBUG nova.compute.manager [req-05a2636c-fccb-4682-9d32-0ff3b66d77a7 req-11adf25e-0fd3-4002-8f7d-42187a1c57d2 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] No waiting events found dispatching network-vif-plugged-e87832d3-ffc3-44e0-9f77-cd2eb6073d62 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Dec 6 05:15:20 localhost nova_compute[282193]: 2025-12-06 10:15:20.706 282197 WARNING nova.compute.manager [req-05a2636c-fccb-4682-9d32-0ff3b66d77a7 req-11adf25e-0fd3-4002-8f7d-42187a1c57d2 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Received unexpected event network-vif-plugged-e87832d3-ffc3-44e0-9f77-cd2eb6073d62 for instance with vm_state active and task_state None.#033[00m Dec 6 05:15:20 localhost nova_compute[282193]: 2025-12-06 10:15:20.942 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:15:21 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e114 e114: 6 total, 6 up, 6 in Dec 6 05:15:22 localhost dnsmasq[263859]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 6 addresses Dec 6 05:15:22 localhost dnsmasq-dhcp[263859]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:15:22 localhost podman[313287]: 2025-12-06 10:15:22.004082024 +0000 UTC m=+0.051521924 
container kill e4e5ac05442955f65040fee10f3aa342d80518f3e4a072b3d79fe4302f513cf7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:15:22 localhost dnsmasq-dhcp[263859]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:15:22 localhost nova_compute[282193]: 2025-12-06 10:15:22.053 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:15:22 localhost ovn_controller[154851]: 2025-12-06T10:15:22Z|00114|binding|INFO|Releasing lport 9a87eef5-19db-4fcf-a021-4f61b153af33 from this chassis (sb_readonly=0) Dec 6 05:15:22 localhost ovn_controller[154851]: 2025-12-06T10:15:22Z|00115|binding|INFO|Releasing lport 8839eeed-ff6b-46d9-b40d-610788617728 from this chassis (sb_readonly=0) Dec 6 05:15:22 localhost ovn_controller[154851]: 2025-12-06T10:15:22Z|00116|binding|INFO|Releasing lport 4fb81ffd-e198-4628-9bd0-0c0f0c89c33a from this chassis (sb_readonly=0) Dec 6 05:15:22 localhost nova_compute[282193]: 2025-12-06 10:15:22.176 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:15:22 localhost neutron_sriov_agent[256690]: 2025-12-06 10:15:22.678 2 INFO neutron.agent.securitygroups_rpc [req-32f9c27c-7e39-487b-9f96-37ea07c2a545 req-64092713-96b8-4823-87de-00cf06a3e614 da7bbd24eb95438897585b10577ea2e0 da995d8e002548889747013c0eeca935 - - default default] Security group member updated ['581a4637-eff2-45f4-92f3-d575b736a840']#033[00m Dec 
6 05:15:22 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:15:22.733 263652 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:15:22Z, description=, device_id=89419cdc-1b37-4fdd-ad4b-013514e141a9, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=22b2d742-fd5b-4bf4-898c-5da61dccc8af, ip_allocation=immediate, mac_address=fa:16:3e:df:0c:e8, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:14:54Z, description=, dns_domain=, id=deb7774c-e96b-4e7f-88d7-ed9d740915f4, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ServersV294TestFqdnHostnames-1078460514-network, port_security_enabled=True, project_id=da995d8e002548889747013c0eeca935, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=20432, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=633, status=ACTIVE, subnets=['21dfbaea-2209-4e97-94d1-e29a4f3ba83b'], tags=[], tenant_id=da995d8e002548889747013c0eeca935, updated_at=2025-12-06T10:14:55Z, vlan_transparent=None, network_id=deb7774c-e96b-4e7f-88d7-ed9d740915f4, port_security_enabled=True, project_id=da995d8e002548889747013c0eeca935, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['581a4637-eff2-45f4-92f3-d575b736a840'], standard_attr_id=711, status=DOWN, tags=[], tenant_id=da995d8e002548889747013c0eeca935, updated_at=2025-12-06T10:15:22Z on network deb7774c-e96b-4e7f-88d7-ed9d740915f4#033[00m Dec 6 05:15:22 localhost sshd[313323]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:15:22 localhost nova_compute[282193]: 2025-12-06 10:15:22.917 282197 DEBUG oslo_concurrency.lockutils 
[None req-9d46d05d-bff5-4191-8d08-2ac7960589d8 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] Acquiring lock "87dc2ce3-2b16-4764-9803-711c2d12c20f" by "nova.compute.manager.ComputeManager.terminate_instance..do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 05:15:22 localhost nova_compute[282193]: 2025-12-06 10:15:22.918 282197 DEBUG oslo_concurrency.lockutils [None req-9d46d05d-bff5-4191-8d08-2ac7960589d8 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] Lock "87dc2ce3-2b16-4764-9803-711c2d12c20f" acquired by "nova.compute.manager.ComputeManager.terminate_instance..do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 05:15:22 localhost nova_compute[282193]: 2025-12-06 10:15:22.918 282197 DEBUG oslo_concurrency.lockutils [None req-9d46d05d-bff5-4191-8d08-2ac7960589d8 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] Acquiring lock "87dc2ce3-2b16-4764-9803-711c2d12c20f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 05:15:22 localhost nova_compute[282193]: 2025-12-06 10:15:22.919 282197 DEBUG oslo_concurrency.lockutils [None req-9d46d05d-bff5-4191-8d08-2ac7960589d8 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] Lock "87dc2ce3-2b16-4764-9803-711c2d12c20f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 05:15:22 localhost nova_compute[282193]: 2025-12-06 10:15:22.920 282197 DEBUG oslo_concurrency.lockutils [None req-9d46d05d-bff5-4191-8d08-2ac7960589d8 ac2e85103fd14829ad4e6df2357da95b 
7897d6398eb64eb29c66df8db792e581 - - default default] Lock "87dc2ce3-2b16-4764-9803-711c2d12c20f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 05:15:22 localhost nova_compute[282193]: 2025-12-06 10:15:22.921 282197 INFO nova.compute.manager [None req-9d46d05d-bff5-4191-8d08-2ac7960589d8 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Terminating instance#033[00m Dec 6 05:15:22 localhost nova_compute[282193]: 2025-12-06 10:15:22.923 282197 DEBUG nova.compute.manager [None req-9d46d05d-bff5-4191-8d08-2ac7960589d8 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m Dec 6 05:15:22 localhost dnsmasq[312566]: read /var/lib/neutron/dhcp/deb7774c-e96b-4e7f-88d7-ed9d740915f4/addn_hosts - 2 addresses Dec 6 05:15:22 localhost dnsmasq-dhcp[312566]: read /var/lib/neutron/dhcp/deb7774c-e96b-4e7f-88d7-ed9d740915f4/host Dec 6 05:15:22 localhost podman[313326]: 2025-12-06 10:15:22.978932463 +0000 UTC m=+0.073935065 container kill 8a8b7a6a9724101bff1398ade8c854164d1816271ca6c4f86a12732f70229362 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-deb7774c-e96b-4e7f-88d7-ed9d740915f4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:15:22 localhost dnsmasq-dhcp[312566]: read 
/var/lib/neutron/dhcp/deb7774c-e96b-4e7f-88d7-ed9d740915f4/opts Dec 6 05:15:22 localhost systemd[1]: tmp-crun.02v3Ck.mount: Deactivated successfully. Dec 6 05:15:22 localhost kernel: device tape87832d3-ff left promiscuous mode Dec 6 05:15:22 localhost NetworkManager[5973]: [1765016122.9862] device (tape87832d3-ff): state change: disconnected -> unmanaged (reason 'unmanaged', sys-iface-state: 'removed') Dec 6 05:15:22 localhost ovn_controller[154851]: 2025-12-06T10:15:22Z|00117|binding|INFO|Releasing lport e87832d3-ffc3-44e0-9f77-cd2eb6073d62 from this chassis (sb_readonly=0) Dec 6 05:15:22 localhost ovn_controller[154851]: 2025-12-06T10:15:22Z|00118|binding|INFO|Setting lport e87832d3-ffc3-44e0-9f77-cd2eb6073d62 down in Southbound Dec 6 05:15:22 localhost ovn_controller[154851]: 2025-12-06T10:15:22Z|00119|binding|INFO|Releasing lport 3b69daca-b91a-4923-9795-2e6a02ee3d59 from this chassis (sb_readonly=0) Dec 6 05:15:22 localhost ovn_controller[154851]: 2025-12-06T10:15:22Z|00120|binding|INFO|Setting lport 3b69daca-b91a-4923-9795-2e6a02ee3d59 down in Southbound Dec 6 05:15:22 localhost ovn_controller[154851]: 2025-12-06T10:15:22Z|00121|binding|INFO|Removing iface tape87832d3-ff ovn-installed in OVS Dec 6 05:15:22 localhost nova_compute[282193]: 2025-12-06 10:15:22.994 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:15:23 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:23.002 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0e:f5:37 10.100.0.14'], port_security=['fa:16:3e:0e:f5:37 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 
'tempest-parent-876689022', 'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '87dc2ce3-2b16-4764-9803-711c2d12c20f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-47d636a7-c520-4320-aa94-bfb41f418584', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-876689022', 'neutron:project_id': '7897d6398eb64eb29c66df8db792e581', 'neutron:revision_number': '12', 'neutron:security_group_ids': 'bfad329a-0ea3-4b02-8e91-9d15749f8c9b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005548789.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6898c302-0153-460c-9cb1-4c62ebc9ff31, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[], logical_port=e87832d3-ffc3-44e0-9f77-cd2eb6073d62) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:15:23 localhost ovn_controller[154851]: 2025-12-06T10:15:23Z|00122|binding|INFO|Releasing lport 9a87eef5-19db-4fcf-a021-4f61b153af33 from this chassis (sb_readonly=0) Dec 6 05:15:23 localhost ovn_controller[154851]: 2025-12-06T10:15:23Z|00123|binding|INFO|Releasing lport 8839eeed-ff6b-46d9-b40d-610788617728 from this chassis (sb_readonly=0) Dec 6 05:15:23 localhost ovn_controller[154851]: 2025-12-06T10:15:23Z|00124|binding|INFO|Releasing lport 4fb81ffd-e198-4628-9bd0-0c0f0c89c33a from this chassis (sb_readonly=0) Dec 6 05:15:23 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:23.005 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a8:e1:a6 19.80.0.214'], port_security=['fa:16:3e:a8:e1:a6 19.80.0.214'], type=, nat_addresses=[], virtual_parent=[], up=[False], 
options={'requested-chassis': ''}, parent_port=['e87832d3-ffc3-44e0-9f77-cd2eb6073d62'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-546955816', 'neutron:cidrs': '19.80.0.214/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-932e7489-8895-41d4-92c6-0d944505e7e6', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-546955816', 'neutron:project_id': '7897d6398eb64eb29c66df8db792e581', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'bfad329a-0ea3-4b02-8e91-9d15749f8c9b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=f9bb405c-aea0-4a81-a300-475f8e1e8050, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=3b69daca-b91a-4923-9795-2e6a02ee3d59) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:15:23 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:23.007 160509 INFO neutron.agent.ovn.metadata.agent [-] Port e87832d3-ffc3-44e0-9f77-cd2eb6073d62 in datapath 47d636a7-c520-4320-aa94-bfb41f418584 unbound from our chassis#033[00m Dec 6 05:15:23 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:23.011 160509 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 47d636a7-c520-4320-aa94-bfb41f418584, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 6 05:15:23 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:23.012 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[57c5a341-953a-4e6f-afb0-66a6ac7ab0cb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:15:23 localhost 
ovn_metadata_agent[160504]: 2025-12-06 10:15:23.013 160509 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-47d636a7-c520-4320-aa94-bfb41f418584 namespace which is not needed anymore#033[00m Dec 6 05:15:23 localhost nova_compute[282193]: 2025-12-06 10:15:23.025 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:15:23 localhost systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000007.scope: Deactivated successfully. Dec 6 05:15:23 localhost systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000007.scope: Consumed 2.800s CPU time. Dec 6 05:15:23 localhost systemd-machined[84444]: Machine qemu-3-instance-00000007 terminated. Dec 6 05:15:23 localhost nova_compute[282193]: 2025-12-06 10:15:23.033 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:15:23 localhost nova_compute[282193]: 2025-12-06 10:15:23.160 282197 INFO nova.virt.libvirt.driver [-] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Instance destroyed successfully.#033[00m Dec 6 05:15:23 localhost nova_compute[282193]: 2025-12-06 10:15:23.161 282197 DEBUG nova.objects.instance [None req-9d46d05d-bff5-4191-8d08-2ac7960589d8 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] Lazy-loading 'resources' on Instance uuid 87dc2ce3-2b16-4764-9803-711c2d12c20f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 6 05:15:23 localhost nova_compute[282193]: 2025-12-06 10:15:23.172 282197 DEBUG nova.virt.libvirt.vif [None req-9d46d05d-bff5-4191-8d08-2ac7960589d8 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] vif_type=ovs 
instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-06T10:14:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-1999616987',ec2_ids=,ephemeral_gb=0,ephemeral_key_uuid=None,fault=,flavor=Flavor(5),hidden=False,host='np0005548789.localdomain',hostname='tempest-liveautoblockmigrationv225test-server-1999616987',id=7,image_ref='6a944ab6-8965-4055-b7fc-af6e395005ea',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=,launch_index=0,launched_at=2025-12-06T10:14:45Z,launched_on='np0005548790.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=,new_flavor=None,node='np0005548789.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=,pci_requests=,power_state=1,progress=0,project_id='7897d6398eb64eb29c66df8db792e581',ramdisk_id='',reservation_id='r-tcv45ne4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='6a944ab6-8965-4055-b7fc-af6e395005ea',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-265776820',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-265776820-project-member'},tags=,task_state='deleting',terminated_at=None,trusted_certs=,updated_at=2025-12-06T10:15:08Z,user_data=None,user_id='ac2e851
03fd14829ad4e6df2357da95b',uuid=87dc2ce3-2b16-4764-9803-711c2d12c20f,vcpu_model=,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e87832d3-ffc3-44e0-9f77-cd2eb6073d62", "address": "fa:16:3e:0e:f5:37", "network": {"id": "47d636a7-c520-4320-aa94-bfb41f418584", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1313845827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "7897d6398eb64eb29c66df8db792e581", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape87832d3-ff", "ovs_interfaceid": "e87832d3-ffc3-44e0-9f77-cd2eb6073d62", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m Dec 6 05:15:23 localhost nova_compute[282193]: 2025-12-06 10:15:23.173 282197 DEBUG nova.network.os_vif_util [None req-9d46d05d-bff5-4191-8d08-2ac7960589d8 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] Converting VIF {"id": "e87832d3-ffc3-44e0-9f77-cd2eb6073d62", "address": "fa:16:3e:0e:f5:37", "network": {"id": "47d636a7-c520-4320-aa94-bfb41f418584", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1313845827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "7897d6398eb64eb29c66df8db792e581", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape87832d3-ff", "ovs_interfaceid": "e87832d3-ffc3-44e0-9f77-cd2eb6073d62", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Dec 6 05:15:23 localhost nova_compute[282193]: 2025-12-06 10:15:23.174 282197 DEBUG nova.network.os_vif_util [None req-9d46d05d-bff5-4191-8d08-2ac7960589d8 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0e:f5:37,bridge_name='br-int',has_traffic_filtering=True,id=e87832d3-ffc3-44e0-9f77-cd2eb6073d62,network=Network(47d636a7-c520-4320-aa94-bfb41f418584),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape87832d3-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Dec 6 05:15:23 localhost nova_compute[282193]: 2025-12-06 10:15:23.175 282197 DEBUG os_vif [None req-9d46d05d-bff5-4191-8d08-2ac7960589d8 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0e:f5:37,bridge_name='br-int',has_traffic_filtering=True,id=e87832d3-ffc3-44e0-9f77-cd2eb6073d62,network=Network(47d636a7-c520-4320-aa94-bfb41f418584),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape87832d3-ff') unplug 
/usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m Dec 6 05:15:23 localhost nova_compute[282193]: 2025-12-06 10:15:23.177 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 18 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:15:23 localhost nova_compute[282193]: 2025-12-06 10:15:23.177 282197 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape87832d3-ff, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 6 05:15:23 localhost systemd[1]: tmp-crun.vIdInn.mount: Deactivated successfully. Dec 6 05:15:23 localhost nova_compute[282193]: 2025-12-06 10:15:23.215 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:15:23 localhost neutron-haproxy-ovnmeta-47d636a7-c520-4320-aa94-bfb41f418584[312771]: [NOTICE] (312775) : haproxy version is 2.8.14-c23fe91 Dec 6 05:15:23 localhost neutron-haproxy-ovnmeta-47d636a7-c520-4320-aa94-bfb41f418584[312771]: [NOTICE] (312775) : path to executable is /usr/sbin/haproxy Dec 6 05:15:23 localhost neutron-haproxy-ovnmeta-47d636a7-c520-4320-aa94-bfb41f418584[312771]: [WARNING] (312775) : Exiting Master process... Dec 6 05:15:23 localhost neutron-haproxy-ovnmeta-47d636a7-c520-4320-aa94-bfb41f418584[312771]: [WARNING] (312775) : Exiting Master process... 
Dec 6 05:15:23 localhost nova_compute[282193]: 2025-12-06 10:15:23.219 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:15:23 localhost neutron-haproxy-ovnmeta-47d636a7-c520-4320-aa94-bfb41f418584[312771]: [ALERT] (312775) : Current worker (312777) exited with code 143 (Terminated) Dec 6 05:15:23 localhost neutron-haproxy-ovnmeta-47d636a7-c520-4320-aa94-bfb41f418584[312771]: [WARNING] (312775) : All workers exited. Exiting... (0) Dec 6 05:15:23 localhost systemd[1]: libpod-30f8df0ce350363c4f7f35e7678b3d71ec43583d72e5615def6c2519fa7edac0.scope: Deactivated successfully. Dec 6 05:15:23 localhost nova_compute[282193]: 2025-12-06 10:15:23.221 282197 INFO os_vif [None req-9d46d05d-bff5-4191-8d08-2ac7960589d8 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0e:f5:37,bridge_name='br-int',has_traffic_filtering=True,id=e87832d3-ffc3-44e0-9f77-cd2eb6073d62,network=Network(47d636a7-c520-4320-aa94-bfb41f418584),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape87832d3-ff')#033[00m Dec 6 05:15:23 localhost podman[313368]: 2025-12-06 10:15:23.23205495 +0000 UTC m=+0.113509384 container died 30f8df0ce350363c4f7f35e7678b3d71ec43583d72e5615def6c2519fa7edac0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-47d636a7-c520-4320-aa94-bfb41f418584, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS) Dec 6 05:15:23 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e114 _set_new_cache_sizes 
cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:15:23 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:15:23.265 263652 INFO neutron.agent.dhcp.agent [None req-300ea3ab-a6b2-4b27-9192-3b7890bf5a3c - - - - - -] DHCP configuration for ports {'22b2d742-fd5b-4bf4-898c-5da61dccc8af'} is completed#033[00m Dec 6 05:15:23 localhost podman[313368]: 2025-12-06 10:15:23.268232487 +0000 UTC m=+0.149686911 container cleanup 30f8df0ce350363c4f7f35e7678b3d71ec43583d72e5615def6c2519fa7edac0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-47d636a7-c520-4320-aa94-bfb41f418584, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Dec 6 05:15:23 localhost podman[313395]: 2025-12-06 10:15:23.296255277 +0000 UTC m=+0.061360722 container cleanup 30f8df0ce350363c4f7f35e7678b3d71ec43583d72e5615def6c2519fa7edac0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-47d636a7-c520-4320-aa94-bfb41f418584, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:15:23 localhost systemd[1]: libpod-conmon-30f8df0ce350363c4f7f35e7678b3d71ec43583d72e5615def6c2519fa7edac0.scope: Deactivated successfully. 
Dec 6 05:15:23 localhost podman[313424]: 2025-12-06 10:15:23.369497629 +0000 UTC m=+0.075172812 container remove 30f8df0ce350363c4f7f35e7678b3d71ec43583d72e5615def6c2519fa7edac0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-47d636a7-c520-4320-aa94-bfb41f418584, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true) Dec 6 05:15:23 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:23.372 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[066bf514-db4d-4353-9c59-c3f7dd825bda]: (4, ('Sat Dec 6 10:15:23 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-47d636a7-c520-4320-aa94-bfb41f418584 (30f8df0ce350363c4f7f35e7678b3d71ec43583d72e5615def6c2519fa7edac0)\n30f8df0ce350363c4f7f35e7678b3d71ec43583d72e5615def6c2519fa7edac0\nSat Dec 6 10:15:23 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-47d636a7-c520-4320-aa94-bfb41f418584 (30f8df0ce350363c4f7f35e7678b3d71ec43583d72e5615def6c2519fa7edac0)\n30f8df0ce350363c4f7f35e7678b3d71ec43583d72e5615def6c2519fa7edac0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:15:23 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:23.374 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[04278edd-3bf8-415b-92f6-4cbe64cc7d60]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:15:23 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:23.375 160509 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap47d636a7-c0, bridge=None, if_exists=True) do_commit 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 6 05:15:23 localhost nova_compute[282193]: 2025-12-06 10:15:23.376 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:15:23 localhost kernel: device tap47d636a7-c0 left promiscuous mode Dec 6 05:15:23 localhost nova_compute[282193]: 2025-12-06 10:15:23.378 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:15:23 localhost nova_compute[282193]: 2025-12-06 10:15:23.384 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:15:23 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:23.384 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[b0f8c986-4794-409a-81a0-56787e2cb568]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:15:23 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:23.401 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[e0335da9-4a90-4feb-bf17-55ed0250bf24]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:15:23 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:23.402 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[c5062c76-d79c-4ca5-81c9-f23cc8f29da8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:15:23 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:23.416 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[42c5bba4-a925-47c1-afa6-e3c53692e6c8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], 
['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 
'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1252232, 'reachable_time': 40375, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1356, 
'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 313445, 'error': None, 'target': 'ovnmeta-47d636a7-c520-4320-aa94-bfb41f418584', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:15:23 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:23.418 160720 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-47d636a7-c520-4320-aa94-bfb41f418584 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m Dec 6 05:15:23 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:23.418 160720 DEBUG oslo.privsep.daemon [-] privsep: reply[fb66873e-ed04-4a51-816a-68fad6626b4f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:15:23 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:23.419 160509 INFO neutron.agent.ovn.metadata.agent [-] Port 3b69daca-b91a-4923-9795-2e6a02ee3d59 in datapath 932e7489-8895-41d4-92c6-0d944505e7e6 unbound from our chassis#033[00m Dec 6 05:15:23 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:23.420 160509 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 932e7489-8895-41d4-92c6-0d944505e7e6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 6 05:15:23 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:23.421 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[c17a9cf2-977e-499b-9a81-f17fda8ee91a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:15:23 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:23.421 160509 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-932e7489-8895-41d4-92c6-0d944505e7e6 namespace which is not needed anymore#033[00m Dec 6 05:15:23 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:15:23.484 263652 
INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=np0005548788.localdomain, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:15:22Z, description=, device_id=89419cdc-1b37-4fdd-ad4b-013514e141a9, device_owner=compute:nova, dns_assignment=[], dns_domain=, dns_name=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-guest-test.domaintest.com, extra_dhcp_opts=[], fixed_ips=[], id=22b2d742-fd5b-4bf4-898c-5da61dccc8af, ip_allocation=immediate, mac_address=fa:16:3e:df:0c:e8, name=, network_id=deb7774c-e96b-4e7f-88d7-ed9d740915f4, port_security_enabled=True, project_id=da995d8e002548889747013c0eeca935, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=2, security_groups=['581a4637-eff2-45f4-92f3-d575b736a840'], standard_attr_id=711, status=DOWN, tags=[], tenant_id=da995d8e002548889747013c0eeca935, updated_at=2025-12-06T10:15:23Z on network deb7774c-e96b-4e7f-88d7-ed9d740915f4#033[00m Dec 6 05:15:23 localhost neutron-haproxy-ovnmeta-932e7489-8895-41d4-92c6-0d944505e7e6[312846]: [NOTICE] (312868) : haproxy version is 2.8.14-c23fe91 Dec 6 05:15:23 localhost neutron-haproxy-ovnmeta-932e7489-8895-41d4-92c6-0d944505e7e6[312846]: [NOTICE] (312868) : path to executable is /usr/sbin/haproxy Dec 6 05:15:23 localhost neutron-haproxy-ovnmeta-932e7489-8895-41d4-92c6-0d944505e7e6[312846]: [WARNING] (312868) : Exiting Master process... Dec 6 05:15:23 localhost neutron-haproxy-ovnmeta-932e7489-8895-41d4-92c6-0d944505e7e6[312846]: [ALERT] (312868) : Current worker (312870) exited with code 143 (Terminated) Dec 6 05:15:23 localhost neutron-haproxy-ovnmeta-932e7489-8895-41d4-92c6-0d944505e7e6[312846]: [WARNING] (312868) : All workers exited. Exiting... (0) Dec 6 05:15:23 localhost systemd[1]: libpod-612bd559f2204770efc500c44b1ad1302ada83654a900876cb5eb3da5fa5d50c.scope: Deactivated successfully. 
Dec 6 05:15:23 localhost dnsmasq[312566]: read /var/lib/neutron/dhcp/deb7774c-e96b-4e7f-88d7-ed9d740915f4/addn_hosts - 2 addresses Dec 6 05:15:23 localhost dnsmasq-dhcp[312566]: read /var/lib/neutron/dhcp/deb7774c-e96b-4e7f-88d7-ed9d740915f4/host Dec 6 05:15:23 localhost dnsmasq-dhcp[312566]: read /var/lib/neutron/dhcp/deb7774c-e96b-4e7f-88d7-ed9d740915f4/opts Dec 6 05:15:23 localhost podman[313503]: 2025-12-06 10:15:23.655199104 +0000 UTC m=+0.045975545 container kill 8a8b7a6a9724101bff1398ade8c854164d1816271ca6c4f86a12732f70229362 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-deb7774c-e96b-4e7f-88d7-ed9d740915f4, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Dec 6 05:15:23 localhost podman[313463]: 2025-12-06 10:15:23.686508134 +0000 UTC m=+0.167790910 container died 612bd559f2204770efc500c44b1ad1302ada83654a900876cb5eb3da5fa5d50c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-932e7489-8895-41d4-92c6-0d944505e7e6, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125) Dec 6 05:15:23 localhost podman[313463]: 2025-12-06 10:15:23.709264144 +0000 UTC m=+0.190546900 container cleanup 612bd559f2204770efc500c44b1ad1302ada83654a900876cb5eb3da5fa5d50c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, 
name=neutron-haproxy-ovnmeta-932e7489-8895-41d4-92c6-0d944505e7e6, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:15:23 localhost podman[313492]: 2025-12-06 10:15:23.718983929 +0000 UTC m=+0.132482889 container cleanup 612bd559f2204770efc500c44b1ad1302ada83654a900876cb5eb3da5fa5d50c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-932e7489-8895-41d4-92c6-0d944505e7e6, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:15:23 localhost systemd[1]: libpod-conmon-612bd559f2204770efc500c44b1ad1302ada83654a900876cb5eb3da5fa5d50c.scope: Deactivated successfully. 
Dec 6 05:15:23 localhost podman[313529]: 2025-12-06 10:15:23.781493815 +0000 UTC m=+0.058974810 container remove 612bd559f2204770efc500c44b1ad1302ada83654a900876cb5eb3da5fa5d50c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-932e7489-8895-41d4-92c6-0d944505e7e6, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true) Dec 6 05:15:23 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:23.787 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[d9bd1c63-7c1e-4bab-b769-4c1e8962bce3]: (4, ('Sat Dec 6 10:15:23 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-932e7489-8895-41d4-92c6-0d944505e7e6 (612bd559f2204770efc500c44b1ad1302ada83654a900876cb5eb3da5fa5d50c)\n612bd559f2204770efc500c44b1ad1302ada83654a900876cb5eb3da5fa5d50c\nSat Dec 6 10:15:23 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-932e7489-8895-41d4-92c6-0d944505e7e6 (612bd559f2204770efc500c44b1ad1302ada83654a900876cb5eb3da5fa5d50c)\n612bd559f2204770efc500c44b1ad1302ada83654a900876cb5eb3da5fa5d50c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:15:23 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:23.788 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[38186457-3592-4611-819b-cba29bd8de15]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:15:23 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:23.788 160509 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap932e7489-80, bridge=None, if_exists=True) do_commit 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 6 05:15:23 localhost nova_compute[282193]: 2025-12-06 10:15:23.790 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:15:23 localhost kernel: device tap932e7489-80 left promiscuous mode Dec 6 05:15:23 localhost nova_compute[282193]: 2025-12-06 10:15:23.795 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:15:23 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:23.797 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[b106c2da-f411-46e5-9050-c7edbb0c673b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:15:23 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:23.812 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[f2befca4-9408-4940-92c2-45538027c411]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:15:23 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:23.812 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[27dbd2b5-aedb-439d-ae91-9ecbb11553c4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:15:23 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:23.824 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[07eb7743-0f5d-4ba9-9860-60e22b56bed2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 
65535], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 
1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1252319, 'reachable_time': 23571, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1356, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 313549, 'error': None, 'target': 'ovnmeta-932e7489-8895-41d4-92c6-0d944505e7e6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back 
/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:15:23 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:23.825 160720 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-932e7489-8895-41d4-92c6-0d944505e7e6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m Dec 6 05:15:23 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:23.825 160720 DEBUG oslo.privsep.daemon [-] privsep: reply[7a531c97-87bb-4928-947a-8c5ad60f041f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:15:23 localhost nova_compute[282193]: 2025-12-06 10:15:23.838 282197 INFO nova.virt.libvirt.driver [None req-9d46d05d-bff5-4191-8d08-2ac7960589d8 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Deleting instance files /var/lib/nova/instances/87dc2ce3-2b16-4764-9803-711c2d12c20f_del#033[00m Dec 6 05:15:23 localhost nova_compute[282193]: 2025-12-06 10:15:23.839 282197 INFO nova.virt.libvirt.driver [None req-9d46d05d-bff5-4191-8d08-2ac7960589d8 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Deletion of /var/lib/nova/instances/87dc2ce3-2b16-4764-9803-711c2d12c20f_del complete#033[00m Dec 6 05:15:23 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:15:23.885 263652 INFO neutron.agent.dhcp.agent [None req-efd4d040-c2ef-4514-8d3a-d6defd2372ce - - - - - -] DHCP configuration for ports {'22b2d742-fd5b-4bf4-898c-5da61dccc8af'} is completed#033[00m Dec 6 05:15:23 localhost nova_compute[282193]: 2025-12-06 10:15:23.891 282197 INFO nova.compute.manager [None req-9d46d05d-bff5-4191-8d08-2ac7960589d8 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Took 0.97 seconds to destroy the instance on the 
hypervisor.#033[00m Dec 6 05:15:23 localhost nova_compute[282193]: 2025-12-06 10:15:23.892 282197 DEBUG oslo.service.loopingcall [None req-9d46d05d-bff5-4191-8d08-2ac7960589d8 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m Dec 6 05:15:23 localhost nova_compute[282193]: 2025-12-06 10:15:23.892 282197 DEBUG nova.compute.manager [-] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m Dec 6 05:15:23 localhost nova_compute[282193]: 2025-12-06 10:15:23.893 282197 DEBUG nova.network.neutron [-] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m Dec 6 05:15:23 localhost podman[241090]: time="2025-12-06T10:15:23Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 6 05:15:23 localhost podman[241090]: @ - - [06/Dec/2025:10:15:23 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 157928 "" "Go-http-client/1.1" Dec 6 05:15:23 localhost podman[241090]: @ - - [06/Dec/2025:10:15:23 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19735 "" "Go-http-client/1.1" Dec 6 05:15:24 localhost systemd[1]: var-lib-containers-storage-overlay-0cd54fc14950974dc4be0ab802b5b1b70a4a6778ec88aa2829505a69bc3f2898-merged.mount: Deactivated successfully. Dec 6 05:15:24 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-612bd559f2204770efc500c44b1ad1302ada83654a900876cb5eb3da5fa5d50c-userdata-shm.mount: Deactivated successfully. 
Dec 6 05:15:24 localhost systemd[1]: run-netns-ovnmeta\x2d932e7489\x2d8895\x2d41d4\x2d92c6\x2d0d944505e7e6.mount: Deactivated successfully. Dec 6 05:15:24 localhost systemd[1]: var-lib-containers-storage-overlay-9c58caa5621f3279794f7dc107a894db9a252904b5522821832a2bf549b22bd7-merged.mount: Deactivated successfully. Dec 6 05:15:24 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-30f8df0ce350363c4f7f35e7678b3d71ec43583d72e5615def6c2519fa7edac0-userdata-shm.mount: Deactivated successfully. Dec 6 05:15:24 localhost systemd[1]: run-netns-ovnmeta\x2d47d636a7\x2dc520\x2d4320\x2daa94\x2dbfb41f418584.mount: Deactivated successfully. Dec 6 05:15:25 localhost nova_compute[282193]: 2025-12-06 10:15:25.440 282197 DEBUG nova.network.neutron [-] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 6 05:15:25 localhost nova_compute[282193]: 2025-12-06 10:15:25.456 282197 INFO nova.compute.manager [-] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Took 1.56 seconds to deallocate network for instance.#033[00m Dec 6 05:15:25 localhost nova_compute[282193]: 2025-12-06 10:15:25.518 282197 DEBUG oslo_concurrency.lockutils [None req-9d46d05d-bff5-4191-8d08-2ac7960589d8 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 05:15:25 localhost nova_compute[282193]: 2025-12-06 10:15:25.519 282197 DEBUG oslo_concurrency.lockutils [None req-9d46d05d-bff5-4191-8d08-2ac7960589d8 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 05:15:25 localhost nova_compute[282193]: 2025-12-06 10:15:25.522 282197 DEBUG oslo_concurrency.lockutils [None req-9d46d05d-bff5-4191-8d08-2ac7960589d8 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 05:15:25 localhost nova_compute[282193]: 2025-12-06 10:15:25.562 282197 INFO nova.scheduler.client.report [None req-9d46d05d-bff5-4191-8d08-2ac7960589d8 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] Deleted allocations for instance 87dc2ce3-2b16-4764-9803-711c2d12c20f#033[00m Dec 6 05:15:25 localhost nova_compute[282193]: 2025-12-06 10:15:25.638 282197 DEBUG oslo_concurrency.lockutils [None req-9d46d05d-bff5-4191-8d08-2ac7960589d8 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] Lock "87dc2ce3-2b16-4764-9803-711c2d12c20f" "released" by "nova.compute.manager.ComputeManager.terminate_instance..do_terminate_instance" :: held 2.720s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 05:15:26 localhost neutron_sriov_agent[256690]: 2025-12-06 10:15:26.979 2 INFO neutron.agent.securitygroups_rpc [None req-4bf7090f-619c-441c-8a74-44ff051b2a47 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] Security group member updated ['bfad329a-0ea3-4b02-8e91-9d15749f8c9b']#033[00m Dec 6 05:15:26 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e115 e115: 6 total, 6 up, 6 in Dec 6 05:15:27 localhost nova_compute[282193]: 2025-12-06 10:15:27.057 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:15:28 localhost 
nova_compute[282193]: 2025-12-06 10:15:28.217 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:15:28 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:15:28 localhost neutron_sriov_agent[256690]: 2025-12-06 10:15:28.683 2 INFO neutron.agent.securitygroups_rpc [None req-9fa949f8-0732-40f0-9fd9-bacbdfb578db ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] Security group member updated ['bfad329a-0ea3-4b02-8e91-9d15749f8c9b']#033[00m Dec 6 05:15:28 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:15:28.865 263652 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:15:28Z, description=, device_id=af0f743c-b34f-4641-9bca-6f879d4af6de, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=72f817fe-8a65-4586-937f-6a6314c57627, ip_allocation=immediate, mac_address=fa:16:3e:e5:9e:66, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T08:43:55Z, description=, dns_domain=, id=8e238f59-5792-4ff4-95af-f993c8e9e14f, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=3d603431c0bb4967bafc7a0aa6108bfe, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['be7ff830-a3e2-49cd-a05f-63bc528d6d1c'], tags=[], tenant_id=3d603431c0bb4967bafc7a0aa6108bfe, 
updated_at=2025-12-06T08:44:01Z, vlan_transparent=None, network_id=8e238f59-5792-4ff4-95af-f993c8e9e14f, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=764, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-06T10:15:28Z on network 8e238f59-5792-4ff4-95af-f993c8e9e14f#033[00m Dec 6 05:15:28 localhost nova_compute[282193]: 2025-12-06 10:15:28.899 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:15:28 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:28.900 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:6c:02', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:a8:2f:0c:cb:a1'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:15:28 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:28.902 160509 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Dec 6 05:15:29 localhost podman[313567]: 2025-12-06 10:15:29.076382397 +0000 UTC m=+0.045944164 container kill e4e5ac05442955f65040fee10f3aa342d80518f3e4a072b3d79fe4302f513cf7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes 
Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:15:29 localhost dnsmasq[263859]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 7 addresses Dec 6 05:15:29 localhost dnsmasq-dhcp[263859]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:15:29 localhost dnsmasq-dhcp[263859]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:15:29 localhost systemd[1]: tmp-crun.I6aJPS.mount: Deactivated successfully. Dec 6 05:15:29 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:15:29.247 263652 INFO neutron.agent.dhcp.agent [None req-0a3d6d5f-3c2f-4934-b687-092cf9c4a6cd - - - - - -] DHCP configuration for ports {'72f817fe-8a65-4586-937f-6a6314c57627'} is completed#033[00m Dec 6 05:15:29 localhost nova_compute[282193]: 2025-12-06 10:15:29.345 282197 DEBUG nova.virt.libvirt.driver [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Creating tmpfile /var/lib/nova/instances/tmpm9_iowog to notify to other compute nodes that they should mount the same storage. 
_create_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10041#033[00m Dec 6 05:15:29 localhost nova_compute[282193]: 2025-12-06 10:15:29.346 282197 DEBUG nova.compute.manager [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - default default] destination check data is LibvirtLiveMigrateData(bdms=,block_migration=False,disk_available_mb=12288,disk_over_commit=False,dst_numa_info=,dst_supports_numa_live_migration=,dst_wants_file_backed_memory=False,file_backed_memory_discard=,filename='tmpm9_iowog',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path=,is_shared_block_storage=,is_shared_instance_path=,is_volume_backed=,migration=,old_vol_attachment_ids=,serial_listen_addr=None,serial_listen_ports=,src_supports_native_luks=,src_supports_numa_live_migration=,supported_perf_events=,target_connect_addr=,vifs=[VIFMigrateData],wait_for_vif_plugged=) check_can_live_migrate_destination /usr/lib/python3.9/site-packages/nova/compute/manager.py:8476#033[00m Dec 6 05:15:29 localhost nova_compute[282193]: 2025-12-06 10:15:29.602 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:15:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999. Dec 6 05:15:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941. 
Dec 6 05:15:29 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:29.904 160509 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b142a5ef-fbed-4e92-aa78-e3ad080c6370, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 6 05:15:29 localhost podman[313588]: 2025-12-06 10:15:29.934362692 +0000 UTC m=+0.084900337 container health_status b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Dec 6 05:15:29 localhost podman[313588]: 2025-12-06 10:15:29.94320561 +0000 UTC m=+0.093743255 container exec_died b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 
'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Dec 6 05:15:29 localhost systemd[1]: b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941.service: Deactivated successfully. Dec 6 05:15:30 localhost podman[313587]: 2025-12-06 10:15:30.036323034 +0000 UTC m=+0.192104788 container health_status 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', 
'/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:15:30 localhost podman[313587]: 2025-12-06 10:15:30.069344796 +0000 UTC m=+0.225126520 container exec_died 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent) Dec 6 05:15:30 localhost systemd[1]: 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.service: Deactivated successfully. Dec 6 05:15:30 localhost nova_compute[282193]: 2025-12-06 10:15:30.262 282197 DEBUG nova.compute.manager [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=,block_migration=False,disk_available_mb=12288,disk_over_commit=False,dst_numa_info=,dst_supports_numa_live_migration=,dst_wants_file_backed_memory=False,file_backed_memory_discard=,filename='tmpm9_iowog',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='ed40901b-0bfc-426a-bf70-48d87ce95aa6',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=,old_vol_attachment_ids=,serial_listen_addr=None,serial_listen_ports=,src_supports_native_luks=,src_supports_numa_live_migration=,supported_perf_events=,target_connect_addr=,vifs=[VIFMigrateData],wait_for_vif_plugged=) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8604#033[00m Dec 6 05:15:30 localhost nova_compute[282193]: 2025-12-06 10:15:30.285 282197 DEBUG 
oslo_concurrency.lockutils [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - default default] Acquiring lock "refresh_cache-ed40901b-0bfc-426a-bf70-48d87ce95aa6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 6 05:15:30 localhost nova_compute[282193]: 2025-12-06 10:15:30.285 282197 DEBUG oslo_concurrency.lockutils [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - default default] Acquired lock "refresh_cache-ed40901b-0bfc-426a-bf70-48d87ce95aa6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 6 05:15:30 localhost nova_compute[282193]: 2025-12-06 10:15:30.286 282197 DEBUG nova.network.neutron [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m Dec 6 05:15:31 localhost nova_compute[282193]: 2025-12-06 10:15:31.099 282197 DEBUG nova.network.neutron [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Updating instance_info_cache with network_info: [{"id": "feb6a13d-305a-4541-a50e-4988833ecf82", "address": "fa:16:3e:e5:ea:4a", "network": {"id": "45604602-bc87-4608-9881-9568cbf90870", "bridge": "br-int", "label": "tempest-LiveMigrationTest-802114316-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], 
"meta": {"injected": false, "tenant_id": "9167331b2c424ef6961b096b551f8434", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfeb6a13d-30", "ovs_interfaceid": "feb6a13d-305a-4541-a50e-4988833ecf82", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 6 05:15:31 localhost nova_compute[282193]: 2025-12-06 10:15:31.121 282197 DEBUG oslo_concurrency.lockutils [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - default default] Releasing lock "refresh_cache-ed40901b-0bfc-426a-bf70-48d87ce95aa6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 6 05:15:31 localhost nova_compute[282193]: 2025-12-06 10:15:31.124 282197 DEBUG nova.virt.libvirt.driver [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] migrate_data in pre_live_migration: 
LibvirtLiveMigrateData(bdms=,block_migration=False,disk_available_mb=12288,disk_over_commit=False,dst_numa_info=,dst_supports_numa_live_migration=,dst_wants_file_backed_memory=False,file_backed_memory_discard=,filename='tmpm9_iowog',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='ed40901b-0bfc-426a-bf70-48d87ce95aa6',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=,src_supports_native_luks=,src_supports_numa_live_migration=,supported_perf_events=,target_connect_addr=,vifs=[VIFMigrateData],wait_for_vif_plugged=) pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10827#033[00m Dec 6 05:15:31 localhost nova_compute[282193]: 2025-12-06 10:15:31.124 282197 DEBUG nova.virt.libvirt.driver [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Creating instance directory: /var/lib/nova/instances/ed40901b-0bfc-426a-bf70-48d87ce95aa6 pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10840#033[00m Dec 6 05:15:31 localhost nova_compute[282193]: 2025-12-06 10:15:31.125 282197 DEBUG nova.virt.libvirt.driver [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Ensure instance console log exists: /var/lib/nova/instances/ed40901b-0bfc-426a-bf70-48d87ce95aa6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m Dec 6 05:15:31 localhost nova_compute[282193]: 2025-12-06 10:15:31.126 282197 DEBUG nova.virt.libvirt.driver [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - 
default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10794#033[00m Dec 6 05:15:31 localhost nova_compute[282193]: 2025-12-06 10:15:31.127 282197 DEBUG nova.virt.libvirt.vif [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T10:15:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-571789410',display_name='tempest-LiveMigrationTest-server-571789410',ec2_ids=,ephemeral_gb=0,ephemeral_key_uuid=None,fault=,flavor=Flavor(5),hidden=False,host='np0005548790.localdomain',hostname='tempest-livemigrationtest-server-571789410',id=8,image_ref='6a944ab6-8965-4055-b7fc-af6e395005ea',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=,launch_index=0,launched_at=2025-12-06T10:15:26Z,launched_on='np0005548790.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=,new_flavor=None,node='np0005548790.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='9167331b2c424ef6961b096b551f8434',ramdisk_id='',reservation_id='r-9204byw5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6a944ab6-8965-4055-b7fc-af6e395005ea',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image
_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-LiveMigrationTest-1593322913',owner_user_name='tempest-LiveMigrationTest-1593322913-project-member'},tags=,task_state='migrating',terminated_at=None,trusted_certs=,updated_at=2025-12-06T10:15:26Z,user_data=None,user_id='b25d9e5ec9eb4368a764482a325b9dda',uuid=ed40901b-0bfc-426a-bf70-48d87ce95aa6,vcpu_model=,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "feb6a13d-305a-4541-a50e-4988833ecf82", "address": "fa:16:3e:e5:ea:4a", "network": {"id": "45604602-bc87-4608-9881-9568cbf90870", "bridge": "br-int", "label": "tempest-LiveMigrationTest-802114316-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "9167331b2c424ef6961b096b551f8434", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapfeb6a13d-30", "ovs_interfaceid": "feb6a13d-305a-4541-a50e-4988833ecf82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m Dec 6 05:15:31 localhost nova_compute[282193]: 2025-12-06 10:15:31.128 282197 DEBUG nova.network.os_vif_util [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - default default] 
Converting VIF {"id": "feb6a13d-305a-4541-a50e-4988833ecf82", "address": "fa:16:3e:e5:ea:4a", "network": {"id": "45604602-bc87-4608-9881-9568cbf90870", "bridge": "br-int", "label": "tempest-LiveMigrationTest-802114316-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "9167331b2c424ef6961b096b551f8434", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapfeb6a13d-30", "ovs_interfaceid": "feb6a13d-305a-4541-a50e-4988833ecf82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Dec 6 05:15:31 localhost nova_compute[282193]: 2025-12-06 10:15:31.129 282197 DEBUG nova.network.os_vif_util [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e5:ea:4a,bridge_name='br-int',has_traffic_filtering=True,id=feb6a13d-305a-4541-a50e-4988833ecf82,network=Network(45604602-bc87-4608-9881-9568cbf90870),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfeb6a13d-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Dec 6 05:15:31 localhost nova_compute[282193]: 2025-12-06 10:15:31.129 282197 DEBUG os_vif [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 
d60454a44a4b4482bf705ee4e3667605 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e5:ea:4a,bridge_name='br-int',has_traffic_filtering=True,id=feb6a13d-305a-4541-a50e-4988833ecf82,network=Network(45604602-bc87-4608-9881-9568cbf90870),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfeb6a13d-30') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m Dec 6 05:15:31 localhost nova_compute[282193]: 2025-12-06 10:15:31.130 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 18 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:15:31 localhost nova_compute[282193]: 2025-12-06 10:15:31.131 282197 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 6 05:15:31 localhost nova_compute[282193]: 2025-12-06 10:15:31.131 282197 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Dec 6 05:15:31 localhost nova_compute[282193]: 2025-12-06 10:15:31.136 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 18 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:15:31 localhost nova_compute[282193]: 2025-12-06 10:15:31.137 282197 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfeb6a13d-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 6 05:15:31 localhost nova_compute[282193]: 2025-12-06 10:15:31.138 282197 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfeb6a13d-30, 
col_values=(('external_ids', {'iface-id': 'feb6a13d-305a-4541-a50e-4988833ecf82', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e5:ea:4a', 'vm-uuid': 'ed40901b-0bfc-426a-bf70-48d87ce95aa6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 6 05:15:31 localhost nova_compute[282193]: 2025-12-06 10:15:31.143 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:15:31 localhost nova_compute[282193]: 2025-12-06 10:15:31.147 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:15:31 localhost nova_compute[282193]: 2025-12-06 10:15:31.148 282197 INFO os_vif [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e5:ea:4a,bridge_name='br-int',has_traffic_filtering=True,id=feb6a13d-305a-4541-a50e-4988833ecf82,network=Network(45604602-bc87-4608-9881-9568cbf90870),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfeb6a13d-30')#033[00m Dec 6 05:15:31 localhost nova_compute[282193]: 2025-12-06 10:15:31.148 282197 DEBUG nova.virt.libvirt.driver [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. 
pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10954#033[00m Dec 6 05:15:31 localhost nova_compute[282193]: 2025-12-06 10:15:31.149 282197 DEBUG nova.compute.manager [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=12288,disk_over_commit=False,dst_numa_info=,dst_supports_numa_live_migration=,dst_wants_file_backed_memory=False,file_backed_memory_discard=,filename='tmpm9_iowog',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='ed40901b-0bfc-426a-bf70-48d87ce95aa6',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=,src_supports_numa_live_migration=,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8668#033[00m Dec 6 05:15:31 localhost nova_compute[282193]: 2025-12-06 10:15:31.181 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:15:31 localhost nova_compute[282193]: 2025-12-06 10:15:31.200 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 05:15:31 localhost nova_compute[282193]: 2025-12-06 10:15:31.201 282197 DEBUG oslo_concurrency.lockutils [None 
req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 05:15:31 localhost nova_compute[282193]: 2025-12-06 10:15:31.201 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 05:15:31 localhost nova_compute[282193]: 2025-12-06 10:15:31.202 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Auditing locally available compute resources for np0005548789.localdomain (node: np0005548789.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 6 05:15:31 localhost nova_compute[282193]: 2025-12-06 10:15:31.202 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 6 05:15:31 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 6 05:15:31 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.107:0/936145217' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 6 05:15:31 localhost nova_compute[282193]: 2025-12-06 10:15:31.626 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 6 05:15:31 localhost nova_compute[282193]: 2025-12-06 10:15:31.719 282197 DEBUG nova.virt.libvirt.driver [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 6 05:15:31 localhost nova_compute[282193]: 2025-12-06 10:15:31.720 282197 DEBUG nova.virt.libvirt.driver [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 6 05:15:31 localhost nova_compute[282193]: 2025-12-06 10:15:31.960 282197 WARNING nova.virt.libvirt.driver [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 6 05:15:31 localhost nova_compute[282193]: 2025-12-06 10:15:31.962 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Hypervisor/Node resource view: name=np0005548789.localdomain free_ram=11346MB free_disk=41.71154022216797GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": 
"1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 6 05:15:31 localhost nova_compute[282193]: 2025-12-06 10:15:31.962 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 05:15:31 localhost nova_compute[282193]: 2025-12-06 10:15:31.962 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 05:15:32 localhost nova_compute[282193]: 2025-12-06 10:15:32.009 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Migration for instance ed40901b-0bfc-426a-bf70-48d87ce95aa6 refers to another host's instance! 
_pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903#033[00m Dec 6 05:15:32 localhost nova_compute[282193]: 2025-12-06 10:15:32.031 282197 INFO nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Updating resource usage from migration 0c4fb838-191a-43fb-92ad-31ab3b6d11ce#033[00m Dec 6 05:15:32 localhost nova_compute[282193]: 2025-12-06 10:15:32.031 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Starting to track incoming migration 0c4fb838-191a-43fb-92ad-31ab3b6d11ce with flavor a0a7498e-22eb-495c-a2e3-89ba9e483bf6 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431#033[00m Dec 6 05:15:32 localhost nova_compute[282193]: 2025-12-06 10:15:32.061 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:15:32 localhost nova_compute[282193]: 2025-12-06 10:15:32.078 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Instance b7ed0a2e-9350-4933-9334-4e5e08d3e6aa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 6 05:15:32 localhost nova_compute[282193]: 2025-12-06 10:15:32.096 282197 WARNING nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Instance ed40901b-0bfc-426a-bf70-48d87ce95aa6 has been moved to another host np0005548790.localdomain(np0005548790.localdomain). 
There are allocations remaining against the source host that might need to be removed: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}.#033[00m Dec 6 05:15:32 localhost nova_compute[282193]: 2025-12-06 10:15:32.097 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 6 05:15:32 localhost nova_compute[282193]: 2025-12-06 10:15:32.097 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Final resource view: name=np0005548789.localdomain phys_ram=15738MB used_ram=1152MB phys_disk=41GB used_disk=3GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 6 05:15:32 localhost nova_compute[282193]: 2025-12-06 10:15:32.193 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 6 05:15:32 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 6 05:15:32 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.107:0/843309173' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 6 05:15:32 localhost nova_compute[282193]: 2025-12-06 10:15:32.693 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 6 05:15:32 localhost nova_compute[282193]: 2025-12-06 10:15:32.701 282197 DEBUG nova.compute.provider_tree [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Inventory has not changed in ProviderTree for provider: 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 6 05:15:32 localhost nova_compute[282193]: 2025-12-06 10:15:32.718 282197 DEBUG nova.scheduler.client.report [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Inventory has not changed for provider 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 6 05:15:32 localhost nova_compute[282193]: 2025-12-06 10:15:32.766 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Compute_service record updated for np0005548789.localdomain:np0005548789.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 6 05:15:32 localhost nova_compute[282193]: 2025-12-06 10:15:32.766 282197 DEBUG 
oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.804s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 05:15:33 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:15:34 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:15:34.225 263652 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:15:33Z, description=, device_id=e763caa9-7ac3-434d-b131-2742f1c4d17b, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=17d01ee3-d0a0-42f3-8c73-1578e34c0b4f, ip_allocation=immediate, mac_address=fa:16:3e:71:8d:2e, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T08:43:55Z, description=, dns_domain=, id=8e238f59-5792-4ff4-95af-f993c8e9e14f, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=3d603431c0bb4967bafc7a0aa6108bfe, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['be7ff830-a3e2-49cd-a05f-63bc528d6d1c'], tags=[], tenant_id=3d603431c0bb4967bafc7a0aa6108bfe, updated_at=2025-12-06T08:44:01Z, vlan_transparent=None, network_id=8e238f59-5792-4ff4-95af-f993c8e9e14f, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, 
revision_number=1, security_groups=[], standard_attr_id=796, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-06T10:15:33Z on network 8e238f59-5792-4ff4-95af-f993c8e9e14f#033[00m Dec 6 05:15:34 localhost dnsmasq[263859]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 8 addresses Dec 6 05:15:34 localhost dnsmasq-dhcp[263859]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:15:34 localhost dnsmasq-dhcp[263859]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:15:34 localhost podman[313691]: 2025-12-06 10:15:34.458579789 +0000 UTC m=+0.048064980 container kill e4e5ac05442955f65040fee10f3aa342d80518f3e4a072b3d79fe4302f513cf7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:15:34 localhost nova_compute[282193]: 2025-12-06 10:15:34.506 282197 DEBUG nova.network.neutron [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Port feb6a13d-305a-4541-a50e-4988833ecf82 updated with migration profile {'migrating_to': 'np0005548789.localdomain'} successfully _setup_migration_port_profile /usr/lib/python3.9/site-packages/nova/network/neutron.py:354#033[00m Dec 6 05:15:34 localhost nova_compute[282193]: 2025-12-06 10:15:34.508 282197 DEBUG nova.compute.manager [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - default default] pre_live_migration result data is 
LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=12288,disk_over_commit=False,dst_numa_info=,dst_supports_numa_live_migration=,dst_wants_file_backed_memory=False,file_backed_memory_discard=,filename='tmpm9_iowog',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='ed40901b-0bfc-426a-bf70-48d87ce95aa6',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=,src_supports_numa_live_migration=,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8723#033[00m Dec 6 05:15:34 localhost sshd[313713]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:15:34 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:15:34.743 263652 INFO neutron.agent.dhcp.agent [None req-ec0ca4b4-e71e-4b88-acd1-954cbbb82b6b - - - - - -] DHCP configuration for ports {'17d01ee3-d0a0-42f3-8c73-1578e34c0b4f'} is completed#033[00m Dec 6 05:15:34 localhost systemd[1]: Created slice User Slice of UID 42436. Dec 6 05:15:34 localhost systemd[1]: Starting User Runtime Directory /run/user/42436... Dec 6 05:15:34 localhost systemd-logind[766]: New session 77 of user nova. Dec 6 05:15:34 localhost systemd[1]: Finished User Runtime Directory /run/user/42436. Dec 6 05:15:34 localhost systemd[1]: Starting User Manager for UID 42436... Dec 6 05:15:34 localhost systemd[313717]: Queued start job for default target Main User Target. Dec 6 05:15:34 localhost systemd[313717]: Created slice User Application Slice. Dec 6 05:15:34 localhost systemd[313717]: Started Mark boot as successful after the user session has run 2 minutes. Dec 6 05:15:34 localhost systemd[313717]: Started Daily Cleanup of User's Temporary Directories. Dec 6 05:15:34 localhost systemd[313717]: Reached target Paths. 
Dec 6 05:15:34 localhost systemd[313717]: Reached target Timers. Dec 6 05:15:34 localhost systemd[313717]: Starting D-Bus User Message Bus Socket... Dec 6 05:15:34 localhost systemd[313717]: Starting Create User's Volatile Files and Directories... Dec 6 05:15:35 localhost systemd[313717]: Listening on D-Bus User Message Bus Socket. Dec 6 05:15:35 localhost systemd[313717]: Finished Create User's Volatile Files and Directories. Dec 6 05:15:35 localhost systemd[313717]: Reached target Sockets. Dec 6 05:15:35 localhost systemd[313717]: Reached target Basic System. Dec 6 05:15:35 localhost systemd[313717]: Reached target Main User Target. Dec 6 05:15:35 localhost systemd[313717]: Startup finished in 161ms. Dec 6 05:15:35 localhost systemd[1]: Started User Manager for UID 42436. Dec 6 05:15:35 localhost systemd[1]: Started Session 77 of User nova. Dec 6 05:15:35 localhost kernel: device tapfeb6a13d-30 entered promiscuous mode Dec 6 05:15:35 localhost NetworkManager[5973]: [1765016135.2126] manager: (tapfeb6a13d-30): new Tun device (/org/freedesktop/NetworkManager/Devices/26) Dec 6 05:15:35 localhost systemd-udevd[313775]: Network interface NamePolicy= disabled on kernel command line. Dec 6 05:15:35 localhost ovn_controller[154851]: 2025-12-06T10:15:35Z|00125|binding|INFO|Claiming lport feb6a13d-305a-4541-a50e-4988833ecf82 for this additional chassis. Dec 6 05:15:35 localhost ovn_controller[154851]: 2025-12-06T10:15:35Z|00126|binding|INFO|feb6a13d-305a-4541-a50e-4988833ecf82: Claiming fa:16:3e:e5:ea:4a 10.100.0.10 Dec 6 05:15:35 localhost ovn_controller[154851]: 2025-12-06T10:15:35Z|00127|binding|INFO|Claiming lport 99b309b3-9e3d-4a23-b110-d99707c2eb4e for this additional chassis. 
Dec 6 05:15:35 localhost ovn_controller[154851]: 2025-12-06T10:15:35Z|00128|binding|INFO|99b309b3-9e3d-4a23-b110-d99707c2eb4e: Claiming fa:16:3e:11:27:4d 19.80.0.152 Dec 6 05:15:35 localhost nova_compute[282193]: 2025-12-06 10:15:35.219 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:15:35 localhost podman[313753]: 2025-12-06 10:15:35.235247167 +0000 UTC m=+0.074443000 container kill e4e5ac05442955f65040fee10f3aa342d80518f3e4a072b3d79fe4302f513cf7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Dec 6 05:15:35 localhost dnsmasq[263859]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 7 addresses Dec 6 05:15:35 localhost dnsmasq-dhcp[263859]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:15:35 localhost systemd[1]: tmp-crun.GdsLui.mount: Deactivated successfully. 
Dec 6 05:15:35 localhost dnsmasq-dhcp[263859]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:15:35 localhost NetworkManager[5973]: [1765016135.2444] device (tapfeb6a13d-30): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external') Dec 6 05:15:35 localhost NetworkManager[5973]: [1765016135.2448] device (tapfeb6a13d-30): state change: unavailable -> disconnected (reason 'none', sys-iface-state: 'external') Dec 6 05:15:35 localhost nova_compute[282193]: 2025-12-06 10:15:35.256 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:15:35 localhost systemd-machined[84444]: New machine qemu-4-instance-00000008. Dec 6 05:15:35 localhost ovn_controller[154851]: 2025-12-06T10:15:35Z|00129|binding|INFO|Setting lport feb6a13d-305a-4541-a50e-4988833ecf82 ovn-installed in OVS Dec 6 05:15:35 localhost nova_compute[282193]: 2025-12-06 10:15:35.265 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:15:35 localhost nova_compute[282193]: 2025-12-06 10:15:35.265 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:15:35 localhost systemd[1]: Started Virtual Machine qemu-4-instance-00000008. 
Dec 6 05:15:35 localhost ovn_controller[154851]: 2025-12-06T10:15:35Z|00130|binding|INFO|Releasing lport 4fb81ffd-e198-4628-9bd0-0c0f0c89c33a from this chassis (sb_readonly=0) Dec 6 05:15:35 localhost nova_compute[282193]: 2025-12-06 10:15:35.490 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:15:35 localhost nova_compute[282193]: 2025-12-06 10:15:35.594 282197 DEBUG nova.virt.driver [None req-c2cff4e5-3d1d-47f9-bef4-e7abce6c5152 - - - - - -] Emitting event Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Dec 6 05:15:35 localhost nova_compute[282193]: 2025-12-06 10:15:35.594 282197 INFO nova.compute.manager [None req-c2cff4e5-3d1d-47f9-bef4-e7abce6c5152 - - - - - -] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] VM Started (Lifecycle Event)#033[00m Dec 6 05:15:35 localhost nova_compute[282193]: 2025-12-06 10:15:35.643 282197 DEBUG nova.compute.manager [None req-c2cff4e5-3d1d-47f9-bef4-e7abce6c5152 - - - - - -] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 6 05:15:35 localhost nova_compute[282193]: 2025-12-06 10:15:35.766 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:15:35 localhost nova_compute[282193]: 2025-12-06 10:15:35.767 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:15:35 localhost nova_compute[282193]: 2025-12-06 10:15:35.768 282197 DEBUG nova.compute.manager [None 
req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 6 05:15:35 localhost nova_compute[282193]: 2025-12-06 10:15:35.768 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 6 05:15:35 localhost nova_compute[282193]: 2025-12-06 10:15:35.898 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 6 05:15:35 localhost nova_compute[282193]: 2025-12-06 10:15:35.899 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquired lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 6 05:15:35 localhost nova_compute[282193]: 2025-12-06 10:15:35.899 282197 DEBUG nova.network.neutron [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 6 05:15:35 localhost nova_compute[282193]: 2025-12-06 10:15:35.900 282197 DEBUG nova.objects.instance [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b7ed0a2e-9350-4933-9334-4e5e08d3e6aa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 6 05:15:36 localhost nova_compute[282193]: 2025-12-06 10:15:36.141 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:15:36 localhost nova_compute[282193]: 2025-12-06 10:15:36.175 282197 DEBUG nova.virt.driver [None req-c2cff4e5-3d1d-47f9-bef4-e7abce6c5152 - - - - - -] Emitting event Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Dec 6 05:15:36 localhost nova_compute[282193]: 2025-12-06 10:15:36.176 282197 INFO nova.compute.manager [None req-c2cff4e5-3d1d-47f9-bef4-e7abce6c5152 - - - - - -] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] VM Resumed (Lifecycle Event)#033[00m Dec 6 05:15:36 localhost nova_compute[282193]: 2025-12-06 10:15:36.195 282197 DEBUG nova.compute.manager [None req-c2cff4e5-3d1d-47f9-bef4-e7abce6c5152 - - - - - -] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 6 05:15:36 localhost nova_compute[282193]: 2025-12-06 10:15:36.200 282197 DEBUG nova.compute.manager [None req-c2cff4e5-3d1d-47f9-bef4-e7abce6c5152 - - - - - -] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m Dec 6 05:15:36 localhost nova_compute[282193]: 2025-12-06 10:15:36.226 282197 INFO nova.compute.manager [None req-c2cff4e5-3d1d-47f9-bef4-e7abce6c5152 - - - - - -] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] During the sync_power process the instance has moved from host np0005548790.localdomain to host np0005548789.localdomain#033[00m Dec 6 05:15:36 localhost nova_compute[282193]: 2025-12-06 10:15:36.334 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:15:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e. Dec 6 05:15:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094. Dec 6 05:15:36 localhost systemd[1]: session-77.scope: Deactivated successfully. Dec 6 05:15:36 localhost systemd-logind[766]: Session 77 logged out. Waiting for processes to exit. Dec 6 05:15:36 localhost systemd-logind[766]: Removed session 77. Dec 6 05:15:36 localhost systemd[1]: tmp-crun.ua8FNR.mount: Deactivated successfully. Dec 6 05:15:36 localhost podman[313839]: 2025-12-06 10:15:36.569036883 +0000 UTC m=+0.113942048 container health_status 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vcs-type=git, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', 
'/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, distribution-scope=public, version=9.6, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, config_id=edpm, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) 
Dec 6 05:15:36 localhost podman[313840]: 2025-12-06 10:15:36.626899788 +0000 UTC m=+0.170694698 container health_status bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125) Dec 6 05:15:36 localhost podman[313840]: 2025-12-06 10:15:36.639270363 +0000 UTC m=+0.183065293 container exec_died 
bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2) Dec 6 05:15:36 localhost systemd[1]: bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094.service: Deactivated successfully. 
Dec 6 05:15:36 localhost podman[313839]: 2025-12-06 10:15:36.650597246 +0000 UTC m=+0.195502361 container exec_died 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, version=9.6, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.openshift.expose-services=, vcs-type=git, io.buildah.version=1.33.7, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., release=1755695350) Dec 6 05:15:36 localhost systemd[1]: 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e.service: Deactivated successfully. 
Dec 6 05:15:36 localhost nova_compute[282193]: 2025-12-06 10:15:36.753 282197 DEBUG nova.network.neutron [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Updating instance_info_cache with network_info: [{"id": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "address": "fa:16:3e:64:77:f3", "network": {"id": "652b6bdc-40ce-45b7-8aa5-3bca79987993", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.162", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "3d603431c0bb4967bafc7a0aa6108bfe", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86fc0b7a-fb", "ovs_interfaceid": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 6 05:15:36 localhost nova_compute[282193]: 2025-12-06 10:15:36.788 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Releasing lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 6 05:15:36 localhost nova_compute[282193]: 2025-12-06 10:15:36.788 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] 
Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 6 05:15:36 localhost nova_compute[282193]: 2025-12-06 10:15:36.790 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:15:36 localhost nova_compute[282193]: 2025-12-06 10:15:36.791 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:15:37 localhost nova_compute[282193]: 2025-12-06 10:15:37.066 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:15:37 localhost nova_compute[282193]: 2025-12-06 10:15:37.181 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:15:37 localhost nova_compute[282193]: 2025-12-06 10:15:37.182 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:15:37 localhost nova_compute[282193]: 2025-12-06 10:15:37.183 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:15:37 localhost 
nova_compute[282193]: 2025-12-06 10:15:37.184 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 6 05:15:37 localhost ovn_controller[154851]: 2025-12-06T10:15:37Z|00131|binding|INFO|Claiming lport feb6a13d-305a-4541-a50e-4988833ecf82 for this chassis. Dec 6 05:15:37 localhost ovn_controller[154851]: 2025-12-06T10:15:37Z|00132|binding|INFO|feb6a13d-305a-4541-a50e-4988833ecf82: Claiming fa:16:3e:e5:ea:4a 10.100.0.10 Dec 6 05:15:37 localhost ovn_controller[154851]: 2025-12-06T10:15:37Z|00133|binding|INFO|Claiming lport 99b309b3-9e3d-4a23-b110-d99707c2eb4e for this chassis. Dec 6 05:15:37 localhost ovn_controller[154851]: 2025-12-06T10:15:37Z|00134|binding|INFO|99b309b3-9e3d-4a23-b110-d99707c2eb4e: Claiming fa:16:3e:11:27:4d 19.80.0.152 Dec 6 05:15:37 localhost ovn_controller[154851]: 2025-12-06T10:15:37Z|00135|binding|INFO|Setting lport feb6a13d-305a-4541-a50e-4988833ecf82 up in Southbound Dec 6 05:15:37 localhost ovn_controller[154851]: 2025-12-06T10:15:37Z|00136|binding|INFO|Setting lport 99b309b3-9e3d-4a23-b110-d99707c2eb4e up in Southbound Dec 6 05:15:37 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:37.392 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:11:27:4d 19.80.0.152'], port_security=['fa:16:3e:11:27:4d 19.80.0.152'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': ''}, parent_port=['feb6a13d-305a-4541-a50e-4988833ecf82'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-2060007817', 'neutron:cidrs': '19.80.0.152/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 
'neutron:network_name': 'neutron-19043ea6-c6b2-4272-aa60-1b11a7b5bd93', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-2060007817', 'neutron:project_id': '9167331b2c424ef6961b096b551f8434', 'neutron:revision_number': '3', 'neutron:security_group_ids': '4c82b56e-0fc5-4c7f-8922-ceb8236815fd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=927c8639-172d-4240-b8a1-85db1fd6c03d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=99b309b3-9e3d-4a23-b110-d99707c2eb4e) old=Port_Binding(up=[False], additional_chassis=[], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:15:37 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:37.396 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e5:ea:4a 10.100.0.10'], port_security=['fa:16:3e:e5:ea:4a 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-1146072664', 'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'ed40901b-0bfc-426a-bf70-48d87ce95aa6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-45604602-bc87-4608-9881-9568cbf90870', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-1146072664', 'neutron:project_id': '9167331b2c424ef6961b096b551f8434', 'neutron:revision_number': '9', 'neutron:security_group_ids': '4c82b56e-0fc5-4c7f-8922-ceb8236815fd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 
'normal', 'neutron:host_id': 'np0005548790.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d40d335f-7e85-43c3-894d-993c12735497, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[], logical_port=feb6a13d-305a-4541-a50e-4988833ecf82) old=Port_Binding(up=[False], additional_chassis=[], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:15:37 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:37.398 160509 INFO neutron.agent.ovn.metadata.agent [-] Port 99b309b3-9e3d-4a23-b110-d99707c2eb4e in datapath 19043ea6-c6b2-4272-aa60-1b11a7b5bd93 bound to our chassis#033[00m Dec 6 05:15:37 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:37.402 160509 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 19043ea6-c6b2-4272-aa60-1b11a7b5bd93#033[00m Dec 6 05:15:37 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:37.413 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[dd044226-6764-49b6-90b1-1e2a8b665e75]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:15:37 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:37.414 160509 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap19043ea6-c1 in ovnmeta-19043ea6-c6b2-4272-aa60-1b11a7b5bd93 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m Dec 6 05:15:37 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:37.418 160674 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap19043ea6-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m Dec 6 05:15:37 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:37.418 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[eb03a6ff-8bea-47eb-adc1-e85dafc84aee]: (4, False) _call_back 
/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:15:37 localhost neutron_sriov_agent[256690]: 2025-12-06 10:15:37.419 2 WARNING neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c req-d809e72f-e65e-4220-83cc-53ce9206b29d f52779cce5374723ad2618b5c2916973 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] This port is not SRIOV, skip binding for port feb6a13d-305a-4541-a50e-4988833ecf82.#033[00m Dec 6 05:15:37 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:37.420 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[0d23b896-8c87-4b97-b349-a11a495a99e1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:15:37 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:37.427 160720 DEBUG oslo.privsep.daemon [-] privsep: reply[9bbcfd87-2af0-4b11-a5f9-47315b31e15d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:15:37 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:37.439 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[0ac2b6fa-cc9b-4d14-981a-a7cac84b657a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:15:37 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:37.462 160700 DEBUG oslo.privsep.daemon [-] privsep: reply[6deb00f4-3275-4551-a0c8-1089de73851d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:15:37 localhost NetworkManager[5973]: [1765016137.4712] manager: (tap19043ea6-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/27) Dec 6 05:15:37 localhost systemd-udevd[313781]: Network interface NamePolicy= disabled on kernel command line. 
Dec 6 05:15:37 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:37.473 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[c0de9c5b-eba1-4340-8bab-ab8b939fb63f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:15:37 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:37.500 160700 DEBUG oslo.privsep.daemon [-] privsep: reply[e084804a-56b1-480d-b98d-b6f481fd53ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:15:37 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:37.503 160700 DEBUG oslo.privsep.daemon [-] privsep: reply[ed3776b4-3d6a-4916-afa4-e4e4659f9bac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:15:37 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap19043ea6-c1: link becomes ready Dec 6 05:15:37 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap19043ea6-c0: link becomes ready Dec 6 05:15:37 localhost NetworkManager[5973]: [1765016137.5268] device (tap19043ea6-c0): carrier: link connected Dec 6 05:15:37 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:37.534 160700 DEBUG oslo.privsep.daemon [-] privsep: reply[7282731e-2dfa-4930-a30e-043dfa91c702]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:15:37 localhost systemd[1]: tmp-crun.vsRuIR.mount: Deactivated successfully. 
Dec 6 05:15:37 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:37.552 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[bfe5f017-e17f-4095-bd7d-51a951ffd2e6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap19043ea6-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:00:81:15'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': 
[['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1255570, 'reachable_time': 35481, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 
0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 313900, 'error': None, 'target': 'ovnmeta-19043ea6-c6b2-4272-aa60-1b11a7b5bd93', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:15:37 localhost nova_compute[282193]: 2025-12-06 10:15:37.559 282197 INFO nova.compute.manager [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Post operation of migration started#033[00m Dec 6 05:15:37 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:37.571 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[c7cdce00-3e18-45f2-a327-dece9aa64e86]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe00:8115'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1255570, 'tstamp': 1255570}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 313901, 'error': None, 'target': 'ovnmeta-19043ea6-c6b2-4272-aa60-1b11a7b5bd93', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:15:37 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:37.584 160674 DEBUG oslo.privsep.daemon [-] privsep: 
reply[574a2186-9941-4ec2-9f73-fbf88d0c7fc3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap19043ea6-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:00:81:15'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], 
['IFLA_LINK', 29], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1255570, 'reachable_time': 35481, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 
'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 313902, 'error': None, 'target': 'ovnmeta-19043ea6-c6b2-4272-aa60-1b11a7b5bd93', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:15:37 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:37.604 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[aa42e77d-a2f0-4f8e-9b41-809e05052d5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:15:37 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:37.644 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[146c87d7-8efd-4281-91cd-2e823e907e30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:15:37 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:37.647 160509 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap19043ea6-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 6 05:15:37 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:37.647 160509 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Dec 6 05:15:37 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:37.648 160509 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, 
port=tap19043ea6-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 6 05:15:37 localhost nova_compute[282193]: 2025-12-06 10:15:37.702 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:15:37 localhost kernel: device tap19043ea6-c0 entered promiscuous mode Dec 6 05:15:37 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:37.706 160509 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap19043ea6-c0, col_values=(('external_ids', {'iface-id': 'b960e3cf-838e-4b32-93f1-7da76cedadcc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 6 05:15:37 localhost nova_compute[282193]: 2025-12-06 10:15:37.707 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:15:37 localhost ovn_controller[154851]: 2025-12-06T10:15:37Z|00137|binding|INFO|Releasing lport b960e3cf-838e-4b32-93f1-7da76cedadcc from this chassis (sb_readonly=0) Dec 6 05:15:37 localhost nova_compute[282193]: 2025-12-06 10:15:37.712 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:15:37 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:37.712 160509 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/19043ea6-c6b2-4272-aa60-1b11a7b5bd93.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/19043ea6-c6b2-4272-aa60-1b11a7b5bd93.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m Dec 6 05:15:37 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:37.715 160674 DEBUG oslo.privsep.daemon [-] privsep: 
reply[1e7d9d15-85eb-4a8a-aefe-7a566ed48a0b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:15:37 localhost nova_compute[282193]: 2025-12-06 10:15:37.716 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:15:37 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:37.717 160509 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = Dec 6 05:15:37 localhost ovn_metadata_agent[160504]: global Dec 6 05:15:37 localhost ovn_metadata_agent[160504]: log /dev/log local0 debug Dec 6 05:15:37 localhost ovn_metadata_agent[160504]: log-tag haproxy-metadata-proxy-19043ea6-c6b2-4272-aa60-1b11a7b5bd93 Dec 6 05:15:37 localhost ovn_metadata_agent[160504]: user root Dec 6 05:15:37 localhost ovn_metadata_agent[160504]: group root Dec 6 05:15:37 localhost ovn_metadata_agent[160504]: maxconn 1024 Dec 6 05:15:37 localhost ovn_metadata_agent[160504]: pidfile /var/lib/neutron/external/pids/19043ea6-c6b2-4272-aa60-1b11a7b5bd93.pid.haproxy Dec 6 05:15:37 localhost ovn_metadata_agent[160504]: daemon Dec 6 05:15:37 localhost ovn_metadata_agent[160504]: Dec 6 05:15:37 localhost ovn_metadata_agent[160504]: defaults Dec 6 05:15:37 localhost ovn_metadata_agent[160504]: log global Dec 6 05:15:37 localhost ovn_metadata_agent[160504]: mode http Dec 6 05:15:37 localhost ovn_metadata_agent[160504]: option httplog Dec 6 05:15:37 localhost ovn_metadata_agent[160504]: option dontlognull Dec 6 05:15:37 localhost ovn_metadata_agent[160504]: option http-server-close Dec 6 05:15:37 localhost ovn_metadata_agent[160504]: option forwardfor Dec 6 05:15:37 localhost ovn_metadata_agent[160504]: retries 3 Dec 6 05:15:37 localhost ovn_metadata_agent[160504]: timeout http-request 30s Dec 6 05:15:37 localhost ovn_metadata_agent[160504]: timeout connect 30s Dec 6 05:15:37 localhost ovn_metadata_agent[160504]: timeout client 32s Dec 6 05:15:37 localhost 
ovn_metadata_agent[160504]: timeout server 32s Dec 6 05:15:37 localhost ovn_metadata_agent[160504]: timeout http-keep-alive 30s Dec 6 05:15:37 localhost ovn_metadata_agent[160504]: Dec 6 05:15:37 localhost ovn_metadata_agent[160504]: Dec 6 05:15:37 localhost ovn_metadata_agent[160504]: listen listener Dec 6 05:15:37 localhost ovn_metadata_agent[160504]: bind 169.254.169.254:80 Dec 6 05:15:37 localhost ovn_metadata_agent[160504]: server metadata /var/lib/neutron/metadata_proxy Dec 6 05:15:37 localhost ovn_metadata_agent[160504]: http-request add-header X-OVN-Network-ID 19043ea6-c6b2-4272-aa60-1b11a7b5bd93 Dec 6 05:15:37 localhost ovn_metadata_agent[160504]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m Dec 6 05:15:37 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:37.718 160509 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-19043ea6-c6b2-4272-aa60-1b11a7b5bd93', 'env', 'PROCESS_TAG=haproxy-19043ea6-c6b2-4272-aa60-1b11a7b5bd93', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/19043ea6-c6b2-4272-aa60-1b11a7b5bd93.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m Dec 6 05:15:37 localhost nova_compute[282193]: 2025-12-06 10:15:37.725 282197 DEBUG oslo_concurrency.lockutils [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - default default] Acquiring lock "refresh_cache-ed40901b-0bfc-426a-bf70-48d87ce95aa6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 6 05:15:37 localhost nova_compute[282193]: 2025-12-06 10:15:37.725 282197 DEBUG oslo_concurrency.lockutils [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - default default] Acquired lock "refresh_cache-ed40901b-0bfc-426a-bf70-48d87ce95aa6" lock 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 6 05:15:37 localhost nova_compute[282193]: 2025-12-06 10:15:37.725 282197 DEBUG nova.network.neutron [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m Dec 6 05:15:38 localhost podman[313935]: Dec 6 05:15:38 localhost podman[313935]: 2025-12-06 10:15:38.070600538 +0000 UTC m=+0.068523800 container create 57bafd33260bc96ff76c85f7693ba9786f043d552185f6da913acb510f4f60ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-19043ea6-c6b2-4272-aa60-1b11a7b5bd93, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0) Dec 6 05:15:38 localhost systemd[1]: Started libpod-conmon-57bafd33260bc96ff76c85f7693ba9786f043d552185f6da913acb510f4f60ab.scope. Dec 6 05:15:38 localhost systemd[1]: Started libcrun container. 
Dec 6 05:15:38 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/970f665873ff889fc4ce87e8eb815e45fa33cad2aebf50d32a77643cc655aa94/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 6 05:15:38 localhost podman[313935]: 2025-12-06 10:15:38.038859225 +0000 UTC m=+0.036782517 image pull quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified Dec 6 05:15:38 localhost podman[313935]: 2025-12-06 10:15:38.144604352 +0000 UTC m=+0.142527604 container init 57bafd33260bc96ff76c85f7693ba9786f043d552185f6da913acb510f4f60ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-19043ea6-c6b2-4272-aa60-1b11a7b5bd93, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Dec 6 05:15:38 localhost podman[313935]: 2025-12-06 10:15:38.153353447 +0000 UTC m=+0.151276739 container start 57bafd33260bc96ff76c85f7693ba9786f043d552185f6da913acb510f4f60ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-19043ea6-c6b2-4272-aa60-1b11a7b5bd93, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:15:38 localhost nova_compute[282193]: 2025-12-06 10:15:38.158 282197 DEBUG nova.virt.driver [-] Emitting event Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Dec 6 05:15:38 
localhost nova_compute[282193]: 2025-12-06 10:15:38.159 282197 INFO nova.compute.manager [-] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] VM Stopped (Lifecycle Event)#033[00m Dec 6 05:15:38 localhost neutron-haproxy-ovnmeta-19043ea6-c6b2-4272-aa60-1b11a7b5bd93[313949]: [NOTICE] (313953) : New worker (313955) forked Dec 6 05:15:38 localhost neutron-haproxy-ovnmeta-19043ea6-c6b2-4272-aa60-1b11a7b5bd93[313949]: [NOTICE] (313953) : Loading success. Dec 6 05:15:38 localhost nova_compute[282193]: 2025-12-06 10:15:38.184 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:15:38 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:38.198 160509 INFO neutron.agent.ovn.metadata.agent [-] Port feb6a13d-305a-4541-a50e-4988833ecf82 in datapath 45604602-bc87-4608-9881-9568cbf90870 bound to our chassis#033[00m Dec 6 05:15:38 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:38.204 160509 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 45604602-bc87-4608-9881-9568cbf90870#033[00m Dec 6 05:15:38 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:38.214 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[4c075fae-73f4-474a-93cd-9655c07fd3c6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:15:38 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:38.215 160509 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap45604602-b1 in ovnmeta-45604602-bc87-4608-9881-9568cbf90870 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m Dec 6 05:15:38 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:38.217 160674 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap45604602-b0 not found in namespace None 
get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m Dec 6 05:15:38 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:38.217 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[2e0351ee-5481-4ad9-b196-684d9121f87a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:15:38 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:38.218 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[1d798907-195a-4cc3-9f04-93c3db6bc9a6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:15:38 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:38.225 160720 DEBUG oslo.privsep.daemon [-] privsep: reply[42b00b73-7e86-4efd-b02b-663b97d0d245]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:15:38 localhost nova_compute[282193]: 2025-12-06 10:15:38.234 282197 DEBUG nova.compute.manager [None req-3af31981-be92-4ad3-b33a-abddd1f5b0a5 - - - - - -] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 6 05:15:38 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:15:38 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:38.249 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[ea929242-b598-45af-9947-037068cf05fb]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:15:38 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:38.281 160700 DEBUG oslo.privsep.daemon [-] privsep: reply[8ddd50a1-bb38-414d-a070-0aab29637304]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:15:38 localhost NetworkManager[5973]: [1765016138.2903] 
manager: (tap45604602-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/28) Dec 6 05:15:38 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:38.290 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[9a126116-86a2-4c51-ae15-e46540d40c37]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:15:38 localhost systemd-udevd[313887]: Network interface NamePolicy= disabled on kernel command line. Dec 6 05:15:38 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:38.325 160700 DEBUG oslo.privsep.daemon [-] privsep: reply[9f89b17c-5397-49be-b7cb-8556b7cd8eaa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:15:38 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:38.329 160700 DEBUG oslo.privsep.daemon [-] privsep: reply[aee4fd69-cbdd-44e1-a208-20b1599785fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:15:38 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap45604602-b0: link becomes ready Dec 6 05:15:38 localhost NetworkManager[5973]: [1765016138.3521] device (tap45604602-b0): carrier: link connected Dec 6 05:15:38 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:38.357 160700 DEBUG oslo.privsep.daemon [-] privsep: reply[8808a77c-8d61-4809-90df-f8055aadbb06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:15:38 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:38.371 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[02df119f-47d4-400c-acd0-7b8c69906b43]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap45604602-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], 
['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:48:e6:8f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 30], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 
'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1255653, 'reachable_time': 20864, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 
'flags': 2, 'sequence_number': 255, 'pid': 313974, 'error': None, 'target': 'ovnmeta-45604602-bc87-4608-9881-9568cbf90870', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:15:38 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:38.384 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[5351d3b6-d616-47b0-b78c-b0876968feec]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe48:e68f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1255653, 'tstamp': 1255653}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 313975, 'error': None, 'target': 'ovnmeta-45604602-bc87-4608-9881-9568cbf90870', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:15:38 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:38.400 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[95b1650c-f572-4687-82a1-2de79e9411fb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap45604602-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:48:e6:8f'], ['IFLA_BROADCAST', 
'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 30], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1255653, 'reachable_time': 20864, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 
'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 313976, 'error': None, 'target': 'ovnmeta-45604602-bc87-4608-9881-9568cbf90870', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:15:38 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:38.429 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[d16bad81-6f0d-4d56-b034-98b8aeaaeb65]: (4, None) _call_back 
/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:15:38 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:38.486 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[06c9c145-7cd0-4105-a878-cff4826a03bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:15:38 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:38.488 160509 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap45604602-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 6 05:15:38 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:38.489 160509 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Dec 6 05:15:38 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:38.489 160509 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap45604602-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 6 05:15:38 localhost nova_compute[282193]: 2025-12-06 10:15:38.492 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:15:38 localhost kernel: device tap45604602-b0 entered promiscuous mode Dec 6 05:15:38 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:38.496 160509 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap45604602-b0, col_values=(('external_ids', {'iface-id': 'd57132cf-ea52-419a-82d6-37dcdb5dd89a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 6 05:15:38 localhost 
nova_compute[282193]: 2025-12-06 10:15:38.497 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:15:38 localhost ovn_controller[154851]: 2025-12-06T10:15:38Z|00138|binding|INFO|Releasing lport d57132cf-ea52-419a-82d6-37dcdb5dd89a from this chassis (sb_readonly=0) Dec 6 05:15:38 localhost nova_compute[282193]: 2025-12-06 10:15:38.509 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:15:38 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:38.509 160509 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/45604602-bc87-4608-9881-9568cbf90870.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/45604602-bc87-4608-9881-9568cbf90870.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m Dec 6 05:15:38 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:38.510 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[267cb533-ceed-4912-ab2c-6de95e9f713d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:15:38 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:38.511 160509 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = Dec 6 05:15:38 localhost ovn_metadata_agent[160504]: global Dec 6 05:15:38 localhost ovn_metadata_agent[160504]: log /dev/log local0 debug Dec 6 05:15:38 localhost ovn_metadata_agent[160504]: log-tag haproxy-metadata-proxy-45604602-bc87-4608-9881-9568cbf90870 Dec 6 05:15:38 localhost ovn_metadata_agent[160504]: user root Dec 6 05:15:38 localhost ovn_metadata_agent[160504]: group root Dec 6 05:15:38 localhost ovn_metadata_agent[160504]: maxconn 1024 Dec 6 05:15:38 localhost ovn_metadata_agent[160504]: pidfile 
/var/lib/neutron/external/pids/45604602-bc87-4608-9881-9568cbf90870.pid.haproxy Dec 6 05:15:38 localhost ovn_metadata_agent[160504]: daemon Dec 6 05:15:38 localhost ovn_metadata_agent[160504]: Dec 6 05:15:38 localhost ovn_metadata_agent[160504]: defaults Dec 6 05:15:38 localhost ovn_metadata_agent[160504]: log global Dec 6 05:15:38 localhost ovn_metadata_agent[160504]: mode http Dec 6 05:15:38 localhost ovn_metadata_agent[160504]: option httplog Dec 6 05:15:38 localhost ovn_metadata_agent[160504]: option dontlognull Dec 6 05:15:38 localhost ovn_metadata_agent[160504]: option http-server-close Dec 6 05:15:38 localhost ovn_metadata_agent[160504]: option forwardfor Dec 6 05:15:38 localhost ovn_metadata_agent[160504]: retries 3 Dec 6 05:15:38 localhost ovn_metadata_agent[160504]: timeout http-request 30s Dec 6 05:15:38 localhost ovn_metadata_agent[160504]: timeout connect 30s Dec 6 05:15:38 localhost ovn_metadata_agent[160504]: timeout client 32s Dec 6 05:15:38 localhost ovn_metadata_agent[160504]: timeout server 32s Dec 6 05:15:38 localhost ovn_metadata_agent[160504]: timeout http-keep-alive 30s Dec 6 05:15:38 localhost ovn_metadata_agent[160504]: Dec 6 05:15:38 localhost ovn_metadata_agent[160504]: Dec 6 05:15:38 localhost ovn_metadata_agent[160504]: listen listener Dec 6 05:15:38 localhost ovn_metadata_agent[160504]: bind 169.254.169.254:80 Dec 6 05:15:38 localhost ovn_metadata_agent[160504]: server metadata /var/lib/neutron/metadata_proxy Dec 6 05:15:38 localhost ovn_metadata_agent[160504]: http-request add-header X-OVN-Network-ID 45604602-bc87-4608-9881-9568cbf90870 Dec 6 05:15:38 localhost ovn_metadata_agent[160504]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m Dec 6 05:15:38 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:38.512 160509 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 
'ovnmeta-45604602-bc87-4608-9881-9568cbf90870', 'env', 'PROCESS_TAG=haproxy-45604602-bc87-4608-9881-9568cbf90870', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/45604602-bc87-4608-9881-9568cbf90870.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m Dec 6 05:15:38 localhost systemd[1]: tmp-crun.hUFR5P.mount: Deactivated successfully. Dec 6 05:15:38 localhost nova_compute[282193]: 2025-12-06 10:15:38.708 282197 DEBUG nova.network.neutron [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Updating instance_info_cache with network_info: [{"id": "feb6a13d-305a-4541-a50e-4988833ecf82", "address": "fa:16:3e:e5:ea:4a", "network": {"id": "45604602-bc87-4608-9881-9568cbf90870", "bridge": "br-int", "label": "tempest-LiveMigrationTest-802114316-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "9167331b2c424ef6961b096b551f8434", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfeb6a13d-30", "ovs_interfaceid": "feb6a13d-305a-4541-a50e-4988833ecf82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 6 05:15:38 localhost nova_compute[282193]: 2025-12-06 10:15:38.732 
282197 DEBUG oslo_concurrency.lockutils [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - default default] Releasing lock "refresh_cache-ed40901b-0bfc-426a-bf70-48d87ce95aa6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 6 05:15:38 localhost nova_compute[282193]: 2025-12-06 10:15:38.747 282197 DEBUG oslo_concurrency.lockutils [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 05:15:38 localhost nova_compute[282193]: 2025-12-06 10:15:38.748 282197 DEBUG oslo_concurrency.lockutils [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 05:15:38 localhost nova_compute[282193]: 2025-12-06 10:15:38.749 282197 DEBUG oslo_concurrency.lockutils [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 05:15:38 localhost nova_compute[282193]: 2025-12-06 10:15:38.754 282197 INFO nova.virt.libvirt.driver [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Sending 
announce-self command to QEMU monitor. Attempt 1 of 3#033[00m Dec 6 05:15:38 localhost journal[203911]: Domain id=4 name='instance-00000008' uuid=ed40901b-0bfc-426a-bf70-48d87ce95aa6 is tainted: custom-monitor Dec 6 05:15:38 localhost podman[314008]: Dec 6 05:15:38 localhost podman[314008]: 2025-12-06 10:15:38.932832801 +0000 UTC m=+0.079632977 container create 5178c8fa98670b4c6c4f4039300dbbe2725e333d5ba69ae0dc4e1e9f3ca6a714 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-45604602-bc87-4608-9881-9568cbf90870, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125) Dec 6 05:15:38 localhost systemd[1]: Started libpod-conmon-5178c8fa98670b4c6c4f4039300dbbe2725e333d5ba69ae0dc4e1e9f3ca6a714.scope. Dec 6 05:15:38 localhost systemd[1]: tmp-crun.vFgCCB.mount: Deactivated successfully. Dec 6 05:15:39 localhost podman[314008]: 2025-12-06 10:15:38.900904502 +0000 UTC m=+0.047704708 image pull quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified Dec 6 05:15:39 localhost systemd[1]: Started libcrun container. 
Dec 6 05:15:39 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d7eab40805a009327f68fd560cfacb738e0b20b8a1c52a765c6668a441db2f8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 6 05:15:39 localhost podman[314008]: 2025-12-06 10:15:39.024997656 +0000 UTC m=+0.171797882 container init 5178c8fa98670b4c6c4f4039300dbbe2725e333d5ba69ae0dc4e1e9f3ca6a714 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-45604602-bc87-4608-9881-9568cbf90870, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:15:39 localhost podman[314008]: 2025-12-06 10:15:39.036276358 +0000 UTC m=+0.183076534 container start 5178c8fa98670b4c6c4f4039300dbbe2725e333d5ba69ae0dc4e1e9f3ca6a714 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-45604602-bc87-4608-9881-9568cbf90870, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125) Dec 6 05:15:39 localhost neutron-haproxy-ovnmeta-45604602-bc87-4608-9881-9568cbf90870[314022]: [NOTICE] (314026) : New worker (314028) forked Dec 6 05:15:39 localhost neutron-haproxy-ovnmeta-45604602-bc87-4608-9881-9568cbf90870[314022]: [NOTICE] (314026) : Loading success. 
Dec 6 05:15:39 localhost neutron_sriov_agent[256690]: 2025-12-06 10:15:39.489 2 INFO neutron.agent.securitygroups_rpc [None req-e5d6490d-2b46-4f4e-92e1-5479a93607f8 13b250438f8e49ee9d0d9f0fe4791c05 a22ced63e346459ab637424ae7833af7 - - default default] Security group member updated ['55c805cd-9bbe-4434-83af-206ee080e6b9']#033[00m Dec 6 05:15:39 localhost nova_compute[282193]: 2025-12-06 10:15:39.763 282197 INFO nova.virt.libvirt.driver [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Sending announce-self command to QEMU monitor. Attempt 2 of 3#033[00m Dec 6 05:15:40 localhost ovn_controller[154851]: 2025-12-06T10:15:40Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e5:ea:4a 10.100.0.10 Dec 6 05:15:40 localhost ovn_controller[154851]: 2025-12-06T10:15:40Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e5:ea:4a 10.100.0.10 Dec 6 05:15:40 localhost neutron_sriov_agent[256690]: 2025-12-06 10:15:40.223 2 INFO neutron.agent.securitygroups_rpc [None req-806a1120-e80b-4f72-b62c-6adbb0e69b26 13b250438f8e49ee9d0d9f0fe4791c05 a22ced63e346459ab637424ae7833af7 - - default default] Security group member updated ['55c805cd-9bbe-4434-83af-206ee080e6b9']#033[00m Dec 6 05:15:40 localhost nova_compute[282193]: 2025-12-06 10:15:40.770 282197 INFO nova.virt.libvirt.driver [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Sending announce-self command to QEMU monitor. 
Attempt 3 of 3#033[00m Dec 6 05:15:40 localhost nova_compute[282193]: 2025-12-06 10:15:40.776 282197 DEBUG nova.compute.manager [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 6 05:15:40 localhost nova_compute[282193]: 2025-12-06 10:15:40.796 282197 DEBUG nova.objects.instance [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m Dec 6 05:15:41 localhost nova_compute[282193]: 2025-12-06 10:15:41.143 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:15:41 localhost dnsmasq[263859]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 6 addresses Dec 6 05:15:41 localhost dnsmasq-dhcp[263859]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:15:41 localhost dnsmasq-dhcp[263859]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:15:41 localhost podman[314053]: 2025-12-06 10:15:41.169677068 +0000 UTC m=+0.050947997 container kill e4e5ac05442955f65040fee10f3aa342d80518f3e4a072b3d79fe4302f513cf7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, 
tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3) Dec 6 05:15:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6. Dec 6 05:15:41 localhost podman[314066]: 2025-12-06 10:15:41.272971541 +0000 UTC m=+0.083287408 container health_status 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', 
'/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd) Dec 6 05:15:41 localhost podman[314066]: 2025-12-06 10:15:41.282257202 +0000 UTC m=+0.092573059 container exec_died 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:15:41 localhost systemd[1]: 
44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6.service: Deactivated successfully. Dec 6 05:15:41 localhost ovn_controller[154851]: 2025-12-06T10:15:41Z|00139|binding|INFO|Releasing lport b960e3cf-838e-4b32-93f1-7da76cedadcc from this chassis (sb_readonly=0) Dec 6 05:15:41 localhost ovn_controller[154851]: 2025-12-06T10:15:41Z|00140|binding|INFO|Releasing lport 4fb81ffd-e198-4628-9bd0-0c0f0c89c33a from this chassis (sb_readonly=0) Dec 6 05:15:41 localhost ovn_controller[154851]: 2025-12-06T10:15:41Z|00141|binding|INFO|Releasing lport d57132cf-ea52-419a-82d6-37dcdb5dd89a from this chassis (sb_readonly=0) Dec 6 05:15:41 localhost nova_compute[282193]: 2025-12-06 10:15:41.531 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:15:42 localhost nova_compute[282193]: 2025-12-06 10:15:42.071 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:15:42 localhost nova_compute[282193]: 2025-12-06 10:15:42.347 282197 DEBUG nova.compute.manager [req-58a47339-d9a8-4d94-937f-4ff6614182b8 req-bc59a15f-4e32-4e8a-878c-b84e2fb1a9d5 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Received event network-vif-plugged-feb6a13d-305a-4541-a50e-4988833ecf82 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Dec 6 05:15:42 localhost nova_compute[282193]: 2025-12-06 10:15:42.348 282197 DEBUG oslo_concurrency.lockutils [req-58a47339-d9a8-4d94-937f-4ff6614182b8 req-bc59a15f-4e32-4e8a-878c-b84e2fb1a9d5 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Acquiring lock "ed40901b-0bfc-426a-bf70-48d87ce95aa6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 05:15:42 localhost nova_compute[282193]: 2025-12-06 10:15:42.348 282197 DEBUG oslo_concurrency.lockutils [req-58a47339-d9a8-4d94-937f-4ff6614182b8 req-bc59a15f-4e32-4e8a-878c-b84e2fb1a9d5 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Lock "ed40901b-0bfc-426a-bf70-48d87ce95aa6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 05:15:42 localhost nova_compute[282193]: 2025-12-06 10:15:42.349 282197 DEBUG oslo_concurrency.lockutils [req-58a47339-d9a8-4d94-937f-4ff6614182b8 req-bc59a15f-4e32-4e8a-878c-b84e2fb1a9d5 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Lock "ed40901b-0bfc-426a-bf70-48d87ce95aa6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 05:15:42 localhost nova_compute[282193]: 2025-12-06 10:15:42.350 282197 DEBUG nova.compute.manager [req-58a47339-d9a8-4d94-937f-4ff6614182b8 req-bc59a15f-4e32-4e8a-878c-b84e2fb1a9d5 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] No waiting events found dispatching network-vif-plugged-feb6a13d-305a-4541-a50e-4988833ecf82 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Dec 6 05:15:42 localhost nova_compute[282193]: 2025-12-06 10:15:42.350 282197 WARNING nova.compute.manager [req-58a47339-d9a8-4d94-937f-4ff6614182b8 req-bc59a15f-4e32-4e8a-878c-b84e2fb1a9d5 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Received unexpected event 
network-vif-plugged-feb6a13d-305a-4541-a50e-4988833ecf82 for instance with vm_state active and task_state None.#033[00m Dec 6 05:15:42 localhost podman[314107]: 2025-12-06 10:15:42.498936117 +0000 UTC m=+0.059464515 container kill e4e5ac05442955f65040fee10f3aa342d80518f3e4a072b3d79fe4302f513cf7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:15:42 localhost dnsmasq[263859]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 5 addresses Dec 6 05:15:42 localhost dnsmasq-dhcp[263859]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:15:42 localhost dnsmasq-dhcp[263859]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:15:42 localhost ovn_controller[154851]: 2025-12-06T10:15:42Z|00142|binding|INFO|Releasing lport b960e3cf-838e-4b32-93f1-7da76cedadcc from this chassis (sb_readonly=0) Dec 6 05:15:42 localhost ovn_controller[154851]: 2025-12-06T10:15:42Z|00143|binding|INFO|Releasing lport 4fb81ffd-e198-4628-9bd0-0c0f0c89c33a from this chassis (sb_readonly=0) Dec 6 05:15:42 localhost ovn_controller[154851]: 2025-12-06T10:15:42Z|00144|binding|INFO|Releasing lport d57132cf-ea52-419a-82d6-37dcdb5dd89a from this chassis (sb_readonly=0) Dec 6 05:15:42 localhost nova_compute[282193]: 2025-12-06 10:15:42.728 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:15:43 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e115 _set_new_cache_sizes 
cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:15:44 localhost nova_compute[282193]: 2025-12-06 10:15:44.457 282197 DEBUG nova.compute.manager [req-d1a6df8e-7261-4618-bfd1-aa863fcadd05 req-95ceee35-40dc-449a-9810-2971100999c9 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Received event network-vif-plugged-feb6a13d-305a-4541-a50e-4988833ecf82 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Dec 6 05:15:44 localhost nova_compute[282193]: 2025-12-06 10:15:44.459 282197 DEBUG oslo_concurrency.lockutils [req-d1a6df8e-7261-4618-bfd1-aa863fcadd05 req-95ceee35-40dc-449a-9810-2971100999c9 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Acquiring lock "ed40901b-0bfc-426a-bf70-48d87ce95aa6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 05:15:44 localhost nova_compute[282193]: 2025-12-06 10:15:44.459 282197 DEBUG oslo_concurrency.lockutils [req-d1a6df8e-7261-4618-bfd1-aa863fcadd05 req-95ceee35-40dc-449a-9810-2971100999c9 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Lock "ed40901b-0bfc-426a-bf70-48d87ce95aa6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 05:15:44 localhost nova_compute[282193]: 2025-12-06 10:15:44.460 282197 DEBUG oslo_concurrency.lockutils [req-d1a6df8e-7261-4618-bfd1-aa863fcadd05 req-95ceee35-40dc-449a-9810-2971100999c9 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Lock "ed40901b-0bfc-426a-bf70-48d87ce95aa6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: 
held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 05:15:44 localhost nova_compute[282193]: 2025-12-06 10:15:44.460 282197 DEBUG nova.compute.manager [req-d1a6df8e-7261-4618-bfd1-aa863fcadd05 req-95ceee35-40dc-449a-9810-2971100999c9 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] No waiting events found dispatching network-vif-plugged-feb6a13d-305a-4541-a50e-4988833ecf82 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Dec 6 05:15:44 localhost nova_compute[282193]: 2025-12-06 10:15:44.461 282197 WARNING nova.compute.manager [req-d1a6df8e-7261-4618-bfd1-aa863fcadd05 req-95ceee35-40dc-449a-9810-2971100999c9 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Received unexpected event network-vif-plugged-feb6a13d-305a-4541-a50e-4988833ecf82 for instance with vm_state active and task_state None.#033[00m Dec 6 05:15:44 localhost nova_compute[282193]: 2025-12-06 10:15:44.473 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:15:44 localhost kernel: device tape1277966-bb left promiscuous mode Dec 6 05:15:44 localhost ovn_controller[154851]: 2025-12-06T10:15:44Z|00145|binding|INFO|Releasing lport e1277966-bb4e-4c31-a08b-185a772cbf5b from this chassis (sb_readonly=0) Dec 6 05:15:44 localhost ovn_controller[154851]: 2025-12-06T10:15:44Z|00146|binding|INFO|Setting lport e1277966-bb4e-4c31-a08b-185a772cbf5b down in Southbound Dec 6 05:15:44 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:44.485 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], 
port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.122.172/24', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-8e238f59-5792-4ff4-95af-f993c8e9e14f', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8e238f59-5792-4ff4-95af-f993c8e9e14f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005548789.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ae43cb4c-3e04-441f-9177-31d5e45dfad9, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[], logical_port=e1277966-bb4e-4c31-a08b-185a772cbf5b) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:15:44 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:44.486 160509 INFO neutron.agent.ovn.metadata.agent [-] Port e1277966-bb4e-4c31-a08b-185a772cbf5b in datapath 8e238f59-5792-4ff4-95af-f993c8e9e14f unbound from our chassis#033[00m Dec 6 05:15:44 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:44.488 160509 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8e238f59-5792-4ff4-95af-f993c8e9e14f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 6 05:15:44 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:44.489 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[04b6805d-2f86-41e3-8e86-744721b336b0]: (4, False) _call_back 
/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:15:44 localhost nova_compute[282193]: 2025-12-06 10:15:44.502 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:15:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538. Dec 6 05:15:44 localhost systemd[1]: tmp-crun.VJ6mAI.mount: Deactivated successfully. Dec 6 05:15:44 localhost podman[314130]: 2025-12-06 10:15:44.940647278 +0000 UTC m=+0.094980752 container health_status d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', 
'/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Dec 6 05:15:44 localhost podman[314130]: 2025-12-06 10:15:44.949161866 +0000 UTC m=+0.103495340 container exec_died d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 6 05:15:44 localhost systemd[1]: d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538.service: Deactivated successfully. 
Dec 6 05:15:46 localhost nova_compute[282193]: 2025-12-06 10:15:46.162 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:15:46 localhost systemd[1]: Stopping User Manager for UID 42436... Dec 6 05:15:46 localhost systemd[313717]: Activating special unit Exit the Session... Dec 6 05:15:46 localhost systemd[313717]: Stopped target Main User Target. Dec 6 05:15:46 localhost systemd[313717]: Stopped target Basic System. Dec 6 05:15:46 localhost systemd[313717]: Stopped target Paths. Dec 6 05:15:46 localhost systemd[313717]: Stopped target Sockets. Dec 6 05:15:46 localhost systemd[313717]: Stopped target Timers. Dec 6 05:15:46 localhost systemd[313717]: Stopped Mark boot as successful after the user session has run 2 minutes. Dec 6 05:15:46 localhost systemd[313717]: Stopped Daily Cleanup of User's Temporary Directories. Dec 6 05:15:46 localhost systemd[313717]: Closed D-Bus User Message Bus Socket. Dec 6 05:15:46 localhost systemd[313717]: Stopped Create User's Volatile Files and Directories. Dec 6 05:15:46 localhost systemd[313717]: Removed slice User Application Slice. Dec 6 05:15:46 localhost systemd[313717]: Reached target Shutdown. Dec 6 05:15:46 localhost systemd[313717]: Finished Exit the Session. Dec 6 05:15:46 localhost systemd[313717]: Reached target Exit the Session. Dec 6 05:15:46 localhost systemd[1]: user@42436.service: Deactivated successfully. Dec 6 05:15:46 localhost systemd[1]: Stopped User Manager for UID 42436. Dec 6 05:15:46 localhost systemd[1]: Stopping User Runtime Directory /run/user/42436... Dec 6 05:15:46 localhost systemd[1]: run-user-42436.mount: Deactivated successfully. 
Dec 6 05:15:46 localhost openstack_network_exporter[243110]: ERROR 10:15:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:15:46 localhost openstack_network_exporter[243110]: ERROR 10:15:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:15:46 localhost openstack_network_exporter[243110]: ERROR 10:15:46 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 6 05:15:46 localhost systemd[1]: user-runtime-dir@42436.service: Deactivated successfully. Dec 6 05:15:46 localhost systemd[1]: Stopped User Runtime Directory /run/user/42436. Dec 6 05:15:46 localhost openstack_network_exporter[243110]: ERROR 10:15:46 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 6 05:15:46 localhost openstack_network_exporter[243110]: Dec 6 05:15:46 localhost openstack_network_exporter[243110]: ERROR 10:15:46 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 6 05:15:46 localhost openstack_network_exporter[243110]: Dec 6 05:15:46 localhost systemd[1]: Removed slice User Slice of UID 42436. 
Dec 6 05:15:47 localhost nova_compute[282193]: 2025-12-06 10:15:47.073 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:15:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:47.305 160509 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 05:15:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:47.305 160509 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 05:15:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:47.306 160509 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 05:15:47 localhost sshd[314156]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:15:48 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:15:48 localhost nova_compute[282193]: 2025-12-06 10:15:48.495 282197 DEBUG oslo_concurrency.lockutils [None req-e61abce1-6552-4ff7-89fe-964ccbdef70d b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] Acquiring lock "ed40901b-0bfc-426a-bf70-48d87ce95aa6" by "nova.compute.manager.ComputeManager.terminate_instance..do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 05:15:48 localhost nova_compute[282193]: 2025-12-06 
10:15:48.498 282197 DEBUG oslo_concurrency.lockutils [None req-e61abce1-6552-4ff7-89fe-964ccbdef70d b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] Lock "ed40901b-0bfc-426a-bf70-48d87ce95aa6" acquired by "nova.compute.manager.ComputeManager.terminate_instance..do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 05:15:48 localhost nova_compute[282193]: 2025-12-06 10:15:48.498 282197 DEBUG oslo_concurrency.lockutils [None req-e61abce1-6552-4ff7-89fe-964ccbdef70d b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] Acquiring lock "ed40901b-0bfc-426a-bf70-48d87ce95aa6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 05:15:48 localhost nova_compute[282193]: 2025-12-06 10:15:48.499 282197 DEBUG oslo_concurrency.lockutils [None req-e61abce1-6552-4ff7-89fe-964ccbdef70d b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] Lock "ed40901b-0bfc-426a-bf70-48d87ce95aa6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 05:15:48 localhost nova_compute[282193]: 2025-12-06 10:15:48.499 282197 DEBUG oslo_concurrency.lockutils [None req-e61abce1-6552-4ff7-89fe-964ccbdef70d b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] Lock "ed40901b-0bfc-426a-bf70-48d87ce95aa6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 05:15:48 localhost nova_compute[282193]: 2025-12-06 10:15:48.501 282197 INFO nova.compute.manager [None 
req-e61abce1-6552-4ff7-89fe-964ccbdef70d b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Terminating instance#033[00m Dec 6 05:15:48 localhost nova_compute[282193]: 2025-12-06 10:15:48.502 282197 DEBUG nova.compute.manager [None req-e61abce1-6552-4ff7-89fe-964ccbdef70d b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m Dec 6 05:15:48 localhost neutron_sriov_agent[256690]: 2025-12-06 10:15:48.509 2 INFO neutron.agent.securitygroups_rpc [None req-0ed3e916-bdef-45c7-9c1d-50729e74f02a 13b250438f8e49ee9d0d9f0fe4791c05 a22ced63e346459ab637424ae7833af7 - - default default] Security group member updated ['55c805cd-9bbe-4434-83af-206ee080e6b9']#033[00m Dec 6 05:15:48 localhost kernel: device tapfeb6a13d-30 left promiscuous mode Dec 6 05:15:48 localhost NetworkManager[5973]: [1765016148.6118] device (tapfeb6a13d-30): state change: disconnected -> unmanaged (reason 'unmanaged', sys-iface-state: 'removed') Dec 6 05:15:48 localhost nova_compute[282193]: 2025-12-06 10:15:48.622 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:15:48 localhost ovn_controller[154851]: 2025-12-06T10:15:48Z|00147|binding|INFO|Releasing lport feb6a13d-305a-4541-a50e-4988833ecf82 from this chassis (sb_readonly=0) Dec 6 05:15:48 localhost ovn_controller[154851]: 2025-12-06T10:15:48Z|00148|binding|INFO|Setting lport feb6a13d-305a-4541-a50e-4988833ecf82 down in Southbound Dec 6 05:15:48 localhost ovn_controller[154851]: 2025-12-06T10:15:48Z|00149|binding|INFO|Releasing lport 99b309b3-9e3d-4a23-b110-d99707c2eb4e from this chassis (sb_readonly=0) Dec 6 05:15:48 localhost ovn_controller[154851]: 
2025-12-06T10:15:48Z|00150|binding|INFO|Setting lport 99b309b3-9e3d-4a23-b110-d99707c2eb4e down in Southbound Dec 6 05:15:48 localhost ovn_controller[154851]: 2025-12-06T10:15:48Z|00151|binding|INFO|Removing iface tapfeb6a13d-30 ovn-installed in OVS Dec 6 05:15:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5. Dec 6 05:15:48 localhost nova_compute[282193]: 2025-12-06 10:15:48.630 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:15:48 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:48.636 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:11:27:4d 19.80.0.152'], port_security=['fa:16:3e:11:27:4d 19.80.0.152'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=['feb6a13d-305a-4541-a50e-4988833ecf82'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-2060007817', 'neutron:cidrs': '19.80.0.152/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-19043ea6-c6b2-4272-aa60-1b11a7b5bd93', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-2060007817', 'neutron:project_id': '9167331b2c424ef6961b096b551f8434', 'neutron:revision_number': '5', 'neutron:security_group_ids': '4c82b56e-0fc5-4c7f-8922-ceb8236815fd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=927c8639-172d-4240-b8a1-85db1fd6c03d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], 
logical_port=99b309b3-9e3d-4a23-b110-d99707c2eb4e) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:15:48 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:48.638 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e5:ea:4a 10.100.0.10'], port_security=['fa:16:3e:e5:ea:4a 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-1146072664', 'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'ed40901b-0bfc-426a-bf70-48d87ce95aa6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-45604602-bc87-4608-9881-9568cbf90870', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-1146072664', 'neutron:project_id': '9167331b2c424ef6961b096b551f8434', 'neutron:revision_number': '12', 'neutron:security_group_ids': '4c82b56e-0fc5-4c7f-8922-ceb8236815fd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005548789.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d40d335f-7e85-43c3-894d-993c12735497, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[], logical_port=feb6a13d-305a-4541-a50e-4988833ecf82) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:15:48 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:48.639 160509 INFO neutron.agent.ovn.metadata.agent [-] Port 99b309b3-9e3d-4a23-b110-d99707c2eb4e in datapath 19043ea6-c6b2-4272-aa60-1b11a7b5bd93 
unbound from our chassis#033[00m Dec 6 05:15:48 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:48.641 160509 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 19043ea6-c6b2-4272-aa60-1b11a7b5bd93, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 6 05:15:48 localhost ovn_controller[154851]: 2025-12-06T10:15:48Z|00152|binding|INFO|Releasing lport b960e3cf-838e-4b32-93f1-7da76cedadcc from this chassis (sb_readonly=0) Dec 6 05:15:48 localhost ovn_controller[154851]: 2025-12-06T10:15:48Z|00153|binding|INFO|Releasing lport 4fb81ffd-e198-4628-9bd0-0c0f0c89c33a from this chassis (sb_readonly=0) Dec 6 05:15:48 localhost ovn_controller[154851]: 2025-12-06T10:15:48Z|00154|binding|INFO|Releasing lport d57132cf-ea52-419a-82d6-37dcdb5dd89a from this chassis (sb_readonly=0) Dec 6 05:15:48 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:48.642 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[653e203b-8ef4-435e-9953-923dec3d6a2f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:15:48 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:48.644 160509 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-19043ea6-c6b2-4272-aa60-1b11a7b5bd93 namespace which is not needed anymore#033[00m Dec 6 05:15:48 localhost systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000008.scope: Deactivated successfully. Dec 6 05:15:48 localhost systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000008.scope: Consumed 4.458s CPU time. Dec 6 05:15:48 localhost systemd-machined[84444]: Machine qemu-4-instance-00000008 terminated. 
Dec 6 05:15:48 localhost nova_compute[282193]: 2025-12-06 10:15:48.665 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:15:48 localhost nova_compute[282193]: 2025-12-06 10:15:48.674 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:15:48 localhost nova_compute[282193]: 2025-12-06 10:15:48.722 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:15:48 localhost systemd[1]: tmp-crun.kKuSWH.mount: Deactivated successfully. Dec 6 05:15:48 localhost nova_compute[282193]: 2025-12-06 10:15:48.729 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:15:48 localhost podman[314163]: 2025-12-06 10:15:48.747652849 +0000 UTC m=+0.104348385 container health_status 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 
'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Dec 6 05:15:48 localhost nova_compute[282193]: 2025-12-06 10:15:48.753 282197 INFO nova.virt.libvirt.driver [-] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Instance destroyed successfully.#033[00m Dec 6 05:15:48 localhost nova_compute[282193]: 2025-12-06 10:15:48.754 282197 DEBUG nova.objects.instance [None req-e61abce1-6552-4ff7-89fe-964ccbdef70d b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] Lazy-loading 'resources' on Instance uuid ed40901b-0bfc-426a-bf70-48d87ce95aa6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 6 05:15:48 localhost nova_compute[282193]: 2025-12-06 10:15:48.770 282197 DEBUG nova.virt.libvirt.vif [None req-e61abce1-6552-4ff7-89fe-964ccbdef70d b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] vif_type=ovs 
instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-06T10:15:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-571789410',display_name='tempest-LiveMigrationTest-server-571789410',ec2_ids=,ephemeral_gb=0,ephemeral_key_uuid=None,fault=,flavor=Flavor(5),hidden=False,host='np0005548789.localdomain',hostname='tempest-livemigrationtest-server-571789410',id=8,image_ref='6a944ab6-8965-4055-b7fc-af6e395005ea',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=,launch_index=0,launched_at=2025-12-06T10:15:26Z,launched_on='np0005548790.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=,new_flavor=None,node='np0005548789.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=,pci_requests=,power_state=1,progress=0,project_id='9167331b2c424ef6961b096b551f8434',ramdisk_id='',reservation_id='r-9204byw5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='6a944ab6-8965-4055-b7fc-af6e395005ea',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-LiveMigrationTest-1593322913',owner_user_name='tempest-LiveMigrationTest-1593322913-project-member'},tags=,task_state='deleting',terminated_at=None,trusted_certs=,updated_at=2025-12-06T10:15:40Z,user_data=None,user_id='b25d9e5ec9eb4368a76
4482a325b9dda',uuid=ed40901b-0bfc-426a-bf70-48d87ce95aa6,vcpu_model=,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "feb6a13d-305a-4541-a50e-4988833ecf82", "address": "fa:16:3e:e5:ea:4a", "network": {"id": "45604602-bc87-4608-9881-9568cbf90870", "bridge": "br-int", "label": "tempest-LiveMigrationTest-802114316-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "9167331b2c424ef6961b096b551f8434", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfeb6a13d-30", "ovs_interfaceid": "feb6a13d-305a-4541-a50e-4988833ecf82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m Dec 6 05:15:48 localhost nova_compute[282193]: 2025-12-06 10:15:48.770 282197 DEBUG nova.network.os_vif_util [None req-e61abce1-6552-4ff7-89fe-964ccbdef70d b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] Converting VIF {"id": "feb6a13d-305a-4541-a50e-4988833ecf82", "address": "fa:16:3e:e5:ea:4a", "network": {"id": "45604602-bc87-4608-9881-9568cbf90870", "bridge": "br-int", "label": "tempest-LiveMigrationTest-802114316-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], 
"routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "9167331b2c424ef6961b096b551f8434", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfeb6a13d-30", "ovs_interfaceid": "feb6a13d-305a-4541-a50e-4988833ecf82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Dec 6 05:15:48 localhost nova_compute[282193]: 2025-12-06 10:15:48.771 282197 DEBUG nova.network.os_vif_util [None req-e61abce1-6552-4ff7-89fe-964ccbdef70d b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e5:ea:4a,bridge_name='br-int',has_traffic_filtering=True,id=feb6a13d-305a-4541-a50e-4988833ecf82,network=Network(45604602-bc87-4608-9881-9568cbf90870),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfeb6a13d-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Dec 6 05:15:48 localhost nova_compute[282193]: 2025-12-06 10:15:48.772 282197 DEBUG os_vif [None req-e61abce1-6552-4ff7-89fe-964ccbdef70d b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e5:ea:4a,bridge_name='br-int',has_traffic_filtering=True,id=feb6a13d-305a-4541-a50e-4988833ecf82,network=Network(45604602-bc87-4608-9881-9568cbf90870),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfeb6a13d-30') unplug 
/usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m Dec 6 05:15:48 localhost nova_compute[282193]: 2025-12-06 10:15:48.773 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 18 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:15:48 localhost nova_compute[282193]: 2025-12-06 10:15:48.774 282197 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfeb6a13d-30, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 6 05:15:48 localhost nova_compute[282193]: 2025-12-06 10:15:48.776 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:15:48 localhost nova_compute[282193]: 2025-12-06 10:15:48.778 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:15:48 localhost nova_compute[282193]: 2025-12-06 10:15:48.780 282197 INFO os_vif [None req-e61abce1-6552-4ff7-89fe-964ccbdef70d b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e5:ea:4a,bridge_name='br-int',has_traffic_filtering=True,id=feb6a13d-305a-4541-a50e-4988833ecf82,network=Network(45604602-bc87-4608-9881-9568cbf90870),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfeb6a13d-30')#033[00m Dec 6 05:15:48 localhost podman[314163]: 2025-12-06 10:15:48.80170951 +0000 UTC m=+0.158404976 container exec_died 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, 
org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Dec 6 05:15:48 localhost systemd[1]: 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.service: Deactivated successfully. Dec 6 05:15:48 localhost neutron-haproxy-ovnmeta-19043ea6-c6b2-4272-aa60-1b11a7b5bd93[313949]: [NOTICE] (313953) : haproxy version is 2.8.14-c23fe91 Dec 6 05:15:48 localhost neutron-haproxy-ovnmeta-19043ea6-c6b2-4272-aa60-1b11a7b5bd93[313949]: [NOTICE] (313953) : path to executable is /usr/sbin/haproxy Dec 6 05:15:48 localhost neutron-haproxy-ovnmeta-19043ea6-c6b2-4272-aa60-1b11a7b5bd93[313949]: [WARNING] (313953) : Exiting Master process... Dec 6 05:15:48 localhost neutron-haproxy-ovnmeta-19043ea6-c6b2-4272-aa60-1b11a7b5bd93[313949]: [WARNING] (313953) : Exiting Master process... Dec 6 05:15:48 localhost systemd[1]: tmp-crun.9fuanR.mount: Deactivated successfully. 
Dec 6 05:15:48 localhost neutron-haproxy-ovnmeta-19043ea6-c6b2-4272-aa60-1b11a7b5bd93[313949]: [ALERT] (313953) : Current worker (313955) exited with code 143 (Terminated) Dec 6 05:15:48 localhost neutron-haproxy-ovnmeta-19043ea6-c6b2-4272-aa60-1b11a7b5bd93[313949]: [WARNING] (313953) : All workers exited. Exiting... (0) Dec 6 05:15:48 localhost systemd[1]: libpod-57bafd33260bc96ff76c85f7693ba9786f043d552185f6da913acb510f4f60ab.scope: Deactivated successfully. Dec 6 05:15:48 localhost podman[314211]: 2025-12-06 10:15:48.897907477 +0000 UTC m=+0.131397856 container died 57bafd33260bc96ff76c85f7693ba9786f043d552185f6da913acb510f4f60ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-19043ea6-c6b2-4272-aa60-1b11a7b5bd93, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125) Dec 6 05:15:48 localhost podman[314211]: 2025-12-06 10:15:48.946016526 +0000 UTC m=+0.179506895 container cleanup 57bafd33260bc96ff76c85f7693ba9786f043d552185f6da913acb510f4f60ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-19043ea6-c6b2-4272-aa60-1b11a7b5bd93, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Dec 6 05:15:48 localhost podman[314247]: 2025-12-06 10:15:48.980354408 +0000 UTC m=+0.072290463 container cleanup 57bafd33260bc96ff76c85f7693ba9786f043d552185f6da913acb510f4f60ab 
(image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-19043ea6-c6b2-4272-aa60-1b11a7b5bd93, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:15:48 localhost systemd[1]: libpod-conmon-57bafd33260bc96ff76c85f7693ba9786f043d552185f6da913acb510f4f60ab.scope: Deactivated successfully. Dec 6 05:15:49 localhost podman[314263]: 2025-12-06 10:15:49.03480908 +0000 UTC m=+0.072155690 container remove 57bafd33260bc96ff76c85f7693ba9786f043d552185f6da913acb510f4f60ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-19043ea6-c6b2-4272-aa60-1b11a7b5bd93, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:15:49 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:49.043 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[dfcabb9a-a546-425f-8463-9d2a28434cc8]: (4, ('Sat Dec 6 10:15:48 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-19043ea6-c6b2-4272-aa60-1b11a7b5bd93 (57bafd33260bc96ff76c85f7693ba9786f043d552185f6da913acb510f4f60ab)\n57bafd33260bc96ff76c85f7693ba9786f043d552185f6da913acb510f4f60ab\nSat Dec 6 10:15:48 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-19043ea6-c6b2-4272-aa60-1b11a7b5bd93 (57bafd33260bc96ff76c85f7693ba9786f043d552185f6da913acb510f4f60ab)\n57bafd33260bc96ff76c85f7693ba9786f043d552185f6da913acb510f4f60ab\n', '', 0)) _call_back 
/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:15:49 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:49.045 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[fbd7c4d6-3fcb-44f6-a42b-44721b04dc54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:15:49 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:49.046 160509 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap19043ea6-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 6 05:15:49 localhost kernel: device tap19043ea6-c0 left promiscuous mode Dec 6 05:15:49 localhost nova_compute[282193]: 2025-12-06 10:15:49.051 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:15:49 localhost nova_compute[282193]: 2025-12-06 10:15:49.060 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:15:49 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:49.063 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[b2abb294-e1e0-43db-89a8-2010de181859]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:15:49 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:49.077 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[f6746242-1be8-4dd4-a4b3-adccdfbe91de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:15:49 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:49.078 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[ec7adbe4-9189-459e-ba4d-93723da4a479]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:15:49 localhost ovn_metadata_agent[160504]: 2025-12-06 
10:15:49.092 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[7bc0fcc5-35eb-4ec1-96a4-86ec39d8bbd5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', 
{'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1255563, 'reachable_time': 40135, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 
'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1356, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 314278, 'error': None, 'target': 'ovnmeta-19043ea6-c6b2-4272-aa60-1b11a7b5bd93', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:15:49 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:49.095 160720 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-19043ea6-c6b2-4272-aa60-1b11a7b5bd93 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m Dec 6 05:15:49 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:49.095 160720 DEBUG oslo.privsep.daemon [-] privsep: reply[5a317dbf-cadf-4aeb-a790-11b4ea41a99c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:15:49 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:49.096 160509 INFO neutron.agent.ovn.metadata.agent [-] Port feb6a13d-305a-4541-a50e-4988833ecf82 in datapath 45604602-bc87-4608-9881-9568cbf90870 unbound from our chassis#033[00m Dec 6 05:15:49 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:49.100 160509 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 45604602-bc87-4608-9881-9568cbf90870, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 6 05:15:49 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:49.101 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[3290096a-8cc8-4526-a355-218d3b761869]: (4, True) _call_back 
/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:15:49 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:49.102 160509 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-45604602-bc87-4608-9881-9568cbf90870 namespace which is not needed anymore#033[00m Dec 6 05:15:49 localhost neutron-haproxy-ovnmeta-45604602-bc87-4608-9881-9568cbf90870[314022]: [NOTICE] (314026) : haproxy version is 2.8.14-c23fe91 Dec 6 05:15:49 localhost neutron-haproxy-ovnmeta-45604602-bc87-4608-9881-9568cbf90870[314022]: [NOTICE] (314026) : path to executable is /usr/sbin/haproxy Dec 6 05:15:49 localhost neutron-haproxy-ovnmeta-45604602-bc87-4608-9881-9568cbf90870[314022]: [WARNING] (314026) : Exiting Master process... Dec 6 05:15:49 localhost neutron-haproxy-ovnmeta-45604602-bc87-4608-9881-9568cbf90870[314022]: [ALERT] (314026) : Current worker (314028) exited with code 143 (Terminated) Dec 6 05:15:49 localhost neutron-haproxy-ovnmeta-45604602-bc87-4608-9881-9568cbf90870[314022]: [WARNING] (314026) : All workers exited. Exiting... (0) Dec 6 05:15:49 localhost systemd[1]: libpod-5178c8fa98670b4c6c4f4039300dbbe2725e333d5ba69ae0dc4e1e9f3ca6a714.scope: Deactivated successfully. 
Dec 6 05:15:49 localhost podman[314297]: 2025-12-06 10:15:49.292332661 +0000 UTC m=+0.066873029 container died 5178c8fa98670b4c6c4f4039300dbbe2725e333d5ba69ae0dc4e1e9f3ca6a714 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-45604602-bc87-4608-9881-9568cbf90870, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS) Dec 6 05:15:49 localhost podman[314297]: 2025-12-06 10:15:49.327630281 +0000 UTC m=+0.102170589 container cleanup 5178c8fa98670b4c6c4f4039300dbbe2725e333d5ba69ae0dc4e1e9f3ca6a714 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-45604602-bc87-4608-9881-9568cbf90870, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true) Dec 6 05:15:49 localhost nova_compute[282193]: 2025-12-06 10:15:49.352 282197 INFO nova.virt.libvirt.driver [None req-e61abce1-6552-4ff7-89fe-964ccbdef70d b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Deleting instance files /var/lib/nova/instances/ed40901b-0bfc-426a-bf70-48d87ce95aa6_del#033[00m Dec 6 05:15:49 localhost nova_compute[282193]: 2025-12-06 10:15:49.354 282197 INFO nova.virt.libvirt.driver [None req-e61abce1-6552-4ff7-89fe-964ccbdef70d b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] [instance: 
ed40901b-0bfc-426a-bf70-48d87ce95aa6] Deletion of /var/lib/nova/instances/ed40901b-0bfc-426a-bf70-48d87ce95aa6_del complete#033[00m Dec 6 05:15:49 localhost podman[314311]: 2025-12-06 10:15:49.37705223 +0000 UTC m=+0.077328336 container cleanup 5178c8fa98670b4c6c4f4039300dbbe2725e333d5ba69ae0dc4e1e9f3ca6a714 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-45604602-bc87-4608-9881-9568cbf90870, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:15:49 localhost systemd[1]: libpod-conmon-5178c8fa98670b4c6c4f4039300dbbe2725e333d5ba69ae0dc4e1e9f3ca6a714.scope: Deactivated successfully. Dec 6 05:15:49 localhost podman[314325]: 2025-12-06 10:15:49.405795842 +0000 UTC m=+0.062433224 container remove 5178c8fa98670b4c6c4f4039300dbbe2725e333d5ba69ae0dc4e1e9f3ca6a714 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-45604602-bc87-4608-9881-9568cbf90870, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Dec 6 05:15:49 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:49.409 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[352449f8-b931-49ff-b4e6-d048a3e34c9a]: (4, ('Sat Dec 6 10:15:49 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-45604602-bc87-4608-9881-9568cbf90870 
(5178c8fa98670b4c6c4f4039300dbbe2725e333d5ba69ae0dc4e1e9f3ca6a714)\n5178c8fa98670b4c6c4f4039300dbbe2725e333d5ba69ae0dc4e1e9f3ca6a714\nSat Dec 6 10:15:49 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-45604602-bc87-4608-9881-9568cbf90870 (5178c8fa98670b4c6c4f4039300dbbe2725e333d5ba69ae0dc4e1e9f3ca6a714)\n5178c8fa98670b4c6c4f4039300dbbe2725e333d5ba69ae0dc4e1e9f3ca6a714\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:15:49 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:49.411 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[d7156475-1e3a-4f8f-be03-d8fe6132f836]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:15:49 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:49.412 160509 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap45604602-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 6 05:15:49 localhost nova_compute[282193]: 2025-12-06 10:15:49.414 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:15:49 localhost kernel: device tap45604602-b0 left promiscuous mode Dec 6 05:15:49 localhost nova_compute[282193]: 2025-12-06 10:15:49.416 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:15:49 localhost nova_compute[282193]: 2025-12-06 10:15:49.423 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:15:49 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:49.425 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[150104c9-9879-46d4-b5b6-a3a122c1384c]: (4, True) _call_back 
/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:15:49 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:49.442 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[f76337b7-11cf-46fd-8802-6b6f0b120d91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:15:49 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:49.444 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[9b9d81af-444f-4857-90bd-825582b9e7ca]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:15:49 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:49.463 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[5693bac5-ec10-4e7d-85ae-04d7ea4750d9]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1255645, 'reachable_time': 44259, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 
'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1356, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 314345, 'error': None, 'target': 'ovnmeta-45604602-bc87-4608-9881-9568cbf90870', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:15:49 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:49.466 160720 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-45604602-bc87-4608-9881-9568cbf90870 deleted. 
remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m Dec 6 05:15:49 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:49.466 160720 DEBUG oslo.privsep.daemon [-] privsep: reply[9c7c0616-1224-4c6f-aeae-47f261c0acc6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:15:49 localhost nova_compute[282193]: 2025-12-06 10:15:49.473 282197 INFO nova.compute.manager [None req-e61abce1-6552-4ff7-89fe-964ccbdef70d b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Took 0.97 seconds to destroy the instance on the hypervisor.#033[00m Dec 6 05:15:49 localhost nova_compute[282193]: 2025-12-06 10:15:49.474 282197 DEBUG oslo.service.loopingcall [None req-e61abce1-6552-4ff7-89fe-964ccbdef70d b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.._deallocate_network_with_retries to return. 
func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m Dec 6 05:15:49 localhost nova_compute[282193]: 2025-12-06 10:15:49.475 282197 DEBUG nova.compute.manager [-] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m Dec 6 05:15:49 localhost nova_compute[282193]: 2025-12-06 10:15:49.475 282197 DEBUG nova.network.neutron [-] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m Dec 6 05:15:49 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:15:49.528 263652 INFO neutron.agent.linux.ip_lib [None req-021eeeb8-15de-4467-bbe5-42cc245776a5 - - - - - -] Device tap1d53082e-11 cannot be used as it has no MAC address#033[00m Dec 6 05:15:49 localhost nova_compute[282193]: 2025-12-06 10:15:49.552 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:15:49 localhost kernel: device tap1d53082e-11 entered promiscuous mode Dec 6 05:15:49 localhost NetworkManager[5973]: [1765016149.5585] manager: (tap1d53082e-11): new Generic device (/org/freedesktop/NetworkManager/Devices/29) Dec 6 05:15:49 localhost ovn_controller[154851]: 2025-12-06T10:15:49Z|00155|binding|INFO|Claiming lport 1d53082e-11ae-49e3-9448-7b2e1b2ec267 for this chassis. Dec 6 05:15:49 localhost nova_compute[282193]: 2025-12-06 10:15:49.559 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:15:49 localhost ovn_controller[154851]: 2025-12-06T10:15:49Z|00156|binding|INFO|1d53082e-11ae-49e3-9448-7b2e1b2ec267: Claiming unknown Dec 6 05:15:49 localhost systemd-udevd[314160]: Network interface NamePolicy= disabled on kernel command line. 
Dec 6 05:15:49 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:49.569 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-7bcb9995-c8be-445e-890a-c8635f090fa6', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7bcb9995-c8be-445e-890a-c8635f090fa6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '44e6bb9426fc43a084f983db0bd7f0ad', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6349ccef-9387-4e01-b0b2-fbf339bbd83f, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=1d53082e-11ae-49e3-9448-7b2e1b2ec267) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:15:49 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:49.576 160509 INFO neutron.agent.ovn.metadata.agent [-] Port 1d53082e-11ae-49e3-9448-7b2e1b2ec267 in datapath 7bcb9995-c8be-445e-890a-c8635f090fa6 bound to our chassis#033[00m Dec 6 05:15:49 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:49.578 160509 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 7bcb9995-c8be-445e-890a-c8635f090fa6 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 6 05:15:49 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:49.581 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[239ef574-f07a-46c6-850f-51ab86809bbf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:15:49 localhost ovn_controller[154851]: 2025-12-06T10:15:49Z|00157|binding|INFO|Setting lport 1d53082e-11ae-49e3-9448-7b2e1b2ec267 ovn-installed in OVS Dec 6 05:15:49 localhost nova_compute[282193]: 2025-12-06 10:15:49.603 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:15:49 localhost ovn_controller[154851]: 2025-12-06T10:15:49Z|00158|binding|INFO|Setting lport 1d53082e-11ae-49e3-9448-7b2e1b2ec267 up in Southbound Dec 6 05:15:49 localhost nova_compute[282193]: 2025-12-06 10:15:49.605 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:15:49 localhost nova_compute[282193]: 2025-12-06 10:15:49.669 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:15:49 localhost nova_compute[282193]: 2025-12-06 10:15:49.680 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:15:49 localhost systemd[1]: var-lib-containers-storage-overlay-8d7eab40805a009327f68fd560cfacb738e0b20b8a1c52a765c6668a441db2f8-merged.mount: Deactivated successfully. Dec 6 05:15:49 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5178c8fa98670b4c6c4f4039300dbbe2725e333d5ba69ae0dc4e1e9f3ca6a714-userdata-shm.mount: Deactivated successfully. 
Dec 6 05:15:49 localhost systemd[1]: run-netns-ovnmeta\x2d45604602\x2dbc87\x2d4608\x2d9881\x2d9568cbf90870.mount: Deactivated successfully. Dec 6 05:15:49 localhost systemd[1]: var-lib-containers-storage-overlay-970f665873ff889fc4ce87e8eb815e45fa33cad2aebf50d32a77643cc655aa94-merged.mount: Deactivated successfully. Dec 6 05:15:49 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-57bafd33260bc96ff76c85f7693ba9786f043d552185f6da913acb510f4f60ab-userdata-shm.mount: Deactivated successfully. Dec 6 05:15:49 localhost systemd[1]: run-netns-ovnmeta\x2d19043ea6\x2dc6b2\x2d4272\x2daa60\x2d1b11a7b5bd93.mount: Deactivated successfully. Dec 6 05:15:49 localhost neutron_sriov_agent[256690]: 2025-12-06 10:15:49.994 2 INFO neutron.agent.securitygroups_rpc [req-da70e705-23ca-45d2-aa6c-68d8abc979e1 req-8ff8aa52-7146-4285-bb8a-51bbd99a36a5 da7bbd24eb95438897585b10577ea2e0 da995d8e002548889747013c0eeca935 - - default default] Security group member updated ['581a4637-eff2-45f4-92f3-d575b736a840']#033[00m Dec 6 05:15:50 localhost dnsmasq[312566]: read /var/lib/neutron/dhcp/deb7774c-e96b-4e7f-88d7-ed9d740915f4/addn_hosts - 1 addresses Dec 6 05:15:50 localhost dnsmasq-dhcp[312566]: read /var/lib/neutron/dhcp/deb7774c-e96b-4e7f-88d7-ed9d740915f4/host Dec 6 05:15:50 localhost dnsmasq-dhcp[312566]: read /var/lib/neutron/dhcp/deb7774c-e96b-4e7f-88d7-ed9d740915f4/opts Dec 6 05:15:50 localhost podman[314401]: 2025-12-06 10:15:50.290099955 +0000 UTC m=+0.060258680 container kill 8a8b7a6a9724101bff1398ade8c854164d1816271ca6c4f86a12732f70229362 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-deb7774c-e96b-4e7f-88d7-ed9d740915f4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, 
tcib_managed=true, org.label-schema.schema-version=1.0) Dec 6 05:15:50 localhost podman[314443]: Dec 6 05:15:50 localhost podman[314443]: 2025-12-06 10:15:50.517664177 +0000 UTC m=+0.081633887 container create 7deecf5c59b0f04bb3525d11262bd98a6e888a03a84db6c0d01f340a61932760 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7bcb9995-c8be-445e-890a-c8635f090fa6, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:15:50 localhost systemd[1]: Started libpod-conmon-7deecf5c59b0f04bb3525d11262bd98a6e888a03a84db6c0d01f340a61932760.scope. Dec 6 05:15:50 localhost podman[314443]: 2025-12-06 10:15:50.472305101 +0000 UTC m=+0.036274831 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 6 05:15:50 localhost systemd[1]: Started libcrun container. 
Dec 6 05:15:50 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c28922d979d8687baf0d17c061c99ccd1f7b4506833ab64cdd748cd838b58f4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 6 05:15:50 localhost podman[314443]: 2025-12-06 10:15:50.615039951 +0000 UTC m=+0.179009651 container init 7deecf5c59b0f04bb3525d11262bd98a6e888a03a84db6c0d01f340a61932760 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7bcb9995-c8be-445e-890a-c8635f090fa6, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS) Dec 6 05:15:50 localhost podman[314443]: 2025-12-06 10:15:50.625740155 +0000 UTC m=+0.189709825 container start 7deecf5c59b0f04bb3525d11262bd98a6e888a03a84db6c0d01f340a61932760 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7bcb9995-c8be-445e-890a-c8635f090fa6, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3) Dec 6 05:15:50 localhost dnsmasq[314461]: started, version 2.85 cachesize 150 Dec 6 05:15:50 localhost dnsmasq[314461]: DNS service limited to local subnets Dec 6 05:15:50 localhost dnsmasq[314461]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 6 05:15:50 localhost dnsmasq[314461]: warning: no upstream servers configured Dec 
6 05:15:50 localhost dnsmasq-dhcp[314461]: DHCP, static leases only on 10.100.0.0, lease time 1d Dec 6 05:15:50 localhost dnsmasq[314461]: read /var/lib/neutron/dhcp/7bcb9995-c8be-445e-890a-c8635f090fa6/addn_hosts - 0 addresses Dec 6 05:15:50 localhost dnsmasq-dhcp[314461]: read /var/lib/neutron/dhcp/7bcb9995-c8be-445e-890a-c8635f090fa6/host Dec 6 05:15:50 localhost dnsmasq-dhcp[314461]: read /var/lib/neutron/dhcp/7bcb9995-c8be-445e-890a-c8635f090fa6/opts Dec 6 05:15:50 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:15:50.643 263652 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:15:50Z, description=, device_id=dafd896d-42a7-4e64-be65-9942f12d900d, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=bbbf4983-178c-402a-8c72-520f40e6ea28, ip_allocation=immediate, mac_address=fa:16:3e:f7:d6:18, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T08:43:55Z, description=, dns_domain=, id=8e238f59-5792-4ff4-95af-f993c8e9e14f, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=3d603431c0bb4967bafc7a0aa6108bfe, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['be7ff830-a3e2-49cd-a05f-63bc528d6d1c'], tags=[], tenant_id=3d603431c0bb4967bafc7a0aa6108bfe, updated_at=2025-12-06T08:44:01Z, vlan_transparent=None, network_id=8e238f59-5792-4ff4-95af-f993c8e9e14f, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, 
security_groups=[], standard_attr_id=869, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-06T10:15:50Z on network 8e238f59-5792-4ff4-95af-f993c8e9e14f#033[00m Dec 6 05:15:50 localhost systemd[1]: tmp-crun.Z0Em85.mount: Deactivated successfully. Dec 6 05:15:50 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:15:50.735 263652 INFO neutron.agent.dhcp.agent [None req-9b66ce23-e78c-4b59-84cc-5ec79f319ecc - - - - - -] DHCP configuration for ports {'ada56b72-5c1e-433b-9bad-d65b17c1775a'} is completed#033[00m Dec 6 05:15:50 localhost dnsmasq[263859]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 6 addresses Dec 6 05:15:50 localhost podman[314478]: 2025-12-06 10:15:50.86493105 +0000 UTC m=+0.056432142 container kill e4e5ac05442955f65040fee10f3aa342d80518f3e4a072b3d79fe4302f513cf7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 6 05:15:50 localhost dnsmasq-dhcp[263859]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:15:50 localhost dnsmasq-dhcp[263859]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:15:50 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:15:50.887 263652 ERROR neutron.agent.dhcp.agent [None req-d149bc2e-8ff4-4340-9073-147bbd6df0a3 - - - - - -] Unable to reload_allocations dhcp for 8e238f59-5792-4ff4-95af-f993c8e9e14f.: neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tape1277966-bb not found in namespace qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f. 
Dec 6 05:15:50 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:15:50.887 263652 ERROR neutron.agent.dhcp.agent Traceback (most recent call last): Dec 6 05:15:50 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:15:50.887 263652 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/dhcp/agent.py", line 264, in _call_driver Dec 6 05:15:50 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:15:50.887 263652 ERROR neutron.agent.dhcp.agent rv = getattr(driver, action)(**action_kwargs) Dec 6 05:15:50 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:15:50.887 263652 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 673, in reload_allocations Dec 6 05:15:50 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:15:50.887 263652 ERROR neutron.agent.dhcp.agent self.device_manager.update(self.network, self.interface_name) Dec 6 05:15:50 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:15:50.887 263652 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1899, in update Dec 6 05:15:50 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:15:50.887 263652 ERROR neutron.agent.dhcp.agent self._set_default_route(network, device_name) Dec 6 05:15:50 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:15:50.887 263652 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1610, in _set_default_route Dec 6 05:15:50 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:15:50.887 263652 ERROR neutron.agent.dhcp.agent self._set_default_route_ip_version(network, device_name, Dec 6 05:15:50 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:15:50.887 263652 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1539, in _set_default_route_ip_version Dec 6 05:15:50 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:15:50.887 263652 ERROR 
neutron.agent.dhcp.agent gateway = device.route.get_gateway(ip_version=ip_version) Dec 6 05:15:50 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:15:50.887 263652 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 671, in get_gateway Dec 6 05:15:50 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:15:50.887 263652 ERROR neutron.agent.dhcp.agent routes = self.list_routes(ip_version, scope=scope, table=table) Dec 6 05:15:50 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:15:50.887 263652 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 656, in list_routes Dec 6 05:15:50 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:15:50.887 263652 ERROR neutron.agent.dhcp.agent return list_ip_routes(self._parent.namespace, ip_version, scope=scope, Dec 6 05:15:50 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:15:50.887 263652 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 1611, in list_ip_routes Dec 6 05:15:50 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:15:50.887 263652 ERROR neutron.agent.dhcp.agent routes = privileged.list_ip_routes(namespace, ip_version, device=device, Dec 6 05:15:50 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:15:50.887 263652 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 333, in wrapped_f Dec 6 05:15:50 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:15:50.887 263652 ERROR neutron.agent.dhcp.agent return self(f, *args, **kw) Dec 6 05:15:50 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:15:50.887 263652 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 423, in __call__ Dec 6 05:15:50 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:15:50.887 263652 ERROR neutron.agent.dhcp.agent do = self.iter(retry_state=retry_state) Dec 6 05:15:50 localhost 
neutron_dhcp_agent[263648]: 2025-12-06 10:15:50.887 263652 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 360, in iter Dec 6 05:15:50 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:15:50.887 263652 ERROR neutron.agent.dhcp.agent return fut.result() Dec 6 05:15:50 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:15:50.887 263652 ERROR neutron.agent.dhcp.agent File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 439, in result Dec 6 05:15:50 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:15:50.887 263652 ERROR neutron.agent.dhcp.agent return self.__get_result() Dec 6 05:15:50 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:15:50.887 263652 ERROR neutron.agent.dhcp.agent File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 391, in __get_result Dec 6 05:15:50 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:15:50.887 263652 ERROR neutron.agent.dhcp.agent raise self._exception Dec 6 05:15:50 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:15:50.887 263652 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 426, in __call__ Dec 6 05:15:50 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:15:50.887 263652 ERROR neutron.agent.dhcp.agent result = fn(*args, **kwargs) Dec 6 05:15:50 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:15:50.887 263652 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/oslo_privsep/priv_context.py", line 271, in _wrap Dec 6 05:15:50 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:15:50.887 263652 ERROR neutron.agent.dhcp.agent return self.channel.remote_call(name, args, kwargs, Dec 6 05:15:50 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:15:50.887 263652 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py", line 215, in remote_call Dec 6 05:15:50 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:15:50.887 263652 ERROR 
neutron.agent.dhcp.agent raise exc_type(*result[2]) Dec 6 05:15:50 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:15:50.887 263652 ERROR neutron.agent.dhcp.agent neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tape1277966-bb not found in namespace qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f. Dec 6 05:15:50 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:15:50.887 263652 ERROR neutron.agent.dhcp.agent #033[00m Dec 6 05:15:50 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:15:50.894 263652 INFO neutron.agent.dhcp.agent [-] Synchronizing state#033[00m Dec 6 05:15:51 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:15:51.020 263652 INFO neutron.agent.dhcp.agent [None req-7636e4dd-e0d1-4665-8024-29e6984d21f3 - - - - - -] DHCP configuration for ports {'bbbf4983-178c-402a-8c72-520f40e6ea28'} is completed#033[00m Dec 6 05:15:51 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:15:51.287 263652 INFO neutron.agent.dhcp.agent [None req-90b892c8-0506-4dbe-af3a-d4a70d2244e9 - - - - - -] All active networks have been fetched through RPC.#033[00m Dec 6 05:15:51 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:15:51.289 263652 INFO neutron.agent.dhcp.agent [-] Starting network 8e238f59-5792-4ff4-95af-f993c8e9e14f dhcp configuration#033[00m Dec 6 05:15:51 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:15:51.294 263652 INFO neutron.agent.dhcp.agent [-] Starting network f095d28f-14aa-4e63-9d6e-f230615c3946 dhcp configuration#033[00m Dec 6 05:15:51 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:15:51.295 263652 INFO neutron.agent.dhcp.agent [-] Finished network f095d28f-14aa-4e63-9d6e-f230615c3946 dhcp configuration#033[00m Dec 6 05:15:51 localhost dnsmasq[263859]: exiting on receipt of SIGTERM Dec 6 05:15:51 localhost systemd[1]: libpod-e4e5ac05442955f65040fee10f3aa342d80518f3e4a072b3d79fe4302f513cf7.scope: Deactivated successfully. 
Dec 6 05:15:51 localhost podman[314508]: 2025-12-06 10:15:51.434855307 +0000 UTC m=+0.052757451 container kill e4e5ac05442955f65040fee10f3aa342d80518f3e4a072b3d79fe4302f513cf7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2) Dec 6 05:15:51 localhost podman[314528]: 2025-12-06 10:15:51.504071526 +0000 UTC m=+0.049546553 container died e4e5ac05442955f65040fee10f3aa342d80518f3e4a072b3d79fe4302f513cf7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:15:51 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e4e5ac05442955f65040fee10f3aa342d80518f3e4a072b3d79fe4302f513cf7-userdata-shm.mount: Deactivated successfully. 
Dec 6 05:15:51 localhost nova_compute[282193]: 2025-12-06 10:15:51.554 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:15:51 localhost podman[314528]: 2025-12-06 10:15:51.558460026 +0000 UTC m=+0.103935003 container remove e4e5ac05442955f65040fee10f3aa342d80518f3e4a072b3d79fe4302f513cf7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Dec 6 05:15:51 localhost systemd[1]: libpod-conmon-e4e5ac05442955f65040fee10f3aa342d80518f3e4a072b3d79fe4302f513cf7.scope: Deactivated successfully. Dec 6 05:15:51 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:15:51.609 263652 INFO neutron.agent.linux.ip_lib [-] Device tape1277966-bb cannot be used as it has no MAC address#033[00m Dec 6 05:15:51 localhost nova_compute[282193]: 2025-12-06 10:15:51.633 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:15:51 localhost kernel: device tape1277966-bb entered promiscuous mode Dec 6 05:15:51 localhost ovn_controller[154851]: 2025-12-06T10:15:51Z|00159|binding|INFO|Claiming lport e1277966-bb4e-4c31-a08b-185a772cbf5b for this chassis. 
Dec 6 05:15:51 localhost NetworkManager[5973]: [1765016151.6425] manager: (tape1277966-bb): new Generic device (/org/freedesktop/NetworkManager/Devices/30) Dec 6 05:15:51 localhost nova_compute[282193]: 2025-12-06 10:15:51.643 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:15:51 localhost ovn_controller[154851]: 2025-12-06T10:15:51Z|00160|binding|INFO|e1277966-bb4e-4c31-a08b-185a772cbf5b: Claiming unknown Dec 6 05:15:51 localhost ovn_controller[154851]: 2025-12-06T10:15:51Z|00161|binding|INFO|Setting lport e1277966-bb4e-4c31-a08b-185a772cbf5b ovn-installed in OVS Dec 6 05:15:51 localhost ovn_controller[154851]: 2025-12-06T10:15:51Z|00162|binding|INFO|Setting lport e1277966-bb4e-4c31-a08b-185a772cbf5b up in Southbound Dec 6 05:15:51 localhost nova_compute[282193]: 2025-12-06 10:15:51.653 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:15:51 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:51.655 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.122.172/24', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-8e238f59-5792-4ff4-95af-f993c8e9e14f', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8e238f59-5792-4ff4-95af-f993c8e9e14f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'neutron:revision_number': '4', 'neutron:security_group_ids': 
'', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ae43cb4c-3e04-441f-9177-31d5e45dfad9, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[], logical_port=e1277966-bb4e-4c31-a08b-185a772cbf5b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:15:51 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:51.656 160509 INFO neutron.agent.ovn.metadata.agent [-] Port e1277966-bb4e-4c31-a08b-185a772cbf5b in datapath 8e238f59-5792-4ff4-95af-f993c8e9e14f bound to our chassis#033[00m Dec 6 05:15:51 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:51.660 160509 DEBUG neutron.agent.ovn.metadata.agent [-] Port e972a0a4-c434-4624-85e8-2a72a8f17075 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Dec 6 05:15:51 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:51.660 160509 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8e238f59-5792-4ff4-95af-f993c8e9e14f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 6 05:15:51 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:51.662 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[b373f804-ed1b-4d2b-8151-56e439f10d07]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:15:51 localhost journal[230404]: ethtool ioctl error on tape1277966-bb: No such device Dec 6 05:15:51 localhost nova_compute[282193]: 2025-12-06 10:15:51.679 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:15:51 localhost 
journal[230404]: ethtool ioctl error on tape1277966-bb: No such device Dec 6 05:15:51 localhost nova_compute[282193]: 2025-12-06 10:15:51.685 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:15:51 localhost journal[230404]: ethtool ioctl error on tape1277966-bb: No such device Dec 6 05:15:51 localhost journal[230404]: ethtool ioctl error on tape1277966-bb: No such device Dec 6 05:15:51 localhost journal[230404]: ethtool ioctl error on tape1277966-bb: No such device Dec 6 05:15:51 localhost journal[230404]: ethtool ioctl error on tape1277966-bb: No such device Dec 6 05:15:51 localhost journal[230404]: ethtool ioctl error on tape1277966-bb: No such device Dec 6 05:15:51 localhost journal[230404]: ethtool ioctl error on tape1277966-bb: No such device Dec 6 05:15:51 localhost nova_compute[282193]: 2025-12-06 10:15:51.719 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:15:51 localhost systemd[1]: var-lib-containers-storage-overlay-b9a1d5715f59223f418c72b8a9e9ba377238db2c64e0646a31ed33f6c0f74cb2-merged.mount: Deactivated successfully. 
Dec 6 05:15:51 localhost nova_compute[282193]: 2025-12-06 10:15:51.745 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:15:52 localhost nova_compute[282193]: 2025-12-06 10:15:52.076 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:15:52 localhost podman[314618]: Dec 6 05:15:52 localhost podman[314618]: 2025-12-06 10:15:52.533454689 +0000 UTC m=+0.093770645 container create dd70518caed84fde347015325d6b35481123abc4a39df99f8bc9422c2468322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3) Dec 6 05:15:52 localhost systemd[1]: Started libpod-conmon-dd70518caed84fde347015325d6b35481123abc4a39df99f8bc9422c2468322f.scope. Dec 6 05:15:52 localhost systemd[1]: tmp-crun.ZGIMcA.mount: Deactivated successfully. Dec 6 05:15:52 localhost podman[314618]: 2025-12-06 10:15:52.489888347 +0000 UTC m=+0.050204353 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 6 05:15:52 localhost systemd[1]: Started libcrun container. 
Dec 6 05:15:52 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb03c4cde9b0f29ff04b093db84c97dd4b49f1d4ea32d27ad712695572fe9220/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 6 05:15:52 localhost podman[314618]: 2025-12-06 10:15:52.616664022 +0000 UTC m=+0.176979978 container init dd70518caed84fde347015325d6b35481123abc4a39df99f8bc9422c2468322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125) Dec 6 05:15:52 localhost podman[314618]: 2025-12-06 10:15:52.625702297 +0000 UTC m=+0.186018253 container start dd70518caed84fde347015325d6b35481123abc4a39df99f8bc9422c2468322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2) Dec 6 05:15:52 localhost dnsmasq[314636]: started, version 2.85 cachesize 150 Dec 6 05:15:52 localhost dnsmasq[314636]: DNS service limited to local subnets Dec 6 05:15:52 localhost dnsmasq[314636]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 6 05:15:52 localhost dnsmasq[314636]: warning: no upstream servers configured Dec 
6 05:15:52 localhost dnsmasq-dhcp[314636]: DHCP, static leases only on 192.168.122.0, lease time 1d Dec 6 05:15:52 localhost dnsmasq[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 6 addresses Dec 6 05:15:52 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:15:52 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:15:52 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:15:52.695 263652 INFO neutron.agent.dhcp.agent [None req-02d524e0-d630-4406-b3b5-6b17be357144 - - - - - -] Finished network 8e238f59-5792-4ff4-95af-f993c8e9e14f dhcp configuration#033[00m Dec 6 05:15:52 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:15:52.696 263652 INFO neutron.agent.dhcp.agent [None req-90b892c8-0506-4dbe-af3a-d4a70d2244e9 - - - - - -] Synchronizing state complete#033[00m Dec 6 05:15:52 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:15:52.697 263652 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:15:52Z, description=, device_id=e94155bd-29ed-456c-8250-93318746e895, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=7b541b0c-c8b5-4bf5-a92b-45c17ae95d79, ip_allocation=immediate, mac_address=fa:16:3e:b9:63:28, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T08:43:55Z, description=, dns_domain=, id=8e238f59-5792-4ff4-95af-f993c8e9e14f, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=3d603431c0bb4967bafc7a0aa6108bfe, provider:network_type=flat, provider:physical_network=datacentre, 
provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['be7ff830-a3e2-49cd-a05f-63bc528d6d1c'], tags=[], tenant_id=3d603431c0bb4967bafc7a0aa6108bfe, updated_at=2025-12-06T08:44:01Z, vlan_transparent=None, network_id=8e238f59-5792-4ff4-95af-f993c8e9e14f, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=873, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-06T10:15:52Z on network 8e238f59-5792-4ff4-95af-f993c8e9e14f#033[00m Dec 6 05:15:52 localhost dnsmasq[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 7 addresses Dec 6 05:15:52 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:15:52 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:15:52 localhost podman[314655]: 2025-12-06 10:15:52.87570944 +0000 UTC m=+0.046031767 container kill dd70518caed84fde347015325d6b35481123abc4a39df99f8bc9422c2468322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true) Dec 6 05:15:53 localhost nova_compute[282193]: 2025-12-06 10:15:53.153 282197 DEBUG nova.compute.manager [req-fa4866e3-05c4-48b1-ae31-89d7c470a335 req-840f4c4c-ea6c-44aa-8e15-f08277ba9082 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] 
Received event network-vif-plugged-feb6a13d-305a-4541-a50e-4988833ecf82 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m Dec 6 05:15:53 localhost nova_compute[282193]: 2025-12-06 10:15:53.153 282197 DEBUG oslo_concurrency.lockutils [req-fa4866e3-05c4-48b1-ae31-89d7c470a335 req-840f4c4c-ea6c-44aa-8e15-f08277ba9082 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Acquiring lock "ed40901b-0bfc-426a-bf70-48d87ce95aa6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 05:15:53 localhost nova_compute[282193]: 2025-12-06 10:15:53.153 282197 DEBUG oslo_concurrency.lockutils [req-fa4866e3-05c4-48b1-ae31-89d7c470a335 req-840f4c4c-ea6c-44aa-8e15-f08277ba9082 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Lock "ed40901b-0bfc-426a-bf70-48d87ce95aa6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 05:15:53 localhost nova_compute[282193]: 2025-12-06 10:15:53.155 282197 DEBUG oslo_concurrency.lockutils [req-fa4866e3-05c4-48b1-ae31-89d7c470a335 req-840f4c4c-ea6c-44aa-8e15-f08277ba9082 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Lock "ed40901b-0bfc-426a-bf70-48d87ce95aa6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 05:15:53 localhost nova_compute[282193]: 2025-12-06 10:15:53.155 282197 DEBUG nova.compute.manager [req-fa4866e3-05c4-48b1-ae31-89d7c470a335 req-840f4c4c-ea6c-44aa-8e15-f08277ba9082 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: 
ed40901b-0bfc-426a-bf70-48d87ce95aa6] No waiting events found dispatching network-vif-plugged-feb6a13d-305a-4541-a50e-4988833ecf82 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m Dec 6 05:15:53 localhost nova_compute[282193]: 2025-12-06 10:15:53.155 282197 WARNING nova.compute.manager [req-fa4866e3-05c4-48b1-ae31-89d7c470a335 req-840f4c4c-ea6c-44aa-8e15-f08277ba9082 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Received unexpected event network-vif-plugged-feb6a13d-305a-4541-a50e-4988833ecf82 for instance with vm_state active and task_state deleting.#033[00m Dec 6 05:15:53 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:15:53.183 263652 INFO neutron.agent.dhcp.agent [None req-8fe7eaf0-b599-4ad5-96d6-978781532ab9 - - - - - -] DHCP configuration for ports {'49b140a4-d9f8-482f-b1ba-2b28b09c2e14', '17d01ee3-d0a0-42f3-8c73-1578e34c0b4f', '3f202222-16a8-4488-bcc9-0691af80a9ba', 'bbbf4983-178c-402a-8c72-520f40e6ea28', '75f7252a-6b17-46d4-b761-60a0a33ef03b', '5f9b5a36-6f9d-4432-a50f-3ba7cd01f2c4', '03184373-6102-4573-83e2-c438dfc086ce', 'e1277966-bb4e-4c31-a08b-185a772cbf5b', '55ddb56c-afe2-4248-b1cd-f45aef0a3725', '6e17a10f-dbbc-42b2-aeeb-b43e917b0e3c', '8fd47356-f471-4742-820f-2e8ea70c8e0e'} is completed#033[00m Dec 6 05:15:53 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:15:53 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:15:53.355 263652 INFO neutron.agent.dhcp.agent [None req-c8b16c74-a807-4bc4-a43d-fa77b4f76723 - - - - - -] DHCP configuration for ports {'7b541b0c-c8b5-4bf5-a92b-45c17ae95d79'} is completed#033[00m Dec 6 05:15:53 localhost nova_compute[282193]: 2025-12-06 10:15:53.776 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:15:53 localhost nova_compute[282193]: 2025-12-06 10:15:53.844 282197 DEBUG nova.network.neutron [-] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 6 05:15:53 localhost nova_compute[282193]: 2025-12-06 10:15:53.866 282197 INFO nova.compute.manager [-] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Took 4.39 seconds to deallocate network for instance.#033[00m Dec 6 05:15:53 localhost podman[241090]: time="2025-12-06T10:15:53Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 6 05:15:53 localhost podman[241090]: @ - - [06/Dec/2025:10:15:53 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 159751 "" "Go-http-client/1.1" Dec 6 05:15:53 localhost podman[241090]: @ - - [06/Dec/2025:10:15:53 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 20213 "" "Go-http-client/1.1" Dec 6 05:15:53 localhost nova_compute[282193]: 2025-12-06 10:15:53.965 282197 DEBUG oslo_concurrency.lockutils [None req-e61abce1-6552-4ff7-89fe-964ccbdef70d b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 05:15:53 localhost nova_compute[282193]: 2025-12-06 10:15:53.967 282197 DEBUG oslo_concurrency.lockutils [None req-e61abce1-6552-4ff7-89fe-964ccbdef70d b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.002s inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 05:15:53 localhost nova_compute[282193]: 2025-12-06 10:15:53.970 282197 DEBUG oslo_concurrency.lockutils [None req-e61abce1-6552-4ff7-89fe-964ccbdef70d b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 05:15:54 localhost nova_compute[282193]: 2025-12-06 10:15:54.037 282197 INFO nova.scheduler.client.report [None req-e61abce1-6552-4ff7-89fe-964ccbdef70d b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] Deleted allocations for instance ed40901b-0bfc-426a-bf70-48d87ce95aa6#033[00m Dec 6 05:15:54 localhost nova_compute[282193]: 2025-12-06 10:15:54.156 282197 DEBUG oslo_concurrency.lockutils [None req-e61abce1-6552-4ff7-89fe-964ccbdef70d b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] Lock "ed40901b-0bfc-426a-bf70-48d87ce95aa6" "released" by "nova.compute.manager.ComputeManager.terminate_instance..do_terminate_instance" :: held 5.659s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 05:15:54 localhost podman[314760]: 2025-12-06 10:15:54.688229177 +0000 UTC m=+0.042163140 container kill dd70518caed84fde347015325d6b35481123abc4a39df99f8bc9422c2468322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2) Dec 6 05:15:54 
localhost dnsmasq[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 7 addresses Dec 6 05:15:54 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:15:54 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:15:55 localhost nova_compute[282193]: 2025-12-06 10:15:55.025 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:15:55 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 6 05:15:55 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' Dec 6 05:15:55 localhost dnsmasq[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 7 addresses Dec 6 05:15:55 localhost podman[314816]: 2025-12-06 10:15:55.657526447 +0000 UTC m=+0.041360335 container kill dd70518caed84fde347015325d6b35481123abc4a39df99f8bc9422c2468322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Dec 6 05:15:55 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:15:55 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:15:56 localhost podman[314856]: 2025-12-06 10:15:56.266725145 +0000 UTC m=+0.051594216 container kill 
8a8b7a6a9724101bff1398ade8c854164d1816271ca6c4f86a12732f70229362 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-deb7774c-e96b-4e7f-88d7-ed9d740915f4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:15:56 localhost dnsmasq[312566]: read /var/lib/neutron/dhcp/deb7774c-e96b-4e7f-88d7-ed9d740915f4/addn_hosts - 0 addresses Dec 6 05:15:56 localhost dnsmasq-dhcp[312566]: read /var/lib/neutron/dhcp/deb7774c-e96b-4e7f-88d7-ed9d740915f4/host Dec 6 05:15:56 localhost dnsmasq-dhcp[312566]: read /var/lib/neutron/dhcp/deb7774c-e96b-4e7f-88d7-ed9d740915f4/opts Dec 6 05:15:56 localhost neutron_sriov_agent[256690]: 2025-12-06 10:15:56.496 2 INFO neutron.agent.securitygroups_rpc [None req-97ddf7c5-61a2-4ea7-a37a-afceb032745e 13b250438f8e49ee9d0d9f0fe4791c05 a22ced63e346459ab637424ae7833af7 - - default default] Security group member updated ['55c805cd-9bbe-4434-83af-206ee080e6b9']#033[00m Dec 6 05:15:56 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:15:56.549 263652 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:15:56 localhost nova_compute[282193]: 2025-12-06 10:15:56.668 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:15:56 localhost ovn_controller[154851]: 2025-12-06T10:15:56Z|00163|binding|INFO|Releasing lport ff588d77-fd65-43a9-bd18-9402d0aef61a from this chassis (sb_readonly=0) Dec 6 05:15:56 localhost kernel: device tapff588d77-fd left promiscuous mode Dec 6 05:15:56 localhost ovn_controller[154851]: 2025-12-06T10:15:56Z|00164|binding|INFO|Setting lport 
ff588d77-fd65-43a9-bd18-9402d0aef61a down in Southbound Dec 6 05:15:56 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:15:56.674 263652 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:15:56Z, description=, device_id=e94155bd-29ed-456c-8250-93318746e895, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=e452c5fc-e3cc-46cd-9292-74c6f34d2647, ip_allocation=immediate, mac_address=fa:16:3e:82:3c:c4, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:15:47Z, description=, dns_domain=, id=7bcb9995-c8be-445e-890a-c8635f090fa6, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-SecurityGroupsNegativeTestJSON-219944885-network, port_security_enabled=True, project_id=44e6bb9426fc43a084f983db0bd7f0ad, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=14244, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=846, status=ACTIVE, subnets=['f922be8a-8295-4360-8d2b-6f7f6ff5fc6d'], tags=[], tenant_id=44e6bb9426fc43a084f983db0bd7f0ad, updated_at=2025-12-06T10:15:48Z, vlan_transparent=None, network_id=7bcb9995-c8be-445e-890a-c8635f090fa6, port_security_enabled=False, project_id=44e6bb9426fc43a084f983db0bd7f0ad, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=883, status=DOWN, tags=[], tenant_id=44e6bb9426fc43a084f983db0bd7f0ad, updated_at=2025-12-06T10:15:56Z on network 7bcb9995-c8be-445e-890a-c8635f090fa6#033[00m Dec 6 05:15:56 localhost ovn_controller[154851]: 2025-12-06T10:15:56Z|00165|binding|INFO|Releasing lport 4fb81ffd-e198-4628-9bd0-0c0f0c89c33a from 
this chassis (sb_readonly=0) Dec 6 05:15:56 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:56.686 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-deb7774c-e96b-4e7f-88d7-ed9d740915f4', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-deb7774c-e96b-4e7f-88d7-ed9d740915f4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'da995d8e002548889747013c0eeca935', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005548789.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9cc41455-e125-49b5-8c35-a9f7e38c8e70, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=ff588d77-fd65-43a9-bd18-9402d0aef61a) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:15:56 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:56.687 160509 INFO neutron.agent.ovn.metadata.agent [-] Port ff588d77-fd65-43a9-bd18-9402d0aef61a in datapath deb7774c-e96b-4e7f-88d7-ed9d740915f4 unbound from our chassis#033[00m Dec 6 05:15:56 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:56.691 160509 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network deb7774c-e96b-4e7f-88d7-ed9d740915f4, tearing the namespace 
down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 6 05:15:56 localhost ovn_metadata_agent[160504]: 2025-12-06 10:15:56.691 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[c9c02ccf-3b2f-47f1-899c-ca37c61b4ee5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:15:56 localhost nova_compute[282193]: 2025-12-06 10:15:56.692 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:15:56 localhost nova_compute[282193]: 2025-12-06 10:15:56.700 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:15:56 localhost nova_compute[282193]: 2025-12-06 10:15:56.731 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:15:56 localhost dnsmasq[314461]: read /var/lib/neutron/dhcp/7bcb9995-c8be-445e-890a-c8635f090fa6/addn_hosts - 1 addresses Dec 6 05:15:56 localhost dnsmasq-dhcp[314461]: read /var/lib/neutron/dhcp/7bcb9995-c8be-445e-890a-c8635f090fa6/host Dec 6 05:15:56 localhost podman[314896]: 2025-12-06 10:15:56.85505378 +0000 UTC m=+0.051653578 container kill 7deecf5c59b0f04bb3525d11262bd98a6e888a03a84db6c0d01f340a61932760 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7bcb9995-c8be-445e-890a-c8635f090fa6, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:15:56 localhost dnsmasq-dhcp[314461]: read 
/var/lib/neutron/dhcp/7bcb9995-c8be-445e-890a-c8635f090fa6/opts Dec 6 05:15:56 localhost neutron_sriov_agent[256690]: 2025-12-06 10:15:56.991 2 INFO neutron.agent.securitygroups_rpc [None req-e28bc6dc-5f9c-4334-81fe-cd06724fee5d b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] Security group member updated ['4c82b56e-0fc5-4c7f-8922-ceb8236815fd']#033[00m Dec 6 05:15:57 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:15:57.024 263652 INFO neutron.agent.dhcp.agent [None req-d2591ab1-8127-4135-bbd3-4b064b18b219 - - - - - -] DHCP configuration for ports {'e452c5fc-e3cc-46cd-9292-74c6f34d2647'} is completed#033[00m Dec 6 05:15:57 localhost nova_compute[282193]: 2025-12-06 10:15:57.078 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:15:58 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' Dec 6 05:15:58 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:15:58 localhost sshd[314916]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:15:58 localhost nova_compute[282193]: 2025-12-06 10:15:58.779 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:15:59 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:15:59.787 263652 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:15:56Z, description=, device_id=e94155bd-29ed-456c-8250-93318746e895, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], 
id=e452c5fc-e3cc-46cd-9292-74c6f34d2647, ip_allocation=immediate, mac_address=fa:16:3e:82:3c:c4, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:15:47Z, description=, dns_domain=, id=7bcb9995-c8be-445e-890a-c8635f090fa6, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-SecurityGroupsNegativeTestJSON-219944885-network, port_security_enabled=True, project_id=44e6bb9426fc43a084f983db0bd7f0ad, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=14244, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=846, status=ACTIVE, subnets=['f922be8a-8295-4360-8d2b-6f7f6ff5fc6d'], tags=[], tenant_id=44e6bb9426fc43a084f983db0bd7f0ad, updated_at=2025-12-06T10:15:48Z, vlan_transparent=None, network_id=7bcb9995-c8be-445e-890a-c8635f090fa6, port_security_enabled=False, project_id=44e6bb9426fc43a084f983db0bd7f0ad, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=883, status=DOWN, tags=[], tenant_id=44e6bb9426fc43a084f983db0bd7f0ad, updated_at=2025-12-06T10:15:56Z on network 7bcb9995-c8be-445e-890a-c8635f090fa6#033[00m Dec 6 05:15:59 localhost dnsmasq[314461]: read /var/lib/neutron/dhcp/7bcb9995-c8be-445e-890a-c8635f090fa6/addn_hosts - 1 addresses Dec 6 05:15:59 localhost dnsmasq-dhcp[314461]: read /var/lib/neutron/dhcp/7bcb9995-c8be-445e-890a-c8635f090fa6/host Dec 6 05:15:59 localhost dnsmasq-dhcp[314461]: read /var/lib/neutron/dhcp/7bcb9995-c8be-445e-890a-c8635f090fa6/opts Dec 6 05:15:59 localhost podman[314935]: 2025-12-06 10:15:59.991923525 +0000 UTC m=+0.064448335 container kill 7deecf5c59b0f04bb3525d11262bd98a6e888a03a84db6c0d01f340a61932760 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7bcb9995-c8be-445e-890a-c8635f090fa6, org.label-schema.license=GPLv2, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:15:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941. Dec 6 05:16:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999. Dec 6 05:16:00 localhost podman[314947]: 2025-12-06 10:16:00.113101321 +0000 UTC m=+0.104759358 container health_status b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Dec 6 05:16:00 localhost podman[314947]: 2025-12-06 10:16:00.120163785 +0000 UTC m=+0.111821862 container exec_died b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, 
config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 6 05:16:00 localhost systemd[1]: b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941.service: Deactivated successfully. Dec 6 05:16:00 localhost podman[314969]: 2025-12-06 10:16:00.20207375 +0000 UTC m=+0.072943084 container health_status 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true) Dec 6 05:16:00 localhost podman[314969]: 2025-12-06 10:16:00.206846285 +0000 UTC m=+0.077715609 container exec_died 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 
'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:16:00 localhost systemd[1]: 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.service: Deactivated successfully. Dec 6 05:16:00 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:16:00.340 263652 INFO neutron.agent.dhcp.agent [None req-9abaec00-89f9-49f2-9312-d0303475dcd3 - - - - - -] DHCP configuration for ports {'e452c5fc-e3cc-46cd-9292-74c6f34d2647'} is completed#033[00m Dec 6 05:16:00 localhost neutron_sriov_agent[256690]: 2025-12-06 10:16:00.364 2 INFO neutron.agent.securitygroups_rpc [None req-96bdfd29-c14f-4ef8-b3b0-32d637d65e93 13b250438f8e49ee9d0d9f0fe4791c05 a22ced63e346459ab637424ae7833af7 - - default default] Security group member updated ['55c805cd-9bbe-4434-83af-206ee080e6b9']#033[00m Dec 6 05:16:01 localhost neutron_sriov_agent[256690]: 2025-12-06 10:16:01.607 2 INFO neutron.agent.securitygroups_rpc [None req-7d84f32e-96fa-49ab-97a7-a8cf557247b9 b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] Security group member updated ['4c82b56e-0fc5-4c7f-8922-ceb8236815fd']#033[00m Dec 6 05:16:02 localhost nova_compute[282193]: 2025-12-06 10:16:02.082 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:16:02 localhost 
neutron_dhcp_agent[263648]: 2025-12-06 10:16:02.787 263652 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.dhcp_release_cmd', '--privsep_sock_path', '/tmp/tmpcr8m7m0g/privsep.sock']#033[00m Dec 6 05:16:03 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:16:03 localhost ovn_controller[154851]: 2025-12-06T10:16:03Z|00166|binding|INFO|Releasing lport 4fb81ffd-e198-4628-9bd0-0c0f0c89c33a from this chassis (sb_readonly=0) Dec 6 05:16:03 localhost nova_compute[282193]: 2025-12-06 10:16:03.300 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:16:03 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:16:03.436 263652 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m Dec 6 05:16:03 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:16:03.315 314999 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m Dec 6 05:16:03 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:16:03.318 314999 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m Dec 6 05:16:03 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:16:03.320 314999 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m Dec 6 05:16:03 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:16:03.320 314999 INFO oslo.privsep.daemon [-] privsep daemon running as pid 314999#033[00m Dec 6 05:16:03 localhost dnsmasq-dhcp[314636]: DHCPRELEASE(tape1277966-bb) 192.168.122.197 fa:16:3e:f7:d6:18 Dec 6 05:16:03 localhost nova_compute[282193]: 2025-12-06 
10:16:03.748 282197 DEBUG nova.virt.driver [-] Emitting event Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Dec 6 05:16:03 localhost nova_compute[282193]: 2025-12-06 10:16:03.748 282197 INFO nova.compute.manager [-] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] VM Stopped (Lifecycle Event)#033[00m Dec 6 05:16:03 localhost nova_compute[282193]: 2025-12-06 10:16:03.776 282197 DEBUG nova.compute.manager [None req-212ba95c-4e57-49cd-a9fd-9640f0e0cec8 - - - - - -] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Dec 6 05:16:03 localhost nova_compute[282193]: 2025-12-06 10:16:03.811 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:16:04 localhost dnsmasq[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 6 addresses Dec 6 05:16:04 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:16:04 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:16:04 localhost podman[315019]: 2025-12-06 10:16:04.406665392 +0000 UTC m=+0.070602392 container kill dd70518caed84fde347015325d6b35481123abc4a39df99f8bc9422c2468322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Dec 6 05:16:04 localhost dnsmasq-dhcp[314636]: DHCPRELEASE(tape1277966-bb) 192.168.122.189 fa:16:3e:ad:2b:28 
Dec 6 05:16:05 localhost dnsmasq[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 5 addresses Dec 6 05:16:05 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:16:05 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:16:05 localhost podman[315057]: 2025-12-06 10:16:05.100461396 +0000 UTC m=+0.057289899 container kill dd70518caed84fde347015325d6b35481123abc4a39df99f8bc9422c2468322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true) Dec 6 05:16:05 localhost dnsmasq[312566]: exiting on receipt of SIGTERM Dec 6 05:16:05 localhost podman[315093]: 2025-12-06 10:16:05.327975366 +0000 UTC m=+0.039868190 container kill 8a8b7a6a9724101bff1398ade8c854164d1816271ca6c4f86a12732f70229362 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-deb7774c-e96b-4e7f-88d7-ed9d740915f4, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:16:05 localhost systemd[1]: libpod-8a8b7a6a9724101bff1398ade8c854164d1816271ca6c4f86a12732f70229362.scope: Deactivated successfully. 
Dec 6 05:16:05 localhost podman[315107]: 2025-12-06 10:16:05.371483046 +0000 UTC m=+0.035915300 container died 8a8b7a6a9724101bff1398ade8c854164d1816271ca6c4f86a12732f70229362 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-deb7774c-e96b-4e7f-88d7-ed9d740915f4, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2) Dec 6 05:16:05 localhost podman[315107]: 2025-12-06 10:16:05.401257309 +0000 UTC m=+0.065689513 container cleanup 8a8b7a6a9724101bff1398ade8c854164d1816271ca6c4f86a12732f70229362 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-deb7774c-e96b-4e7f-88d7-ed9d740915f4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS) Dec 6 05:16:05 localhost systemd[1]: var-lib-containers-storage-overlay-06549e5dbf4ea1c819a27ad89b0090c0fd564fb1fbcc2e1eabbb66d37085811c-merged.mount: Deactivated successfully. Dec 6 05:16:05 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8a8b7a6a9724101bff1398ade8c854164d1816271ca6c4f86a12732f70229362-userdata-shm.mount: Deactivated successfully. Dec 6 05:16:05 localhost systemd[1]: libpod-conmon-8a8b7a6a9724101bff1398ade8c854164d1816271ca6c4f86a12732f70229362.scope: Deactivated successfully. 
Dec 6 05:16:05 localhost podman[315109]: 2025-12-06 10:16:05.455429842 +0000 UTC m=+0.114009569 container remove 8a8b7a6a9724101bff1398ade8c854164d1816271ca6c4f86a12732f70229362 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-deb7774c-e96b-4e7f-88d7-ed9d740915f4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:16:05 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:16:05.549 263652 INFO neutron.agent.dhcp.agent [None req-0128c940-9b2a-4d6e-83de-f01e7db06316 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:16:05 localhost systemd[1]: run-netns-qdhcp\x2ddeb7774c\x2de96b\x2d4e7f\x2d88d7\x2ded9d740915f4.mount: Deactivated successfully. Dec 6 05:16:05 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:16:05.773 263652 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:16:05 localhost neutron_sriov_agent[256690]: 2025-12-06 10:16:05.822 2 INFO neutron.agent.securitygroups_rpc [None req-be960e3b-e920-4ec4-8e87-e409a0af324a 13b250438f8e49ee9d0d9f0fe4791c05 a22ced63e346459ab637424ae7833af7 - - default default] Security group member updated ['55c805cd-9bbe-4434-83af-206ee080e6b9']#033[00m Dec 6 05:16:05 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:16:05.874 263652 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:16:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e. 
Dec 6 05:16:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094. Dec 6 05:16:06 localhost systemd[1]: tmp-crun.qfFFGU.mount: Deactivated successfully. Dec 6 05:16:06 localhost podman[315132]: 2025-12-06 10:16:06.93004223 +0000 UTC m=+0.091053614 container health_status 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vendor=Red Hat, Inc., version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, vcs-type=git, architecture=x86_64, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, name=ubi9-minimal, io.openshift.expose-services=, distribution-scope=public) Dec 6 05:16:06 localhost podman[315132]: 2025-12-06 10:16:06.971223968 +0000 UTC m=+0.132235362 container exec_died 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 
'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, config_id=edpm, container_name=openstack_network_exporter, architecture=x86_64, io.buildah.version=1.33.7, io.openshift.expose-services=, version=9.6, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., vcs-type=git, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly.) Dec 6 05:16:06 localhost systemd[1]: 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e.service: Deactivated successfully. Dec 6 05:16:06 localhost podman[315133]: 2025-12-06 10:16:06.989264976 +0000 UTC m=+0.146812274 container health_status bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, 
org.label-schema.license=GPLv2, org.label-schema.build-date=20251125) Dec 6 05:16:07 localhost podman[315133]: 2025-12-06 10:16:07.028272249 +0000 UTC m=+0.185819607 container exec_died bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true) Dec 6 05:16:07 localhost systemd[1]: 
bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094.service: Deactivated successfully. Dec 6 05:16:07 localhost nova_compute[282193]: 2025-12-06 10:16:07.085 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:16:08 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:16:08 localhost dnsmasq[314461]: read /var/lib/neutron/dhcp/7bcb9995-c8be-445e-890a-c8635f090fa6/addn_hosts - 0 addresses Dec 6 05:16:08 localhost dnsmasq-dhcp[314461]: read /var/lib/neutron/dhcp/7bcb9995-c8be-445e-890a-c8635f090fa6/host Dec 6 05:16:08 localhost podman[315188]: 2025-12-06 10:16:08.566881997 +0000 UTC m=+0.056187635 container kill 7deecf5c59b0f04bb3525d11262bd98a6e888a03a84db6c0d01f340a61932760 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7bcb9995-c8be-445e-890a-c8635f090fa6, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0) Dec 6 05:16:08 localhost dnsmasq-dhcp[314461]: read /var/lib/neutron/dhcp/7bcb9995-c8be-445e-890a-c8635f090fa6/opts Dec 6 05:16:08 localhost nova_compute[282193]: 2025-12-06 10:16:08.812 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:16:08 localhost nova_compute[282193]: 2025-12-06 10:16:08.991 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:16:08 localhost ovn_controller[154851]: 
2025-12-06T10:16:08Z|00167|binding|INFO|Releasing lport 1d53082e-11ae-49e3-9448-7b2e1b2ec267 from this chassis (sb_readonly=0) Dec 6 05:16:08 localhost ovn_controller[154851]: 2025-12-06T10:16:08Z|00168|binding|INFO|Setting lport 1d53082e-11ae-49e3-9448-7b2e1b2ec267 down in Southbound Dec 6 05:16:08 localhost kernel: device tap1d53082e-11 left promiscuous mode Dec 6 05:16:09 localhost ovn_metadata_agent[160504]: 2025-12-06 10:16:09.007 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-7bcb9995-c8be-445e-890a-c8635f090fa6', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7bcb9995-c8be-445e-890a-c8635f090fa6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '44e6bb9426fc43a084f983db0bd7f0ad', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005548789.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6349ccef-9387-4e01-b0b2-fbf339bbd83f, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=1d53082e-11ae-49e3-9448-7b2e1b2ec267) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:16:09 localhost ovn_metadata_agent[160504]: 2025-12-06 10:16:09.009 160509 INFO neutron.agent.ovn.metadata.agent [-] Port 
1d53082e-11ae-49e3-9448-7b2e1b2ec267 in datapath 7bcb9995-c8be-445e-890a-c8635f090fa6 unbound from our chassis#033[00m Dec 6 05:16:09 localhost ovn_metadata_agent[160504]: 2025-12-06 10:16:09.014 160509 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7bcb9995-c8be-445e-890a-c8635f090fa6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 6 05:16:09 localhost ovn_metadata_agent[160504]: 2025-12-06 10:16:09.015 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[416b626d-1018-4208-af77-7034a0749a69]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:16:09 localhost nova_compute[282193]: 2025-12-06 10:16:09.024 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:16:10 localhost dnsmasq-dhcp[314636]: DHCPRELEASE(tape1277966-bb) 192.168.122.192 fa:16:3e:80:e4:79 Dec 6 05:16:10 localhost dnsmasq[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 4 addresses Dec 6 05:16:10 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:16:10 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:16:10 localhost podman[315230]: 2025-12-06 10:16:10.918826296 +0000 UTC m=+0.065735845 container kill dd70518caed84fde347015325d6b35481123abc4a39df99f8bc9422c2468322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, 
org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:16:10 localhost ovn_controller[154851]: 2025-12-06T10:16:10Z|00169|binding|INFO|Releasing lport 4fb81ffd-e198-4628-9bd0-0c0f0c89c33a from this chassis (sb_readonly=0) Dec 6 05:16:11 localhost nova_compute[282193]: 2025-12-06 10:16:11.013 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:16:11 localhost ovn_metadata_agent[160504]: 2025-12-06 10:16:11.325 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:6c:02', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:a8:2f:0c:cb:a1'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:16:11 localhost nova_compute[282193]: 2025-12-06 10:16:11.326 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:16:11 localhost ovn_metadata_agent[160504]: 2025-12-06 10:16:11.327 160509 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Dec 6 05:16:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6. 
Dec 6 05:16:11 localhost podman[315250]: 2025-12-06 10:16:11.929540602 +0000 UTC m=+0.086552925 container health_status 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd) Dec 6 05:16:11 localhost podman[315250]: 2025-12-06 10:16:11.938899697 +0000 UTC m=+0.095911970 container exec_died 
44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Dec 6 05:16:11 localhost systemd[1]: 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6.service: Deactivated successfully. 
Dec 6 05:16:12 localhost nova_compute[282193]: 2025-12-06 10:16:12.110 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:16:12 localhost dnsmasq[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 3 addresses Dec 6 05:16:12 localhost podman[315286]: 2025-12-06 10:16:12.427928949 +0000 UTC m=+0.046383767 container kill dd70518caed84fde347015325d6b35481123abc4a39df99f8bc9422c2468322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:16:12 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:16:12 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:16:12 localhost ovn_controller[154851]: 2025-12-06T10:16:12Z|00170|binding|INFO|Releasing lport 4fb81ffd-e198-4628-9bd0-0c0f0c89c33a from this chassis (sb_readonly=0) Dec 6 05:16:12 localhost nova_compute[282193]: 2025-12-06 10:16:12.613 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:16:12 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:16:12.881 263652 INFO neutron.agent.linux.ip_lib [None req-993d3ab0-a589-4781-996c-7ffeb36d7b9b - - - - - -] Device tapda02d3d2-69 cannot be used as it has no MAC address#033[00m Dec 6 05:16:12 localhost nova_compute[282193]: 2025-12-06 10:16:12.904 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 
20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:16:12 localhost kernel: device tapda02d3d2-69 entered promiscuous mode Dec 6 05:16:12 localhost NetworkManager[5973]: [1765016172.9149] manager: (tapda02d3d2-69): new Generic device (/org/freedesktop/NetworkManager/Devices/31) Dec 6 05:16:12 localhost ovn_controller[154851]: 2025-12-06T10:16:12Z|00171|binding|INFO|Claiming lport da02d3d2-692f-455e-be00-1cf20526dba9 for this chassis. Dec 6 05:16:12 localhost ovn_controller[154851]: 2025-12-06T10:16:12Z|00172|binding|INFO|da02d3d2-692f-455e-be00-1cf20526dba9: Claiming unknown Dec 6 05:16:12 localhost nova_compute[282193]: 2025-12-06 10:16:12.916 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:16:12 localhost systemd-udevd[315316]: Network interface NamePolicy= disabled on kernel command line. Dec 6 05:16:12 localhost ovn_metadata_agent[160504]: 2025-12-06 10:16:12.927 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-a1e70fff-f7c1-4a44-8853-ff024a9f780b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a1e70fff-f7c1-4a44-8853-ff024a9f780b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7435808e897043e08b27fd5dcaabc003', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 
'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ccc8a57e-8463-42c0-9469-317af07ded18, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=da02d3d2-692f-455e-be00-1cf20526dba9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:16:12 localhost ovn_metadata_agent[160504]: 2025-12-06 10:16:12.928 160509 INFO neutron.agent.ovn.metadata.agent [-] Port da02d3d2-692f-455e-be00-1cf20526dba9 in datapath a1e70fff-f7c1-4a44-8853-ff024a9f780b bound to our chassis#033[00m Dec 6 05:16:12 localhost ovn_metadata_agent[160504]: 2025-12-06 10:16:12.928 160509 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network a1e70fff-f7c1-4a44-8853-ff024a9f780b or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 6 05:16:12 localhost ovn_metadata_agent[160504]: 2025-12-06 10:16:12.929 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[9b982680-a8b4-44cf-89e4-254f5b44f0a2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:16:12 localhost journal[230404]: ethtool ioctl error on tapda02d3d2-69: No such device Dec 6 05:16:12 localhost ovn_controller[154851]: 2025-12-06T10:16:12Z|00173|binding|INFO|Setting lport da02d3d2-692f-455e-be00-1cf20526dba9 ovn-installed in OVS Dec 6 05:16:12 localhost ovn_controller[154851]: 2025-12-06T10:16:12Z|00174|binding|INFO|Setting lport da02d3d2-692f-455e-be00-1cf20526dba9 up in Southbound Dec 6 05:16:12 localhost nova_compute[282193]: 2025-12-06 10:16:12.957 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:16:12 localhost journal[230404]: ethtool ioctl error on tapda02d3d2-69: No such device Dec 6 05:16:12 
localhost journal[230404]: ethtool ioctl error on tapda02d3d2-69: No such device Dec 6 05:16:12 localhost journal[230404]: ethtool ioctl error on tapda02d3d2-69: No such device Dec 6 05:16:12 localhost journal[230404]: ethtool ioctl error on tapda02d3d2-69: No such device Dec 6 05:16:12 localhost journal[230404]: ethtool ioctl error on tapda02d3d2-69: No such device Dec 6 05:16:12 localhost journal[230404]: ethtool ioctl error on tapda02d3d2-69: No such device Dec 6 05:16:12 localhost journal[230404]: ethtool ioctl error on tapda02d3d2-69: No such device Dec 6 05:16:12 localhost nova_compute[282193]: 2025-12-06 10:16:12.997 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:16:13 localhost nova_compute[282193]: 2025-12-06 10:16:13.024 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:16:13 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:16:13 localhost nova_compute[282193]: 2025-12-06 10:16:13.815 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:16:13 localhost podman[315387]: Dec 6 05:16:13 localhost podman[315387]: 2025-12-06 10:16:13.922937705 +0000 UTC m=+0.071044306 container create ab9a2c7f430a211bca8b7e65add9f1fb30eab1d81430fd08f590978abe7e2bd2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a1e70fff-f7c1-4a44-8853-ff024a9f780b, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Dec 6 05:16:13 localhost systemd[1]: Started libpod-conmon-ab9a2c7f430a211bca8b7e65add9f1fb30eab1d81430fd08f590978abe7e2bd2.scope. Dec 6 05:16:13 localhost systemd[1]: Started libcrun container. Dec 6 05:16:13 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10ea93285ee6bf33fabd8991e6c84ec56f594aa69c471e85c2d0afdfeabfc6e8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 6 05:16:13 localhost podman[315387]: 2025-12-06 10:16:13.89144515 +0000 UTC m=+0.039551781 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 6 05:16:13 localhost podman[315387]: 2025-12-06 10:16:13.99926759 +0000 UTC m=+0.147374191 container init ab9a2c7f430a211bca8b7e65add9f1fb30eab1d81430fd08f590978abe7e2bd2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a1e70fff-f7c1-4a44-8853-ff024a9f780b, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Dec 6 05:16:14 localhost podman[315387]: 2025-12-06 10:16:14.008932954 +0000 UTC m=+0.157039555 container start ab9a2c7f430a211bca8b7e65add9f1fb30eab1d81430fd08f590978abe7e2bd2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a1e70fff-f7c1-4a44-8853-ff024a9f780b, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, 
tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:16:14 localhost dnsmasq[315430]: started, version 2.85 cachesize 150 Dec 6 05:16:14 localhost dnsmasq[315430]: DNS service limited to local subnets Dec 6 05:16:14 localhost dnsmasq[315430]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 6 05:16:14 localhost dnsmasq[315430]: warning: no upstream servers configured Dec 6 05:16:14 localhost dnsmasq-dhcp[315430]: DHCP, static leases only on 10.100.0.0, lease time 1d Dec 6 05:16:14 localhost dnsmasq[315430]: read /var/lib/neutron/dhcp/a1e70fff-f7c1-4a44-8853-ff024a9f780b/addn_hosts - 0 addresses Dec 6 05:16:14 localhost dnsmasq-dhcp[315430]: read /var/lib/neutron/dhcp/a1e70fff-f7c1-4a44-8853-ff024a9f780b/host Dec 6 05:16:14 localhost dnsmasq-dhcp[315430]: read /var/lib/neutron/dhcp/a1e70fff-f7c1-4a44-8853-ff024a9f780b/opts Dec 6 05:16:14 localhost dnsmasq[314461]: exiting on receipt of SIGTERM Dec 6 05:16:14 localhost podman[315422]: 2025-12-06 10:16:14.062624212 +0000 UTC m=+0.063289191 container kill 7deecf5c59b0f04bb3525d11262bd98a6e888a03a84db6c0d01f340a61932760 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7bcb9995-c8be-445e-890a-c8635f090fa6, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS) Dec 6 05:16:14 localhost systemd[1]: libpod-7deecf5c59b0f04bb3525d11262bd98a6e888a03a84db6c0d01f340a61932760.scope: Deactivated successfully. 
Dec 6 05:16:14 localhost podman[315436]: 2025-12-06 10:16:14.115687682 +0000 UTC m=+0.043883363 container died 7deecf5c59b0f04bb3525d11262bd98a6e888a03a84db6c0d01f340a61932760 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7bcb9995-c8be-445e-890a-c8635f090fa6, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:16:14 localhost podman[315436]: 2025-12-06 10:16:14.157308994 +0000 UTC m=+0.085504615 container cleanup 7deecf5c59b0f04bb3525d11262bd98a6e888a03a84db6c0d01f340a61932760 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7bcb9995-c8be-445e-890a-c8635f090fa6, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS) Dec 6 05:16:14 localhost systemd[1]: libpod-conmon-7deecf5c59b0f04bb3525d11262bd98a6e888a03a84db6c0d01f340a61932760.scope: Deactivated successfully. 
Dec 6 05:16:14 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:16:14.177 263652 INFO neutron.agent.dhcp.agent [None req-0bf1d0b3-8dd9-4d39-8c2d-e5a6c4c6cf25 - - - - - -] DHCP configuration for ports {'c8ac7c67-b7ec-4bf1-ad7a-a9af2fd0e8bd'} is completed#033[00m Dec 6 05:16:14 localhost podman[315443]: 2025-12-06 10:16:14.213656943 +0000 UTC m=+0.127857639 container remove 7deecf5c59b0f04bb3525d11262bd98a6e888a03a84db6c0d01f340a61932760 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-7bcb9995-c8be-445e-890a-c8635f090fa6, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2) Dec 6 05:16:14 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:16:14.240 263652 INFO neutron.agent.dhcp.agent [None req-939a5336-603b-4411-b162-221e78243ab7 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:16:14 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:16:14.241 263652 INFO neutron.agent.dhcp.agent [None req-939a5336-603b-4411-b162-221e78243ab7 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:16:14 localhost snmpd[67279]: empty variable list in _query Dec 6 05:16:14 localhost snmpd[67279]: empty variable list in _query Dec 6 05:16:14 localhost snmpd[67279]: empty variable list in _query Dec 6 05:16:14 localhost snmpd[67279]: empty variable list in _query Dec 6 05:16:14 localhost snmpd[67279]: empty variable list in _query Dec 6 05:16:14 localhost snmpd[67279]: empty variable list in _query Dec 6 05:16:14 localhost sshd[315467]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:16:14 localhost dnsmasq-dhcp[314636]: DHCPRELEASE(tape1277966-bb) 
192.168.122.248 fa:16:3e:6f:70:a0 Dec 6 05:16:14 localhost systemd[1]: var-lib-containers-storage-overlay-5c28922d979d8687baf0d17c061c99ccd1f7b4506833ab64cdd748cd838b58f4-merged.mount: Deactivated successfully. Dec 6 05:16:14 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7deecf5c59b0f04bb3525d11262bd98a6e888a03a84db6c0d01f340a61932760-userdata-shm.mount: Deactivated successfully. Dec 6 05:16:14 localhost systemd[1]: run-netns-qdhcp\x2d7bcb9995\x2dc8be\x2d445e\x2d890a\x2dc8635f090fa6.mount: Deactivated successfully. Dec 6 05:16:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538. Dec 6 05:16:15 localhost systemd[1]: tmp-crun.yTXPy2.mount: Deactivated successfully. Dec 6 05:16:15 localhost podman[315470]: 2025-12-06 10:16:15.113046883 +0000 UTC m=+0.085361440 container health_status d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': 
{'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Dec 6 05:16:15 localhost podman[315470]: 2025-12-06 10:16:15.121269963 +0000 UTC m=+0.093584460 container exec_died d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Dec 6 05:16:15 localhost systemd[1]: d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538.service: Deactivated successfully. 
Dec 6 05:16:15 localhost dnsmasq[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 2 addresses Dec 6 05:16:15 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:16:15 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:16:15 localhost podman[315510]: 2025-12-06 10:16:15.232222388 +0000 UTC m=+0.054412081 container kill dd70518caed84fde347015325d6b35481123abc4a39df99f8bc9422c2468322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Dec 6 05:16:15 localhost ovn_controller[154851]: 2025-12-06T10:16:15Z|00175|binding|INFO|Releasing lport 4fb81ffd-e198-4628-9bd0-0c0f0c89c33a from this chassis (sb_readonly=0) Dec 6 05:16:15 localhost nova_compute[282193]: 2025-12-06 10:16:15.683 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:16:15 localhost systemd[1]: tmp-crun.ElSw4c.mount: Deactivated successfully. 
Dec 6 05:16:16 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:16:16.405 263652 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:16:15Z, description=, device_id=71af98f7-3b87-4f22-8e42-c4c7e7586541, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=5955baae-5bb8-453d-bf95-d281294502a6, ip_allocation=immediate, mac_address=fa:16:3e:08:b3:c4, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T08:43:55Z, description=, dns_domain=, id=8e238f59-5792-4ff4-95af-f993c8e9e14f, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=3d603431c0bb4967bafc7a0aa6108bfe, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['be7ff830-a3e2-49cd-a05f-63bc528d6d1c'], tags=[], tenant_id=3d603431c0bb4967bafc7a0aa6108bfe, updated_at=2025-12-06T08:44:01Z, vlan_transparent=None, network_id=8e238f59-5792-4ff4-95af-f993c8e9e14f, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=975, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-06T10:16:16Z on network 8e238f59-5792-4ff4-95af-f993c8e9e14f#033[00m Dec 6 05:16:16 localhost openstack_network_exporter[243110]: ERROR 10:16:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:16:16 localhost openstack_network_exporter[243110]: ERROR 10:16:16 appctl.go:131: Failed to prepare call to 
ovsdb-server: no control socket files found for the ovs db server Dec 6 05:16:16 localhost openstack_network_exporter[243110]: ERROR 10:16:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:16:16 localhost openstack_network_exporter[243110]: ERROR 10:16:16 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 6 05:16:16 localhost openstack_network_exporter[243110]: Dec 6 05:16:16 localhost openstack_network_exporter[243110]: ERROR 10:16:16 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 6 05:16:16 localhost openstack_network_exporter[243110]: Dec 6 05:16:16 localhost systemd[1]: tmp-crun.qpIOlk.mount: Deactivated successfully. Dec 6 05:16:16 localhost dnsmasq[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 3 addresses Dec 6 05:16:16 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:16:16 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:16:16 localhost podman[315547]: 2025-12-06 10:16:16.648589098 +0000 UTC m=+0.069706615 container kill dd70518caed84fde347015325d6b35481123abc4a39df99f8bc9422c2468322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:16:16 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:16:16.835 263652 INFO neutron.agent.dhcp.agent [None req-99a7d35a-8aa7-4e0f-b7b0-fa1b60028996 - - - - - -] DHCP configuration for ports 
{'5955baae-5bb8-453d-bf95-d281294502a6'} is completed#033[00m Dec 6 05:16:17 localhost nova_compute[282193]: 2025-12-06 10:16:17.154 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:16:17 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:16:17.590 263652 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:16:17Z, description=, device_id=1a41ced9-29be-4992-bdce-4aa27040262d, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=7c050743-5ffe-4017-9560-2b6d5888c4c3, ip_allocation=immediate, mac_address=fa:16:3e:07:97:ff, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T08:43:55Z, description=, dns_domain=, id=8e238f59-5792-4ff4-95af-f993c8e9e14f, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=3d603431c0bb4967bafc7a0aa6108bfe, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['be7ff830-a3e2-49cd-a05f-63bc528d6d1c'], tags=[], tenant_id=3d603431c0bb4967bafc7a0aa6108bfe, updated_at=2025-12-06T08:44:01Z, vlan_transparent=None, network_id=8e238f59-5792-4ff4-95af-f993c8e9e14f, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=977, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-06T10:16:17Z on network 8e238f59-5792-4ff4-95af-f993c8e9e14f#033[00m Dec 6 05:16:17 localhost 
dnsmasq[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 4 addresses Dec 6 05:16:17 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:16:17 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:16:17 localhost podman[315584]: 2025-12-06 10:16:17.840692726 +0000 UTC m=+0.059262048 container kill dd70518caed84fde347015325d6b35481123abc4a39df99f8bc9422c2468322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125) Dec 6 05:16:18 localhost nova_compute[282193]: 2025-12-06 10:16:18.054 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:16:18 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:16:18.092 263652 INFO neutron.agent.dhcp.agent [None req-f2c6871d-afc5-4cfc-8439-964d4c458bc6 - - - - - -] DHCP configuration for ports {'7c050743-5ffe-4017-9560-2b6d5888c4c3'} is completed#033[00m Dec 6 05:16:18 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:16:18 localhost nova_compute[282193]: 2025-12-06 10:16:18.817 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:16:19 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:16:19.161 263652 INFO neutron.agent.dhcp.agent [-] Trigger 
reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:16:18Z, description=, device_id=71af98f7-3b87-4f22-8e42-c4c7e7586541, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=015e1950-2195-40d8-a2a7-064ffc59ec35, ip_allocation=immediate, mac_address=fa:16:3e:c1:27:af, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:16:10Z, description=, dns_domain=, id=a1e70fff-f7c1-4a44-8853-ff024a9f780b, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ServersTestFqdnHostnames-1975212823-network, port_security_enabled=True, project_id=7435808e897043e08b27fd5dcaabc003, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=25925, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=944, status=ACTIVE, subnets=['0915ac38-6dfd-47e4-bf76-8ab2ffd38d09'], tags=[], tenant_id=7435808e897043e08b27fd5dcaabc003, updated_at=2025-12-06T10:16:11Z, vlan_transparent=None, network_id=a1e70fff-f7c1-4a44-8853-ff024a9f780b, port_security_enabled=False, project_id=7435808e897043e08b27fd5dcaabc003, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=978, status=DOWN, tags=[], tenant_id=7435808e897043e08b27fd5dcaabc003, updated_at=2025-12-06T10:16:18Z on network a1e70fff-f7c1-4a44-8853-ff024a9f780b#033[00m Dec 6 05:16:19 localhost podman[315620]: 2025-12-06 10:16:19.374992734 +0000 UTC m=+0.066340033 container kill ab9a2c7f430a211bca8b7e65add9f1fb30eab1d81430fd08f590978abe7e2bd2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a1e70fff-f7c1-4a44-8853-ff024a9f780b, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:16:19 localhost dnsmasq[315430]: read /var/lib/neutron/dhcp/a1e70fff-f7c1-4a44-8853-ff024a9f780b/addn_hosts - 1 addresses Dec 6 05:16:19 localhost dnsmasq-dhcp[315430]: read /var/lib/neutron/dhcp/a1e70fff-f7c1-4a44-8853-ff024a9f780b/host Dec 6 05:16:19 localhost dnsmasq-dhcp[315430]: read /var/lib/neutron/dhcp/a1e70fff-f7c1-4a44-8853-ff024a9f780b/opts Dec 6 05:16:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5. Dec 6 05:16:19 localhost systemd[1]: tmp-crun.FfYdLt.mount: Deactivated successfully. Dec 6 05:16:19 localhost podman[315635]: 2025-12-06 10:16:19.4948731 +0000 UTC m=+0.089978870 container health_status 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, config_id=ovn_controller, container_name=ovn_controller) Dec 6 05:16:19 localhost podman[315635]: 2025-12-06 10:16:19.562360507 +0000 UTC m=+0.157466207 container exec_died 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Dec 6 05:16:19 localhost systemd[1]: 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.service: Deactivated 
successfully. Dec 6 05:16:19 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:16:19.660 263652 INFO neutron.agent.dhcp.agent [None req-a72bd817-8070-4b28-b3e6-908280b0e2c4 - - - - - -] DHCP configuration for ports {'015e1950-2195-40d8-a2a7-064ffc59ec35'} is completed#033[00m Dec 6 05:16:20 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:16:20.299 263652 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:16:18Z, description=, device_id=71af98f7-3b87-4f22-8e42-c4c7e7586541, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=015e1950-2195-40d8-a2a7-064ffc59ec35, ip_allocation=immediate, mac_address=fa:16:3e:c1:27:af, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:16:10Z, description=, dns_domain=, id=a1e70fff-f7c1-4a44-8853-ff024a9f780b, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ServersTestFqdnHostnames-1975212823-network, port_security_enabled=True, project_id=7435808e897043e08b27fd5dcaabc003, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=25925, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=944, status=ACTIVE, subnets=['0915ac38-6dfd-47e4-bf76-8ab2ffd38d09'], tags=[], tenant_id=7435808e897043e08b27fd5dcaabc003, updated_at=2025-12-06T10:16:11Z, vlan_transparent=None, network_id=a1e70fff-f7c1-4a44-8853-ff024a9f780b, port_security_enabled=False, project_id=7435808e897043e08b27fd5dcaabc003, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=978, status=DOWN, tags=[], tenant_id=7435808e897043e08b27fd5dcaabc003, 
updated_at=2025-12-06T10:16:18Z on network a1e70fff-f7c1-4a44-8853-ff024a9f780b#033[00m Dec 6 05:16:20 localhost dnsmasq[315430]: read /var/lib/neutron/dhcp/a1e70fff-f7c1-4a44-8853-ff024a9f780b/addn_hosts - 1 addresses Dec 6 05:16:20 localhost dnsmasq-dhcp[315430]: read /var/lib/neutron/dhcp/a1e70fff-f7c1-4a44-8853-ff024a9f780b/host Dec 6 05:16:20 localhost dnsmasq-dhcp[315430]: read /var/lib/neutron/dhcp/a1e70fff-f7c1-4a44-8853-ff024a9f780b/opts Dec 6 05:16:20 localhost podman[315686]: 2025-12-06 10:16:20.531290746 +0000 UTC m=+0.063977632 container kill ab9a2c7f430a211bca8b7e65add9f1fb30eab1d81430fd08f590978abe7e2bd2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a1e70fff-f7c1-4a44-8853-ff024a9f780b, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Dec 6 05:16:20 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:16:20.823 263652 INFO neutron.agent.dhcp.agent [None req-99081cd3-a8ee-4ee3-af5f-6287fd90038f - - - - - -] DHCP configuration for ports {'015e1950-2195-40d8-a2a7-064ffc59ec35'} is completed#033[00m Dec 6 05:16:21 localhost ovn_metadata_agent[160504]: 2025-12-06 10:16:21.329 160509 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b142a5ef-fbed-4e92-aa78-e3ad080c6370, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 6 05:16:22 localhost nova_compute[282193]: 2025-12-06 10:16:22.200 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:16:22 localhost neutron_sriov_agent[256690]: 2025-12-06 10:16:22.875 2 INFO neutron.agent.securitygroups_rpc [None req-08283fcf-8c3f-4ce1-8201-1776fe09eb71 13b250438f8e49ee9d0d9f0fe4791c05 a22ced63e346459ab637424ae7833af7 - - default default] Security group member updated ['55c805cd-9bbe-4434-83af-206ee080e6b9']#033[00m Dec 6 05:16:23 localhost sshd[315708]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:16:23 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:16:23 localhost nova_compute[282193]: 2025-12-06 10:16:23.821 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:16:23 localhost podman[241090]: time="2025-12-06T10:16:23Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 6 05:16:23 localhost podman[241090]: @ - - [06/Dec/2025:10:16:23 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 157928 "" "Go-http-client/1.1" Dec 6 05:16:23 localhost podman[241090]: @ - - [06/Dec/2025:10:16:23 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19739 "" "Go-http-client/1.1" Dec 6 05:16:24 localhost dnsmasq[315430]: read /var/lib/neutron/dhcp/a1e70fff-f7c1-4a44-8853-ff024a9f780b/addn_hosts - 0 addresses Dec 6 05:16:24 localhost podman[315726]: 2025-12-06 10:16:24.070194316 +0000 UTC m=+0.044576463 container kill ab9a2c7f430a211bca8b7e65add9f1fb30eab1d81430fd08f590978abe7e2bd2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a1e70fff-f7c1-4a44-8853-ff024a9f780b, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, 
io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Dec 6 05:16:24 localhost dnsmasq-dhcp[315430]: read /var/lib/neutron/dhcp/a1e70fff-f7c1-4a44-8853-ff024a9f780b/host Dec 6 05:16:24 localhost dnsmasq-dhcp[315430]: read /var/lib/neutron/dhcp/a1e70fff-f7c1-4a44-8853-ff024a9f780b/opts Dec 6 05:16:24 localhost nova_compute[282193]: 2025-12-06 10:16:24.252 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:16:24 localhost ovn_controller[154851]: 2025-12-06T10:16:24Z|00176|binding|INFO|Releasing lport da02d3d2-692f-455e-be00-1cf20526dba9 from this chassis (sb_readonly=0) Dec 6 05:16:24 localhost kernel: device tapda02d3d2-69 left promiscuous mode Dec 6 05:16:24 localhost ovn_controller[154851]: 2025-12-06T10:16:24Z|00177|binding|INFO|Setting lport da02d3d2-692f-455e-be00-1cf20526dba9 down in Southbound Dec 6 05:16:24 localhost nova_compute[282193]: 2025-12-06 10:16:24.279 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:16:24 localhost ovn_metadata_agent[160504]: 2025-12-06 10:16:24.282 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-a1e70fff-f7c1-4a44-8853-ff024a9f780b', 'neutron:device_owner': 'network:dhcp', 
'neutron:mtu': '', 'neutron:network_name': 'neutron-a1e70fff-f7c1-4a44-8853-ff024a9f780b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7435808e897043e08b27fd5dcaabc003', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005548789.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ccc8a57e-8463-42c0-9469-317af07ded18, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=da02d3d2-692f-455e-be00-1cf20526dba9) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:16:24 localhost ovn_metadata_agent[160504]: 2025-12-06 10:16:24.284 160509 INFO neutron.agent.ovn.metadata.agent [-] Port da02d3d2-692f-455e-be00-1cf20526dba9 in datapath a1e70fff-f7c1-4a44-8853-ff024a9f780b unbound from our chassis#033[00m Dec 6 05:16:24 localhost ovn_metadata_agent[160504]: 2025-12-06 10:16:24.286 160509 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a1e70fff-f7c1-4a44-8853-ff024a9f780b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 6 05:16:24 localhost ovn_metadata_agent[160504]: 2025-12-06 10:16:24.287 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[5100c729-bcfb-4dd2-b089-33e9173f1b57]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:16:25 localhost neutron_sriov_agent[256690]: 2025-12-06 10:16:25.247 2 INFO neutron.agent.securitygroups_rpc [None req-2f0fe649-a0ce-475a-a444-c6db3fc27153 13b250438f8e49ee9d0d9f0fe4791c05 a22ced63e346459ab637424ae7833af7 - - default default] Security group member updated ['55c805cd-9bbe-4434-83af-206ee080e6b9']#033[00m Dec 6 
05:16:25 localhost sshd[315749]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:16:27 localhost dnsmasq[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 3 addresses Dec 6 05:16:27 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:16:27 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:16:27 localhost podman[315768]: 2025-12-06 10:16:27.103575465 +0000 UTC m=+0.059014251 container kill dd70518caed84fde347015325d6b35481123abc4a39df99f8bc9422c2468322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:16:27 localhost nova_compute[282193]: 2025-12-06 10:16:27.254 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:16:27 localhost ovn_controller[154851]: 2025-12-06T10:16:27Z|00178|binding|INFO|Releasing lport 4fb81ffd-e198-4628-9bd0-0c0f0c89c33a from this chassis (sb_readonly=0) Dec 6 05:16:27 localhost nova_compute[282193]: 2025-12-06 10:16:27.281 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:16:27 localhost dnsmasq[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 2 addresses Dec 6 05:16:27 localhost podman[315806]: 2025-12-06 10:16:27.492563723 +0000 UTC m=+0.056130603 container kill 
dd70518caed84fde347015325d6b35481123abc4a39df99f8bc9422c2468322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3) Dec 6 05:16:27 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:16:27 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:16:28 localhost dnsmasq[315430]: exiting on receipt of SIGTERM Dec 6 05:16:28 localhost systemd[1]: tmp-crun.9cPnXm.mount: Deactivated successfully. Dec 6 05:16:28 localhost podman[315844]: 2025-12-06 10:16:28.055117817 +0000 UTC m=+0.059262039 container kill ab9a2c7f430a211bca8b7e65add9f1fb30eab1d81430fd08f590978abe7e2bd2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a1e70fff-f7c1-4a44-8853-ff024a9f780b, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 6 05:16:28 localhost systemd[1]: libpod-ab9a2c7f430a211bca8b7e65add9f1fb30eab1d81430fd08f590978abe7e2bd2.scope: Deactivated successfully. 
Dec 6 05:16:28 localhost sshd[315859]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:16:28 localhost podman[315858]: 2025-12-06 10:16:28.113960511 +0000 UTC m=+0.038036815 container died ab9a2c7f430a211bca8b7e65add9f1fb30eab1d81430fd08f590978abe7e2bd2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a1e70fff-f7c1-4a44-8853-ff024a9f780b, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:16:28 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ab9a2c7f430a211bca8b7e65add9f1fb30eab1d81430fd08f590978abe7e2bd2-userdata-shm.mount: Deactivated successfully. Dec 6 05:16:28 localhost systemd[1]: var-lib-containers-storage-overlay-10ea93285ee6bf33fabd8991e6c84ec56f594aa69c471e85c2d0afdfeabfc6e8-merged.mount: Deactivated successfully. Dec 6 05:16:28 localhost podman[315858]: 2025-12-06 10:16:28.159355368 +0000 UTC m=+0.083431682 container remove ab9a2c7f430a211bca8b7e65add9f1fb30eab1d81430fd08f590978abe7e2bd2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a1e70fff-f7c1-4a44-8853-ff024a9f780b, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Dec 6 05:16:28 localhost systemd[1]: libpod-conmon-ab9a2c7f430a211bca8b7e65add9f1fb30eab1d81430fd08f590978abe7e2bd2.scope: Deactivated successfully. 
Dec 6 05:16:28 localhost systemd[1]: run-netns-qdhcp\x2da1e70fff\x2df7c1\x2d4a44\x2d8853\x2dff024a9f780b.mount: Deactivated successfully. Dec 6 05:16:28 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:16:28.191 263652 INFO neutron.agent.dhcp.agent [None req-dd00c1b1-d1b6-4fb0-ba4e-35de2f788451 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:16:28 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:16:28 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:16:28.261 263652 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:16:28 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:16:28.275 263652 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:16:28 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:16:28.739 263652 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:16:28 localhost nova_compute[282193]: 2025-12-06 10:16:28.823 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:16:29 localhost neutron_sriov_agent[256690]: 2025-12-06 10:16:29.310 2 INFO neutron.agent.securitygroups_rpc [None req-64ece17b-51fa-4f7d-ac9f-f7ae51f6ef1a 13b250438f8e49ee9d0d9f0fe4791c05 a22ced63e346459ab637424ae7833af7 - - default default] Security group member updated ['55c805cd-9bbe-4434-83af-206ee080e6b9']#033[00m Dec 6 05:16:29 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:16:29.331 263652 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:16:29 localhost neutron_sriov_agent[256690]: 2025-12-06 10:16:29.839 2 INFO neutron.agent.securitygroups_rpc [None 
req-e4d175d7-f151-45a2-bfa9-dd114b2ac98c 13b250438f8e49ee9d0d9f0fe4791c05 a22ced63e346459ab637424ae7833af7 - - default default] Security group member updated ['55c805cd-9bbe-4434-83af-206ee080e6b9']#033[00m Dec 6 05:16:29 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:16:29.851 263652 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:16:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999. Dec 6 05:16:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941. Dec 6 05:16:30 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:16:30.640 263652 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:16:30 localhost systemd[1]: tmp-crun.IjYzlV.mount: Deactivated successfully. Dec 6 05:16:30 localhost podman[315887]: 2025-12-06 10:16:30.664239185 +0000 UTC m=+0.100390836 container health_status 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': 
'/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Dec 6 05:16:30 localhost sshd[315915]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:16:30 localhost podman[315888]: 2025-12-06 10:16:30.705108155 +0000 UTC m=+0.138392939 container health_status b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Dec 6 05:16:30 localhost podman[315888]: 2025-12-06 10:16:30.711056655 +0000 UTC m=+0.144341409 container exec_died b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Dec 6 05:16:30 localhost systemd[1]: b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941.service: Deactivated successfully. 
Dec 6 05:16:30 localhost podman[315887]: 2025-12-06 10:16:30.796393413 +0000 UTC m=+0.232545094 container exec_died 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:16:30 localhost systemd[1]: 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.service: 
Deactivated successfully. Dec 6 05:16:30 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:16:30.883 263652 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:16:31 localhost nova_compute[282193]: 2025-12-06 10:16:31.181 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:16:31 localhost nova_compute[282193]: 2025-12-06 10:16:31.207 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 05:16:31 localhost nova_compute[282193]: 2025-12-06 10:16:31.208 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 05:16:31 localhost nova_compute[282193]: 2025-12-06 10:16:31.208 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 05:16:31 localhost nova_compute[282193]: 2025-12-06 10:16:31.208 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Auditing locally available compute resources for np0005548789.localdomain (node: np0005548789.localdomain) update_available_resource 
/usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 6 05:16:31 localhost nova_compute[282193]: 2025-12-06 10:16:31.209 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 6 05:16:31 localhost systemd[1]: tmp-crun.Mwh73r.mount: Deactivated successfully. Dec 6 05:16:31 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 6 05:16:31 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/2582979284' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 6 05:16:31 localhost nova_compute[282193]: 2025-12-06 10:16:31.678 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 6 05:16:31 localhost nova_compute[282193]: 2025-12-06 10:16:31.891 282197 DEBUG nova.virt.libvirt.driver [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 6 05:16:31 localhost nova_compute[282193]: 2025-12-06 10:16:31.892 282197 DEBUG nova.virt.libvirt.driver [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 6 05:16:32 localhost nova_compute[282193]: 2025-12-06 10:16:32.119 282197 WARNING nova.virt.libvirt.driver [None 
req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 6 05:16:32 localhost nova_compute[282193]: 2025-12-06 10:16:32.121 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Hypervisor/Node resource view: name=np0005548789.localdomain free_ram=11302MB free_disk=41.83699035644531GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 6 05:16:32 localhost nova_compute[282193]: 2025-12-06 10:16:32.122 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 05:16:32 localhost nova_compute[282193]: 2025-12-06 10:16:32.122 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 05:16:32 localhost nova_compute[282193]: 2025-12-06 10:16:32.290 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Instance b7ed0a2e-9350-4933-9334-4e5e08d3e6aa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 6 05:16:32 localhost nova_compute[282193]: 2025-12-06 10:16:32.291 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 6 05:16:32 localhost nova_compute[282193]: 2025-12-06 10:16:32.291 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Final resource view: name=np0005548789.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 6 05:16:32 localhost nova_compute[282193]: 2025-12-06 10:16:32.295 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:16:32 localhost nova_compute[282193]: 2025-12-06 10:16:32.348 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 6 05:16:32 localhost dnsmasq-dhcp[314636]: DHCPRELEASE(tape1277966-bb) 192.168.122.174 fa:16:3e:71:8d:2e Dec 6 05:16:32 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 6 05:16:32 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.107:0/4066488691' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 6 05:16:32 localhost nova_compute[282193]: 2025-12-06 10:16:32.818 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 6 05:16:32 localhost nova_compute[282193]: 2025-12-06 10:16:32.826 282197 DEBUG nova.compute.provider_tree [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Inventory has not changed in ProviderTree for provider: 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 6 05:16:32 localhost nova_compute[282193]: 2025-12-06 10:16:32.847 282197 DEBUG nova.scheduler.client.report [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Inventory has not changed for provider 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 6 05:16:32 localhost nova_compute[282193]: 2025-12-06 10:16:32.888 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Compute_service record updated for np0005548789.localdomain:np0005548789.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 6 05:16:32 localhost nova_compute[282193]: 2025-12-06 10:16:32.888 282197 DEBUG 
oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.766s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 05:16:32 localhost dnsmasq[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 1 addresses Dec 6 05:16:32 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:16:32 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:16:32 localhost podman[315991]: 2025-12-06 10:16:32.907356603 +0000 UTC m=+0.052390621 container kill dd70518caed84fde347015325d6b35481123abc4a39df99f8bc9422c2468322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3) Dec 6 05:16:32 localhost systemd[1]: tmp-crun.kClgm7.mount: Deactivated successfully. 
Dec 6 05:16:33 localhost sshd[316007]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:16:33 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:16:33.207 263652 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:16:33 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:16:33 localhost nova_compute[282193]: 2025-12-06 10:16:33.827 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:16:34 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:16:34.930 263652 INFO neutron.agent.linux.ip_lib [None req-a8c65535-c5ff-4d4b-91d3-bf259fd36a37 - - - - - -] Device tap9fc3daab-2b cannot be used as it has no MAC address#033[00m Dec 6 05:16:34 localhost nova_compute[282193]: 2025-12-06 10:16:34.957 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:16:34 localhost kernel: device tap9fc3daab-2b entered promiscuous mode Dec 6 05:16:34 localhost NetworkManager[5973]: [1765016194.9686] manager: (tap9fc3daab-2b): new Generic device (/org/freedesktop/NetworkManager/Devices/32) Dec 6 05:16:34 localhost nova_compute[282193]: 2025-12-06 10:16:34.972 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:16:34 localhost ovn_controller[154851]: 2025-12-06T10:16:34Z|00179|binding|INFO|Claiming lport 9fc3daab-2b42-430e-915a-f1ee9d25ffbe for this chassis. Dec 6 05:16:34 localhost ovn_controller[154851]: 2025-12-06T10:16:34Z|00180|binding|INFO|9fc3daab-2b42-430e-915a-f1ee9d25ffbe: Claiming unknown Dec 6 05:16:34 localhost systemd-udevd[316023]: Network interface NamePolicy= disabled on kernel command line. 
Dec 6 05:16:34 localhost ovn_metadata_agent[160504]: 2025-12-06 10:16:34.991 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-b7a42283-4c55-4c11-8e24-f6394c9a461a', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b7a42283-4c55-4c11-8e24-f6394c9a461a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '550d07fdc38d491ba10875a25f95fdea', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dc90d7d0-806b-4760-9447-b6831c3346a6, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=9fc3daab-2b42-430e-915a-f1ee9d25ffbe) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:16:34 localhost ovn_metadata_agent[160504]: 2025-12-06 10:16:34.992 160509 INFO neutron.agent.ovn.metadata.agent [-] Port 9fc3daab-2b42-430e-915a-f1ee9d25ffbe in datapath b7a42283-4c55-4c11-8e24-f6394c9a461a bound to our chassis#033[00m Dec 6 05:16:34 localhost ovn_metadata_agent[160504]: 2025-12-06 10:16:34.994 160509 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network b7a42283-4c55-4c11-8e24-f6394c9a461a or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 6 05:16:34 localhost ovn_metadata_agent[160504]: 2025-12-06 10:16:34.995 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[84d63ba4-738f-46b0-a32e-dc93bf900f82]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:16:35 localhost journal[230404]: ethtool ioctl error on tap9fc3daab-2b: No such device Dec 6 05:16:35 localhost journal[230404]: ethtool ioctl error on tap9fc3daab-2b: No such device Dec 6 05:16:35 localhost ovn_controller[154851]: 2025-12-06T10:16:35Z|00181|binding|INFO|Setting lport 9fc3daab-2b42-430e-915a-f1ee9d25ffbe ovn-installed in OVS Dec 6 05:16:35 localhost ovn_controller[154851]: 2025-12-06T10:16:35Z|00182|binding|INFO|Setting lport 9fc3daab-2b42-430e-915a-f1ee9d25ffbe up in Southbound Dec 6 05:16:35 localhost journal[230404]: ethtool ioctl error on tap9fc3daab-2b: No such device Dec 6 05:16:35 localhost nova_compute[282193]: 2025-12-06 10:16:35.022 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:16:35 localhost journal[230404]: ethtool ioctl error on tap9fc3daab-2b: No such device Dec 6 05:16:35 localhost journal[230404]: ethtool ioctl error on tap9fc3daab-2b: No such device Dec 6 05:16:35 localhost journal[230404]: ethtool ioctl error on tap9fc3daab-2b: No such device Dec 6 05:16:35 localhost journal[230404]: ethtool ioctl error on tap9fc3daab-2b: No such device Dec 6 05:16:35 localhost journal[230404]: ethtool ioctl error on tap9fc3daab-2b: No such device Dec 6 05:16:35 localhost nova_compute[282193]: 2025-12-06 10:16:35.063 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:16:35 localhost nova_compute[282193]: 2025-12-06 10:16:35.098 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:16:36 localhost podman[316094]: Dec 6 05:16:36 localhost podman[316094]: 2025-12-06 10:16:36.007495755 +0000 UTC m=+0.080204133 container create f5b9c00fd0fe99e78a6bb373d0c252ad67c3ab760edeaee288dfd88518b3f399 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b7a42283-4c55-4c11-8e24-f6394c9a461a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2) Dec 6 05:16:36 localhost systemd[1]: Started libpod-conmon-f5b9c00fd0fe99e78a6bb373d0c252ad67c3ab760edeaee288dfd88518b3f399.scope. Dec 6 05:16:36 localhost systemd[1]: tmp-crun.bHz8WB.mount: Deactivated successfully. Dec 6 05:16:36 localhost podman[316094]: 2025-12-06 10:16:35.965307606 +0000 UTC m=+0.038016014 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 6 05:16:36 localhost systemd[1]: Started libcrun container. 
Dec 6 05:16:36 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93508883cd7c788e7b11fc41a7148ccecde4258bfe0c638def17c492548919fa/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 6 05:16:36 localhost podman[316094]: 2025-12-06 10:16:36.088001077 +0000 UTC m=+0.160709455 container init f5b9c00fd0fe99e78a6bb373d0c252ad67c3ab760edeaee288dfd88518b3f399 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b7a42283-4c55-4c11-8e24-f6394c9a461a, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Dec 6 05:16:36 localhost podman[316094]: 2025-12-06 10:16:36.098043112 +0000 UTC m=+0.170751490 container start f5b9c00fd0fe99e78a6bb373d0c252ad67c3ab760edeaee288dfd88518b3f399 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b7a42283-4c55-4c11-8e24-f6394c9a461a, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true) Dec 6 05:16:36 localhost dnsmasq[316112]: started, version 2.85 cachesize 150 Dec 6 05:16:36 localhost dnsmasq[316112]: DNS service limited to local subnets Dec 6 05:16:36 localhost dnsmasq[316112]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 6 05:16:36 localhost dnsmasq[316112]: warning: no upstream servers configured Dec 
6 05:16:36 localhost dnsmasq-dhcp[316112]: DHCP, static leases only on 10.100.0.0, lease time 1d Dec 6 05:16:36 localhost dnsmasq[316112]: read /var/lib/neutron/dhcp/b7a42283-4c55-4c11-8e24-f6394c9a461a/addn_hosts - 0 addresses Dec 6 05:16:36 localhost dnsmasq-dhcp[316112]: read /var/lib/neutron/dhcp/b7a42283-4c55-4c11-8e24-f6394c9a461a/host Dec 6 05:16:36 localhost dnsmasq-dhcp[316112]: read /var/lib/neutron/dhcp/b7a42283-4c55-4c11-8e24-f6394c9a461a/opts Dec 6 05:16:36 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:16:36.483 263652 INFO neutron.agent.dhcp.agent [None req-a4abf64e-7bd6-4c9d-ba32-2827ff25b811 - - - - - -] DHCP configuration for ports {'1cba0605-9994-45f3-b711-845fbf180ceb'} is completed#033[00m Dec 6 05:16:36 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:16:36.592 263652 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:16:36 localhost nova_compute[282193]: 2025-12-06 10:16:36.884 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:16:36 localhost nova_compute[282193]: 2025-12-06 10:16:36.885 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:16:36 localhost nova_compute[282193]: 2025-12-06 10:16:36.885 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 6 05:16:36 localhost nova_compute[282193]: 2025-12-06 10:16:36.885 282197 DEBUG nova.compute.manager [None 
req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 6 05:16:36 localhost nova_compute[282193]: 2025-12-06 10:16:36.951 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 6 05:16:36 localhost nova_compute[282193]: 2025-12-06 10:16:36.952 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquired lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 6 05:16:36 localhost nova_compute[282193]: 2025-12-06 10:16:36.953 282197 DEBUG nova.network.neutron [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 6 05:16:36 localhost nova_compute[282193]: 2025-12-06 10:16:36.953 282197 DEBUG nova.objects.instance [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b7ed0a2e-9350-4933-9334-4e5e08d3e6aa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 6 05:16:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e. Dec 6 05:16:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094. 
Dec 6 05:16:37 localhost podman[316113]: 2025-12-06 10:16:37.162339093 +0000 UTC m=+0.111796442 container health_status 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, distribution-scope=public, config_id=edpm, container_name=openstack_network_exporter, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.buildah.version=1.33.7, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9) Dec 6 05:16:37 localhost podman[316113]: 2025-12-06 10:16:37.179487433 +0000 UTC m=+0.128944792 container exec_died 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.33.7, vcs-type=git, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., config_id=edpm, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Dec 6 05:16:37 localhost systemd[1]: 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e.service: Deactivated successfully. Dec 6 05:16:37 localhost podman[316131]: 2025-12-06 10:16:37.255742556 +0000 UTC m=+0.084205235 container health_status bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS) Dec 6 05:16:37 localhost podman[316131]: 2025-12-06 10:16:37.272140984 +0000 UTC m=+0.100603663 container exec_died bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes 
Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Dec 6 05:16:37 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:16:37.278 263652 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:16:37Z, description=, device_id=84276e13-7738-4c28-b592-6465bc338221, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=0cf1dea2-757c-46ff-a408-efd0a5e6423c, ip_allocation=immediate, mac_address=fa:16:3e:90:dc:c6, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T08:43:55Z, description=, dns_domain=, id=8e238f59-5792-4ff4-95af-f993c8e9e14f, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=3d603431c0bb4967bafc7a0aa6108bfe, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['be7ff830-a3e2-49cd-a05f-63bc528d6d1c'], tags=[], tenant_id=3d603431c0bb4967bafc7a0aa6108bfe, updated_at=2025-12-06T08:44:01Z, vlan_transparent=None, network_id=8e238f59-5792-4ff4-95af-f993c8e9e14f, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1116, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-06T10:16:37Z on network 8e238f59-5792-4ff4-95af-f993c8e9e14f#033[00m Dec 6 05:16:37 localhost systemd[1]: bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094.service: Deactivated successfully. 
Dec 6 05:16:37 localhost nova_compute[282193]: 2025-12-06 10:16:37.296 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:16:37 localhost podman[316168]: 2025-12-06 10:16:37.466829538 +0000 UTC m=+0.049471440 container kill dd70518caed84fde347015325d6b35481123abc4a39df99f8bc9422c2468322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3) Dec 6 05:16:37 localhost dnsmasq[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 2 addresses Dec 6 05:16:37 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:16:37 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:16:37 localhost nova_compute[282193]: 2025-12-06 10:16:37.520 282197 DEBUG nova.network.neutron [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Updating instance_info_cache with network_info: [{"id": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "address": "fa:16:3e:64:77:f3", "network": {"id": "652b6bdc-40ce-45b7-8aa5-3bca79987993", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.162", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], 
"version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "3d603431c0bb4967bafc7a0aa6108bfe", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86fc0b7a-fb", "ovs_interfaceid": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 6 05:16:37 localhost nova_compute[282193]: 2025-12-06 10:16:37.541 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Releasing lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 6 05:16:37 localhost nova_compute[282193]: 2025-12-06 10:16:37.542 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 6 05:16:37 localhost nova_compute[282193]: 2025-12-06 10:16:37.542 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:16:37 localhost nova_compute[282193]: 2025-12-06 10:16:37.542 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks 
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:16:37 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:16:37.752 263652 INFO neutron.agent.dhcp.agent [None req-33c52649-a362-4103-8ab9-88a0c093f703 - - - - - -] DHCP configuration for ports {'0cf1dea2-757c-46ff-a408-efd0a5e6423c'} is completed#033[00m Dec 6 05:16:38 localhost nova_compute[282193]: 2025-12-06 10:16:38.181 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:16:38 localhost nova_compute[282193]: 2025-12-06 10:16:38.182 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:16:38 localhost nova_compute[282193]: 2025-12-06 10:16:38.183 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:16:38 localhost nova_compute[282193]: 2025-12-06 10:16:38.185 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 6 05:16:38 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:16:38 localhost nova_compute[282193]: 2025-12-06 10:16:38.563 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:16:38 localhost nova_compute[282193]: 2025-12-06 10:16:38.828 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:16:40 localhost nova_compute[282193]: 2025-12-06 10:16:40.186 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:16:40 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:16:40.652 263652 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:16:40Z, description=, device_id=84276e13-7738-4c28-b592-6465bc338221, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=fcfaf919-b8a6-49c6-94f9-46161c17ee32, ip_allocation=immediate, mac_address=fa:16:3e:39:c5:87, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:16:32Z, description=, dns_domain=, id=b7a42283-4c55-4c11-8e24-f6394c9a461a, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ServerGroupTestJSON-300055235-network, port_security_enabled=True, 
project_id=550d07fdc38d491ba10875a25f95fdea, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=7958, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1078, status=ACTIVE, subnets=['57051124-797e-44d0-8e7b-649184ccc4f4'], tags=[], tenant_id=550d07fdc38d491ba10875a25f95fdea, updated_at=2025-12-06T10:16:33Z, vlan_transparent=None, network_id=b7a42283-4c55-4c11-8e24-f6394c9a461a, port_security_enabled=False, project_id=550d07fdc38d491ba10875a25f95fdea, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1136, status=DOWN, tags=[], tenant_id=550d07fdc38d491ba10875a25f95fdea, updated_at=2025-12-06T10:16:40Z on network b7a42283-4c55-4c11-8e24-f6394c9a461a
Dec 6 05:16:40 localhost dnsmasq[316112]: read /var/lib/neutron/dhcp/b7a42283-4c55-4c11-8e24-f6394c9a461a/addn_hosts - 1 addresses
Dec 6 05:16:40 localhost dnsmasq-dhcp[316112]: read /var/lib/neutron/dhcp/b7a42283-4c55-4c11-8e24-f6394c9a461a/host
Dec 6 05:16:40 localhost dnsmasq-dhcp[316112]: read /var/lib/neutron/dhcp/b7a42283-4c55-4c11-8e24-f6394c9a461a/opts
Dec 6 05:16:40 localhost podman[316206]: 2025-12-06 10:16:40.898394283 +0000 UTC m=+0.066579830 container kill f5b9c00fd0fe99e78a6bb373d0c252ad67c3ab760edeaee288dfd88518b3f399 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b7a42283-4c55-4c11-8e24-f6394c9a461a, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 6 05:16:41 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:16:41.242 263652 INFO neutron.agent.dhcp.agent [None req-2c27286b-39fc-4b71-ad5e-df52125c6453 - - - - - -] DHCP configuration for ports {'fcfaf919-b8a6-49c6-94f9-46161c17ee32'} is completed
Dec 6 05:16:41 localhost sshd[316227]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 05:16:42 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:16:42.337 263652 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:16:40Z, description=, device_id=84276e13-7738-4c28-b592-6465bc338221, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=fcfaf919-b8a6-49c6-94f9-46161c17ee32, ip_allocation=immediate, mac_address=fa:16:3e:39:c5:87, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:16:32Z, description=, dns_domain=, id=b7a42283-4c55-4c11-8e24-f6394c9a461a, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ServerGroupTestJSON-300055235-network, port_security_enabled=True, project_id=550d07fdc38d491ba10875a25f95fdea, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=7958, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1078, status=ACTIVE, subnets=['57051124-797e-44d0-8e7b-649184ccc4f4'], tags=[], tenant_id=550d07fdc38d491ba10875a25f95fdea, updated_at=2025-12-06T10:16:33Z, vlan_transparent=None, network_id=b7a42283-4c55-4c11-8e24-f6394c9a461a, port_security_enabled=False, project_id=550d07fdc38d491ba10875a25f95fdea, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1136, status=DOWN, tags=[], tenant_id=550d07fdc38d491ba10875a25f95fdea, updated_at=2025-12-06T10:16:40Z on network b7a42283-4c55-4c11-8e24-f6394c9a461a
Dec 6 05:16:42 localhost nova_compute[282193]: 2025-12-06 10:16:42.340 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 6 05:16:42 localhost dnsmasq[316112]: read /var/lib/neutron/dhcp/b7a42283-4c55-4c11-8e24-f6394c9a461a/addn_hosts - 1 addresses
Dec 6 05:16:42 localhost dnsmasq-dhcp[316112]: read /var/lib/neutron/dhcp/b7a42283-4c55-4c11-8e24-f6394c9a461a/host
Dec 6 05:16:42 localhost dnsmasq-dhcp[316112]: read /var/lib/neutron/dhcp/b7a42283-4c55-4c11-8e24-f6394c9a461a/opts
Dec 6 05:16:42 localhost podman[316245]: 2025-12-06 10:16:42.535680205 +0000 UTC m=+0.063223229 container kill f5b9c00fd0fe99e78a6bb373d0c252ad67c3ab760edeaee288dfd88518b3f399 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b7a42283-4c55-4c11-8e24-f6394c9a461a, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec 6 05:16:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6.
Dec 6 05:16:42 localhost podman[316260]: 2025-12-06 10:16:42.65617716 +0000 UTC m=+0.090042212 container health_status 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 6 05:16:42 localhost podman[316260]: 2025-12-06 10:16:42.694245014 +0000 UTC m=+0.128110076 container exec_died 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd)
Dec 6 05:16:42 localhost systemd[1]: 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6.service: Deactivated successfully.
Dec 6 05:16:42 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:16:42.817 263652 INFO neutron.agent.dhcp.agent [None req-7b166144-93f5-46af-8092-4b08555fc5d7 - - - - - -] DHCP configuration for ports {'fcfaf919-b8a6-49c6-94f9-46161c17ee32'} is completed
Dec 6 05:16:43 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 6 05:16:43 localhost nova_compute[282193]: 2025-12-06 10:16:43.830 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 6 05:16:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538.
Dec 6 05:16:45 localhost podman[316286]: 2025-12-06 10:16:45.918260173 +0000 UTC m=+0.081169203 container health_status d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 6 05:16:45 localhost podman[316286]: 2025-12-06 10:16:45.927887666 +0000 UTC m=+0.090796686 container exec_died d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 6 05:16:45 localhost systemd[1]: d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538.service: Deactivated successfully.
Dec 6 05:16:46 localhost nova_compute[282193]: 2025-12-06 10:16:46.178 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 6 05:16:46 localhost openstack_network_exporter[243110]: ERROR 10:16:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 6 05:16:46 localhost openstack_network_exporter[243110]: ERROR 10:16:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 6 05:16:46 localhost openstack_network_exporter[243110]: ERROR 10:16:46 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 6 05:16:46 localhost openstack_network_exporter[243110]: ERROR 10:16:46 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 6 05:16:46 localhost openstack_network_exporter[243110]:
Dec 6 05:16:46 localhost openstack_network_exporter[243110]: ERROR 10:16:46 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 6 05:16:46 localhost openstack_network_exporter[243110]:
Dec 6 05:16:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:16:47.306 160509 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 6 05:16:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:16:47.306 160509 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 6 05:16:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:16:47.307 160509 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 6 05:16:47 localhost nova_compute[282193]: 2025-12-06 10:16:47.385 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 6 05:16:48 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 6 05:16:48 localhost nova_compute[282193]: 2025-12-06 10:16:48.833 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 6 05:16:48 localhost dnsmasq[316112]: read /var/lib/neutron/dhcp/b7a42283-4c55-4c11-8e24-f6394c9a461a/addn_hosts - 0 addresses
Dec 6 05:16:48 localhost dnsmasq-dhcp[316112]: read /var/lib/neutron/dhcp/b7a42283-4c55-4c11-8e24-f6394c9a461a/host
Dec 6 05:16:48 localhost podman[316329]: 2025-12-06 10:16:48.896823267 +0000 UTC m=+0.067515568 container kill f5b9c00fd0fe99e78a6bb373d0c252ad67c3ab760edeaee288dfd88518b3f399 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b7a42283-4c55-4c11-8e24-f6394c9a461a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Dec 6 05:16:48 localhost dnsmasq-dhcp[316112]: read /var/lib/neutron/dhcp/b7a42283-4c55-4c11-8e24-f6394c9a461a/opts
Dec 6 05:16:48 localhost systemd[1]: tmp-crun.5H2ZGE.mount: Deactivated successfully.
Dec 6 05:16:49 localhost ovn_controller[154851]: 2025-12-06T10:16:49Z|00183|binding|INFO|Releasing lport 9fc3daab-2b42-430e-915a-f1ee9d25ffbe from this chassis (sb_readonly=0)
Dec 6 05:16:49 localhost kernel: device tap9fc3daab-2b left promiscuous mode
Dec 6 05:16:49 localhost nova_compute[282193]: 2025-12-06 10:16:49.279 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 6 05:16:49 localhost ovn_controller[154851]: 2025-12-06T10:16:49Z|00184|binding|INFO|Setting lport 9fc3daab-2b42-430e-915a-f1ee9d25ffbe down in Southbound
Dec 6 05:16:49 localhost ovn_metadata_agent[160504]: 2025-12-06 10:16:49.292 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-b7a42283-4c55-4c11-8e24-f6394c9a461a', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b7a42283-4c55-4c11-8e24-f6394c9a461a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '550d07fdc38d491ba10875a25f95fdea', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005548789.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dc90d7d0-806b-4760-9447-b6831c3346a6, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=9fc3daab-2b42-430e-915a-f1ee9d25ffbe) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 6 05:16:49 localhost ovn_metadata_agent[160504]: 2025-12-06 10:16:49.294 160509 INFO neutron.agent.ovn.metadata.agent [-] Port 9fc3daab-2b42-430e-915a-f1ee9d25ffbe in datapath b7a42283-4c55-4c11-8e24-f6394c9a461a unbound from our chassis
Dec 6 05:16:49 localhost ovn_metadata_agent[160504]: 2025-12-06 10:16:49.297 160509 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b7a42283-4c55-4c11-8e24-f6394c9a461a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 6 05:16:49 localhost ovn_metadata_agent[160504]: 2025-12-06 10:16:49.298 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[705bc57b-0970-42ce-b803-7dfa180f526c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 6 05:16:49 localhost nova_compute[282193]: 2025-12-06 10:16:49.299 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 6 05:16:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.
Dec 6 05:16:49 localhost podman[316351]: 2025-12-06 10:16:49.928395997 +0000 UTC m=+0.087348000 container health_status 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 6 05:16:49 localhost podman[316351]: 2025-12-06 10:16:49.996259935 +0000 UTC m=+0.155211948 container exec_died 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec 6 05:16:50 localhost systemd[1]: 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.service: Deactivated successfully.
Dec 6 05:16:52 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e116 e116: 6 total, 6 up, 6 in
Dec 6 05:16:52 localhost nova_compute[282193]: 2025-12-06 10:16:52.388 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 6 05:16:52 localhost podman[316392]: 2025-12-06 10:16:52.493996307 +0000 UTC m=+0.069565101 container kill dd70518caed84fde347015325d6b35481123abc4a39df99f8bc9422c2468322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team)
Dec 6 05:16:52 localhost dnsmasq[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 1 addresses
Dec 6 05:16:52 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host
Dec 6 05:16:52 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts
Dec 6 05:16:52 localhost ovn_controller[154851]: 2025-12-06T10:16:52Z|00185|binding|INFO|Releasing lport 4fb81ffd-e198-4628-9bd0-0c0f0c89c33a from this chassis (sb_readonly=0)
Dec 6 05:16:52 localhost nova_compute[282193]: 2025-12-06 10:16:52.702 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 6 05:16:53 localhost neutron_sriov_agent[256690]: 2025-12-06 10:16:53.165 2 INFO neutron.agent.securitygroups_rpc [None req-26ae0ef5-9433-41c4-a064-a0d5d3110043 183487bfea4148c8bd274489b01ac583 290c121e7a5344fea2a32f4e64e74fb4 - - default default] Security group member updated ['7248d87f-aba7-4d7e-b680-1fdbc4f1cdd3']
Dec 6 05:16:53 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e117 e117: 6 total, 6 up, 6 in
Dec 6 05:16:53 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 6 05:16:53 localhost dnsmasq[316112]: exiting on receipt of SIGTERM
Dec 6 05:16:53 localhost podman[316429]: 2025-12-06 10:16:53.528347281 +0000 UTC m=+0.058888268 container kill f5b9c00fd0fe99e78a6bb373d0c252ad67c3ab760edeaee288dfd88518b3f399 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b7a42283-4c55-4c11-8e24-f6394c9a461a, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 6 05:16:53 localhost systemd[1]: libpod-f5b9c00fd0fe99e78a6bb373d0c252ad67c3ab760edeaee288dfd88518b3f399.scope: Deactivated successfully.
Dec 6 05:16:53 localhost podman[316444]: 2025-12-06 10:16:53.588067212 +0000 UTC m=+0.040318144 container died f5b9c00fd0fe99e78a6bb373d0c252ad67c3ab760edeaee288dfd88518b3f399 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b7a42283-4c55-4c11-8e24-f6394c9a461a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 6 05:16:53 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f5b9c00fd0fe99e78a6bb373d0c252ad67c3ab760edeaee288dfd88518b3f399-userdata-shm.mount: Deactivated successfully.
Dec 6 05:16:53 localhost systemd[1]: var-lib-containers-storage-overlay-93508883cd7c788e7b11fc41a7148ccecde4258bfe0c638def17c492548919fa-merged.mount: Deactivated successfully.
Dec 6 05:16:53 localhost podman[316444]: 2025-12-06 10:16:53.634837651 +0000 UTC m=+0.087088493 container remove f5b9c00fd0fe99e78a6bb373d0c252ad67c3ab760edeaee288dfd88518b3f399 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b7a42283-4c55-4c11-8e24-f6394c9a461a, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 6 05:16:53 localhost systemd[1]: libpod-conmon-f5b9c00fd0fe99e78a6bb373d0c252ad67c3ab760edeaee288dfd88518b3f399.scope: Deactivated successfully.
Dec 6 05:16:53 localhost nova_compute[282193]: 2025-12-06 10:16:53.835 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 6 05:16:53 localhost podman[241090]: time="2025-12-06T10:16:53Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 6 05:16:53 localhost podman[241090]: @ - - [06/Dec/2025:10:16:53 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156104 "" "Go-http-client/1.1"
Dec 6 05:16:53 localhost podman[241090]: @ - - [06/Dec/2025:10:16:53 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19259 "" "Go-http-client/1.1"
Dec 6 05:16:54 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e118 e118: 6 total, 6 up, 6 in
Dec 6 05:16:54 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:16:54.712 263652 INFO neutron.agent.linux.ip_lib [None req-b37a7ebe-4db7-4b9d-b1da-15af0596f8e7 - - - - - -] Device tap949a183f-bf cannot be used as it has no MAC address
Dec 6 05:16:54 localhost nova_compute[282193]: 2025-12-06 10:16:54.778 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 6 05:16:54 localhost kernel: device tap949a183f-bf entered promiscuous mode
Dec 6 05:16:54 localhost NetworkManager[5973]: [1765016214.7917] manager: (tap949a183f-bf): new Generic device (/org/freedesktop/NetworkManager/Devices/33)
Dec 6 05:16:54 localhost ovn_controller[154851]: 2025-12-06T10:16:54Z|00186|binding|INFO|Claiming lport 949a183f-bfda-4354-9310-98929388f22d for this chassis.
Dec 6 05:16:54 localhost ovn_controller[154851]: 2025-12-06T10:16:54Z|00187|binding|INFO|949a183f-bfda-4354-9310-98929388f22d: Claiming unknown
Dec 6 05:16:54 localhost nova_compute[282193]: 2025-12-06 10:16:54.794 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 6 05:16:54 localhost systemd-udevd[316478]: Network interface NamePolicy= disabled on kernel command line.
Dec 6 05:16:54 localhost journal[230404]: ethtool ioctl error on tap949a183f-bf: No such device
Dec 6 05:16:54 localhost nova_compute[282193]: 2025-12-06 10:16:54.828 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 6 05:16:54 localhost ovn_controller[154851]: 2025-12-06T10:16:54Z|00188|binding|INFO|Setting lport 949a183f-bfda-4354-9310-98929388f22d ovn-installed in OVS
Dec 6 05:16:54 localhost nova_compute[282193]: 2025-12-06 10:16:54.831 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 6 05:16:54 localhost journal[230404]: ethtool ioctl error on tap949a183f-bf: No such device
Dec 6 05:16:54 localhost journal[230404]: ethtool ioctl error on tap949a183f-bf: No such device
Dec 6 05:16:54 localhost journal[230404]: ethtool ioctl error on tap949a183f-bf: No such device
Dec 6 05:16:54 localhost journal[230404]: ethtool ioctl error on tap949a183f-bf: No such device
Dec 6 05:16:54 localhost journal[230404]: ethtool ioctl error on tap949a183f-bf: No such device
Dec 6 05:16:54 localhost journal[230404]: ethtool ioctl error on tap949a183f-bf: No such device
Dec 6 05:16:54 localhost journal[230404]: ethtool ioctl error on tap949a183f-bf: No such device
Dec 6 05:16:54 localhost nova_compute[282193]: 2025-12-06 10:16:54.866 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 6 05:16:54 localhost nova_compute[282193]: 2025-12-06 10:16:54.897 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 6 05:16:55 localhost systemd[1]: run-netns-qdhcp\x2db7a42283\x2d4c55\x2d4c11\x2d8e24\x2df6394c9a461a.mount: Deactivated successfully.
Dec 6 05:16:55 localhost ovn_controller[154851]: 2025-12-06T10:16:55Z|00189|binding|INFO|Setting lport 949a183f-bfda-4354-9310-98929388f22d up in Southbound
Dec 6 05:16:55 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:16:55.055 263652 INFO neutron.agent.dhcp.agent [None req-a4220a31-8e2e-4ffd-8f77-aa2b0e7c33ae - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 6 05:16:55 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:16:55.056 263652 INFO neutron.agent.dhcp.agent [None req-a4220a31-8e2e-4ffd-8f77-aa2b0e7c33ae - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 6 05:16:55 localhost ovn_metadata_agent[160504]: 2025-12-06 10:16:55.056 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-8fb7fee7-47f3-496e-84a0-2200c47dea55', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8fb7fee7-47f3-496e-84a0-2200c47dea55', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '290c121e7a5344fea2a32f4e64e74fb4', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9d6e7c85-e1e8-4901-8e98-f2cdf448ee9d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=949a183f-bfda-4354-9310-98929388f22d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 6 05:16:55 localhost ovn_metadata_agent[160504]: 2025-12-06 10:16:55.058 160509 INFO neutron.agent.ovn.metadata.agent [-] Port 949a183f-bfda-4354-9310-98929388f22d in datapath 8fb7fee7-47f3-496e-84a0-2200c47dea55 bound to our chassis
Dec 6 05:16:55 localhost ovn_metadata_agent[160504]: 2025-12-06 10:16:55.059 160509 DEBUG neutron.agent.ovn.metadata.agent [-] Port 781a5cd9-f731-448d-91bd-ddefbb48ec27 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Dec 6 05:16:55 localhost ovn_metadata_agent[160504]: 2025-12-06 10:16:55.059 160509 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8fb7fee7-47f3-496e-84a0-2200c47dea55, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 6 05:16:55 localhost ovn_metadata_agent[160504]: 2025-12-06 10:16:55.060 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[3899be47-97e7-4b47-a480-cc478683ecb3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 6 05:16:55 localhost neutron_sriov_agent[256690]: 2025-12-06 10:16:55.063 2 INFO neutron.agent.securitygroups_rpc [None req-6596b1da-4291-462f-a9bc-899ad3053051 183487bfea4148c8bd274489b01ac583 290c121e7a5344fea2a32f4e64e74fb4 - - default default] Security group member updated ['7248d87f-aba7-4d7e-b680-1fdbc4f1cdd3']
Dec 6 05:16:55 localhost ovn_controller[154851]: 2025-12-06T10:16:55Z|00190|binding|INFO|Releasing lport 4fb81ffd-e198-4628-9bd0-0c0f0c89c33a from this chassis (sb_readonly=0)
Dec 6 05:16:55 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e119 e119: 6 total, 6 up, 6 in
Dec 6 05:16:55 localhost podman[316603]:
Dec 6 05:16:55 localhost podman[316603]: 2025-12-06 10:16:55.837266083 +0000 UTC m=+0.089264458 container create 31508468c3e2cd2720accdb917799d9a6e9d1e3826de33ddb026a829aff69936 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8fb7fee7-47f3-496e-84a0-2200c47dea55, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 6 05:16:55 localhost systemd[1]: Started libpod-conmon-31508468c3e2cd2720accdb917799d9a6e9d1e3826de33ddb026a829aff69936.scope.
Dec 6 05:16:55 localhost systemd[1]: tmp-crun.4BDBs9.mount: Deactivated successfully.
Dec 6 05:16:55 localhost systemd[1]: Started libcrun container.
Dec 6 05:16:55 localhost podman[316603]: 2025-12-06 10:16:55.799107686 +0000 UTC m=+0.051106091 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 6 05:16:55 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e623c434dbbedf87878fe7b9c9b2c78a861ad29951b1d46175e6c1c0d161e920/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 6 05:16:55 localhost podman[316603]: 2025-12-06 10:16:55.910104433 +0000 UTC m=+0.162102828 container init 31508468c3e2cd2720accdb917799d9a6e9d1e3826de33ddb026a829aff69936 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8fb7fee7-47f3-496e-84a0-2200c47dea55, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:16:55 localhost podman[316603]: 2025-12-06 10:16:55.919839297 +0000 UTC m=+0.171837692 container start 31508468c3e2cd2720accdb917799d9a6e9d1e3826de33ddb026a829aff69936 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8fb7fee7-47f3-496e-84a0-2200c47dea55, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:16:55 localhost dnsmasq[316635]: started, version 2.85 cachesize 150 Dec 6 05:16:55 localhost dnsmasq[316635]: DNS service limited to local subnets Dec 6 05:16:55 localhost dnsmasq[316635]: compile time options: IPv6 GNU-getopt DBus no-UBus 
no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 6 05:16:55 localhost dnsmasq[316635]: warning: no upstream servers configured Dec 6 05:16:55 localhost dnsmasq-dhcp[316635]: DHCP, static leases only on 10.100.0.0, lease time 1d Dec 6 05:16:55 localhost dnsmasq[316635]: read /var/lib/neutron/dhcp/8fb7fee7-47f3-496e-84a0-2200c47dea55/addn_hosts - 0 addresses Dec 6 05:16:55 localhost dnsmasq-dhcp[316635]: read /var/lib/neutron/dhcp/8fb7fee7-47f3-496e-84a0-2200c47dea55/host Dec 6 05:16:55 localhost dnsmasq-dhcp[316635]: read /var/lib/neutron/dhcp/8fb7fee7-47f3-496e-84a0-2200c47dea55/opts Dec 6 05:16:55 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:16:55.979 263652 INFO neutron.agent.dhcp.agent [None req-589c5935-71cc-458d-b186-8ee13fc0a445 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:16:52Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=a70a96bc-9485-4b3a-ad28-6f527d77539d, ip_allocation=immediate, mac_address=fa:16:3e:1e:0c:70, name=tempest-AllowedAddressPairTestJSON-810561683, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:16:50Z, description=, dns_domain=, id=8fb7fee7-47f3-496e-84a0-2200c47dea55, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-AllowedAddressPairTestJSON-test-network-162135958, port_security_enabled=True, project_id=290c121e7a5344fea2a32f4e64e74fb4, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=55801, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1197, status=ACTIVE, subnets=['c51e2e1d-7019-4112-9f0b-cec61466e763'], tags=[], 
tenant_id=290c121e7a5344fea2a32f4e64e74fb4, updated_at=2025-12-06T10:16:51Z, vlan_transparent=None, network_id=8fb7fee7-47f3-496e-84a0-2200c47dea55, port_security_enabled=True, project_id=290c121e7a5344fea2a32f4e64e74fb4, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['7248d87f-aba7-4d7e-b680-1fdbc4f1cdd3'], standard_attr_id=1220, status=DOWN, tags=[], tenant_id=290c121e7a5344fea2a32f4e64e74fb4, updated_at=2025-12-06T10:16:52Z on network 8fb7fee7-47f3-496e-84a0-2200c47dea55#033[00m Dec 6 05:16:56 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:16:56.146 263652 INFO neutron.agent.dhcp.agent [None req-68f3686e-892d-4d1d-bf9e-1e34c1151c74 - - - - - -] DHCP configuration for ports {'ecf433c2-2ffc-446b-b4a2-27567af57062'} is completed#033[00m Dec 6 05:16:56 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:16:56.291 263652 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:16:56 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e120 e120: 6 total, 6 up, 6 in Dec 6 05:16:56 localhost dnsmasq[316635]: read /var/lib/neutron/dhcp/8fb7fee7-47f3-496e-84a0-2200c47dea55/addn_hosts - 1 addresses Dec 6 05:16:56 localhost dnsmasq-dhcp[316635]: read /var/lib/neutron/dhcp/8fb7fee7-47f3-496e-84a0-2200c47dea55/host Dec 6 05:16:56 localhost dnsmasq-dhcp[316635]: read /var/lib/neutron/dhcp/8fb7fee7-47f3-496e-84a0-2200c47dea55/opts Dec 6 05:16:56 localhost podman[316668]: 2025-12-06 10:16:56.301231797 +0000 UTC m=+0.062023783 container kill 31508468c3e2cd2720accdb917799d9a6e9d1e3826de33ddb026a829aff69936 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8fb7fee7-47f3-496e-84a0-2200c47dea55, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, 
org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:16:56 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 6 05:16:56 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' Dec 6 05:16:56 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:16:56.443 263652 INFO neutron.agent.dhcp.agent [None req-d4098ffc-31f7-4403-b8a5-1fdda8dfb471 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:16:54Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=74b11fa3-cd87-45bf-856b-6e392660b0b7, ip_allocation=immediate, mac_address=fa:16:3e:23:43:84, name=tempest-AllowedAddressPairTestJSON-1448303374, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:16:50Z, description=, dns_domain=, id=8fb7fee7-47f3-496e-84a0-2200c47dea55, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-AllowedAddressPairTestJSON-test-network-162135958, port_security_enabled=True, project_id=290c121e7a5344fea2a32f4e64e74fb4, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=55801, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1197, status=ACTIVE, subnets=['c51e2e1d-7019-4112-9f0b-cec61466e763'], tags=[], tenant_id=290c121e7a5344fea2a32f4e64e74fb4, updated_at=2025-12-06T10:16:51Z, vlan_transparent=None, network_id=8fb7fee7-47f3-496e-84a0-2200c47dea55, port_security_enabled=True, project_id=290c121e7a5344fea2a32f4e64e74fb4, qos_network_policy_id=None, 
qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['7248d87f-aba7-4d7e-b680-1fdbc4f1cdd3'], standard_attr_id=1223, status=DOWN, tags=[], tenant_id=290c121e7a5344fea2a32f4e64e74fb4, updated_at=2025-12-06T10:16:54Z on network 8fb7fee7-47f3-496e-84a0-2200c47dea55#033[00m Dec 6 05:16:56 localhost neutron_sriov_agent[256690]: 2025-12-06 10:16:56.532 2 INFO neutron.agent.securitygroups_rpc [None req-c7b28b51-59d5-4f0a-ad7d-a932cd0ad09d 183487bfea4148c8bd274489b01ac583 290c121e7a5344fea2a32f4e64e74fb4 - - default default] Security group member updated ['7248d87f-aba7-4d7e-b680-1fdbc4f1cdd3']#033[00m Dec 6 05:16:56 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:16:56.565 263652 INFO neutron.agent.dhcp.agent [None req-3a0f99e8-5145-49e3-8652-6b9d2dd414c1 - - - - - -] DHCP configuration for ports {'a70a96bc-9485-4b3a-ad28-6f527d77539d'} is completed#033[00m Dec 6 05:16:56 localhost podman[316707]: 2025-12-06 10:16:56.6716062 +0000 UTC m=+0.047574294 container kill 31508468c3e2cd2720accdb917799d9a6e9d1e3826de33ddb026a829aff69936 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8fb7fee7-47f3-496e-84a0-2200c47dea55, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Dec 6 05:16:56 localhost dnsmasq[316635]: read /var/lib/neutron/dhcp/8fb7fee7-47f3-496e-84a0-2200c47dea55/addn_hosts - 2 addresses Dec 6 05:16:56 localhost dnsmasq-dhcp[316635]: read /var/lib/neutron/dhcp/8fb7fee7-47f3-496e-84a0-2200c47dea55/host Dec 6 05:16:56 localhost dnsmasq-dhcp[316635]: read /var/lib/neutron/dhcp/8fb7fee7-47f3-496e-84a0-2200c47dea55/opts Dec 6 05:16:56 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:16:56.892 263652 
INFO neutron.agent.dhcp.agent [None req-b88dea56-c5e2-4d82-aff8-e854ae30bd99 - - - - - -] DHCP configuration for ports {'74b11fa3-cd87-45bf-856b-6e392660b0b7'} is completed#033[00m Dec 6 05:16:57 localhost dnsmasq[316635]: read /var/lib/neutron/dhcp/8fb7fee7-47f3-496e-84a0-2200c47dea55/addn_hosts - 1 addresses Dec 6 05:16:57 localhost dnsmasq-dhcp[316635]: read /var/lib/neutron/dhcp/8fb7fee7-47f3-496e-84a0-2200c47dea55/host Dec 6 05:16:57 localhost dnsmasq-dhcp[316635]: read /var/lib/neutron/dhcp/8fb7fee7-47f3-496e-84a0-2200c47dea55/opts Dec 6 05:16:57 localhost podman[316746]: 2025-12-06 10:16:57.070831689 +0000 UTC m=+0.067004693 container kill 31508468c3e2cd2720accdb917799d9a6e9d1e3826de33ddb026a829aff69936 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8fb7fee7-47f3-496e-84a0-2200c47dea55, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Dec 6 05:16:57 localhost neutron_sriov_agent[256690]: 2025-12-06 10:16:57.215 2 INFO neutron.agent.securitygroups_rpc [None req-dca415e6-2c01-4081-a144-3151bae67c51 183487bfea4148c8bd274489b01ac583 290c121e7a5344fea2a32f4e64e74fb4 - - default default] Security group member updated ['7248d87f-aba7-4d7e-b680-1fdbc4f1cdd3']#033[00m Dec 6 05:16:57 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:16:57.259 263652 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:16:56Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], 
id=488fdf8a-7f71-4571-b31d-c641a9b76ebd, ip_allocation=immediate, mac_address=fa:16:3e:d4:d4:d9, name=tempest-AllowedAddressPairTestJSON-1094459018, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:16:50Z, description=, dns_domain=, id=8fb7fee7-47f3-496e-84a0-2200c47dea55, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-AllowedAddressPairTestJSON-test-network-162135958, port_security_enabled=True, project_id=290c121e7a5344fea2a32f4e64e74fb4, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=55801, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1197, status=ACTIVE, subnets=['c51e2e1d-7019-4112-9f0b-cec61466e763'], tags=[], tenant_id=290c121e7a5344fea2a32f4e64e74fb4, updated_at=2025-12-06T10:16:51Z, vlan_transparent=None, network_id=8fb7fee7-47f3-496e-84a0-2200c47dea55, port_security_enabled=True, project_id=290c121e7a5344fea2a32f4e64e74fb4, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['7248d87f-aba7-4d7e-b680-1fdbc4f1cdd3'], standard_attr_id=1227, status=DOWN, tags=[], tenant_id=290c121e7a5344fea2a32f4e64e74fb4, updated_at=2025-12-06T10:16:57Z on network 8fb7fee7-47f3-496e-84a0-2200c47dea55#033[00m Dec 6 05:16:57 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' Dec 6 05:16:57 localhost nova_compute[282193]: 2025-12-06 10:16:57.439 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:16:57 localhost dnsmasq[316635]: read /var/lib/neutron/dhcp/8fb7fee7-47f3-496e-84a0-2200c47dea55/addn_hosts - 2 addresses Dec 6 05:16:57 localhost dnsmasq-dhcp[316635]: read /var/lib/neutron/dhcp/8fb7fee7-47f3-496e-84a0-2200c47dea55/host Dec 6 05:16:57 localhost dnsmasq-dhcp[316635]: read 
/var/lib/neutron/dhcp/8fb7fee7-47f3-496e-84a0-2200c47dea55/opts Dec 6 05:16:57 localhost podman[316784]: 2025-12-06 10:16:57.519070805 +0000 UTC m=+0.063779435 container kill 31508468c3e2cd2720accdb917799d9a6e9d1e3826de33ddb026a829aff69936 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8fb7fee7-47f3-496e-84a0-2200c47dea55, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:16:57 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:16:57.749 263652 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:16:57Z, description=, device_id=f3eed2e0-6009-48cb-b29a-fc71e49972a4, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=4b7d8e10-916e-45ff-a07c-b4f9174b6f3d, ip_allocation=immediate, mac_address=fa:16:3e:ef:32:22, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T08:43:55Z, description=, dns_domain=, id=8e238f59-5792-4ff4-95af-f993c8e9e14f, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=3d603431c0bb4967bafc7a0aa6108bfe, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['be7ff830-a3e2-49cd-a05f-63bc528d6d1c'], tags=[], 
tenant_id=3d603431c0bb4967bafc7a0aa6108bfe, updated_at=2025-12-06T08:44:01Z, vlan_transparent=None, network_id=8e238f59-5792-4ff4-95af-f993c8e9e14f, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1233, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-06T10:16:57Z on network 8e238f59-5792-4ff4-95af-f993c8e9e14f#033[00m Dec 6 05:16:57 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:16:57.755 263652 INFO neutron.agent.dhcp.agent [None req-766521d5-53da-40c0-99ee-ad5c2892c84b - - - - - -] DHCP configuration for ports {'488fdf8a-7f71-4571-b31d-c641a9b76ebd'} is completed#033[00m Dec 6 05:16:57 localhost systemd[1]: tmp-crun.qGMtZj.mount: Deactivated successfully. Dec 6 05:16:57 localhost dnsmasq[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 2 addresses Dec 6 05:16:57 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:16:57 localhost podman[316824]: 2025-12-06 10:16:57.991185325 +0000 UTC m=+0.066672633 container kill dd70518caed84fde347015325d6b35481123abc4a39df99f8bc9422c2468322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125) Dec 6 05:16:57 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:16:58 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e121 e121: 6 total, 6 up, 6 in Dec 6 05:16:58 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e121 
_set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:16:58 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:16:58.310 263652 INFO neutron.agent.dhcp.agent [None req-1a9a464f-1034-422c-b4ec-9f0fdda03eea - - - - - -] DHCP configuration for ports {'4b7d8e10-916e-45ff-a07c-b4f9174b6f3d'} is completed#033[00m Dec 6 05:16:58 localhost neutron_sriov_agent[256690]: 2025-12-06 10:16:58.381 2 INFO neutron.agent.securitygroups_rpc [None req-97e21b62-44c1-4fc5-958c-bcc0268c52d3 183487bfea4148c8bd274489b01ac583 290c121e7a5344fea2a32f4e64e74fb4 - - default default] Security group member updated ['7248d87f-aba7-4d7e-b680-1fdbc4f1cdd3']#033[00m Dec 6 05:16:58 localhost dnsmasq[316635]: read /var/lib/neutron/dhcp/8fb7fee7-47f3-496e-84a0-2200c47dea55/addn_hosts - 1 addresses Dec 6 05:16:58 localhost dnsmasq-dhcp[316635]: read /var/lib/neutron/dhcp/8fb7fee7-47f3-496e-84a0-2200c47dea55/host Dec 6 05:16:58 localhost dnsmasq-dhcp[316635]: read /var/lib/neutron/dhcp/8fb7fee7-47f3-496e-84a0-2200c47dea55/opts Dec 6 05:16:58 localhost podman[316860]: 2025-12-06 10:16:58.619343308 +0000 UTC m=+0.065745695 container kill 31508468c3e2cd2720accdb917799d9a6e9d1e3826de33ddb026a829aff69936 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8fb7fee7-47f3-496e-84a0-2200c47dea55, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125) Dec 6 05:16:58 localhost nova_compute[282193]: 2025-12-06 10:16:58.838 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:16:59 localhost neutron_sriov_agent[256690]: 2025-12-06 
10:16:59.203 2 INFO neutron.agent.securitygroups_rpc [None req-a68c2924-ed4d-4682-a596-626401a139a3 183487bfea4148c8bd274489b01ac583 290c121e7a5344fea2a32f4e64e74fb4 - - default default] Security group member updated ['7248d87f-aba7-4d7e-b680-1fdbc4f1cdd3']#033[00m Dec 6 05:16:59 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:16:59.232 263652 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:16:58Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=d1855335-d570-45f4-bc5e-1b4a2f0ee869, ip_allocation=immediate, mac_address=fa:16:3e:43:c9:d2, name=tempest-AllowedAddressPairTestJSON-1276619202, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:16:50Z, description=, dns_domain=, id=8fb7fee7-47f3-496e-84a0-2200c47dea55, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-AllowedAddressPairTestJSON-test-network-162135958, port_security_enabled=True, project_id=290c121e7a5344fea2a32f4e64e74fb4, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=55801, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1197, status=ACTIVE, subnets=['c51e2e1d-7019-4112-9f0b-cec61466e763'], tags=[], tenant_id=290c121e7a5344fea2a32f4e64e74fb4, updated_at=2025-12-06T10:16:51Z, vlan_transparent=None, network_id=8fb7fee7-47f3-496e-84a0-2200c47dea55, port_security_enabled=True, project_id=290c121e7a5344fea2a32f4e64e74fb4, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['7248d87f-aba7-4d7e-b680-1fdbc4f1cdd3'], standard_attr_id=1244, status=DOWN, tags=[], tenant_id=290c121e7a5344fea2a32f4e64e74fb4, 
updated_at=2025-12-06T10:16:59Z on network 8fb7fee7-47f3-496e-84a0-2200c47dea55#033[00m Dec 6 05:16:59 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e122 e122: 6 total, 6 up, 6 in Dec 6 05:16:59 localhost dnsmasq[316635]: read /var/lib/neutron/dhcp/8fb7fee7-47f3-496e-84a0-2200c47dea55/addn_hosts - 2 addresses Dec 6 05:16:59 localhost dnsmasq-dhcp[316635]: read /var/lib/neutron/dhcp/8fb7fee7-47f3-496e-84a0-2200c47dea55/host Dec 6 05:16:59 localhost dnsmasq-dhcp[316635]: read /var/lib/neutron/dhcp/8fb7fee7-47f3-496e-84a0-2200c47dea55/opts Dec 6 05:16:59 localhost podman[316899]: 2025-12-06 10:16:59.457843881 +0000 UTC m=+0.066000972 container kill 31508468c3e2cd2720accdb917799d9a6e9d1e3826de33ddb026a829aff69936 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8fb7fee7-47f3-496e-84a0-2200c47dea55, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Dec 6 05:16:59 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:16:59.658 263652 INFO neutron.agent.dhcp.agent [None req-bd56f61b-821a-47a5-b328-cbbe45697b34 - - - - - -] DHCP configuration for ports {'d1855335-d570-45f4-bc5e-1b4a2f0ee869'} is completed#033[00m Dec 6 05:17:00 localhost neutron_sriov_agent[256690]: 2025-12-06 10:17:00.351 2 INFO neutron.agent.securitygroups_rpc [None req-1e316261-d213-40cc-b644-592e2d6242e7 183487bfea4148c8bd274489b01ac583 290c121e7a5344fea2a32f4e64e74fb4 - - default default] Security group member updated ['7248d87f-aba7-4d7e-b680-1fdbc4f1cdd3']#033[00m Dec 6 05:17:00 localhost dnsmasq[316635]: read /var/lib/neutron/dhcp/8fb7fee7-47f3-496e-84a0-2200c47dea55/addn_hosts - 1 addresses Dec 6 05:17:00 localhost podman[316937]: 2025-12-06 
10:17:00.62028722 +0000 UTC m=+0.052126911 container kill 31508468c3e2cd2720accdb917799d9a6e9d1e3826de33ddb026a829aff69936 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8fb7fee7-47f3-496e-84a0-2200c47dea55, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3) Dec 6 05:17:00 localhost dnsmasq-dhcp[316635]: read /var/lib/neutron/dhcp/8fb7fee7-47f3-496e-84a0-2200c47dea55/host Dec 6 05:17:00 localhost dnsmasq-dhcp[316635]: read /var/lib/neutron/dhcp/8fb7fee7-47f3-496e-84a0-2200c47dea55/opts Dec 6 05:17:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999. Dec 6 05:17:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941. 
Dec 6 05:17:00 localhost podman[316958]: 2025-12-06 10:17:00.914058131 +0000 UTC m=+0.074119649 container health_status b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Dec 6 05:17:00 localhost podman[316958]: 2025-12-06 10:17:00.928349274 +0000 UTC m=+0.088410762 container exec_died b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 6 05:17:00 localhost systemd[1]: b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941.service: Deactivated successfully. Dec 6 05:17:00 localhost podman[316957]: 2025-12-06 10:17:00.974886796 +0000 UTC m=+0.137212413 container health_status 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent) Dec 6 05:17:01 localhost podman[316957]: 2025-12-06 10:17:01.003900476 +0000 UTC m=+0.166226053 container exec_died 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Dec 6 05:17:01 localhost systemd[1]: 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.service: Deactivated successfully. Dec 6 05:17:01 localhost neutron_sriov_agent[256690]: 2025-12-06 10:17:01.030 2 INFO neutron.agent.securitygroups_rpc [None req-af1607ac-cf21-43e9-9dc2-41d6c40546b7 183487bfea4148c8bd274489b01ac583 290c121e7a5344fea2a32f4e64e74fb4 - - default default] Security group member updated ['7248d87f-aba7-4d7e-b680-1fdbc4f1cdd3']#033[00m Dec 6 05:17:01 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:17:01.062 263652 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:17:00Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=63b78d82-3c66-4f23-abda-cb052c5a7880, ip_allocation=immediate, mac_address=fa:16:3e:7d:59:5d, name=tempest-AllowedAddressPairTestJSON-1183397262, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:16:50Z, description=, dns_domain=, id=8fb7fee7-47f3-496e-84a0-2200c47dea55, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-AllowedAddressPairTestJSON-test-network-162135958, port_security_enabled=True, project_id=290c121e7a5344fea2a32f4e64e74fb4, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=55801, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1197, status=ACTIVE, subnets=['c51e2e1d-7019-4112-9f0b-cec61466e763'], tags=[], tenant_id=290c121e7a5344fea2a32f4e64e74fb4, 
updated_at=2025-12-06T10:16:51Z, vlan_transparent=None, network_id=8fb7fee7-47f3-496e-84a0-2200c47dea55, port_security_enabled=True, project_id=290c121e7a5344fea2a32f4e64e74fb4, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['7248d87f-aba7-4d7e-b680-1fdbc4f1cdd3'], standard_attr_id=1246, status=DOWN, tags=[], tenant_id=290c121e7a5344fea2a32f4e64e74fb4, updated_at=2025-12-06T10:17:00Z on network 8fb7fee7-47f3-496e-84a0-2200c47dea55#033[00m Dec 6 05:17:01 localhost podman[317013]: 2025-12-06 10:17:01.265596244 +0000 UTC m=+0.035081535 container kill 31508468c3e2cd2720accdb917799d9a6e9d1e3826de33ddb026a829aff69936 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8fb7fee7-47f3-496e-84a0-2200c47dea55, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true) Dec 6 05:17:01 localhost dnsmasq[316635]: read /var/lib/neutron/dhcp/8fb7fee7-47f3-496e-84a0-2200c47dea55/addn_hosts - 2 addresses Dec 6 05:17:01 localhost dnsmasq-dhcp[316635]: read /var/lib/neutron/dhcp/8fb7fee7-47f3-496e-84a0-2200c47dea55/host Dec 6 05:17:01 localhost dnsmasq-dhcp[316635]: read /var/lib/neutron/dhcp/8fb7fee7-47f3-496e-84a0-2200c47dea55/opts Dec 6 05:17:02 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e123 e123: 6 total, 6 up, 6 in Dec 6 05:17:02 localhost nova_compute[282193]: 2025-12-06 10:17:02.479 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:02 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:17:02.936 263652 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port 
admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:17:00Z, description=, device_id=0df74357-660e-4fa2-9159-46e39f559540, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=1b9f45f8-f0b0-429e-9cca-a3658963a54f, ip_allocation=immediate, mac_address=fa:16:3e:50:8d:54, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T08:43:55Z, description=, dns_domain=, id=8e238f59-5792-4ff4-95af-f993c8e9e14f, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=3d603431c0bb4967bafc7a0aa6108bfe, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['be7ff830-a3e2-49cd-a05f-63bc528d6d1c'], tags=[], tenant_id=3d603431c0bb4967bafc7a0aa6108bfe, updated_at=2025-12-06T08:44:01Z, vlan_transparent=None, network_id=8e238f59-5792-4ff4-95af-f993c8e9e14f, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1247, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-06T10:17:00Z on network 8e238f59-5792-4ff4-95af-f993c8e9e14f#033[00m Dec 6 05:17:03 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:17:03.026 263652 INFO neutron.agent.dhcp.agent [None req-ed2dbc9c-136b-4261-a697-cdf6831354c8 - - - - - -] DHCP configuration for ports {'63b78d82-3c66-4f23-abda-cb052c5a7880'} is completed#033[00m Dec 6 05:17:03 localhost ceph-mon[298582]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Dec 6 05:17:03 localhost ceph-mon[298582]: rocksdb: 
[db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.0 total, 600.0 interval#012Cumulative writes: 2522 writes, 23K keys, 2522 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.08 MB/s#012Cumulative WAL: 2522 writes, 2522 syncs, 1.00 writes per sync, written: 0.05 GB, 0.08 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2522 writes, 23K keys, 2522 commit groups, 1.0 writes per commit group, ingest: 46.29 MB, 0.08 MB/s#012Interval WAL: 2522 writes, 2522 syncs, 1.00 writes per sync, written: 0.05 GB, 0.08 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 169.0 0.19 0.06 9 0.021 0 0 0.0 0.0#012 L6 1/0 18.39 MB 0.0 0.2 0.0 0.1 0.1 0.0 0.0 4.4 174.5 159.1 0.88 0.38 8 0.111 96K 3989 0.0 0.0#012 Sum 1/0 18.39 MB 0.0 0.2 0.0 0.1 0.2 0.0 0.0 5.4 143.7 160.8 1.07 0.44 17 0.063 96K 3989 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.2 0.0 0.1 0.2 0.0 0.0 5.4 144.1 161.2 1.07 0.44 16 0.067 96K 3989 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low 0/0 0.00 KB 0.0 0.2 0.0 0.1 0.1 0.0 0.0 0.0 174.5 159.1 0.88 0.38 8 0.111 96K 3989 0.0 0.0#012High 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 
0.0 0.0 0.0 0.0 171.2 0.19 0.06 8 0.023 0 0 0.0 0.0#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.7 0.00 0.00 1 0.002 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.031, interval 0.031#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.17 GB write, 0.29 MB/s write, 0.15 GB read, 0.26 MB/s read, 1.1 seconds#012Interval compaction: 0.17 GB write, 0.29 MB/s write, 0.15 GB read, 0.26 MB/s read, 1.1 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55b608f29350#2 capacity: 308.00 MB usage: 15.11 MB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 0.000113 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(762,14.45 MB,4.69297%) FilterBlock(17,291.61 KB,0.0924593%) IndexBlock(17,382.36 KB,0.121233%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] ** Dec 6 05:17:03 localhost dnsmasq[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 3 addresses Dec 6 05:17:03 localhost podman[317050]: 2025-12-06 10:17:03.191284913 +0000 UTC m=+0.062899709 container kill dd70518caed84fde347015325d6b35481123abc4a39df99f8bc9422c2468322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, 
org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:17:03 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:17:03 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:17:03 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:17:03 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:17:03.471 263652 INFO neutron.agent.dhcp.agent [None req-691db5f4-e8cc-4ab0-a2e4-0a707959f49e - - - - - -] DHCP configuration for ports {'1b9f45f8-f0b0-429e-9cca-a3658963a54f'} is completed#033[00m Dec 6 05:17:03 localhost nova_compute[282193]: 2025-12-06 10:17:03.841 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:04 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e124 e124: 6 total, 6 up, 6 in Dec 6 05:17:04 localhost neutron_sriov_agent[256690]: 2025-12-06 10:17:04.261 2 INFO neutron.agent.securitygroups_rpc [None req-585d973c-6716-4802-939d-774d36a541bf 183487bfea4148c8bd274489b01ac583 290c121e7a5344fea2a32f4e64e74fb4 - - default default] Security group member updated ['7248d87f-aba7-4d7e-b680-1fdbc4f1cdd3']#033[00m Dec 6 05:17:04 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:17:04.410 263652 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:17:03Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], 
id=620b3ac0-01e9-4c71-8ddf-fdc64ae29c4e, ip_allocation=immediate, mac_address=fa:16:3e:3a:76:0f, name=tempest-AllowedAddressPairTestJSON-2140749850, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:16:50Z, description=, dns_domain=, id=8fb7fee7-47f3-496e-84a0-2200c47dea55, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-AllowedAddressPairTestJSON-test-network-162135958, port_security_enabled=True, project_id=290c121e7a5344fea2a32f4e64e74fb4, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=55801, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1197, status=ACTIVE, subnets=['c51e2e1d-7019-4112-9f0b-cec61466e763'], tags=[], tenant_id=290c121e7a5344fea2a32f4e64e74fb4, updated_at=2025-12-06T10:16:51Z, vlan_transparent=None, network_id=8fb7fee7-47f3-496e-84a0-2200c47dea55, port_security_enabled=True, project_id=290c121e7a5344fea2a32f4e64e74fb4, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['7248d87f-aba7-4d7e-b680-1fdbc4f1cdd3'], standard_attr_id=1249, status=DOWN, tags=[], tenant_id=290c121e7a5344fea2a32f4e64e74fb4, updated_at=2025-12-06T10:17:03Z on network 8fb7fee7-47f3-496e-84a0-2200c47dea55#033[00m Dec 6 05:17:04 localhost dnsmasq[316635]: read /var/lib/neutron/dhcp/8fb7fee7-47f3-496e-84a0-2200c47dea55/addn_hosts - 3 addresses Dec 6 05:17:04 localhost dnsmasq-dhcp[316635]: read /var/lib/neutron/dhcp/8fb7fee7-47f3-496e-84a0-2200c47dea55/host Dec 6 05:17:04 localhost dnsmasq-dhcp[316635]: read /var/lib/neutron/dhcp/8fb7fee7-47f3-496e-84a0-2200c47dea55/opts Dec 6 05:17:04 localhost podman[317089]: 2025-12-06 10:17:04.620642518 +0000 UTC m=+0.060584208 container kill 31508468c3e2cd2720accdb917799d9a6e9d1e3826de33ddb026a829aff69936 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, 
name=neutron-dnsmasq-qdhcp-8fb7fee7-47f3-496e-84a0-2200c47dea55, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125) Dec 6 05:17:04 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:17:04.994 263652 INFO neutron.agent.dhcp.agent [None req-f7b443e5-dde0-4f2b-a20b-093fe62f1547 - - - - - -] DHCP configuration for ports {'620b3ac0-01e9-4c71-8ddf-fdc64ae29c4e'} is completed#033[00m Dec 6 05:17:05 localhost nova_compute[282193]: 2025-12-06 10:17:05.331 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:06 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e125 e125: 6 total, 6 up, 6 in Dec 6 05:17:06 localhost neutron_sriov_agent[256690]: 2025-12-06 10:17:06.955 2 INFO neutron.agent.securitygroups_rpc [None req-0061dfd5-bb12-495d-9673-65d6ef0bbdf6 183487bfea4148c8bd274489b01ac583 290c121e7a5344fea2a32f4e64e74fb4 - - default default] Security group member updated ['7248d87f-aba7-4d7e-b680-1fdbc4f1cdd3']#033[00m Dec 6 05:17:06 localhost neutron_sriov_agent[256690]: 2025-12-06 10:17:06.960 2 INFO neutron.agent.securitygroups_rpc [None req-113b5acf-416d-4ae9-b224-76f1b565f762 7dcd2b11aeb4499894c7ac7c29cb6997 d6a02136413f4ad3ac51d2c4ffdad3d4 - - default default] Security group member updated ['58296f43-3702-412f-8387-07510507ed41']#033[00m Dec 6 05:17:07 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e126 e126: 6 total, 6 up, 6 in Dec 6 05:17:07 localhost dnsmasq[316635]: read /var/lib/neutron/dhcp/8fb7fee7-47f3-496e-84a0-2200c47dea55/addn_hosts - 2 addresses Dec 6 05:17:07 localhost dnsmasq-dhcp[316635]: read /var/lib/neutron/dhcp/8fb7fee7-47f3-496e-84a0-2200c47dea55/host Dec 6 
05:17:07 localhost dnsmasq-dhcp[316635]: read /var/lib/neutron/dhcp/8fb7fee7-47f3-496e-84a0-2200c47dea55/opts Dec 6 05:17:07 localhost podman[317128]: 2025-12-06 10:17:07.185707981 +0000 UTC m=+0.045188222 container kill 31508468c3e2cd2720accdb917799d9a6e9d1e3826de33ddb026a829aff69936 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8fb7fee7-47f3-496e-84a0-2200c47dea55, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS) Dec 6 05:17:07 localhost nova_compute[282193]: 2025-12-06 10:17:07.483 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e. Dec 6 05:17:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094. 
Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.917 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'name': 'test', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'np0005548789.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'hostId': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228 Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.918 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.927 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.931 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '399a6c3c-4bfc-462b-aae9-35cbb645f894', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:17:07.919015', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': 'b863a9d4-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12646.168387908, 'message_signature': 'f2111306e1257bfd985fc463689344c44d8b1179b564792c18f10b80f1be0957'}]}, 'timestamp': '2025-12-06 10:17:07.928298', '_unique_id': '48b1f1bbd37e42c19c9cdf5033786768'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.931 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:17:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.931 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.931 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.931 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.931 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.931 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.931 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.931 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.931 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.931 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.931 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.931 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.931 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.931 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.931 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.931 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.931 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.931 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.931 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.931 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.931 12 ERROR oslo_messaging.notify.messaging Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.931 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.931 12 ERROR oslo_messaging.notify.messaging Dec 6 05:17:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.931 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.931 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.931 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.931 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.931 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.931 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.931 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.931 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.931 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.931 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 
05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.931 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.931 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.931 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.931 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.931 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.931 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.931 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.931 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.931 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.931 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:17:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.931 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.931 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.931 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.931 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.931 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.931 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.931 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.931 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.931 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.931 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.931 12 ERROR oslo_messaging.notify.messaging Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:17:07.932 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters Dec 6 05:17:07 localhost podman[317151]: 2025-12-06 10:17:07.937119592 +0000 UTC m=+0.089827266 container health_status bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Dec 6 
05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.951 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.951 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.953 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '752bc1e0-382e-457a-a222-b0dead1e437e', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:17:07.932683', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 
512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'b867488c-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12646.182092584, 'message_signature': 'd33a4c449948e38f0debd6a1b8139ff90890a0e806e2597f03c2479a7df0bc83'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:17:07.932683', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'b8675e58-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12646.182092584, 'message_signature': '7847419181d271e6485d5a081bbb934e8d42be6b0446511dd1c0e9a0ad987edb'}]}, 'timestamp': '2025-12-06 10:17:07.952412', '_unique_id': '4ed5ac9b46f9446b87d49c1d417f3314'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.953 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.953 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.953 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.953 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.953 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.953 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.953 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in 
establish_connection Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.953 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.953 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.953 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.953 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.953 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.953 12 ERROR oslo_messaging.notify.messaging Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.953 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.953 12 ERROR oslo_messaging.notify.messaging Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.953 12 ERROR oslo_messaging.notify.messaging 
Traceback (most recent call last): Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.953 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.953 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.953 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.953 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.953 12 ERROR 
oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.953 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.953 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.953 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.953 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.953 12 ERROR 
oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.953 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.953 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.953 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.953 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.953 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.953 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.953 12 ERROR oslo_messaging.notify.messaging Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.955 12 DEBUG ceilometer.polling.manager [-] Skip 
pollster network.incoming.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.955 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.955 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.incoming.bytes volume: 6809 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.957 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '679ffbb2-867d-4e07-a0a4-ab9ace7a4e23', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 6809, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:17:07.955464', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': 
None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': 'b867e7d8-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12646.168387908, 'message_signature': 'c52588fa6eff569eba2ecedb1dcbcedd508f2b8e816c160fd95ed92117fb87db'}]}, 'timestamp': '2025-12-06 10:17:07.956036', '_unique_id': 'a36b939187464f53b69bffe5547f54e3'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.957 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.957 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.957 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.957 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.957 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.957 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.957 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.957 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.957 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.957 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:17:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.957 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.957 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.957 12 ERROR oslo_messaging.notify.messaging Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.957 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.957 12 ERROR oslo_messaging.notify.messaging Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.957 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.957 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.957 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:17:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.957 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.957 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.957 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.957 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.957 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:17:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.957 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.957 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.957 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.957 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.957 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.957 12 ERROR oslo_messaging.notify.messaging 
self.gen.throw(typ, value, traceback) Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.957 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.957 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.957 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.957 12 ERROR oslo_messaging.notify.messaging Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.958 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.958 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.960 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'badc2661-f82d-4ded-89ca-36798cf75c1b', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:17:07.958510', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': 'b8685eca-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12646.168387908, 'message_signature': '2667be29f62006f16ef9812165093074a37d00d293e88e7173b5e3110670fc4f'}]}, 'timestamp': '2025-12-06 10:17:07.959143', '_unique_id': '9cf0564cdbe745a5b2e8af45c212b7b5'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.960 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:17:07.960 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.960 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.960 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.960 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.960 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.960 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.960 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.960 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.960 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.960 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.960 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.960 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.960 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.960 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.960 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.960 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.960 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.960 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.960 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.960 12 ERROR oslo_messaging.notify.messaging Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.960 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.960 12 ERROR oslo_messaging.notify.messaging Dec 6 05:17:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.960 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.960 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.960 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.960 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.960 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.960 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.960 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.960 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.960 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.960 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 
05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.960 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.960 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.960 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.960 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.960 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.960 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.960 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.960 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.960 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.960 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:17:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.960 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.960 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.960 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.960 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.960 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.960 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.960 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.960 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.960 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.960 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.960 12 ERROR oslo_messaging.notify.messaging Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:17:07.961 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.961 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.963 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '6bea14d3-a872-4b2e-9be3-fc9c3acaae77', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:17:07.961521', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': 
{'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': 'b868d5d0-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12646.168387908, 'message_signature': '8b3953280b7d197f25d9b9c11ba14200f1b7d381f3f81382c86d181bd718600d'}]}, 'timestamp': '2025-12-06 10:17:07.962049', '_unique_id': '595cc331881b4dbfae6cf9595747b2cd'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.963 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.963 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.963 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.963 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.963 12 ERROR 
oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.963 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.963 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.963 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.963 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.963 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 
10:17:07.963 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.963 12 ERROR oslo_messaging.notify.messaging Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.963 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.963 12 ERROR oslo_messaging.notify.messaging Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.963 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.963 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.963 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.963 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 
10:17:07.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.963 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.963 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.963 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.963 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.963 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:17:07.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.963 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.963 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.963 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.963 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.963 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 
450, in _reraise_as_library_errors Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.963 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.963 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.963 12 ERROR oslo_messaging.notify.messaging Dec 6 05:17:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:07.964 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters Dec 6 05:17:07 localhost podman[317150]: 2025-12-06 10:17:07.991466851 +0000 UTC m=+0.147842296 container health_status 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, config_id=edpm, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, release=1755695350, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64) Dec 6 05:17:08 localhost podman[317151]: 2025-12-06 10:17:08.00035062 +0000 UTC m=+0.153058274 container exec_died bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, 
io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.001 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.requests volume: 47 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.002 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.004 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'b687cdc1-675c-4534-a2ac-1b30773db161', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 47, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:17:07.964442', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 
'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'b86eead8-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12646.213841076, 'message_signature': '1e3eee8ac1b2fc62ec18f3ede434fd22e1b1e52fbfd9b904eef0e1b4d67eba2f'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:17:07.964442', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'b86f04aa-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12646.213841076, 'message_signature': '4b6996662ee2b927e9831c21b3aba7f75a656ac1a311f89d7d7165889c6b6c66'}]}, 'timestamp': '2025-12-06 10:17:08.002643', '_unique_id': 'edb1af72fbc44923800b4d15893bb603'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.004 12 
ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.004 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.004 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.004 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.004 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.004 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:17:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.004 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.004 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.004 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.004 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.004 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.004 12 ERROR oslo_messaging.notify.messaging Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.004 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 
10:17:08.004 12 ERROR oslo_messaging.notify.messaging Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.004 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.004 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.004 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.004 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.004 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.004 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.004 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.004 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.004 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.004 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.004 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.004 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.004 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.004 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.004 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.004 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.004 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.004 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:17:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.004 12 ERROR oslo_messaging.notify.messaging Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.006 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.006 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.007 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.008 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '4efed2de-41d7-41fc-8279-d8147f880f74', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:17:08.006446', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'b86fb350-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12646.182092584, 'message_signature': '45a05f9b6b448f89b45dbe74baef80864e6abf3e4b38f5f53672a80790baaacc'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:17:08.006446', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 
'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'b86fc840-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12646.182092584, 'message_signature': '8e5f70e76e79b217c85da8711b36c0851d87bc0462ea1fd45ddb2d04b858a8ee'}]}, 'timestamp': '2025-12-06 10:17:08.007544', '_unique_id': '8fb82c3a411047f98ac9944548ecec6b'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.008 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.008 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.008 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.008 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.008 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.008 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.008 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.008 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.008 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.008 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.008 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.008 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.008 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.008 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.008 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.008 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:17:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.008 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.008 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.008 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.008 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.008 12 ERROR oslo_messaging.notify.messaging Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.008 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.008 12 ERROR oslo_messaging.notify.messaging Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.008 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.008 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.008 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.008 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.008 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.008 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.008 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.008 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.008 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.008 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.008 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.008 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.008 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.008 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:17:08.008 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.008 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.008 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.008 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.008 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.008 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.008 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.008 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.008 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.008 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.008 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.008 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.008 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.008 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.008 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.008 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.008 12 ERROR oslo_messaging.notify.messaging Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.009 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.009 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.incoming.packets volume: 60 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:17:08 localhost podman[317150]: 2025-12-06 10:17:08.01224131 +0000 UTC m=+0.168616785 container exec_died 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, container_name=openstack_network_exporter, description=The Universal Base Image 
Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, build-date=2025-08-20T13:12:41, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.openshift.expose-services=, distribution-scope=public, io.openshift.tags=minimal rhel9, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', 
'/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., name=ubi9-minimal, vcs-type=git) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.011 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '6585df6f-e727-40eb-9177-6fab047affbe', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 60, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:17:08.009923', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': 'b8703744-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12646.168387908, 'message_signature': 
'73ae9d3838e28cc4fe2aeeda6295d78227c5bfd98c1d30ac507bda9858154a2c'}]}, 'timestamp': '2025-12-06 10:17:08.010446', '_unique_id': 'ddbd988f246c45af8201f966767cac9d'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.011 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.011 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.011 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.011 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.011 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.011 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.011 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.011 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.011 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.011 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.011 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.011 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.011 12 ERROR 
oslo_messaging.notify.messaging Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.011 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.011 12 ERROR oslo_messaging.notify.messaging Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.011 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.011 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.011 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.011 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:17:08.011 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.011 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.011 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.011 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.011 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.011 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.011 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.011 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.011 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.011 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.011 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.011 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.011 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.012 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.012 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.outgoing.packets volume: 114 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.014 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '40938c5d-8545-4f6f-af1a-abdf4a61a548', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 114, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:17:08.012664', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': 'b870a2c4-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12646.168387908, 'message_signature': '517915c279a58cc59a66e566f8db149b3abd184c98b4f8797f445bb961c95cb1'}]}, 'timestamp': '2025-12-06 10:17:08.013174', '_unique_id': 'cdf3492e83b24fb5bbb1e43daf063e89'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.014 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.014 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.014 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.014 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.014 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.014 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.014 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.014 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.014 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.014 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.014 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.014 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.014 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.014 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.014 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.014 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.014 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.014 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.014 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.014 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.014 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.014 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.014 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.014 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.014 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.014 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.014 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.014 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.014 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.014 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.014 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.015 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.015 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.015 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.017 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'fdc30977-586a-4a27-8d1e-597d7b222ff1', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:17:08.015489', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'b8710f84-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12646.182092584, 'message_signature': 'a53046bfbb22905a3257d25290fb3f97df0e4bb905ca4e16b992b6209be57d15'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:17:08.015489', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'b87120c8-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12646.182092584, 'message_signature': 'e624872abf6348c1b5c0193d8647126b13e957b0c8f1229a5006c8424463133e'}]}, 'timestamp': '2025-12-06 10:17:08.016644', '_unique_id': 'c29c14eb66624af8a17a96867e344af4'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.017 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.017 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.017 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.017 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.017 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.017 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.017 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.017 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.017 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.017 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.017 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.017 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.017 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.017 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.017 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.017 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.017 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.017 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.017 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.017 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.017 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.017 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.017 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.017 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.017 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.017 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.017 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.017 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.017 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.017 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.017 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.018 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.018 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.bytes volume: 35560448 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.019 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.bytes volume: 2154496 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.020 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '2190a60d-af47-4316-90bb-e85727e16afa', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 35560448, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:17:08.018846', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'b87193d2-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12646.213841076, 'message_signature': '21475f39b2cac19b8e44a651b6ba45160b38f4907e58e4f415db3a983d1980f7'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 2154496, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:17:08.018846', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'b871a412-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12646.213841076, 'message_signature': 'f318dd026494c5dedd2e3d533d9786bc433faab2926ea77d5ccd02ce7ccac063'}]}, 'timestamp': '2025-12-06 10:17:08.019730', '_unique_id': '55e8084b46034cff8c143ab39c10530f'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.020 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.020 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.020 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.020 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.020 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.020 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.020 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.020 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.020 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.020 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.020 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.020 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.020 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.020 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.020 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.020 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.020 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.020 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.020 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.020 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.020 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:17:08 localhost
ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.020 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.020 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.020 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.020 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.020 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.020 12 
ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.020 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.020 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.020 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.020 12 ERROR oslo_messaging.notify.messaging Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.021 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters Dec 6 05:17:08 localhost systemd[1]: bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094.service: Deactivated successfully. Dec 6 05:17:08 localhost systemd[1]: 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e.service: Deactivated successfully. 
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.038 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/memory.usage volume: 51.80859375 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.041 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '0a397c47-eab9-43ca-8399-de10cb347571', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 51.80859375, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'timestamp': '2025-12-06T10:17:08.021938', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': 'b8749b86-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12646.287440688, 'message_signature': 'b4e947e2007eb3b51856f9c43dc00340ea69c9de081c826d92190d1a5b4c0297'}]}, 'timestamp': '2025-12-06 10:17:08.039342', '_unique_id': 
'8bf255170ea94b1fb42fef3c7b37d3d0'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.041 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.041 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.041 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.041 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.041 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:17:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.041 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.041 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.041 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.041 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.041 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.041 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.041 12 ERROR oslo_messaging.notify.messaging Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.041 12 ERROR 
oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.041 12 ERROR oslo_messaging.notify.messaging Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.041 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.041 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.041 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.041 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.041 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) 
as conn: Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.041 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.041 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.041 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.041 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.041 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:17:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.041 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.041 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.041 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.041 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.041 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.041 12 ERROR oslo_messaging.notify.messaging 
kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.041 12 ERROR oslo_messaging.notify.messaging Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.042 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.042 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.requests volume: 1272 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.043 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.044 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '16643b7c-0864-46f8-b545-c28fa48d8b4a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1272, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:17:08.042886', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'b8753fa0-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12646.213841076, 'message_signature': '466a7da3f6bae4e4527184d39b822ed3cfa3620cb5d4faacca29161c43e9f587'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 124, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:17:08.042886', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'b8755166-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12646.213841076, 'message_signature': 'f913331cd93feae86d1fe49f7f54e424e17f447bf4d5ed940ee74074155ac137'}]}, 'timestamp': '2025-12-06 10:17:08.043839', '_unique_id': '60f509ae6e314587b27f5d32e45db9db'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.044 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.044 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.044 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.044 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.044 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.044 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.044 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.044 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.044 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:17:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.044 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.044 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.044 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.044 12 ERROR oslo_messaging.notify.messaging Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.044 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.044 12 ERROR oslo_messaging.notify.messaging Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.044 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.044 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.044 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.044 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.044 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.044 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.044 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:17:08.044 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.044 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.044 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.044 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.044 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.044 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.044 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.044 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.044 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.044 12 ERROR oslo_messaging.notify.messaging Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.046 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.046 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.outgoing.bytes volume: 9770 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.047 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '03c1b836-4850-4456-8e9f-982a70a03bc5', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 9770, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:17:08.046162', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': 'b875be12-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12646.168387908, 'message_signature': 'a66d5697e6b8dcd010cc8c9bf07a18361fed191081d61e30f5c6fc1c1a98b03e'}]}, 'timestamp': '2025-12-06 10:17:08.046621', '_unique_id': '97a53fd34b0c4178af8dc29a8859a713'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.047 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:17:08.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.047 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.047 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.047 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.047 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.047 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.047 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.047 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.047 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.047 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.047 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.047 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.047 12 ERROR oslo_messaging.notify.messaging Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.047 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.047 12 ERROR oslo_messaging.notify.messaging Dec 6 05:17:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.047 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.047 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.047 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.047 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.047 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 
05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.047 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.047 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.047 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.047 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.047 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:17:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.047 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.047 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.047 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.047 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.047 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.047 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.047 12 ERROR oslo_messaging.notify.messaging Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:17:08.048 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.048 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.048 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.050 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'c236c281-338b-470c-8e6a-b9ba0c4b52c4', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:17:08.048890', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 
'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': 'b87628b6-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12646.168387908, 'message_signature': '5e6e54e98b8d606a5de94da48cf662826c0e49ebc6e364cc2a4c8e4da0a56af4'}]}, 'timestamp': '2025-12-06 10:17:08.049346', '_unique_id': 'ded72ee178e94f9cb0ba4d0306b9d769'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.050 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.050 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.050 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.050 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:17:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.050 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.050 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.050 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.050 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.050 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.050 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.050 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.050 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.050 12 ERROR oslo_messaging.notify.messaging Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.050 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.050 12 ERROR oslo_messaging.notify.messaging Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.050 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.050 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.050 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.050 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.050 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.050 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.050 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.050 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.050 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.050 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.050 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.050 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.050 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.050 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.050 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 
05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.050 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.050 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.050 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.050 12 ERROR oslo_messaging.notify.messaging Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.051 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.051 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.latency volume: 1252245154 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.052 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.latency volume: 27668224 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.053 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'e6d8a3a1-7255-4689-88b8-f7b16bc76f5b', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1252245154, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:17:08.051517', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'b8768eb4-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12646.213841076, 'message_signature': 'ea6edc5d50c6eb6ba1cbe6895f59b31b8d30a53d8b1adbf0611f233379ee2461'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 27668224, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:17:08.051517', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'b876a340-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12646.213841076, 'message_signature': 'f2d9322dceb7a8bf1a83ab4d60ac05254bb9d43c0d70e5886fb41c919bfb102a'}]}, 'timestamp': '2025-12-06 10:17:08.052464', '_unique_id': '3f8047fbbe13494ab129aa0000e37ab9'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.053 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.053 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.053 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.053 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.053 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.053 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.053 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.053 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.053 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:17:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.053 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.053 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.053 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.053 12 ERROR oslo_messaging.notify.messaging Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.053 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.053 12 ERROR oslo_messaging.notify.messaging Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.053 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.053 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.053 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.053 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.053 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.053 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.053 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:17:08.053 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.053 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.053 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.053 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.053 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.053 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.053 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.053 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.053 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.053 12 ERROR oslo_messaging.notify.messaging Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.054 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.054 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.bytes volume: 389120 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.055 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.057 12 ERROR oslo_messaging.notify.messaging [-] Could not 
send notification to notifications. Payload={'message_id': '40d6387d-c714-411d-9c1b-29b79d2e82eb', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 389120, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:17:08.054644', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'b8770c86-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12646.213841076, 'message_signature': 'efbd9214f5ca818911e8b67e2e05cb3158be306c79081374f910c7d4332a8e4d'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:17:08.054644', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'b87723c4-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12646.213841076, 'message_signature': '32f7098e80c0899a8ca35b8afd62bb13095e8f5e9382f15d534c3617df3dc9a1'}]}, 'timestamp': '2025-12-06 10:17:08.055873', '_unique_id': 'd274687dc31d44aea43e238e3d006230'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.057 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.057 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.057 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.057 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.057 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.057 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.057 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.057 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.057 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:17:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.057 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.057 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.057 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.057 12 ERROR oslo_messaging.notify.messaging Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.057 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.057 12 ERROR oslo_messaging.notify.messaging Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.057 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.057 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.057 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.057 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.057 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.057 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.057 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:17:08.057 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.057 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.057 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.057 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.057 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.057 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.057 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.057 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.057 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.057 12 ERROR oslo_messaging.notify.messaging Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.058 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.058 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.060 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'be81a68e-2d89-460f-82e7-ef49a4e00f5a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:17:08.058819', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': 'b877af74-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12646.168387908, 'message_signature': 'e2f92c992bddb1bb72f75a3008e03d055dfdd96d8a318ee3ed5bc87b8d977003'}]}, 'timestamp': '2025-12-06 10:17:08.059356', '_unique_id': 'fd77dfa8c4574fb7bacd766da08006e5'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.060 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:17:08.060 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.060 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.060 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.060 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.060 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.060 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.060 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.060 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.060 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.060 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.060 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.060 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.060 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.060 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.060 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.060 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.060 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.060 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.060 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.060 12 ERROR oslo_messaging.notify.messaging Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.060 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.060 12 ERROR oslo_messaging.notify.messaging Dec 6 05:17:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.060 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.060 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.060 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.060 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.060 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.060 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.060 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.060 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.060 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.060 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 
05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.060 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.060 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.060 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.060 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.060 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.060 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.060 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.060 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.060 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.060 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:17:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.060 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.060 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.060 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.060 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.060 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.060 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.060 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.060 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.060 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.060 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.060 12 ERROR oslo_messaging.notify.messaging Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:17:08.061 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.061 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.061 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.063 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'f25947dc-9b03-43f7-88be-4b21092eccd1', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:17:08.061600', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 
'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': 'b87819e6-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12646.168387908, 'message_signature': 'a91a0350b8ba036a896185982a66953d0ab3a93c78105e0f21d1f502e3c6065a'}]}, 'timestamp': '2025-12-06 10:17:08.062081', '_unique_id': '0ac109510a174eabbf3837d056a4fc61'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.063 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.063 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.063 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.063 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.063 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.063 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.063 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:17:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.063 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.063 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.063 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.063 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.063 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.063 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.063 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.063 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.063 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.063 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.063 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.063 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.063 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.063 12 ERROR oslo_messaging.notify.messaging Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.063 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.063 12 ERROR oslo_messaging.notify.messaging Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.063 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.063 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.063 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.063 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.063 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.063 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.063 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.063 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.063 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.063 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.063 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.063 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.063 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.063 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.063 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.063 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.063 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.063 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.063 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.063 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.063 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.063 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.063 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.063 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.063 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.063 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 
05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.063 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.063 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.063 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.063 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.063 12 ERROR oslo_messaging.notify.messaging Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.064 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.064 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.064 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/cpu volume: 16800000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.065 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'dcfa7939-6ba2-4e50-bdbd-8becb0f0757e', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 16800000000, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'timestamp': '2025-12-06T10:17:08.064505', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': 'b8788a02-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12646.287440688, 'message_signature': 'e7fcf639eeab7ba0729f7b3af067b8bd929c11cbdae12d2c22c80e609dc4d2a8'}]}, 'timestamp': '2025-12-06 10:17:08.064967', '_unique_id': '558a01ed3e524e948cf674851acc29bc'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.065 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.065 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in 
_reraise_as_library_errors Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.065 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.065 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.065 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.065 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.065 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.065 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.065 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.065 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.065 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.065 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.065 12 ERROR 
oslo_messaging.notify.messaging conn.connect() Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.065 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.065 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.065 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.065 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.065 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.065 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.065 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.065 12 ERROR oslo_messaging.notify.messaging Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.065 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.065 12 ERROR oslo_messaging.notify.messaging Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.065 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.065 
12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.065 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.065 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.065 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.065 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.065 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.065 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.065 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.065 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.065 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:17:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.065 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.065 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.065 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.065 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.065 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.065 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.065 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.065 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.065 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.065 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 
10:17:08.065 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.065 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.065 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.065 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.065 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.065 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.065 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.065 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.065 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.065 12 ERROR oslo_messaging.notify.messaging Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.067 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 
10:17:08.067 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.latency volume: 1525105336 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.067 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.latency volume: 106716064 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.068 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'e5dc6c6e-6c44-4da0-9701-302bb2f2c2ce', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1525105336, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:17:08.067343', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 
0, 'disk_name': 'vda'}, 'message_id': 'b878f762-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12646.213841076, 'message_signature': 'c2e4f964e039e2896e1a2fe7723b102cfb3d4cae073d864ef5c49be197d75c0a'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 106716064, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:17:08.067343', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'b8790b1c-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12646.213841076, 'message_signature': 'be4ecaf8e5bb1e0fd153d497279b9933fa984eeb8485d963a91df8981e3afc8b'}]}, 'timestamp': '2025-12-06 10:17:08.068161', '_unique_id': '1a01c615fd84412991695b117ed93f4f'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.068 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.068 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.068 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.068 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.068 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.068 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.068 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:17:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.068 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.068 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.068 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.068 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.068 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.068 12 ERROR oslo_messaging.notify.messaging Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.068 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.068 12 ERROR oslo_messaging.notify.messaging Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.068 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:17:08 
localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.068 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.068 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.068 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.068 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.068 12 ERROR oslo_messaging.notify.messaging return 
rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.068 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.068 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.068 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.068 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.068 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( 
Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.068 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.068 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.068 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.068 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.068 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:17:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:17:08.068 12 ERROR oslo_messaging.notify.messaging Dec 6 05:17:08 localhost neutron_sriov_agent[256690]: 2025-12-06 10:17:08.211 2 INFO neutron.agent.securitygroups_rpc [None req-7b711491-888b-4783-949a-3ad1e34a6987 183487bfea4148c8bd274489b01ac583 
290c121e7a5344fea2a32f4e64e74fb4 - - default default] Security group member updated ['7248d87f-aba7-4d7e-b680-1fdbc4f1cdd3']#033[00m Dec 6 05:17:08 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e127 e127: 6 total, 6 up, 6 in Dec 6 05:17:08 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:17:08 localhost dnsmasq[316635]: read /var/lib/neutron/dhcp/8fb7fee7-47f3-496e-84a0-2200c47dea55/addn_hosts - 1 addresses Dec 6 05:17:08 localhost podman[317207]: 2025-12-06 10:17:08.561383127 +0000 UTC m=+0.071137499 container kill 31508468c3e2cd2720accdb917799d9a6e9d1e3826de33ddb026a829aff69936 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8fb7fee7-47f3-496e-84a0-2200c47dea55, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3) Dec 6 05:17:08 localhost dnsmasq-dhcp[316635]: read /var/lib/neutron/dhcp/8fb7fee7-47f3-496e-84a0-2200c47dea55/host Dec 6 05:17:08 localhost dnsmasq-dhcp[316635]: read /var/lib/neutron/dhcp/8fb7fee7-47f3-496e-84a0-2200c47dea55/opts Dec 6 05:17:08 localhost nova_compute[282193]: 2025-12-06 10:17:08.843 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:08 localhost neutron_sriov_agent[256690]: 2025-12-06 10:17:08.859 2 INFO neutron.agent.securitygroups_rpc [None req-5531eff7-1536-47cb-87b6-fc07778b8cfc 183487bfea4148c8bd274489b01ac583 290c121e7a5344fea2a32f4e64e74fb4 - - default default] Security group member updated ['7248d87f-aba7-4d7e-b680-1fdbc4f1cdd3']#033[00m Dec 6 05:17:09 
localhost dnsmasq[316635]: read /var/lib/neutron/dhcp/8fb7fee7-47f3-496e-84a0-2200c47dea55/addn_hosts - 0 addresses Dec 6 05:17:09 localhost dnsmasq-dhcp[316635]: read /var/lib/neutron/dhcp/8fb7fee7-47f3-496e-84a0-2200c47dea55/host Dec 6 05:17:09 localhost dnsmasq-dhcp[316635]: read /var/lib/neutron/dhcp/8fb7fee7-47f3-496e-84a0-2200c47dea55/opts Dec 6 05:17:09 localhost podman[317246]: 2025-12-06 10:17:09.085901916 +0000 UTC m=+0.060968891 container kill 31508468c3e2cd2720accdb917799d9a6e9d1e3826de33ddb026a829aff69936 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8fb7fee7-47f3-496e-84a0-2200c47dea55, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS) Dec 6 05:17:09 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e128 e128: 6 total, 6 up, 6 in Dec 6 05:17:10 localhost dnsmasq[316635]: exiting on receipt of SIGTERM Dec 6 05:17:10 localhost podman[317283]: 2025-12-06 10:17:10.048030839 +0000 UTC m=+0.059788135 container kill 31508468c3e2cd2720accdb917799d9a6e9d1e3826de33ddb026a829aff69936 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8fb7fee7-47f3-496e-84a0-2200c47dea55, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3) Dec 6 05:17:10 localhost systemd[1]: libpod-31508468c3e2cd2720accdb917799d9a6e9d1e3826de33ddb026a829aff69936.scope: Deactivated successfully. 
Dec 6 05:17:10 localhost podman[317296]: 2025-12-06 10:17:10.118387663 +0000 UTC m=+0.053103051 container died 31508468c3e2cd2720accdb917799d9a6e9d1e3826de33ddb026a829aff69936 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8fb7fee7-47f3-496e-84a0-2200c47dea55, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0) Dec 6 05:17:10 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-31508468c3e2cd2720accdb917799d9a6e9d1e3826de33ddb026a829aff69936-userdata-shm.mount: Deactivated successfully. Dec 6 05:17:10 localhost podman[317296]: 2025-12-06 10:17:10.159004555 +0000 UTC m=+0.093719883 container cleanup 31508468c3e2cd2720accdb917799d9a6e9d1e3826de33ddb026a829aff69936 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8fb7fee7-47f3-496e-84a0-2200c47dea55, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:17:10 localhost systemd[1]: libpod-conmon-31508468c3e2cd2720accdb917799d9a6e9d1e3826de33ddb026a829aff69936.scope: Deactivated successfully. 
Dec 6 05:17:10 localhost podman[317297]: 2025-12-06 10:17:10.203732721 +0000 UTC m=+0.132462178 container remove 31508468c3e2cd2720accdb917799d9a6e9d1e3826de33ddb026a829aff69936 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8fb7fee7-47f3-496e-84a0-2200c47dea55, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:17:10 localhost nova_compute[282193]: 2025-12-06 10:17:10.215 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:10 localhost ovn_controller[154851]: 2025-12-06T10:17:10Z|00191|binding|INFO|Releasing lport 949a183f-bfda-4354-9310-98929388f22d from this chassis (sb_readonly=0) Dec 6 05:17:10 localhost ovn_controller[154851]: 2025-12-06T10:17:10Z|00192|binding|INFO|Setting lport 949a183f-bfda-4354-9310-98929388f22d down in Southbound Dec 6 05:17:10 localhost kernel: device tap949a183f-bf left promiscuous mode Dec 6 05:17:10 localhost ovn_metadata_agent[160504]: 2025-12-06 10:17:10.227 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-8fb7fee7-47f3-496e-84a0-2200c47dea55', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 
'neutron:network_name': 'neutron-8fb7fee7-47f3-496e-84a0-2200c47dea55', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '290c121e7a5344fea2a32f4e64e74fb4', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005548789.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9d6e7c85-e1e8-4901-8e98-f2cdf448ee9d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=949a183f-bfda-4354-9310-98929388f22d) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:17:10 localhost ovn_metadata_agent[160504]: 2025-12-06 10:17:10.230 160509 INFO neutron.agent.ovn.metadata.agent [-] Port 949a183f-bfda-4354-9310-98929388f22d in datapath 8fb7fee7-47f3-496e-84a0-2200c47dea55 unbound from our chassis#033[00m Dec 6 05:17:10 localhost ovn_metadata_agent[160504]: 2025-12-06 10:17:10.232 160509 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8fb7fee7-47f3-496e-84a0-2200c47dea55, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 6 05:17:10 localhost ovn_metadata_agent[160504]: 2025-12-06 10:17:10.233 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[e1bae5e5-3c00-4a46-8624-06382b6e8637]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:17:10 localhost nova_compute[282193]: 2025-12-06 10:17:10.236 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:10 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e129 e129: 6 total, 6 up, 6 in Dec 6 05:17:10 localhost 
neutron_dhcp_agent[263648]: 2025-12-06 10:17:10.340 263652 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:17:10 localhost nova_compute[282193]: 2025-12-06 10:17:10.414 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:10 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:17:10.747 263652 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:17:11 localhost systemd[1]: var-lib-containers-storage-overlay-e623c434dbbedf87878fe7b9c9b2c78a861ad29951b1d46175e6c1c0d161e920-merged.mount: Deactivated successfully. Dec 6 05:17:11 localhost systemd[1]: run-netns-qdhcp\x2d8fb7fee7\x2d47f3\x2d496e\x2d84a0\x2d2200c47dea55.mount: Deactivated successfully. Dec 6 05:17:11 localhost ovn_controller[154851]: 2025-12-06T10:17:11Z|00193|binding|INFO|Releasing lport 4fb81ffd-e198-4628-9bd0-0c0f0c89c33a from this chassis (sb_readonly=0) Dec 6 05:17:11 localhost nova_compute[282193]: 2025-12-06 10:17:11.412 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:11 localhost ovn_metadata_agent[160504]: 2025-12-06 10:17:11.613 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:6c:02', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:a8:2f:0c:cb:a1'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:17:11 localhost 
nova_compute[282193]: 2025-12-06 10:17:11.613 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:11 localhost ovn_metadata_agent[160504]: 2025-12-06 10:17:11.616 160509 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Dec 6 05:17:12 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e130 e130: 6 total, 6 up, 6 in Dec 6 05:17:12 localhost nova_compute[282193]: 2025-12-06 10:17:12.534 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:12 localhost neutron_sriov_agent[256690]: 2025-12-06 10:17:12.610 2 INFO neutron.agent.securitygroups_rpc [req-0f5bcb52-14d6-4090-84d5-2a6fc264a912 req-f6b94b35-a5a9-45fc-80c3-03af12f9ebaa b4f9b4e4cabd4b079cb8c31c22004b7a 37dcf5204733427ebb8bdbe574dca584 - - default default] Security group rule updated ['1e2df8fe-9d93-4483-a509-0caee18c220e']#033[00m Dec 6 05:17:12 localhost sshd[317325]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:17:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6. 
Dec 6 05:17:12 localhost podman[317326]: 2025-12-06 10:17:12.905361205 +0000 UTC m=+0.070289163 container health_status 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.license=GPLv2) Dec 6 05:17:12 localhost podman[317326]: 2025-12-06 10:17:12.912974836 +0000 UTC m=+0.077902804 container exec_died 
44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true) Dec 6 05:17:12 localhost systemd[1]: 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6.service: Deactivated successfully. 
Dec 6 05:17:12 localhost neutron_sriov_agent[256690]: 2025-12-06 10:17:12.963 2 INFO neutron.agent.securitygroups_rpc [None req-5b35cf9b-2c10-4acb-804d-e7f71d7bfae3 7dcd2b11aeb4499894c7ac7c29cb6997 d6a02136413f4ad3ac51d2c4ffdad3d4 - - default default] Security group member updated ['58296f43-3702-412f-8387-07510507ed41']#033[00m Dec 6 05:17:13 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:17:13 localhost nova_compute[282193]: 2025-12-06 10:17:13.847 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:13 localhost neutron_sriov_agent[256690]: 2025-12-06 10:17:13.898 2 INFO neutron.agent.securitygroups_rpc [req-01e69ff7-4c57-4a62-a8e2-72eac205e556 req-eb6ec33a-4a21-4246-bab7-a4fceda1903a b4f9b4e4cabd4b079cb8c31c22004b7a 37dcf5204733427ebb8bdbe574dca584 - - default default] Security group rule updated ['73772eb3-7feb-4994-9518-58f9e6d5a8ed']#033[00m Dec 6 05:17:14 localhost neutron_sriov_agent[256690]: 2025-12-06 10:17:14.676 2 INFO neutron.agent.securitygroups_rpc [None req-591e6c3a-21e9-4cb6-8654-ee5dfe5ee17d f89e0038548e41fa9a8202b7a7e9ade1 49bb78ce003e4bec87707ab7af03ae7e - - default default] Security group rule updated ['7d9717d3-d014-450e-9e8d-c62143b51d32']#033[00m Dec 6 05:17:15 localhost neutron_sriov_agent[256690]: 2025-12-06 10:17:15.529 2 INFO neutron.agent.securitygroups_rpc [req-802ef8ad-2f30-424a-8810-ccf196e89ec8 req-2c9ca4ac-9b05-42f0-9546-b86c6383ded6 b4f9b4e4cabd4b079cb8c31c22004b7a 37dcf5204733427ebb8bdbe574dca584 - - default default] Security group rule updated ['80cd7ff3-0b8b-4d61-9358-b2f28d5f4668']#033[00m Dec 6 05:17:15 localhost dnsmasq[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 2 addresses Dec 6 05:17:15 localhost dnsmasq-dhcp[314636]: read 
/var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:17:15 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:17:15 localhost podman[317362]: 2025-12-06 10:17:15.898432218 +0000 UTC m=+0.062550358 container kill dd70518caed84fde347015325d6b35481123abc4a39df99f8bc9422c2468322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:17:16 localhost openstack_network_exporter[243110]: ERROR 10:17:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:17:16 localhost openstack_network_exporter[243110]: ERROR 10:17:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:17:16 localhost openstack_network_exporter[243110]: ERROR 10:17:16 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 6 05:17:16 localhost openstack_network_exporter[243110]: ERROR 10:17:16 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 6 05:17:16 localhost openstack_network_exporter[243110]: Dec 6 05:17:16 localhost openstack_network_exporter[243110]: ERROR 10:17:16 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 6 05:17:16 localhost openstack_network_exporter[243110]: Dec 6 05:17:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538. 
Dec 6 05:17:16 localhost podman[317384]: 2025-12-06 10:17:16.924116228 +0000 UTC m=+0.085654998 container health_status d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Dec 6 05:17:16 localhost podman[317384]: 2025-12-06 10:17:16.934083181 +0000 UTC m=+0.095621941 container exec_died d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 
'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Dec 6 05:17:16 localhost systemd[1]: d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538.service: Deactivated successfully. 
Dec 6 05:17:17 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e131 e131: 6 total, 6 up, 6 in Dec 6 05:17:17 localhost neutron_sriov_agent[256690]: 2025-12-06 10:17:17.224 2 INFO neutron.agent.securitygroups_rpc [req-d0c022c7-5c29-48d9-b4af-ef083b33fa00 req-5f51dca4-f136-4f7f-a521-ca766171afcb b4f9b4e4cabd4b079cb8c31c22004b7a 37dcf5204733427ebb8bdbe574dca584 - - default default] Security group rule updated ['48d24f9a-1de0-4ca7-bff4-bdd00474b49e']#033[00m Dec 6 05:17:17 localhost nova_compute[282193]: 2025-12-06 10:17:17.571 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:17 localhost neutron_sriov_agent[256690]: 2025-12-06 10:17:17.657 2 INFO neutron.agent.securitygroups_rpc [None req-2df808f7-3669-4bdd-a1f6-a6327b63c196 406e5cc53df749808a8770da68d7033d 64b9b91747c648148f6dd23ce81ceb80 - - default default] Security group member updated ['592d46d3-9a35-48d4-b6ca-ba1068626b4d']#033[00m Dec 6 05:17:18 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:17:18.077 263652 INFO neutron.agent.linux.ip_lib [None req-5affb593-f8b2-4f20-8c0e-96fe8e5951d1 - - - - - -] Device tap809f0ef4-0c cannot be used as it has no MAC address#033[00m Dec 6 05:17:18 localhost nova_compute[282193]: 2025-12-06 10:17:18.109 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:18 localhost kernel: device tap809f0ef4-0c entered promiscuous mode Dec 6 05:17:18 localhost NetworkManager[5973]: [1765016238.1198] manager: (tap809f0ef4-0c): new Generic device (/org/freedesktop/NetworkManager/Devices/34) Dec 6 05:17:18 localhost ovn_controller[154851]: 2025-12-06T10:17:18Z|00194|binding|INFO|Claiming lport 809f0ef4-0cca-474a-984b-630935d33748 for this chassis. 
Dec 6 05:17:18 localhost nova_compute[282193]: 2025-12-06 10:17:18.120 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:18 localhost ovn_controller[154851]: 2025-12-06T10:17:18Z|00195|binding|INFO|809f0ef4-0cca-474a-984b-630935d33748: Claiming unknown Dec 6 05:17:18 localhost systemd-udevd[317417]: Network interface NamePolicy= disabled on kernel command line. Dec 6 05:17:18 localhost ovn_metadata_agent[160504]: 2025-12-06 10:17:18.133 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-e709cdf3-3894-4310-9fed-c1671aabae61', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e709cdf3-3894-4310-9fed-c1671aabae61', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '64b9b91747c648148f6dd23ce81ceb80', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bd31e5c5-52c8-4ae1-8e71-d675fcdc4430, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=809f0ef4-0cca-474a-984b-630935d33748) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:17:18 localhost ovn_metadata_agent[160504]: 2025-12-06 10:17:18.135 160509 INFO 
neutron.agent.ovn.metadata.agent [-] Port 809f0ef4-0cca-474a-984b-630935d33748 in datapath e709cdf3-3894-4310-9fed-c1671aabae61 bound to our chassis#033[00m Dec 6 05:17:18 localhost ovn_metadata_agent[160504]: 2025-12-06 10:17:18.137 160509 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network e709cdf3-3894-4310-9fed-c1671aabae61 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 6 05:17:18 localhost ovn_metadata_agent[160504]: 2025-12-06 10:17:18.137 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[0fd631c9-91e2-4cd3-b453-5ce2dfe70f81]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:17:18 localhost journal[230404]: ethtool ioctl error on tap809f0ef4-0c: No such device Dec 6 05:17:18 localhost nova_compute[282193]: 2025-12-06 10:17:18.154 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:18 localhost journal[230404]: ethtool ioctl error on tap809f0ef4-0c: No such device Dec 6 05:17:18 localhost ovn_controller[154851]: 2025-12-06T10:17:18Z|00196|binding|INFO|Setting lport 809f0ef4-0cca-474a-984b-630935d33748 ovn-installed in OVS Dec 6 05:17:18 localhost journal[230404]: ethtool ioctl error on tap809f0ef4-0c: No such device Dec 6 05:17:18 localhost nova_compute[282193]: 2025-12-06 10:17:18.163 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:18 localhost ovn_controller[154851]: 2025-12-06T10:17:18Z|00197|binding|INFO|Setting lport 809f0ef4-0cca-474a-984b-630935d33748 up in Southbound Dec 6 05:17:18 localhost nova_compute[282193]: 2025-12-06 10:17:18.165 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:18 localhost journal[230404]: ethtool ioctl error on tap809f0ef4-0c: No such device Dec 6 05:17:18 localhost journal[230404]: ethtool ioctl error on tap809f0ef4-0c: No such device Dec 6 05:17:18 localhost journal[230404]: ethtool ioctl error on tap809f0ef4-0c: No such device Dec 6 05:17:18 localhost journal[230404]: ethtool ioctl error on tap809f0ef4-0c: No such device Dec 6 05:17:18 localhost journal[230404]: ethtool ioctl error on tap809f0ef4-0c: No such device Dec 6 05:17:18 localhost nova_compute[282193]: 2025-12-06 10:17:18.192 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:18 localhost nova_compute[282193]: 2025-12-06 10:17:18.219 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:18 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:17:18 localhost nova_compute[282193]: 2025-12-06 10:17:18.848 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:18 localhost neutron_sriov_agent[256690]: 2025-12-06 10:17:18.887 2 INFO neutron.agent.securitygroups_rpc [None req-a4d21316-b177-48b7-92ec-319ed42d1b0b 406e5cc53df749808a8770da68d7033d 64b9b91747c648148f6dd23ce81ceb80 - - default default] Security group member updated ['592d46d3-9a35-48d4-b6ca-ba1068626b4d']#033[00m Dec 6 05:17:19 localhost podman[317489]: Dec 6 05:17:19 localhost podman[317489]: 2025-12-06 10:17:19.071575915 +0000 UTC m=+0.088478575 container create bdd630025009e2e44fbf04be47c5d7ffaef46590b0c0efcadf90ee243c00b629 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, 
name=neutron-dnsmasq-qdhcp-e709cdf3-3894-4310-9fed-c1671aabae61, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:17:19 localhost systemd[1]: Started libpod-conmon-bdd630025009e2e44fbf04be47c5d7ffaef46590b0c0efcadf90ee243c00b629.scope. Dec 6 05:17:19 localhost systemd[1]: tmp-crun.d4ESPW.mount: Deactivated successfully. Dec 6 05:17:19 localhost podman[317489]: 2025-12-06 10:17:19.02856727 +0000 UTC m=+0.045469960 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 6 05:17:19 localhost systemd[1]: Started libcrun container. Dec 6 05:17:19 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f971ae176a97e0304dfc9b45d1512a47d408b01268faabd5b4da043741f7c056/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 6 05:17:19 localhost podman[317489]: 2025-12-06 10:17:19.151681714 +0000 UTC m=+0.168584374 container init bdd630025009e2e44fbf04be47c5d7ffaef46590b0c0efcadf90ee243c00b629 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e709cdf3-3894-4310-9fed-c1671aabae61, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:17:19 localhost podman[317489]: 2025-12-06 10:17:19.160350587 +0000 UTC m=+0.177253257 container start bdd630025009e2e44fbf04be47c5d7ffaef46590b0c0efcadf90ee243c00b629 
(image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e709cdf3-3894-4310-9fed-c1671aabae61, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Dec 6 05:17:19 localhost dnsmasq[317507]: started, version 2.85 cachesize 150 Dec 6 05:17:19 localhost dnsmasq[317507]: DNS service limited to local subnets Dec 6 05:17:19 localhost dnsmasq[317507]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 6 05:17:19 localhost dnsmasq[317507]: warning: no upstream servers configured Dec 6 05:17:19 localhost dnsmasq-dhcp[317507]: DHCPv6, static leases only on 2001:db8::, lease time 1d Dec 6 05:17:19 localhost dnsmasq[317507]: read /var/lib/neutron/dhcp/e709cdf3-3894-4310-9fed-c1671aabae61/addn_hosts - 0 addresses Dec 6 05:17:19 localhost dnsmasq-dhcp[317507]: read /var/lib/neutron/dhcp/e709cdf3-3894-4310-9fed-c1671aabae61/host Dec 6 05:17:19 localhost dnsmasq-dhcp[317507]: read /var/lib/neutron/dhcp/e709cdf3-3894-4310-9fed-c1671aabae61/opts Dec 6 05:17:19 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:17:19.220 263652 INFO neutron.agent.dhcp.agent [None req-5affb593-f8b2-4f20-8c0e-96fe8e5951d1 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:17:17Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=b93a6690-afbd-469d-bbc9-a5932ffd807d, ip_allocation=immediate, 
mac_address=fa:16:3e:b3:34:ce, name=tempest-ExtraDHCPOptionsIpV6TestJSON-1911397592, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:17:15Z, description=, dns_domain=, id=e709cdf3-3894-4310-9fed-c1671aabae61, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ExtraDHCPOptionsIpV6TestJSON-test-network-140163012, port_security_enabled=True, project_id=64b9b91747c648148f6dd23ce81ceb80, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=54995, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1327, status=ACTIVE, subnets=['bb46347f-24c7-43e9-9180-2da434974c29'], tags=[], tenant_id=64b9b91747c648148f6dd23ce81ceb80, updated_at=2025-12-06T10:17:16Z, vlan_transparent=None, network_id=e709cdf3-3894-4310-9fed-c1671aabae61, port_security_enabled=True, project_id=64b9b91747c648148f6dd23ce81ceb80, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['592d46d3-9a35-48d4-b6ca-ba1068626b4d'], standard_attr_id=1350, status=DOWN, tags=[], tenant_id=64b9b91747c648148f6dd23ce81ceb80, updated_at=2025-12-06T10:17:17Z on network e709cdf3-3894-4310-9fed-c1671aabae61#033[00m Dec 6 05:17:19 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:17:19.317 263652 INFO neutron.agent.dhcp.agent [None req-0099a7fb-e025-473b-ad90-745e6e5c3e62 - - - - - -] DHCP configuration for ports {'b171ac9f-0dc2-448f-bf80-30471cfeee04'} is completed#033[00m Dec 6 05:17:19 localhost dnsmasq[317507]: read /var/lib/neutron/dhcp/e709cdf3-3894-4310-9fed-c1671aabae61/addn_hosts - 1 addresses Dec 6 05:17:19 localhost dnsmasq-dhcp[317507]: read /var/lib/neutron/dhcp/e709cdf3-3894-4310-9fed-c1671aabae61/host Dec 6 05:17:19 localhost dnsmasq-dhcp[317507]: read /var/lib/neutron/dhcp/e709cdf3-3894-4310-9fed-c1671aabae61/opts Dec 6 05:17:19 localhost podman[317524]: 2025-12-06 10:17:19.415120814 
+0000 UTC m=+0.058336059 container kill bdd630025009e2e44fbf04be47c5d7ffaef46590b0c0efcadf90ee243c00b629 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e709cdf3-3894-4310-9fed-c1671aabae61, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:17:19 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:17:19.573 263652 INFO neutron.agent.dhcp.agent [None req-5affb593-f8b2-4f20-8c0e-96fe8e5951d1 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:17:18Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[, , ], fixed_ips=[], id=105decea-a722-4708-998a-413f0ec23ccd, ip_allocation=immediate, mac_address=fa:16:3e:bf:1d:ef, name=tempest-ExtraDHCPOptionsIpV6TestJSON-611288529, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:17:15Z, description=, dns_domain=, id=e709cdf3-3894-4310-9fed-c1671aabae61, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ExtraDHCPOptionsIpV6TestJSON-test-network-140163012, port_security_enabled=True, project_id=64b9b91747c648148f6dd23ce81ceb80, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=54995, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1327, status=ACTIVE, subnets=['bb46347f-24c7-43e9-9180-2da434974c29'], tags=[], tenant_id=64b9b91747c648148f6dd23ce81ceb80, 
updated_at=2025-12-06T10:17:16Z, vlan_transparent=None, network_id=e709cdf3-3894-4310-9fed-c1671aabae61, port_security_enabled=True, project_id=64b9b91747c648148f6dd23ce81ceb80, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['592d46d3-9a35-48d4-b6ca-ba1068626b4d'], standard_attr_id=1356, status=DOWN, tags=[], tenant_id=64b9b91747c648148f6dd23ce81ceb80, updated_at=2025-12-06T10:17:18Z on network e709cdf3-3894-4310-9fed-c1671aabae61#033[00m Dec 6 05:17:19 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:17:19.594 263652 INFO neutron.agent.linux.dhcp [None req-5affb593-f8b2-4f20-8c0e-96fe8e5951d1 - - - - - -] Cannot apply dhcp option tftp-server because it's ip_version 4 is not in port's address IP versions#033[00m Dec 6 05:17:19 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:17:19.595 263652 INFO neutron.agent.linux.dhcp [None req-5affb593-f8b2-4f20-8c0e-96fe8e5951d1 - - - - - -] Cannot apply dhcp option server-ip-address because it's ip_version 4 is not in port's address IP versions#033[00m Dec 6 05:17:19 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:17:19.595 263652 INFO neutron.agent.linux.dhcp [None req-5affb593-f8b2-4f20-8c0e-96fe8e5951d1 - - - - - -] Cannot apply dhcp option bootfile-name because it's ip_version 4 is not in port's address IP versions#033[00m Dec 6 05:17:19 localhost neutron_sriov_agent[256690]: 2025-12-06 10:17:19.632 2 INFO neutron.agent.securitygroups_rpc [None req-de533a6a-08ae-42c2-b158-11c15e64ecbf 406e5cc53df749808a8770da68d7033d 64b9b91747c648148f6dd23ce81ceb80 - - default default] Security group member updated ['592d46d3-9a35-48d4-b6ca-ba1068626b4d']#033[00m Dec 6 05:17:19 localhost neutron_sriov_agent[256690]: 2025-12-06 10:17:19.699 2 INFO neutron.agent.securitygroups_rpc [req-5de42e7e-0662-4156-9401-22106a567059 req-ed4ee54f-d494-4326-9cfe-66d7201bb9f8 b4f9b4e4cabd4b079cb8c31c22004b7a 37dcf5204733427ebb8bdbe574dca584 - - default default] Security group rule 
updated ['9b6ed766-684e-4de1-9195-49dc13639cf2']#033[00m Dec 6 05:17:19 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:17:19.751 263652 INFO neutron.agent.dhcp.agent [None req-403a028f-089d-4a9b-b31d-59c08b8c8fc5 - - - - - -] DHCP configuration for ports {'b93a6690-afbd-469d-bbc9-a5932ffd807d'} is completed#033[00m Dec 6 05:17:19 localhost dnsmasq[317507]: read /var/lib/neutron/dhcp/e709cdf3-3894-4310-9fed-c1671aabae61/addn_hosts - 2 addresses Dec 6 05:17:19 localhost dnsmasq-dhcp[317507]: read /var/lib/neutron/dhcp/e709cdf3-3894-4310-9fed-c1671aabae61/host Dec 6 05:17:19 localhost dnsmasq-dhcp[317507]: read /var/lib/neutron/dhcp/e709cdf3-3894-4310-9fed-c1671aabae61/opts Dec 6 05:17:19 localhost podman[317562]: 2025-12-06 10:17:19.779090964 +0000 UTC m=+0.065736844 container kill bdd630025009e2e44fbf04be47c5d7ffaef46590b0c0efcadf90ee243c00b629 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e709cdf3-3894-4310-9fed-c1671aabae61, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125) Dec 6 05:17:19 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:17:19.972 263652 INFO neutron.agent.dhcp.agent [None req-244ee3ae-dbbf-490b-86f7-19454eb179cb - - - - - -] DHCP configuration for ports {'105decea-a722-4708-998a-413f0ec23ccd'} is completed#033[00m Dec 6 05:17:19 localhost sshd[317589]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:17:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5. 
Dec 6 05:17:20 localhost dnsmasq[317507]: read /var/lib/neutron/dhcp/e709cdf3-3894-4310-9fed-c1671aabae61/addn_hosts - 1 addresses Dec 6 05:17:20 localhost dnsmasq-dhcp[317507]: read /var/lib/neutron/dhcp/e709cdf3-3894-4310-9fed-c1671aabae61/host Dec 6 05:17:20 localhost dnsmasq-dhcp[317507]: read /var/lib/neutron/dhcp/e709cdf3-3894-4310-9fed-c1671aabae61/opts Dec 6 05:17:20 localhost podman[317604]: 2025-12-06 10:17:20.144378545 +0000 UTC m=+0.071305305 container kill bdd630025009e2e44fbf04be47c5d7ffaef46590b0c0efcadf90ee243c00b629 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e709cdf3-3894-4310-9fed-c1671aabae61, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3) Dec 6 05:17:20 localhost podman[317610]: 2025-12-06 10:17:20.197012741 +0000 UTC m=+0.104335986 container health_status 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller) Dec 6 05:17:20 localhost podman[317610]: 2025-12-06 10:17:20.22437187 +0000 UTC m=+0.131695115 container exec_died 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true) Dec 6 05:17:20 localhost systemd[1]: 
0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.service: Deactivated successfully. Dec 6 05:17:20 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:17:20.355 263652 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:17:17Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[, , ], fixed_ips=[], id=b93a6690-afbd-469d-bbc9-a5932ffd807d, ip_allocation=immediate, mac_address=fa:16:3e:b3:34:ce, name=tempest-new-port-name-1100173450, network_id=e709cdf3-3894-4310-9fed-c1671aabae61, port_security_enabled=True, project_id=64b9b91747c648148f6dd23ce81ceb80, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=2, security_groups=['592d46d3-9a35-48d4-b6ca-ba1068626b4d'], standard_attr_id=1350, status=DOWN, tags=[], tenant_id=64b9b91747c648148f6dd23ce81ceb80, updated_at=2025-12-06T10:17:19Z on network e709cdf3-3894-4310-9fed-c1671aabae61#033[00m Dec 6 05:17:20 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:17:20.370 263652 INFO neutron.agent.linux.dhcp [-] Cannot apply dhcp option bootfile-name because it's ip_version 4 is not in port's address IP versions#033[00m Dec 6 05:17:20 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:17:20.371 263652 INFO neutron.agent.linux.dhcp [-] Cannot apply dhcp option tftp-server because it's ip_version 4 is not in port's address IP versions#033[00m Dec 6 05:17:20 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:17:20.371 263652 INFO neutron.agent.linux.dhcp [-] Cannot apply dhcp option server-ip-address because it's ip_version 4 is not in port's address IP versions#033[00m Dec 6 05:17:20 localhost dnsmasq[317507]: read /var/lib/neutron/dhcp/e709cdf3-3894-4310-9fed-c1671aabae61/addn_hosts - 1 addresses Dec 6 05:17:20 localhost 
dnsmasq-dhcp[317507]: read /var/lib/neutron/dhcp/e709cdf3-3894-4310-9fed-c1671aabae61/host Dec 6 05:17:20 localhost podman[317669]: 2025-12-06 10:17:20.529171386 +0000 UTC m=+0.058950939 container kill bdd630025009e2e44fbf04be47c5d7ffaef46590b0c0efcadf90ee243c00b629 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e709cdf3-3894-4310-9fed-c1671aabae61, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:17:20 localhost dnsmasq-dhcp[317507]: read /var/lib/neutron/dhcp/e709cdf3-3894-4310-9fed-c1671aabae61/opts Dec 6 05:17:20 localhost neutron_sriov_agent[256690]: 2025-12-06 10:17:20.530 2 INFO neutron.agent.securitygroups_rpc [req-0deed905-7e01-4df3-9b96-c6dd2bc740af req-aa574686-cd75-40ee-9098-a2781b4cfdf3 b4f9b4e4cabd4b079cb8c31c22004b7a 37dcf5204733427ebb8bdbe574dca584 - - default default] Security group rule updated ['9b6ed766-684e-4de1-9195-49dc13639cf2']#033[00m Dec 6 05:17:20 localhost ovn_metadata_agent[160504]: 2025-12-06 10:17:20.618 160509 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b142a5ef-fbed-4e92-aa78-e3ad080c6370, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 6 05:17:20 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:17:20.833 263652 INFO neutron.agent.dhcp.agent [None req-ae631429-7ca7-4709-b5e7-03d9ac315beb - - - - - -] DHCP configuration for ports {'b93a6690-afbd-469d-bbc9-a5932ffd807d'} is completed#033[00m Dec 6 05:17:21 localhost neutron_sriov_agent[256690]: 
2025-12-06 10:17:21.274 2 INFO neutron.agent.securitygroups_rpc [None req-a16f2b30-088a-4292-a104-7f6939a88353 406e5cc53df749808a8770da68d7033d 64b9b91747c648148f6dd23ce81ceb80 - - default default] Security group member updated ['592d46d3-9a35-48d4-b6ca-ba1068626b4d']#033[00m Dec 6 05:17:21 localhost neutron_sriov_agent[256690]: 2025-12-06 10:17:21.331 2 INFO neutron.agent.securitygroups_rpc [req-eb1d2fcf-8073-401d-9c1d-cc925d78bfca req-148b7f3e-4a54-4ceb-8b18-a63fa0926a26 b4f9b4e4cabd4b079cb8c31c22004b7a 37dcf5204733427ebb8bdbe574dca584 - - default default] Security group rule updated ['9b6ed766-684e-4de1-9195-49dc13639cf2']#033[00m Dec 6 05:17:21 localhost dnsmasq[317507]: read /var/lib/neutron/dhcp/e709cdf3-3894-4310-9fed-c1671aabae61/addn_hosts - 0 addresses Dec 6 05:17:21 localhost dnsmasq-dhcp[317507]: read /var/lib/neutron/dhcp/e709cdf3-3894-4310-9fed-c1671aabae61/host Dec 6 05:17:21 localhost dnsmasq-dhcp[317507]: read /var/lib/neutron/dhcp/e709cdf3-3894-4310-9fed-c1671aabae61/opts Dec 6 05:17:21 localhost podman[317707]: 2025-12-06 10:17:21.488830284 +0000 UTC m=+0.065375454 container kill bdd630025009e2e44fbf04be47c5d7ffaef46590b0c0efcadf90ee243c00b629 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e709cdf3-3894-4310-9fed-c1671aabae61, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Dec 6 05:17:22 localhost ceph-mon[298582]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #37. Immutable memtables: 0. 
Dec 6 05:17:22 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:17:22.095509) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Dec 6 05:17:22 localhost ceph-mon[298582]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 37 Dec 6 05:17:22 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016242095590, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 2578, "num_deletes": 264, "total_data_size": 3568876, "memory_usage": 3624720, "flush_reason": "Manual Compaction"} Dec 6 05:17:22 localhost ceph-mon[298582]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #38: started Dec 6 05:17:22 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016242108432, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 38, "file_size": 2275089, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 21900, "largest_seqno": 24472, "table_properties": {"data_size": 2265916, "index_size": 5678, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 20684, "raw_average_key_size": 21, "raw_value_size": 2246918, "raw_average_value_size": 2321, "num_data_blocks": 247, "num_entries": 968, "num_filter_entries": 968, "num_deletions": 264, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; 
max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765016079, "oldest_key_time": 1765016079, "file_creation_time": 1765016242, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1ec2a3a6-c7d0-4f1e-9923-5a3b782725ae", "db_session_id": "JMBO5KX1IJCJ8FWC64EX", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}} Dec 6 05:17:22 localhost ceph-mon[298582]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 12967 microseconds, and 6082 cpu microseconds. Dec 6 05:17:22 localhost ceph-mon[298582]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Dec 6 05:17:22 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:17:22.108483) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #38: 2275089 bytes OK Dec 6 05:17:22 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:17:22.108506) [db/memtable_list.cc:519] [default] Level-0 commit table #38 started Dec 6 05:17:22 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:17:22.110155) [db/memtable_list.cc:722] [default] Level-0 commit table #38: memtable #1 done Dec 6 05:17:22 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:17:22.110176) EVENT_LOG_v1 {"time_micros": 1765016242110170, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Dec 6 05:17:22 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:17:22.110199) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Dec 6 05:17:22 localhost ceph-mon[298582]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 3557478, prev total WAL file size 
3557478, number of live WAL files 2. Dec 6 05:17:22 localhost ceph-mon[298582]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548789/store.db/000034.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 6 05:17:22 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:17:22.111176) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003131373937' seq:72057594037927935, type:22 .. '7061786F73003132303439' seq:0, type:0; will stop at (end) Dec 6 05:17:22 localhost ceph-mon[298582]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00 Dec 6 05:17:22 localhost ceph-mon[298582]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [38(2221KB)], [36(18MB)] Dec 6 05:17:22 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016242111228, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [38], "files_L6": [36], "score": -1, "input_data_size": 21558254, "oldest_snapshot_seqno": -1} Dec 6 05:17:22 localhost dnsmasq[317507]: exiting on receipt of SIGTERM Dec 6 05:17:22 localhost podman[317743]: 2025-12-06 10:17:22.151020069 +0000 UTC m=+0.062532267 container kill bdd630025009e2e44fbf04be47c5d7ffaef46590b0c0efcadf90ee243c00b629 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e709cdf3-3894-4310-9fed-c1671aabae61, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3) Dec 6 05:17:22 localhost systemd[1]: 
libpod-bdd630025009e2e44fbf04be47c5d7ffaef46590b0c0efcadf90ee243c00b629.scope: Deactivated successfully. Dec 6 05:17:22 localhost podman[317757]: 2025-12-06 10:17:22.205593905 +0000 UTC m=+0.046164072 container died bdd630025009e2e44fbf04be47c5d7ffaef46590b0c0efcadf90ee243c00b629 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e709cdf3-3894-4310-9fed-c1671aabae61, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Dec 6 05:17:22 localhost ceph-mon[298582]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #39: 12583 keys, 17569151 bytes, temperature: kUnknown Dec 6 05:17:22 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016242206254, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 39, "file_size": 17569151, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17497684, "index_size": 38918, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 31493, "raw_key_size": 336683, "raw_average_key_size": 26, "raw_value_size": 17283747, "raw_average_value_size": 1373, "num_data_blocks": 1480, "num_entries": 12583, "num_filter_entries": 12583, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": 
"NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765015623, "oldest_key_time": 0, "file_creation_time": 1765016242, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1ec2a3a6-c7d0-4f1e-9923-5a3b782725ae", "db_session_id": "JMBO5KX1IJCJ8FWC64EX", "orig_file_number": 39, "seqno_to_time_mapping": "N/A"}} Dec 6 05:17:22 localhost ceph-mon[298582]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Dec 6 05:17:22 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:17:22.206681) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 17569151 bytes Dec 6 05:17:22 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:17:22.209528) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 226.5 rd, 184.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 18.4 +0.0 blob) out(16.8 +0.0 blob), read-write-amplify(17.2) write-amplify(7.7) OK, records in: 13121, records dropped: 538 output_compression: NoCompression Dec 6 05:17:22 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:17:22.209570) EVENT_LOG_v1 {"time_micros": 1765016242209551, "job": 20, "event": "compaction_finished", "compaction_time_micros": 95186, "compaction_time_cpu_micros": 46164, "output_level": 6, "num_output_files": 1, "total_output_size": 17569151, "num_input_records": 13121, "num_output_records": 12583, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Dec 6 05:17:22 localhost ceph-mon[298582]: rocksdb: 
[file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548789/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 6 05:17:22 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016242210163, "job": 20, "event": "table_file_deletion", "file_number": 38} Dec 6 05:17:22 localhost ceph-mon[298582]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548789/store.db/000036.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 6 05:17:22 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016242213743, "job": 20, "event": "table_file_deletion", "file_number": 36} Dec 6 05:17:22 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:17:22.111050) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:17:22 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:17:22.213844) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:17:22 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:17:22.213850) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:17:22 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:17:22.213852) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:17:22 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:17:22.213854) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:17:22 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:17:22.213856) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:17:22 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bdd630025009e2e44fbf04be47c5d7ffaef46590b0c0efcadf90ee243c00b629-userdata-shm.mount: 
Deactivated successfully. Dec 6 05:17:22 localhost podman[317757]: 2025-12-06 10:17:22.243431802 +0000 UTC m=+0.084001889 container cleanup bdd630025009e2e44fbf04be47c5d7ffaef46590b0c0efcadf90ee243c00b629 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e709cdf3-3894-4310-9fed-c1671aabae61, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true) Dec 6 05:17:22 localhost systemd[1]: libpod-conmon-bdd630025009e2e44fbf04be47c5d7ffaef46590b0c0efcadf90ee243c00b629.scope: Deactivated successfully. Dec 6 05:17:22 localhost podman[317764]: 2025-12-06 10:17:22.313273331 +0000 UTC m=+0.139347288 container remove bdd630025009e2e44fbf04be47c5d7ffaef46590b0c0efcadf90ee243c00b629 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e709cdf3-3894-4310-9fed-c1671aabae61, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 6 05:17:22 localhost nova_compute[282193]: 2025-12-06 10:17:22.359 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:22 localhost ovn_controller[154851]: 2025-12-06T10:17:22Z|00198|binding|INFO|Releasing lport 809f0ef4-0cca-474a-984b-630935d33748 from this chassis (sb_readonly=0) Dec 6 05:17:22 localhost kernel: device tap809f0ef4-0c left promiscuous mode Dec 6 05:17:22 localhost 
ovn_controller[154851]: 2025-12-06T10:17:22Z|00199|binding|INFO|Setting lport 809f0ef4-0cca-474a-984b-630935d33748 down in Southbound Dec 6 05:17:22 localhost nova_compute[282193]: 2025-12-06 10:17:22.380 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:22 localhost systemd[1]: var-lib-containers-storage-overlay-f971ae176a97e0304dfc9b45d1512a47d408b01268faabd5b4da043741f7c056-merged.mount: Deactivated successfully. Dec 6 05:17:22 localhost ovn_metadata_agent[160504]: 2025-12-06 10:17:22.491 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-e709cdf3-3894-4310-9fed-c1671aabae61', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e709cdf3-3894-4310-9fed-c1671aabae61', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '64b9b91747c648148f6dd23ce81ceb80', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005548789.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bd31e5c5-52c8-4ae1-8e71-d675fcdc4430, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=809f0ef4-0cca-474a-984b-630935d33748) old=Port_Binding(up=[True], chassis=[]) matches 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:17:22 localhost ovn_metadata_agent[160504]: 2025-12-06 10:17:22.493 160509 INFO neutron.agent.ovn.metadata.agent [-] Port 809f0ef4-0cca-474a-984b-630935d33748 in datapath e709cdf3-3894-4310-9fed-c1671aabae61 unbound from our chassis#033[00m Dec 6 05:17:22 localhost ovn_metadata_agent[160504]: 2025-12-06 10:17:22.494 160509 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network e709cdf3-3894-4310-9fed-c1671aabae61 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 6 05:17:22 localhost ovn_metadata_agent[160504]: 2025-12-06 10:17:22.495 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[a69111f9-d8c9-4ae8-9c7f-07f4ed97ac54]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:17:22 localhost nova_compute[282193]: 2025-12-06 10:17:22.573 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:22 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:17:22.706 263652 INFO neutron.agent.dhcp.agent [None req-747ed302-3a3d-4852-ad67-7e34b8bd675e - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:17:22 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:17:22.707 263652 INFO neutron.agent.dhcp.agent [None req-747ed302-3a3d-4852-ad67-7e34b8bd675e - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:17:22 localhost systemd[1]: run-netns-qdhcp\x2de709cdf3\x2d3894\x2d4310\x2d9fed\x2dc1671aabae61.mount: Deactivated successfully. 
Dec 6 05:17:23 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:17:23.002 263652 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:17:23 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:17:23 localhost ovn_controller[154851]: 2025-12-06T10:17:23Z|00200|binding|INFO|Releasing lport 4fb81ffd-e198-4628-9bd0-0c0f0c89c33a from this chassis (sb_readonly=0) Dec 6 05:17:23 localhost nova_compute[282193]: 2025-12-06 10:17:23.305 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:23 localhost nova_compute[282193]: 2025-12-06 10:17:23.850 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:23 localhost podman[241090]: time="2025-12-06T10:17:23Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 6 05:17:23 localhost podman[241090]: @ - - [06/Dec/2025:10:17:23 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156104 "" "Go-http-client/1.1" Dec 6 05:17:23 localhost podman[241090]: @ - - [06/Dec/2025:10:17:23 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19266 "" "Go-http-client/1.1" Dec 6 05:17:26 localhost ovn_controller[154851]: 2025-12-06T10:17:26Z|00201|binding|INFO|Releasing lport 4fb81ffd-e198-4628-9bd0-0c0f0c89c33a from this chassis (sb_readonly=0) Dec 6 05:17:26 localhost dnsmasq[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 1 addresses Dec 6 05:17:26 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:17:26 localhost podman[317807]: 
2025-12-06 10:17:26.440833666 +0000 UTC m=+0.070120148 container kill dd70518caed84fde347015325d6b35481123abc4a39df99f8bc9422c2468322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125) Dec 6 05:17:26 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:17:26 localhost nova_compute[282193]: 2025-12-06 10:17:26.447 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:27 localhost nova_compute[282193]: 2025-12-06 10:17:27.576 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:28 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:17:28 localhost nova_compute[282193]: 2025-12-06 10:17:28.862 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:29 localhost neutron_sriov_agent[256690]: 2025-12-06 10:17:29.499 2 INFO neutron.agent.securitygroups_rpc [None req-b4a3dd75-3886-433c-a68a-5b82ba491223 a6a9256bca1441629c18003b71ba1c6f 34a17eee71de4bac8b71972a4b7b506c - - default default] Security group member updated ['d618a097-5989-47aa-9263-1c8a114ad269']#033[00m Dec 6 05:17:29 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:17:29.934 263652 INFO neutron.agent.linux.ip_lib 
[None req-5a0982c0-0ffd-4ef1-87ce-b364c336c465 - - - - - -] Device tap1154309d-20 cannot be used as it has no MAC address#033[00m Dec 6 05:17:29 localhost nova_compute[282193]: 2025-12-06 10:17:29.991 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:29 localhost kernel: device tap1154309d-20 entered promiscuous mode Dec 6 05:17:30 localhost NetworkManager[5973]: [1765016250.0001] manager: (tap1154309d-20): new Generic device (/org/freedesktop/NetworkManager/Devices/35) Dec 6 05:17:30 localhost ovn_controller[154851]: 2025-12-06T10:17:30Z|00202|binding|INFO|Claiming lport 1154309d-2092-44e6-a8a3-8b5f18384543 for this chassis. Dec 6 05:17:30 localhost ovn_controller[154851]: 2025-12-06T10:17:30Z|00203|binding|INFO|1154309d-2092-44e6-a8a3-8b5f18384543: Claiming unknown Dec 6 05:17:30 localhost nova_compute[282193]: 2025-12-06 10:17:30.001 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:30 localhost systemd-udevd[317838]: Network interface NamePolicy= disabled on kernel command line. 
Dec 6 05:17:30 localhost ovn_metadata_agent[160504]: 2025-12-06 10:17:30.015 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '34a17eee71de4bac8b71972a4b7b506c', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82d32fcc-fa15-458b-9d3c-0c87258ef71d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=1154309d-2092-44e6-a8a3-8b5f18384543) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:17:30 localhost ovn_metadata_agent[160504]: 2025-12-06 10:17:30.018 160509 INFO neutron.agent.ovn.metadata.agent [-] Port 1154309d-2092-44e6-a8a3-8b5f18384543 in datapath 43883dce-1590-48c4-987c-a21b63b82a1c bound to our chassis#033[00m Dec 6 05:17:30 localhost systemd-journald[47810]: Data hash table of /run/log/journal/4b30904fc4748c16d0c72dbebcabab49/system.journal has a fill level at 75.0 (53723 of 71630 items, 25165824 file size, 468 bytes per hash table item), suggesting rotation. 
Dec 6 05:17:30 localhost systemd-journald[47810]: /run/log/journal/4b30904fc4748c16d0c72dbebcabab49/system.journal: Journal header limits reached or header out-of-date, rotating. Dec 6 05:17:30 localhost rsyslogd[760]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Dec 6 05:17:30 localhost ovn_metadata_agent[160504]: 2025-12-06 10:17:30.020 160509 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 43883dce-1590-48c4-987c-a21b63b82a1c or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 6 05:17:30 localhost ovn_metadata_agent[160504]: 2025-12-06 10:17:30.024 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[a6c46ac7-5337-4d33-904d-adf6d24c9cef]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:17:30 localhost journal[230404]: ethtool ioctl error on tap1154309d-20: No such device Dec 6 05:17:30 localhost nova_compute[282193]: 2025-12-06 10:17:30.032 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:30 localhost journal[230404]: ethtool ioctl error on tap1154309d-20: No such device Dec 6 05:17:30 localhost ovn_controller[154851]: 2025-12-06T10:17:30Z|00204|binding|INFO|Setting lport 1154309d-2092-44e6-a8a3-8b5f18384543 ovn-installed in OVS Dec 6 05:17:30 localhost ovn_controller[154851]: 2025-12-06T10:17:30Z|00205|binding|INFO|Setting lport 1154309d-2092-44e6-a8a3-8b5f18384543 up in Southbound Dec 6 05:17:30 localhost nova_compute[282193]: 2025-12-06 10:17:30.036 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:30 localhost journal[230404]: ethtool ioctl error on tap1154309d-20: No such device Dec 6 
05:17:30 localhost journal[230404]: ethtool ioctl error on tap1154309d-20: No such device Dec 6 05:17:30 localhost journal[230404]: ethtool ioctl error on tap1154309d-20: No such device Dec 6 05:17:30 localhost journal[230404]: ethtool ioctl error on tap1154309d-20: No such device Dec 6 05:17:30 localhost journal[230404]: ethtool ioctl error on tap1154309d-20: No such device Dec 6 05:17:30 localhost journal[230404]: ethtool ioctl error on tap1154309d-20: No such device Dec 6 05:17:30 localhost neutron_sriov_agent[256690]: 2025-12-06 10:17:30.064 2 INFO neutron.agent.securitygroups_rpc [None req-77263169-ab43-473e-a592-07200b19e18c a6a9256bca1441629c18003b71ba1c6f 34a17eee71de4bac8b71972a4b7b506c - - default default] Security group member updated ['d618a097-5989-47aa-9263-1c8a114ad269']#033[00m Dec 6 05:17:30 localhost nova_compute[282193]: 2025-12-06 10:17:30.076 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:30 localhost nova_compute[282193]: 2025-12-06 10:17:30.107 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:30 localhost rsyslogd[760]: imjournal: journal files changed, reloading... 
[v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Dec 6 05:17:30 localhost neutron_sriov_agent[256690]: 2025-12-06 10:17:30.255 2 INFO neutron.agent.securitygroups_rpc [None req-b0fdf288-4ef8-4212-8aee-98bfee473c24 8eeb1ce8ea6f4981a55c23fbea57f4cb f9595f0635f14c2196533c0f5ee5dc3b - - default default] Security group member updated ['cab1d39e-aba5-4938-880e-87b80fed90d0']#033[00m Dec 6 05:17:30 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:17:30.321 263652 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:17:29Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=7ea19ce8-d67f-4375-adea-622ee0a8cf03, ip_allocation=immediate, mac_address=fa:16:3e:38:89:d8, name=tempest-RoutersAdminNegativeIpV6Test-251011938, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T08:43:55Z, description=, dns_domain=, id=8e238f59-5792-4ff4-95af-f993c8e9e14f, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=3d603431c0bb4967bafc7a0aa6108bfe, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['be7ff830-a3e2-49cd-a05f-63bc528d6d1c'], tags=[], tenant_id=3d603431c0bb4967bafc7a0aa6108bfe, updated_at=2025-12-06T08:44:01Z, vlan_transparent=None, network_id=8e238f59-5792-4ff4-95af-f993c8e9e14f, port_security_enabled=True, project_id=f9595f0635f14c2196533c0f5ee5dc3b, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['cab1d39e-aba5-4938-880e-87b80fed90d0'], 
standard_attr_id=1437, status=DOWN, tags=[], tenant_id=f9595f0635f14c2196533c0f5ee5dc3b, updated_at=2025-12-06T10:17:29Z on network 8e238f59-5792-4ff4-95af-f993c8e9e14f#033[00m Dec 6 05:17:30 localhost dnsmasq[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 2 addresses Dec 6 05:17:30 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:17:30 localhost podman[317902]: 2025-12-06 10:17:30.555148231 +0000 UTC m=+0.055501425 container kill dd70518caed84fde347015325d6b35481123abc4a39df99f8bc9422c2468322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS) Dec 6 05:17:30 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:17:30 localhost podman[317947]: Dec 6 05:17:30 localhost podman[317947]: 2025-12-06 10:17:30.908427596 +0000 UTC m=+0.090424833 container create fcd5fcaf1d18edd42f83cf2c3f5954ccc0712db0917ab07da489c141bedf3a44 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS) Dec 6 05:17:30 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:17:30.913 263652 INFO 
neutron.agent.dhcp.agent [None req-fe2f1bd1-05bf-479e-85ed-4391ea10d1cc - - - - - -] DHCP configuration for ports {'7ea19ce8-d67f-4375-adea-622ee0a8cf03'} is completed#033[00m Dec 6 05:17:30 localhost systemd[1]: Started libpod-conmon-fcd5fcaf1d18edd42f83cf2c3f5954ccc0712db0917ab07da489c141bedf3a44.scope. Dec 6 05:17:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941. Dec 6 05:17:30 localhost podman[317947]: 2025-12-06 10:17:30.8633832 +0000 UTC m=+0.045380477 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 6 05:17:30 localhost systemd[1]: Started libcrun container. Dec 6 05:17:30 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/319bd8eac1224c0d5ec3129b67444d61663b4bfc2a5a1b52547071c047a6e30a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 6 05:17:30 localhost podman[317947]: 2025-12-06 10:17:30.998424056 +0000 UTC m=+0.180421293 container init fcd5fcaf1d18edd42f83cf2c3f5954ccc0712db0917ab07da489c141bedf3a44 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0) Dec 6 05:17:31 localhost podman[317947]: 2025-12-06 10:17:31.01174498 +0000 UTC m=+0.193742217 container start fcd5fcaf1d18edd42f83cf2c3f5954ccc0712db0917ab07da489c141bedf3a44 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS) Dec 6 05:17:31 localhost dnsmasq[317974]: started, version 2.85 cachesize 150 Dec 6 05:17:31 localhost dnsmasq[317974]: DNS service limited to local subnets Dec 6 05:17:31 localhost dnsmasq[317974]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 6 05:17:31 localhost dnsmasq[317974]: warning: no upstream servers configured Dec 6 05:17:31 localhost dnsmasq-dhcp[317974]: DHCPv6, static leases only on 2001:db8::, lease time 1d Dec 6 05:17:31 localhost dnsmasq[317974]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/addn_hosts - 0 addresses Dec 6 05:17:31 localhost dnsmasq-dhcp[317974]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/host Dec 6 05:17:31 localhost dnsmasq-dhcp[317974]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/opts Dec 6 05:17:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999. 
Dec 6 05:17:31 localhost podman[317963]: 2025-12-06 10:17:31.086679913 +0000 UTC m=+0.114853064 container health_status b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 6 05:17:31 localhost podman[317963]: 2025-12-06 10:17:31.097077698 +0000 UTC m=+0.125250839 container exec_died b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 6 05:17:31 localhost systemd[1]: b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941.service: Deactivated successfully. Dec 6 05:17:31 localhost podman[317982]: 2025-12-06 10:17:31.170680461 +0000 UTC m=+0.074720218 container health_status 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Dec 6 05:17:31 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:17:31.182 263652 INFO neutron.agent.dhcp.agent [None req-26ffc36e-4a76-46e7-8151-c8233f3843fd - - - - - -] DHCP configuration for ports {'687d7abb-e6aa-4047-aa26-552c962fcc91'} is completed#033[00m Dec 6 05:17:31 localhost podman[317982]: 2025-12-06 10:17:31.207209739 +0000 UTC m=+0.111249526 container exec_died 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Dec 6 05:17:31 localhost systemd[1]: 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.service: Deactivated successfully. Dec 6 05:17:31 localhost dnsmasq[317974]: exiting on receipt of SIGTERM Dec 6 05:17:31 localhost podman[318021]: 2025-12-06 10:17:31.345648078 +0000 UTC m=+0.049712969 container kill fcd5fcaf1d18edd42f83cf2c3f5954ccc0712db0917ab07da489c141bedf3a44 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS) Dec 6 05:17:31 localhost systemd[1]: libpod-fcd5fcaf1d18edd42f83cf2c3f5954ccc0712db0917ab07da489c141bedf3a44.scope: Deactivated successfully. 
Dec 6 05:17:31 localhost podman[318035]: 2025-12-06 10:17:31.415337842 +0000 UTC m=+0.054245746 container died fcd5fcaf1d18edd42f83cf2c3f5954ccc0712db0917ab07da489c141bedf3a44 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Dec 6 05:17:31 localhost podman[318035]: 2025-12-06 10:17:31.447023562 +0000 UTC m=+0.085931426 container cleanup fcd5fcaf1d18edd42f83cf2c3f5954ccc0712db0917ab07da489c141bedf3a44 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2) Dec 6 05:17:31 localhost systemd[1]: libpod-conmon-fcd5fcaf1d18edd42f83cf2c3f5954ccc0712db0917ab07da489c141bedf3a44.scope: Deactivated successfully. 
Dec 6 05:17:31 localhost podman[318036]: 2025-12-06 10:17:31.497804884 +0000 UTC m=+0.129344365 container remove fcd5fcaf1d18edd42f83cf2c3f5954ccc0712db0917ab07da489c141bedf3a44 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3) Dec 6 05:17:31 localhost nova_compute[282193]: 2025-12-06 10:17:31.510 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:31 localhost ovn_controller[154851]: 2025-12-06T10:17:31Z|00206|binding|INFO|Releasing lport 1154309d-2092-44e6-a8a3-8b5f18384543 from this chassis (sb_readonly=0) Dec 6 05:17:31 localhost kernel: device tap1154309d-20 left promiscuous mode Dec 6 05:17:31 localhost ovn_controller[154851]: 2025-12-06T10:17:31Z|00207|binding|INFO|Setting lport 1154309d-2092-44e6-a8a3-8b5f18384543 down in Southbound Dec 6 05:17:31 localhost ovn_metadata_agent[160504]: 2025-12-06 10:17:31.518 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 
'neutron:network_name': 'neutron-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '34a17eee71de4bac8b71972a4b7b506c', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005548789.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82d32fcc-fa15-458b-9d3c-0c87258ef71d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=1154309d-2092-44e6-a8a3-8b5f18384543) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:17:31 localhost ovn_metadata_agent[160504]: 2025-12-06 10:17:31.519 160509 INFO neutron.agent.ovn.metadata.agent [-] Port 1154309d-2092-44e6-a8a3-8b5f18384543 in datapath 43883dce-1590-48c4-987c-a21b63b82a1c unbound from our chassis#033[00m Dec 6 05:17:31 localhost ovn_metadata_agent[160504]: 2025-12-06 10:17:31.519 160509 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 43883dce-1590-48c4-987c-a21b63b82a1c or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 6 05:17:31 localhost ovn_metadata_agent[160504]: 2025-12-06 10:17:31.520 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[996f7547-2ddc-447a-bd0a-108384b6081d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:17:31 localhost nova_compute[282193]: 2025-12-06 10:17:31.534 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:31 localhost sshd[318063]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:17:31 localhost 
systemd[1]: tmp-crun.CpQLbr.mount: Deactivated successfully. Dec 6 05:17:31 localhost systemd[1]: var-lib-containers-storage-overlay-319bd8eac1224c0d5ec3129b67444d61663b4bfc2a5a1b52547071c047a6e30a-merged.mount: Deactivated successfully. Dec 6 05:17:31 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fcd5fcaf1d18edd42f83cf2c3f5954ccc0712db0917ab07da489c141bedf3a44-userdata-shm.mount: Deactivated successfully. Dec 6 05:17:32 localhost systemd[1]: run-netns-qdhcp\x2d43883dce\x2d1590\x2d48c4\x2d987c\x2da21b63b82a1c.mount: Deactivated successfully. Dec 6 05:17:32 localhost neutron_sriov_agent[256690]: 2025-12-06 10:17:32.191 2 INFO neutron.agent.securitygroups_rpc [None req-a2daea0b-127d-4cb1-8d58-679cf0ec3092 a6a9256bca1441629c18003b71ba1c6f 34a17eee71de4bac8b71972a4b7b506c - - default default] Security group member updated ['d618a097-5989-47aa-9263-1c8a114ad269']#033[00m Dec 6 05:17:32 localhost nova_compute[282193]: 2025-12-06 10:17:32.577 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:32 localhost neutron_sriov_agent[256690]: 2025-12-06 10:17:32.682 2 INFO neutron.agent.securitygroups_rpc [None req-a950d9cb-4b90-43c7-9619-4f314921acec 8eeb1ce8ea6f4981a55c23fbea57f4cb f9595f0635f14c2196533c0f5ee5dc3b - - default default] Security group member updated ['cab1d39e-aba5-4938-880e-87b80fed90d0']#033[00m Dec 6 05:17:32 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:17:32.826 263652 INFO neutron.agent.linux.ip_lib [None req-eb7b53fc-c777-40a1-97d1-51b2015d260d - - - - - -] Device tap097310b2-f2 cannot be used as it has no MAC address#033[00m Dec 6 05:17:32 localhost nova_compute[282193]: 2025-12-06 10:17:32.852 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:32 localhost kernel: device tap097310b2-f2 entered promiscuous mode Dec 6 05:17:32 
localhost systemd-udevd[317841]: Network interface NamePolicy= disabled on kernel command line. Dec 6 05:17:32 localhost NetworkManager[5973]: [1765016252.8625] manager: (tap097310b2-f2): new Generic device (/org/freedesktop/NetworkManager/Devices/36) Dec 6 05:17:32 localhost ovn_controller[154851]: 2025-12-06T10:17:32Z|00208|binding|INFO|Claiming lport 097310b2-f25c-43e0-9a4c-c7a1efaf80e5 for this chassis. Dec 6 05:17:32 localhost ovn_controller[154851]: 2025-12-06T10:17:32Z|00209|binding|INFO|097310b2-f25c-43e0-9a4c-c7a1efaf80e5: Claiming unknown Dec 6 05:17:32 localhost nova_compute[282193]: 2025-12-06 10:17:32.866 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:32 localhost ovn_controller[154851]: 2025-12-06T10:17:32Z|00210|binding|INFO|Setting lport 097310b2-f25c-43e0-9a4c-c7a1efaf80e5 up in Southbound Dec 6 05:17:32 localhost ovn_metadata_agent[160504]: 2025-12-06 10:17:32.870 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '34a17eee71de4bac8b71972a4b7b506c', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, 
additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82d32fcc-fa15-458b-9d3c-0c87258ef71d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=097310b2-f25c-43e0-9a4c-c7a1efaf80e5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:17:32 localhost ovn_controller[154851]: 2025-12-06T10:17:32Z|00211|binding|INFO|Setting lport 097310b2-f25c-43e0-9a4c-c7a1efaf80e5 ovn-installed in OVS Dec 6 05:17:32 localhost ovn_metadata_agent[160504]: 2025-12-06 10:17:32.872 160509 INFO neutron.agent.ovn.metadata.agent [-] Port 097310b2-f25c-43e0-9a4c-c7a1efaf80e5 in datapath 43883dce-1590-48c4-987c-a21b63b82a1c bound to our chassis#033[00m Dec 6 05:17:32 localhost nova_compute[282193]: 2025-12-06 10:17:32.871 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:32 localhost nova_compute[282193]: 2025-12-06 10:17:32.872 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:32 localhost ovn_metadata_agent[160504]: 2025-12-06 10:17:32.874 160509 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 43883dce-1590-48c4-987c-a21b63b82a1c or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 6 05:17:32 localhost ovn_metadata_agent[160504]: 2025-12-06 10:17:32.875 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[e99d277b-ee7e-4ac4-9c6e-4dd99f200c8d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:17:32 localhost nova_compute[282193]: 2025-12-06 10:17:32.877 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:32 localhost nova_compute[282193]: 2025-12-06 10:17:32.897 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:32 localhost nova_compute[282193]: 2025-12-06 10:17:32.928 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:32 localhost nova_compute[282193]: 2025-12-06 10:17:32.952 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:32 localhost podman[318093]: 2025-12-06 10:17:32.954739175 +0000 UTC m=+0.058274679 container kill dd70518caed84fde347015325d6b35481123abc4a39df99f8bc9422c2468322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0) Dec 6 05:17:32 localhost dnsmasq[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 1 addresses Dec 6 05:17:32 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:17:32 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:17:33 localhost neutron_sriov_agent[256690]: 2025-12-06 10:17:33.078 2 INFO neutron.agent.securitygroups_rpc [None req-675c08cc-007c-4dc9-986b-f4514913c9a2 a6a9256bca1441629c18003b71ba1c6f 34a17eee71de4bac8b71972a4b7b506c - - default default] Security group member updated 
['d618a097-5989-47aa-9263-1c8a114ad269']#033[00m Dec 6 05:17:33 localhost nova_compute[282193]: 2025-12-06 10:17:33.180 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:17:33 localhost nova_compute[282193]: 2025-12-06 10:17:33.204 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 05:17:33 localhost nova_compute[282193]: 2025-12-06 10:17:33.205 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 05:17:33 localhost nova_compute[282193]: 2025-12-06 10:17:33.206 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 05:17:33 localhost nova_compute[282193]: 2025-12-06 10:17:33.206 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Auditing locally available compute resources for np0005548789.localdomain (node: np0005548789.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 6 05:17:33 localhost nova_compute[282193]: 2025-12-06 10:17:33.207 282197 DEBUG oslo_concurrency.processutils 
[None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 6 05:17:33 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:17:33 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 6 05:17:33 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/1151500263' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 6 05:17:33 localhost nova_compute[282193]: 2025-12-06 10:17:33.599 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.392s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 6 05:17:33 localhost neutron_sriov_agent[256690]: 2025-12-06 10:17:33.675 2 INFO neutron.agent.securitygroups_rpc [None req-2bf571b9-2f59-4b7c-8546-bb481f9be7b1 3ea76362796945abb0389f60eab07566 23fdd860878442e1b8fc77e4ae3ef271 - - default default] Security group member updated ['dd9785c1-eb5d-4293-ac78-0fc1ce108f20']#033[00m Dec 6 05:17:33 localhost nova_compute[282193]: 2025-12-06 10:17:33.689 282197 DEBUG nova.virt.libvirt.driver [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 6 05:17:33 localhost nova_compute[282193]: 2025-12-06 10:17:33.690 282197 DEBUG nova.virt.libvirt.driver [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] skipping disk for instance-00000002 as it does not 
have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 6 05:17:33 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:17:33.733 263652 INFO neutron.agent.linux.ip_lib [None req-0f9ee326-3f39-460e-b859-ae70c9c792d7 - - - - - -] Device tapfd998f59-dd cannot be used as it has no MAC address#033[00m Dec 6 05:17:33 localhost podman[318193]: Dec 6 05:17:33 localhost podman[318193]: 2025-12-06 10:17:33.769798207 +0000 UTC m=+0.079634957 container create abdb41519be4cbaad7fa091b9fa225772f0e04a0030c5b35bf7043b4fcf8ed1d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:17:33 localhost nova_compute[282193]: 2025-12-06 10:17:33.797 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:33 localhost kernel: device tapfd998f59-dd entered promiscuous mode Dec 6 05:17:33 localhost ovn_controller[154851]: 2025-12-06T10:17:33Z|00212|binding|INFO|Claiming lport fd998f59-ddde-4bfa-95a4-6f61b1679474 for this chassis. 
Dec 6 05:17:33 localhost ovn_controller[154851]: 2025-12-06T10:17:33Z|00213|binding|INFO|fd998f59-ddde-4bfa-95a4-6f61b1679474: Claiming unknown Dec 6 05:17:33 localhost NetworkManager[5973]: [1765016253.8093] manager: (tapfd998f59-dd): new Generic device (/org/freedesktop/NetworkManager/Devices/37) Dec 6 05:17:33 localhost nova_compute[282193]: 2025-12-06 10:17:33.810 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:33 localhost ovn_metadata_agent[160504]: 2025-12-06 10:17:33.820 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-fb8c7162-302b-4277-a437-7090f604bfc2', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fb8c7162-302b-4277-a437-7090f604bfc2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '23fdd860878442e1b8fc77e4ae3ef271', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=23107f01-722b-406d-a1a5-a58a3fd6433e, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=fd998f59-ddde-4bfa-95a4-6f61b1679474) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:17:33 localhost ovn_metadata_agent[160504]: 
2025-12-06 10:17:33.824 160509 INFO neutron.agent.ovn.metadata.agent [-] Port fd998f59-ddde-4bfa-95a4-6f61b1679474 in datapath fb8c7162-302b-4277-a437-7090f604bfc2 bound to our chassis#033[00m Dec 6 05:17:33 localhost ovn_metadata_agent[160504]: 2025-12-06 10:17:33.826 160509 DEBUG neutron.agent.ovn.metadata.agent [-] Port 84c07f9d-e9b9-4723-baa3-f24a875f62ef IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Dec 6 05:17:33 localhost podman[318193]: 2025-12-06 10:17:33.727937407 +0000 UTC m=+0.037774157 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 6 05:17:33 localhost ovn_metadata_agent[160504]: 2025-12-06 10:17:33.827 160509 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fb8c7162-302b-4277-a437-7090f604bfc2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 6 05:17:33 localhost ovn_metadata_agent[160504]: 2025-12-06 10:17:33.828 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[72664c25-416a-411e-8388-4e3de35d4cb6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:17:33 localhost ovn_controller[154851]: 2025-12-06T10:17:33Z|00214|binding|INFO|Setting lport fd998f59-ddde-4bfa-95a4-6f61b1679474 ovn-installed in OVS Dec 6 05:17:33 localhost systemd[1]: Started libpod-conmon-abdb41519be4cbaad7fa091b9fa225772f0e04a0030c5b35bf7043b4fcf8ed1d.scope. 
Dec 6 05:17:33 localhost ovn_controller[154851]: 2025-12-06T10:17:33Z|00215|binding|INFO|Setting lport fd998f59-ddde-4bfa-95a4-6f61b1679474 up in Southbound Dec 6 05:17:33 localhost nova_compute[282193]: 2025-12-06 10:17:33.853 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:33 localhost nova_compute[282193]: 2025-12-06 10:17:33.864 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:33 localhost systemd[1]: Started libcrun container. Dec 6 05:17:33 localhost nova_compute[282193]: 2025-12-06 10:17:33.889 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:33 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8258fd64577ef554a8e4f45c1379598573b236be0e821bee3caa7d20e35b8f8d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 6 05:17:33 localhost podman[318193]: 2025-12-06 10:17:33.902165151 +0000 UTC m=+0.212001921 container init abdb41519be4cbaad7fa091b9fa225772f0e04a0030c5b35bf7043b4fcf8ed1d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3) Dec 6 05:17:33 localhost podman[318193]: 2025-12-06 10:17:33.909616017 +0000 UTC m=+0.219452787 container start abdb41519be4cbaad7fa091b9fa225772f0e04a0030c5b35bf7043b4fcf8ed1d 
(image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125) Dec 6 05:17:33 localhost dnsmasq[318222]: started, version 2.85 cachesize 150 Dec 6 05:17:33 localhost dnsmasq[318222]: DNS service limited to local subnets Dec 6 05:17:33 localhost dnsmasq[318222]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 6 05:17:33 localhost dnsmasq[318222]: warning: no upstream servers configured Dec 6 05:17:33 localhost dnsmasq[318222]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/addn_hosts - 1 addresses Dec 6 05:17:33 localhost nova_compute[282193]: 2025-12-06 10:17:33.917 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:33 localhost nova_compute[282193]: 2025-12-06 10:17:33.981 282197 WARNING nova.virt.libvirt.driver [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 6 05:17:33 localhost nova_compute[282193]: 2025-12-06 10:17:33.982 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Hypervisor/Node resource view: name=np0005548789.localdomain free_ram=11215MB free_disk=41.83699035644531GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": 
"1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 6 05:17:33 localhost nova_compute[282193]: 2025-12-06 10:17:33.983 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 05:17:33 localhost nova_compute[282193]: 2025-12-06 10:17:33.983 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 05:17:34 localhost nova_compute[282193]: 2025-12-06 10:17:34.040 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Instance b7ed0a2e-9350-4933-9334-4e5e08d3e6aa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 6 05:17:34 localhost nova_compute[282193]: 2025-12-06 10:17:34.040 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 6 05:17:34 localhost nova_compute[282193]: 2025-12-06 10:17:34.040 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Final resource view: name=np0005548789.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 6 05:17:34 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:17:34.092 263652 INFO neutron.agent.dhcp.agent [None req-f71282a2-6faf-4109-9a6a-39296595016c - - - - - -] DHCP configuration for ports {'5bada2e5-c44e-42db-929a-1fcf2ed4098d', '687d7abb-e6aa-4047-aa26-552c962fcc91'} is completed#033[00m Dec 6 05:17:34 localhost nova_compute[282193]: 2025-12-06 10:17:34.100 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 6 05:17:34 localhost neutron_sriov_agent[256690]: 2025-12-06 10:17:34.248 2 INFO neutron.agent.securitygroups_rpc [None req-f3a7982c-6432-4aaa-a51f-6f45752d4aa1 440e57a58b9f4b64af7435927930ce6a 37eea2b31d9543b793c928d777810de4 - - default default] Security group member updated ['5bf6ab1c-c80a-456c-9ce8-d446d055d129']#033[00m Dec 6 05:17:34 localhost podman[318254]: 2025-12-06 10:17:34.299493253 +0000 UTC m=+0.081950647 container kill 
abdb41519be4cbaad7fa091b9fa225772f0e04a0030c5b35bf7043b4fcf8ed1d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Dec 6 05:17:34 localhost systemd[1]: tmp-crun.LL7E8X.mount: Deactivated successfully. Dec 6 05:17:34 localhost dnsmasq[318222]: exiting on receipt of SIGTERM Dec 6 05:17:34 localhost systemd[1]: libpod-abdb41519be4cbaad7fa091b9fa225772f0e04a0030c5b35bf7043b4fcf8ed1d.scope: Deactivated successfully. Dec 6 05:17:34 localhost podman[318296]: 2025-12-06 10:17:34.408643434 +0000 UTC m=+0.081668419 container died abdb41519be4cbaad7fa091b9fa225772f0e04a0030c5b35bf7043b4fcf8ed1d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2) Dec 6 05:17:34 localhost podman[318296]: 2025-12-06 10:17:34.448961976 +0000 UTC m=+0.121986931 container remove abdb41519be4cbaad7fa091b9fa225772f0e04a0030c5b35bf7043b4fcf8ed1d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS 
Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3) Dec 6 05:17:34 localhost ovn_controller[154851]: 2025-12-06T10:17:34Z|00216|binding|INFO|Releasing lport 097310b2-f25c-43e0-9a4c-c7a1efaf80e5 from this chassis (sb_readonly=0) Dec 6 05:17:34 localhost kernel: device tap097310b2-f2 left promiscuous mode Dec 6 05:17:34 localhost ovn_controller[154851]: 2025-12-06T10:17:34Z|00217|binding|INFO|Setting lport 097310b2-f25c-43e0-9a4c-c7a1efaf80e5 down in Southbound Dec 6 05:17:34 localhost nova_compute[282193]: 2025-12-06 10:17:34.459 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:34 localhost ovn_metadata_agent[160504]: 2025-12-06 10:17:34.469 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '34a17eee71de4bac8b71972a4b7b506c', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005548789.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], 
datapath=82d32fcc-fa15-458b-9d3c-0c87258ef71d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=097310b2-f25c-43e0-9a4c-c7a1efaf80e5) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:17:34 localhost sshd[318323]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:17:34 localhost ovn_metadata_agent[160504]: 2025-12-06 10:17:34.473 160509 INFO neutron.agent.ovn.metadata.agent [-] Port 097310b2-f25c-43e0-9a4c-c7a1efaf80e5 in datapath 43883dce-1590-48c4-987c-a21b63b82a1c unbound from our chassis#033[00m Dec 6 05:17:34 localhost ovn_metadata_agent[160504]: 2025-12-06 10:17:34.474 160509 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 43883dce-1590-48c4-987c-a21b63b82a1c or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 6 05:17:34 localhost systemd[1]: libpod-conmon-abdb41519be4cbaad7fa091b9fa225772f0e04a0030c5b35bf7043b4fcf8ed1d.scope: Deactivated successfully. 
Dec 6 05:17:34 localhost ovn_metadata_agent[160504]: 2025-12-06 10:17:34.475 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[dad0dc18-ab92-4cef-8e44-993fe5fc3211]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:17:34 localhost nova_compute[282193]: 2025-12-06 10:17:34.478 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:34 localhost nova_compute[282193]: 2025-12-06 10:17:34.586 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 6 05:17:34 localhost nova_compute[282193]: 2025-12-06 10:17:34.593 282197 DEBUG nova.compute.provider_tree [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Inventory has not changed in ProviderTree for provider: 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 6 05:17:34 localhost nova_compute[282193]: 2025-12-06 10:17:34.617 282197 DEBUG nova.scheduler.client.report [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Inventory has not changed for provider 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 6 05:17:34 localhost nova_compute[282193]: 2025-12-06 10:17:34.620 282197 DEBUG 
nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Compute_service record updated for np0005548789.localdomain:np0005548789.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 6 05:17:34 localhost nova_compute[282193]: 2025-12-06 10:17:34.621 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.638s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 05:17:34 localhost neutron_sriov_agent[256690]: 2025-12-06 10:17:34.797 2 INFO neutron.agent.securitygroups_rpc [None req-b1db9883-f5c1-471b-9a07-cebf6b7ffba6 a6a9256bca1441629c18003b71ba1c6f 34a17eee71de4bac8b71972a4b7b506c - - default default] Security group member updated ['d618a097-5989-47aa-9263-1c8a114ad269']#033[00m Dec 6 05:17:34 localhost podman[318349]: Dec 6 05:17:34 localhost podman[318349]: 2025-12-06 10:17:34.851019511 +0000 UTC m=+0.094169727 container create c4c8ff188d2142423ad18d025ac974ba3c2dd4c9e1be9e72713cefc79062138c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fb8c7162-302b-4277-a437-7090f604bfc2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:17:34 localhost systemd[1]: Started libpod-conmon-c4c8ff188d2142423ad18d025ac974ba3c2dd4c9e1be9e72713cefc79062138c.scope. 
Dec 6 05:17:34 localhost podman[318349]: 2025-12-06 10:17:34.806889703 +0000 UTC m=+0.050039959 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 6 05:17:34 localhost systemd[1]: Started libcrun container. Dec 6 05:17:34 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e8ffbe4ba68729562762de2b34e92b94090dcca76e856e5cd87d912c117f07d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 6 05:17:34 localhost podman[318349]: 2025-12-06 10:17:34.936683609 +0000 UTC m=+0.179833835 container init c4c8ff188d2142423ad18d025ac974ba3c2dd4c9e1be9e72713cefc79062138c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fb8c7162-302b-4277-a437-7090f604bfc2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Dec 6 05:17:34 localhost podman[318349]: 2025-12-06 10:17:34.945243019 +0000 UTC m=+0.188393235 container start c4c8ff188d2142423ad18d025ac974ba3c2dd4c9e1be9e72713cefc79062138c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fb8c7162-302b-4277-a437-7090f604bfc2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 6 05:17:34 localhost dnsmasq[318368]: started, version 2.85 cachesize 150 Dec 6 05:17:34 localhost dnsmasq[318368]: DNS service limited to local subnets Dec 6 05:17:34 localhost 
dnsmasq[318368]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 6 05:17:34 localhost dnsmasq[318368]: warning: no upstream servers configured Dec 6 05:17:34 localhost dnsmasq-dhcp[318368]: DHCP, static leases only on 10.100.0.0, lease time 1d Dec 6 05:17:34 localhost dnsmasq[318368]: read /var/lib/neutron/dhcp/fb8c7162-302b-4277-a437-7090f604bfc2/addn_hosts - 0 addresses Dec 6 05:17:34 localhost dnsmasq-dhcp[318368]: read /var/lib/neutron/dhcp/fb8c7162-302b-4277-a437-7090f604bfc2/host Dec 6 05:17:34 localhost dnsmasq-dhcp[318368]: read /var/lib/neutron/dhcp/fb8c7162-302b-4277-a437-7090f604bfc2/opts Dec 6 05:17:34 localhost systemd[1]: var-lib-containers-storage-overlay-8258fd64577ef554a8e4f45c1379598573b236be0e821bee3caa7d20e35b8f8d-merged.mount: Deactivated successfully. Dec 6 05:17:34 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-abdb41519be4cbaad7fa091b9fa225772f0e04a0030c5b35bf7043b4fcf8ed1d-userdata-shm.mount: Deactivated successfully. Dec 6 05:17:34 localhost systemd[1]: run-netns-qdhcp\x2d43883dce\x2d1590\x2d48c4\x2d987c\x2da21b63b82a1c.mount: Deactivated successfully. 
Dec 6 05:17:35 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:17:35.076 263652 INFO neutron.agent.dhcp.agent [None req-b61c0392-90c1-45fa-b424-14f5a8b16f60 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:17:33Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=b850c0d9-77c9-4dd5-9ad4-2e5a440a1ba5, ip_allocation=immediate, mac_address=fa:16:3e:65:f6:03, name=tempest-TagsExtTest-884351016, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:17:29Z, description=, dns_domain=, id=fb8c7162-302b-4277-a437-7090f604bfc2, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-TagsExtTest-test-network-640583948, port_security_enabled=True, project_id=23fdd860878442e1b8fc77e4ae3ef271, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=45649, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1436, status=ACTIVE, subnets=['1af85c38-1fa4-4964-bb73-fcbd8bb9b651'], tags=[], tenant_id=23fdd860878442e1b8fc77e4ae3ef271, updated_at=2025-12-06T10:17:31Z, vlan_transparent=None, network_id=fb8c7162-302b-4277-a437-7090f604bfc2, port_security_enabled=True, project_id=23fdd860878442e1b8fc77e4ae3ef271, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['dd9785c1-eb5d-4293-ac78-0fc1ce108f20'], standard_attr_id=1457, status=DOWN, tags=[], tenant_id=23fdd860878442e1b8fc77e4ae3ef271, updated_at=2025-12-06T10:17:33Z on network fb8c7162-302b-4277-a437-7090f604bfc2#033[00m Dec 6 05:17:35 localhost systemd[1]: virtsecretd.service: Deactivated successfully. 
Dec 6 05:17:35 localhost neutron_sriov_agent[256690]: 2025-12-06 10:17:35.197 2 INFO neutron.agent.securitygroups_rpc [None req-5b990af0-9142-4008-b949-8f1c6c9fa9d7 440e57a58b9f4b64af7435927930ce6a 37eea2b31d9543b793c928d777810de4 - - default default] Security group member updated ['5bf6ab1c-c80a-456c-9ce8-d446d055d129']#033[00m Dec 6 05:17:35 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:17:35.232 263652 INFO neutron.agent.dhcp.agent [None req-8b3f73e4-3f94-4385-9ba9-30d1f570a429 - - - - - -] DHCP configuration for ports {'132f8120-ff81-4b64-9ef9-4b612c95da6c'} is completed#033[00m Dec 6 05:17:35 localhost dnsmasq[318368]: read /var/lib/neutron/dhcp/fb8c7162-302b-4277-a437-7090f604bfc2/addn_hosts - 1 addresses Dec 6 05:17:35 localhost dnsmasq-dhcp[318368]: read /var/lib/neutron/dhcp/fb8c7162-302b-4277-a437-7090f604bfc2/host Dec 6 05:17:35 localhost dnsmasq-dhcp[318368]: read /var/lib/neutron/dhcp/fb8c7162-302b-4277-a437-7090f604bfc2/opts Dec 6 05:17:35 localhost podman[318387]: 2025-12-06 10:17:35.297657439 +0000 UTC m=+0.070495739 container kill c4c8ff188d2142423ad18d025ac974ba3c2dd4c9e1be9e72713cefc79062138c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fb8c7162-302b-4277-a437-7090f604bfc2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3) Dec 6 05:17:35 localhost systemd[1]: tmp-crun.e9PE2o.mount: Deactivated successfully. 
Dec 6 05:17:35 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:17:35.505 263652 INFO neutron.agent.dhcp.agent [None req-6ab2c6ed-7134-46a9-9aa7-7ca216e2a1e7 - - - - - -] DHCP configuration for ports {'b850c0d9-77c9-4dd5-9ad4-2e5a440a1ba5'} is completed#033[00m Dec 6 05:17:35 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:17:35.610 263652 INFO neutron.agent.linux.ip_lib [None req-a4ad59cc-3b69-4ecb-9f10-469052654f2c - - - - - -] Device tap5e36b702-7f cannot be used as it has no MAC address#033[00m Dec 6 05:17:35 localhost nova_compute[282193]: 2025-12-06 10:17:35.639 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:35 localhost kernel: device tap5e36b702-7f entered promiscuous mode Dec 6 05:17:35 localhost ovn_controller[154851]: 2025-12-06T10:17:35Z|00218|binding|INFO|Claiming lport 5e36b702-7f25-4b55-969a-7996ee55fcd1 for this chassis. Dec 6 05:17:35 localhost ovn_controller[154851]: 2025-12-06T10:17:35Z|00219|binding|INFO|5e36b702-7f25-4b55-969a-7996ee55fcd1: Claiming unknown Dec 6 05:17:35 localhost NetworkManager[5973]: [1765016255.6503] manager: (tap5e36b702-7f): new Generic device (/org/freedesktop/NetworkManager/Devices/38) Dec 6 05:17:35 localhost nova_compute[282193]: 2025-12-06 10:17:35.649 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:35 localhost ovn_metadata_agent[160504]: 2025-12-06 10:17:35.660 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': 
'2001:db8::2/64', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '34a17eee71de4bac8b71972a4b7b506c', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82d32fcc-fa15-458b-9d3c-0c87258ef71d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=5e36b702-7f25-4b55-969a-7996ee55fcd1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:17:35 localhost ovn_controller[154851]: 2025-12-06T10:17:35Z|00220|binding|INFO|Setting lport 5e36b702-7f25-4b55-969a-7996ee55fcd1 ovn-installed in OVS Dec 6 05:17:35 localhost ovn_controller[154851]: 2025-12-06T10:17:35Z|00221|binding|INFO|Setting lport 5e36b702-7f25-4b55-969a-7996ee55fcd1 up in Southbound Dec 6 05:17:35 localhost ovn_metadata_agent[160504]: 2025-12-06 10:17:35.663 160509 INFO neutron.agent.ovn.metadata.agent [-] Port 5e36b702-7f25-4b55-969a-7996ee55fcd1 in datapath 43883dce-1590-48c4-987c-a21b63b82a1c bound to our chassis#033[00m Dec 6 05:17:35 localhost ovn_metadata_agent[160504]: 2025-12-06 10:17:35.664 160509 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 43883dce-1590-48c4-987c-a21b63b82a1c or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 6 05:17:35 localhost nova_compute[282193]: 2025-12-06 10:17:35.664 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 
__log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:35 localhost ovn_metadata_agent[160504]: 2025-12-06 10:17:35.665 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[7ce52bab-fc70-4e13-beef-8c8714005f58]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:17:35 localhost nova_compute[282193]: 2025-12-06 10:17:35.676 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:35 localhost journal[230404]: ethtool ioctl error on tap5e36b702-7f: No such device Dec 6 05:17:35 localhost journal[230404]: ethtool ioctl error on tap5e36b702-7f: No such device Dec 6 05:17:35 localhost journal[230404]: ethtool ioctl error on tap5e36b702-7f: No such device Dec 6 05:17:35 localhost journal[230404]: ethtool ioctl error on tap5e36b702-7f: No such device Dec 6 05:17:35 localhost journal[230404]: ethtool ioctl error on tap5e36b702-7f: No such device Dec 6 05:17:35 localhost journal[230404]: ethtool ioctl error on tap5e36b702-7f: No such device Dec 6 05:17:35 localhost journal[230404]: ethtool ioctl error on tap5e36b702-7f: No such device Dec 6 05:17:35 localhost journal[230404]: ethtool ioctl error on tap5e36b702-7f: No such device Dec 6 05:17:35 localhost nova_compute[282193]: 2025-12-06 10:17:35.712 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:35 localhost nova_compute[282193]: 2025-12-06 10:17:35.741 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:35 localhost neutron_sriov_agent[256690]: 2025-12-06 10:17:35.856 2 INFO neutron.agent.securitygroups_rpc [None req-a5058513-5128-4405-b292-62b6045d3f2a a6a9256bca1441629c18003b71ba1c6f 34a17eee71de4bac8b71972a4b7b506c - - default default] Security group 
member updated ['d618a097-5989-47aa-9263-1c8a114ad269']#033[00m Dec 6 05:17:36 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e132 e132: 6 total, 6 up, 6 in Dec 6 05:17:36 localhost podman[318487]: Dec 6 05:17:36 localhost podman[318487]: 2025-12-06 10:17:36.596244336 +0000 UTC m=+0.093521607 container create 0fba49061381186b00c01e0d848b7dc36e7749d4eef5b11601e32c45c06e59d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2) Dec 6 05:17:36 localhost systemd[1]: Started libpod-conmon-0fba49061381186b00c01e0d848b7dc36e7749d4eef5b11601e32c45c06e59d3.scope. Dec 6 05:17:36 localhost podman[318487]: 2025-12-06 10:17:36.55053222 +0000 UTC m=+0.047809491 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 6 05:17:36 localhost systemd[1]: Started libcrun container. 
Dec 6 05:17:36 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cea834eea21e8604cb1b513c17b5bce8499315738b89efdffa08927875e6fc9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 6 05:17:36 localhost podman[318487]: 2025-12-06 10:17:36.680861664 +0000 UTC m=+0.178138905 container init 0fba49061381186b00c01e0d848b7dc36e7749d4eef5b11601e32c45c06e59d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Dec 6 05:17:36 localhost podman[318487]: 2025-12-06 10:17:36.692136836 +0000 UTC m=+0.189414067 container start 0fba49061381186b00c01e0d848b7dc36e7749d4eef5b11601e32c45c06e59d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125) Dec 6 05:17:36 localhost dnsmasq[318505]: started, version 2.85 cachesize 150 Dec 6 05:17:36 localhost dnsmasq[318505]: DNS service limited to local subnets Dec 6 05:17:36 localhost dnsmasq[318505]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 6 05:17:36 localhost dnsmasq[318505]: warning: no upstream servers configured Dec 
6 05:17:36 localhost dnsmasq-dhcp[318505]: DHCPv6, static leases only on 2001:db8::, lease time 1d Dec 6 05:17:36 localhost dnsmasq[318505]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/addn_hosts - 1 addresses Dec 6 05:17:36 localhost dnsmasq-dhcp[318505]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/host Dec 6 05:17:36 localhost dnsmasq-dhcp[318505]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/opts Dec 6 05:17:37 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:17:37.069 263652 INFO neutron.agent.dhcp.agent [None req-8e9a43fc-a131-4331-93cb-4e1a04745d26 - - - - - -] DHCP configuration for ports {'4e06a687-1f49-4292-acf2-929e0eb84acf', '687d7abb-e6aa-4047-aa26-552c962fcc91'} is completed#033[00m Dec 6 05:17:37 localhost dnsmasq[318505]: exiting on receipt of SIGTERM Dec 6 05:17:37 localhost podman[318523]: 2025-12-06 10:17:37.255437832 +0000 UTC m=+0.059367302 container kill 0fba49061381186b00c01e0d848b7dc36e7749d4eef5b11601e32c45c06e59d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Dec 6 05:17:37 localhost systemd[1]: libpod-0fba49061381186b00c01e0d848b7dc36e7749d4eef5b11601e32c45c06e59d3.scope: Deactivated successfully. 
Dec 6 05:17:37 localhost podman[318536]: 2025-12-06 10:17:37.333103628 +0000 UTC m=+0.066182829 container died 0fba49061381186b00c01e0d848b7dc36e7749d4eef5b11601e32c45c06e59d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Dec 6 05:17:37 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e133 e133: 6 total, 6 up, 6 in Dec 6 05:17:37 localhost podman[318536]: 2025-12-06 10:17:37.412181316 +0000 UTC m=+0.145260457 container cleanup 0fba49061381186b00c01e0d848b7dc36e7749d4eef5b11601e32c45c06e59d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125) Dec 6 05:17:37 localhost systemd[1]: libpod-conmon-0fba49061381186b00c01e0d848b7dc36e7749d4eef5b11601e32c45c06e59d3.scope: Deactivated successfully. 
Dec 6 05:17:37 localhost podman[318538]: 2025-12-06 10:17:37.455943693 +0000 UTC m=+0.179115694 container remove 0fba49061381186b00c01e0d848b7dc36e7749d4eef5b11601e32c45c06e59d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true) Dec 6 05:17:37 localhost kernel: device tap5e36b702-7f left promiscuous mode Dec 6 05:17:37 localhost nova_compute[282193]: 2025-12-06 10:17:37.469 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:37 localhost ovn_controller[154851]: 2025-12-06T10:17:37Z|00222|binding|INFO|Releasing lport 5e36b702-7f25-4b55-969a-7996ee55fcd1 from this chassis (sb_readonly=0) Dec 6 05:17:37 localhost ovn_controller[154851]: 2025-12-06T10:17:37Z|00223|binding|INFO|Setting lport 5e36b702-7f25-4b55-969a-7996ee55fcd1 down in Southbound Dec 6 05:17:37 localhost nova_compute[282193]: 2025-12-06 10:17:37.484 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:37 localhost ovn_metadata_agent[160504]: 2025-12-06 10:17:37.499 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], 
external_ids={'neutron:cidrs': '', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '34a17eee71de4bac8b71972a4b7b506c', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005548789.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82d32fcc-fa15-458b-9d3c-0c87258ef71d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=5e36b702-7f25-4b55-969a-7996ee55fcd1) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:17:37 localhost ovn_metadata_agent[160504]: 2025-12-06 10:17:37.502 160509 INFO neutron.agent.ovn.metadata.agent [-] Port 5e36b702-7f25-4b55-969a-7996ee55fcd1 in datapath 43883dce-1590-48c4-987c-a21b63b82a1c unbound from our chassis#033[00m Dec 6 05:17:37 localhost ovn_metadata_agent[160504]: 2025-12-06 10:17:37.503 160509 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 43883dce-1590-48c4-987c-a21b63b82a1c or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 6 05:17:37 localhost ovn_metadata_agent[160504]: 2025-12-06 10:17:37.505 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[7d00bfb2-3e2a-495e-9143-4b952b821f28]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:17:37 localhost nova_compute[282193]: 2025-12-06 10:17:37.579 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] 
on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:37 localhost systemd[1]: var-lib-containers-storage-overlay-3cea834eea21e8604cb1b513c17b5bce8499315738b89efdffa08927875e6fc9-merged.mount: Deactivated successfully. Dec 6 05:17:37 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0fba49061381186b00c01e0d848b7dc36e7749d4eef5b11601e32c45c06e59d3-userdata-shm.mount: Deactivated successfully. Dec 6 05:17:37 localhost nova_compute[282193]: 2025-12-06 10:17:37.617 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:17:37 localhost nova_compute[282193]: 2025-12-06 10:17:37.618 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:17:37 localhost nova_compute[282193]: 2025-12-06 10:17:37.618 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 6 05:17:37 localhost nova_compute[282193]: 2025-12-06 10:17:37.619 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 6 05:17:37 localhost nova_compute[282193]: 2025-12-06 10:17:37.731 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 6 05:17:37 localhost nova_compute[282193]: 2025-12-06 10:17:37.732 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquired lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 6 05:17:37 localhost nova_compute[282193]: 2025-12-06 10:17:37.732 282197 DEBUG nova.network.neutron [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 6 05:17:37 localhost nova_compute[282193]: 2025-12-06 10:17:37.733 282197 DEBUG nova.objects.instance [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b7ed0a2e-9350-4933-9334-4e5e08d3e6aa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 6 05:17:37 localhost systemd[1]: run-netns-qdhcp\x2d43883dce\x2d1590\x2d48c4\x2d987c\x2da21b63b82a1c.mount: Deactivated successfully. 
Dec 6 05:17:38 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:17:38 localhost nova_compute[282193]: 2025-12-06 10:17:38.424 282197 DEBUG nova.network.neutron [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Updating instance_info_cache with network_info: [{"id": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "address": "fa:16:3e:64:77:f3", "network": {"id": "652b6bdc-40ce-45b7-8aa5-3bca79987993", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.162", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "3d603431c0bb4967bafc7a0aa6108bfe", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86fc0b7a-fb", "ovs_interfaceid": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 6 05:17:38 localhost nova_compute[282193]: 2025-12-06 10:17:38.451 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Releasing lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 6 05:17:38 localhost 
nova_compute[282193]: 2025-12-06 10:17:38.451 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 6 05:17:38 localhost nova_compute[282193]: 2025-12-06 10:17:38.452 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:17:38 localhost nova_compute[282193]: 2025-12-06 10:17:38.453 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:17:38 localhost neutron_sriov_agent[256690]: 2025-12-06 10:17:38.465 2 INFO neutron.agent.securitygroups_rpc [None req-ab57ea17-3add-445e-9d4b-332ca72ce0af a6a9256bca1441629c18003b71ba1c6f 34a17eee71de4bac8b71972a4b7b506c - - default default] Security group member updated ['d618a097-5989-47aa-9263-1c8a114ad269']#033[00m Dec 6 05:17:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e. Dec 6 05:17:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094. 
Dec 6 05:17:38 localhost nova_compute[282193]: 2025-12-06 10:17:38.866 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:38 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:17:38.871 263652 INFO neutron.agent.linux.ip_lib [None req-9ab44985-bdfd-4f48-8db7-e6b4ad6177c4 - - - - - -] Device tap69eb5d2a-05 cannot be used as it has no MAC address#033[00m Dec 6 05:17:38 localhost nova_compute[282193]: 2025-12-06 10:17:38.895 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:38 localhost kernel: device tap69eb5d2a-05 entered promiscuous mode Dec 6 05:17:38 localhost systemd[1]: tmp-crun.6ZJTe0.mount: Deactivated successfully. Dec 6 05:17:38 localhost NetworkManager[5973]: [1765016258.9110] manager: (tap69eb5d2a-05): new Generic device (/org/freedesktop/NetworkManager/Devices/39) Dec 6 05:17:38 localhost ovn_controller[154851]: 2025-12-06T10:17:38Z|00224|binding|INFO|Claiming lport 69eb5d2a-055c-47ec-aa6f-2e93d626f115 for this chassis. Dec 6 05:17:38 localhost ovn_controller[154851]: 2025-12-06T10:17:38Z|00225|binding|INFO|69eb5d2a-055c-47ec-aa6f-2e93d626f115: Claiming unknown Dec 6 05:17:38 localhost nova_compute[282193]: 2025-12-06 10:17:38.915 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:38 localhost systemd-udevd[318604]: Network interface NamePolicy= disabled on kernel command line. 
Dec 6 05:17:38 localhost ovn_metadata_agent[160504]: 2025-12-06 10:17:38.924 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '34a17eee71de4bac8b71972a4b7b506c', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82d32fcc-fa15-458b-9d3c-0c87258ef71d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=69eb5d2a-055c-47ec-aa6f-2e93d626f115) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:17:38 localhost ovn_metadata_agent[160504]: 2025-12-06 10:17:38.926 160509 INFO neutron.agent.ovn.metadata.agent [-] Port 69eb5d2a-055c-47ec-aa6f-2e93d626f115 in datapath 43883dce-1590-48c4-987c-a21b63b82a1c bound to our chassis#033[00m Dec 6 05:17:38 localhost ovn_metadata_agent[160504]: 2025-12-06 10:17:38.927 160509 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 43883dce-1590-48c4-987c-a21b63b82a1c or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 6 05:17:38 localhost ovn_controller[154851]: 2025-12-06T10:17:38Z|00226|binding|INFO|Setting lport 69eb5d2a-055c-47ec-aa6f-2e93d626f115 up in Southbound Dec 6 05:17:38 localhost ovn_metadata_agent[160504]: 2025-12-06 10:17:38.929 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[8a14d97b-404e-44eb-a9b6-2aba2b7c718e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:17:38 localhost ovn_controller[154851]: 2025-12-06T10:17:38Z|00227|binding|INFO|Setting lport 69eb5d2a-055c-47ec-aa6f-2e93d626f115 ovn-installed in OVS Dec 6 05:17:38 localhost nova_compute[282193]: 2025-12-06 10:17:38.932 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:38 localhost nova_compute[282193]: 2025-12-06 10:17:38.936 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:38 localhost journal[230404]: ethtool ioctl error on tap69eb5d2a-05: No such device Dec 6 05:17:38 localhost journal[230404]: ethtool ioctl error on tap69eb5d2a-05: No such device Dec 6 05:17:38 localhost nova_compute[282193]: 2025-12-06 10:17:38.950 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:38 localhost journal[230404]: ethtool ioctl error on tap69eb5d2a-05: No such device Dec 6 05:17:38 localhost journal[230404]: ethtool ioctl error on tap69eb5d2a-05: No such device Dec 6 05:17:38 localhost podman[318568]: 2025-12-06 10:17:38.969160692 +0000 UTC m=+0.162528171 container health_status bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, 
tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 6 05:17:38 localhost journal[230404]: ethtool ioctl error on tap69eb5d2a-05: No such device Dec 6 05:17:38 localhost podman[318567]: 2025-12-06 10:17:38.91700276 +0000 UTC m=+0.111097742 container health_status 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, 
health_status=healthy, managed_by=edpm_ansible, name=ubi9-minimal, release=1755695350, version=9.6, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc.) Dec 6 05:17:38 localhost journal[230404]: ethtool ioctl error on tap69eb5d2a-05: No such device Dec 6 05:17:38 localhost journal[230404]: ethtool ioctl error on tap69eb5d2a-05: No such device Dec 6 05:17:38 localhost podman[318568]: 2025-12-06 10:17:38.985190678 +0000 UTC m=+0.178558127 container exec_died bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Dec 6 05:17:38 localhost journal[230404]: ethtool ioctl error on tap69eb5d2a-05: No such device Dec 6 05:17:38 localhost nova_compute[282193]: 2025-12-06 10:17:38.995 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:38 localhost systemd[1]: bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094.service: Deactivated successfully. Dec 6 05:17:39 localhost podman[318567]: 2025-12-06 10:17:39.006433472 +0000 UTC m=+0.200528504 container exec_died 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, container_name=openstack_network_exporter, vcs-type=git, name=ubi9-minimal, release=1755695350, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': 
'/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, architecture=x86_64, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, io.buildah.version=1.33.7, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=) Dec 6 05:17:39 localhost nova_compute[282193]: 2025-12-06 10:17:39.023 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:39 localhost systemd[1]: 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e.service: Deactivated successfully. 
Dec 6 05:17:39 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e134 e134: 6 total, 6 up, 6 in Dec 6 05:17:39 localhost nova_compute[282193]: 2025-12-06 10:17:39.181 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:17:39 localhost nova_compute[282193]: 2025-12-06 10:17:39.182 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:17:39 localhost nova_compute[282193]: 2025-12-06 10:17:39.183 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 6 05:17:39 localhost neutron_sriov_agent[256690]: 2025-12-06 10:17:39.758 2 INFO neutron.agent.securitygroups_rpc [None req-24de80cf-8a07-42c3-8966-675d0403c3d2 a6a9256bca1441629c18003b71ba1c6f 34a17eee71de4bac8b71972a4b7b506c - - default default] Security group member updated ['d618a097-5989-47aa-9263-1c8a114ad269']#033[00m Dec 6 05:17:39 localhost systemd[1]: tmp-crun.dQ9sJB.mount: Deactivated successfully. 
Dec 6 05:17:39 localhost podman[318685]: Dec 6 05:17:39 localhost podman[318685]: 2025-12-06 10:17:39.864913951 +0000 UTC m=+0.090520136 container create b32096f403d644e2f4c19ffec842afc03e7ccdcca601ee3d5fd53645a6657ddc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Dec 6 05:17:39 localhost systemd[1]: Started libpod-conmon-b32096f403d644e2f4c19ffec842afc03e7ccdcca601ee3d5fd53645a6657ddc.scope. Dec 6 05:17:39 localhost podman[318685]: 2025-12-06 10:17:39.821189235 +0000 UTC m=+0.046795420 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 6 05:17:39 localhost systemd[1]: Started libcrun container. 
Dec 6 05:17:39 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d161c57bfce869ae5bb0e8067b7e7d5f20f25f550e88e581d171fd0c5e663098/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 6 05:17:39 localhost podman[318685]: 2025-12-06 10:17:39.93972891 +0000 UTC m=+0.165335055 container init b32096f403d644e2f4c19ffec842afc03e7ccdcca601ee3d5fd53645a6657ddc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125) Dec 6 05:17:39 localhost podman[318685]: 2025-12-06 10:17:39.953088456 +0000 UTC m=+0.178694611 container start b32096f403d644e2f4c19ffec842afc03e7ccdcca601ee3d5fd53645a6657ddc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:17:39 localhost dnsmasq[318703]: started, version 2.85 cachesize 150 Dec 6 05:17:39 localhost dnsmasq[318703]: DNS service limited to local subnets Dec 6 05:17:39 localhost dnsmasq[318703]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 6 05:17:39 localhost dnsmasq[318703]: warning: no upstream servers configured Dec 
6 05:17:39 localhost dnsmasq-dhcp[318703]: DHCPv6, static leases only on 2001:db8::, lease time 1d Dec 6 05:17:39 localhost dnsmasq[318703]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/addn_hosts - 0 addresses Dec 6 05:17:39 localhost dnsmasq-dhcp[318703]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/host Dec 6 05:17:39 localhost dnsmasq-dhcp[318703]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/opts Dec 6 05:17:40 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:17:40.012 263652 INFO neutron.agent.dhcp.agent [None req-9ab44985-bdfd-4f48-8db7-e6b4ad6177c4 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:17:29Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=afe4ba38-14bd-4006-b873-2ed564ce569c, ip_allocation=immediate, mac_address=fa:16:3e:8f:b9:5b, name=tempest-NetworksTestDHCPv6-469175711, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:17:28Z, description=, dns_domain=, id=43883dce-1590-48c4-987c-a21b63b82a1c, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-1975538139, port_security_enabled=True, project_id=34a17eee71de4bac8b71972a4b7b506c, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=42818, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1415, status=ACTIVE, subnets=['79a21d6b-39ef-4420-bf12-860cde44033d'], tags=[], tenant_id=34a17eee71de4bac8b71972a4b7b506c, updated_at=2025-12-06T10:17:29Z, vlan_transparent=None, network_id=43883dce-1590-48c4-987c-a21b63b82a1c, port_security_enabled=True, project_id=34a17eee71de4bac8b71972a4b7b506c, 
qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['d618a097-5989-47aa-9263-1c8a114ad269'], standard_attr_id=1420, status=DOWN, tags=[], tenant_id=34a17eee71de4bac8b71972a4b7b506c, updated_at=2025-12-06T10:17:29Z on network 43883dce-1590-48c4-987c-a21b63b82a1c#033[00m Dec 6 05:17:40 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:17:40.153 263652 INFO neutron.agent.dhcp.agent [None req-3b069e9b-727e-42e6-8e76-cb6c2a44ee98 - - - - - -] DHCP configuration for ports {'687d7abb-e6aa-4047-aa26-552c962fcc91'} is completed#033[00m Dec 6 05:17:40 localhost nova_compute[282193]: 2025-12-06 10:17:40.182 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:17:40 localhost dnsmasq[318703]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/addn_hosts - 1 addresses Dec 6 05:17:40 localhost dnsmasq-dhcp[318703]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/host Dec 6 05:17:40 localhost dnsmasq-dhcp[318703]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/opts Dec 6 05:17:40 localhost podman[318721]: 2025-12-06 10:17:40.188003601 +0000 UTC m=+0.050652137 container kill b32096f403d644e2f4c19ffec842afc03e7ccdcca601ee3d5fd53645a6657ddc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true) Dec 6 05:17:40 localhost neutron_dhcp_agent[263648]: 
2025-12-06 10:17:40.308 263652 INFO neutron.agent.dhcp.agent [None req-9ab44985-bdfd-4f48-8db7-e6b4ad6177c4 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:17:31Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=5bada2e5-c44e-42db-929a-1fcf2ed4098d, ip_allocation=immediate, mac_address=fa:16:3e:b8:48:6b, name=tempest-NetworksTestDHCPv6-543457619, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:17:28Z, description=, dns_domain=, id=43883dce-1590-48c4-987c-a21b63b82a1c, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-1975538139, port_security_enabled=True, project_id=34a17eee71de4bac8b71972a4b7b506c, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=42818, qos_policy_id=None, revision_number=4, router:external=False, shared=False, standard_attr_id=1415, status=ACTIVE, subnets=['300b3e12-98b7-455f-9860-7b8899b81779'], tags=[], tenant_id=34a17eee71de4bac8b71972a4b7b506c, updated_at=2025-12-06T10:17:31Z, vlan_transparent=None, network_id=43883dce-1590-48c4-987c-a21b63b82a1c, port_security_enabled=True, project_id=34a17eee71de4bac8b71972a4b7b506c, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['d618a097-5989-47aa-9263-1c8a114ad269'], standard_attr_id=1449, status=DOWN, tags=[], tenant_id=34a17eee71de4bac8b71972a4b7b506c, updated_at=2025-12-06T10:17:31Z on network 43883dce-1590-48c4-987c-a21b63b82a1c#033[00m Dec 6 05:17:40 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:17:40.369 263652 INFO neutron.agent.dhcp.agent [None req-2f61132c-5ca9-4775-bd00-7d23901b53f2 - - - - - -] DHCP 
configuration for ports {'afe4ba38-14bd-4006-b873-2ed564ce569c'} is completed#033[00m Dec 6 05:17:40 localhost dnsmasq[318703]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/addn_hosts - 2 addresses Dec 6 05:17:40 localhost dnsmasq-dhcp[318703]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/host Dec 6 05:17:40 localhost podman[318758]: 2025-12-06 10:17:40.472114248 +0000 UTC m=+0.061227768 container kill b32096f403d644e2f4c19ffec842afc03e7ccdcca601ee3d5fd53645a6657ddc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Dec 6 05:17:40 localhost dnsmasq-dhcp[318703]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/opts Dec 6 05:17:40 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:17:40.712 263652 INFO neutron.agent.dhcp.agent [None req-9a58f7a2-9cf5-46d9-b572-061b6162e310 - - - - - -] DHCP configuration for ports {'5bada2e5-c44e-42db-929a-1fcf2ed4098d'} is completed#033[00m Dec 6 05:17:40 localhost dnsmasq[318703]: exiting on receipt of SIGTERM Dec 6 05:17:40 localhost podman[318795]: 2025-12-06 10:17:40.903261256 +0000 UTC m=+0.066635652 container kill b32096f403d644e2f4c19ffec842afc03e7ccdcca601ee3d5fd53645a6657ddc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, 
org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:17:40 localhost systemd[1]: libpod-b32096f403d644e2f4c19ffec842afc03e7ccdcca601ee3d5fd53645a6657ddc.scope: Deactivated successfully. Dec 6 05:17:40 localhost podman[318809]: 2025-12-06 10:17:40.983224571 +0000 UTC m=+0.064043173 container died b32096f403d644e2f4c19ffec842afc03e7ccdcca601ee3d5fd53645a6657ddc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Dec 6 05:17:41 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b32096f403d644e2f4c19ffec842afc03e7ccdcca601ee3d5fd53645a6657ddc-userdata-shm.mount: Deactivated successfully. Dec 6 05:17:41 localhost podman[318809]: 2025-12-06 10:17:41.020273475 +0000 UTC m=+0.101092037 container cleanup b32096f403d644e2f4c19ffec842afc03e7ccdcca601ee3d5fd53645a6657ddc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Dec 6 05:17:41 localhost systemd[1]: libpod-conmon-b32096f403d644e2f4c19ffec842afc03e7ccdcca601ee3d5fd53645a6657ddc.scope: Deactivated successfully. 
Dec 6 05:17:41 localhost podman[318811]: 2025-12-06 10:17:41.062854837 +0000 UTC m=+0.136673227 container remove b32096f403d644e2f4c19ffec842afc03e7ccdcca601ee3d5fd53645a6657ddc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Dec 6 05:17:41 localhost nova_compute[282193]: 2025-12-06 10:17:41.076 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:41 localhost ovn_controller[154851]: 2025-12-06T10:17:41Z|00228|binding|INFO|Releasing lport 69eb5d2a-055c-47ec-aa6f-2e93d626f115 from this chassis (sb_readonly=0) Dec 6 05:17:41 localhost ovn_controller[154851]: 2025-12-06T10:17:41Z|00229|binding|INFO|Setting lport 69eb5d2a-055c-47ec-aa6f-2e93d626f115 down in Southbound Dec 6 05:17:41 localhost kernel: device tap69eb5d2a-05 left promiscuous mode Dec 6 05:17:41 localhost ovn_metadata_agent[160504]: 2025-12-06 10:17:41.088 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 
'neutron:network_name': 'neutron-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '34a17eee71de4bac8b71972a4b7b506c', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005548789.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82d32fcc-fa15-458b-9d3c-0c87258ef71d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=69eb5d2a-055c-47ec-aa6f-2e93d626f115) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:17:41 localhost ovn_metadata_agent[160504]: 2025-12-06 10:17:41.090 160509 INFO neutron.agent.ovn.metadata.agent [-] Port 69eb5d2a-055c-47ec-aa6f-2e93d626f115 in datapath 43883dce-1590-48c4-987c-a21b63b82a1c unbound from our chassis#033[00m Dec 6 05:17:41 localhost ovn_metadata_agent[160504]: 2025-12-06 10:17:41.092 160509 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 43883dce-1590-48c4-987c-a21b63b82a1c or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 6 05:17:41 localhost ovn_metadata_agent[160504]: 2025-12-06 10:17:41.093 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[74f8c446-81ae-45ef-87cc-68b3a5e0dfb7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:17:41 localhost nova_compute[282193]: 2025-12-06 10:17:41.096 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:41 localhost neutron_sriov_agent[256690]: 2025-12-06 10:17:41.101 2 INFO 
neutron.agent.securitygroups_rpc [None req-4acfb63b-6c96-4af3-b5fa-66e73a2e25c0 cf2cadf875da4c9b86fb2902b9ee90bb 2b975a1e6b7941c09260aeb20365b968 - - default default] Security group member updated ['f9be6b32-ff8a-467f-8358-ff505a55042e']#033[00m Dec 6 05:17:41 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:17:41.158 263652 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:17:40Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=268db8ec-d894-4956-8c8c-14070df1373b, ip_allocation=immediate, mac_address=fa:16:3e:f5:df:b2, name=tempest-RoutersAdminNegativeTest-1492414301, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T08:43:55Z, description=, dns_domain=, id=8e238f59-5792-4ff4-95af-f993c8e9e14f, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=3d603431c0bb4967bafc7a0aa6108bfe, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['be7ff830-a3e2-49cd-a05f-63bc528d6d1c'], tags=[], tenant_id=3d603431c0bb4967bafc7a0aa6108bfe, updated_at=2025-12-06T08:44:01Z, vlan_transparent=None, network_id=8e238f59-5792-4ff4-95af-f993c8e9e14f, port_security_enabled=True, project_id=2b975a1e6b7941c09260aeb20365b968, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['f9be6b32-ff8a-467f-8358-ff505a55042e'], standard_attr_id=1509, status=DOWN, tags=[], tenant_id=2b975a1e6b7941c09260aeb20365b968, updated_at=2025-12-06T10:17:40Z on network 
8e238f59-5792-4ff4-95af-f993c8e9e14f#033[00m Dec 6 05:17:41 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e135 e135: 6 total, 6 up, 6 in Dec 6 05:17:41 localhost dnsmasq[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 2 addresses Dec 6 05:17:41 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:17:41 localhost podman[318858]: 2025-12-06 10:17:41.388417781 +0000 UTC m=+0.064024333 container kill dd70518caed84fde347015325d6b35481123abc4a39df99f8bc9422c2468322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:17:41 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:17:41 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:17:41.758 263652 INFO neutron.agent.dhcp.agent [None req-2a9236fd-e688-46e4-aa65-bd10dd2e6ae5 - - - - - -] DHCP configuration for ports {'268db8ec-d894-4956-8c8c-14070df1373b'} is completed#033[00m Dec 6 05:17:41 localhost systemd[1]: var-lib-containers-storage-overlay-d161c57bfce869ae5bb0e8067b7e7d5f20f25f550e88e581d171fd0c5e663098-merged.mount: Deactivated successfully. Dec 6 05:17:41 localhost systemd[1]: run-netns-qdhcp\x2d43883dce\x2d1590\x2d48c4\x2d987c\x2da21b63b82a1c.mount: Deactivated successfully. 
Dec 6 05:17:42 localhost neutron_sriov_agent[256690]: 2025-12-06 10:17:42.011 2 INFO neutron.agent.securitygroups_rpc [None req-f035cee5-5c71-4777-a408-c824903df12b 3ea76362796945abb0389f60eab07566 23fdd860878442e1b8fc77e4ae3ef271 - - default default] Security group member updated ['dd9785c1-eb5d-4293-ac78-0fc1ce108f20']#033[00m Dec 6 05:17:42 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e136 e136: 6 total, 6 up, 6 in Dec 6 05:17:42 localhost neutron_sriov_agent[256690]: 2025-12-06 10:17:42.180 2 INFO neutron.agent.securitygroups_rpc [None req-2d1fe085-81b9-49e2-b303-f7feeabc4137 a6a9256bca1441629c18003b71ba1c6f 34a17eee71de4bac8b71972a4b7b506c - - default default] Security group member updated ['d618a097-5989-47aa-9263-1c8a114ad269']#033[00m Dec 6 05:17:42 localhost nova_compute[282193]: 2025-12-06 10:17:42.183 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:17:42 localhost dnsmasq[318368]: read /var/lib/neutron/dhcp/fb8c7162-302b-4277-a437-7090f604bfc2/addn_hosts - 0 addresses Dec 6 05:17:42 localhost dnsmasq-dhcp[318368]: read /var/lib/neutron/dhcp/fb8c7162-302b-4277-a437-7090f604bfc2/host Dec 6 05:17:42 localhost dnsmasq-dhcp[318368]: read /var/lib/neutron/dhcp/fb8c7162-302b-4277-a437-7090f604bfc2/opts Dec 6 05:17:42 localhost podman[318895]: 2025-12-06 10:17:42.386485674 +0000 UTC m=+0.086484294 container kill c4c8ff188d2142423ad18d025ac974ba3c2dd4c9e1be9e72713cefc79062138c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fb8c7162-302b-4277-a437-7090f604bfc2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, 
tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:17:42 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:17:42.455 263652 INFO neutron.agent.linux.ip_lib [None req-33bbd667-c762-4367-9967-0b80fcaf35ed - - - - - -] Device tapf557e6c6-d3 cannot be used as it has no MAC address#033[00m Dec 6 05:17:42 localhost nova_compute[282193]: 2025-12-06 10:17:42.503 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:42 localhost kernel: device tapf557e6c6-d3 entered promiscuous mode Dec 6 05:17:42 localhost NetworkManager[5973]: [1765016262.5136] manager: (tapf557e6c6-d3): new Generic device (/org/freedesktop/NetworkManager/Devices/40) Dec 6 05:17:42 localhost systemd-udevd[318923]: Network interface NamePolicy= disabled on kernel command line. Dec 6 05:17:42 localhost ovn_controller[154851]: 2025-12-06T10:17:42Z|00230|binding|INFO|Claiming lport f557e6c6-d34f-468a-a9fd-a253f0fb196d for this chassis. 
Dec 6 05:17:42 localhost nova_compute[282193]: 2025-12-06 10:17:42.515 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:42 localhost ovn_controller[154851]: 2025-12-06T10:17:42Z|00231|binding|INFO|f557e6c6-d34f-468a-a9fd-a253f0fb196d: Claiming unknown Dec 6 05:17:42 localhost ovn_controller[154851]: 2025-12-06T10:17:42Z|00232|binding|INFO|Setting lport f557e6c6-d34f-468a-a9fd-a253f0fb196d ovn-installed in OVS Dec 6 05:17:42 localhost nova_compute[282193]: 2025-12-06 10:17:42.525 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:42 localhost nova_compute[282193]: 2025-12-06 10:17:42.535 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:42 localhost neutron_sriov_agent[256690]: 2025-12-06 10:17:42.568 2 INFO neutron.agent.securitygroups_rpc [None req-463c5a9c-1342-4628-be66-c954070435e6 cf2cadf875da4c9b86fb2902b9ee90bb 2b975a1e6b7941c09260aeb20365b968 - - default default] Security group member updated ['f9be6b32-ff8a-467f-8358-ff505a55042e']#033[00m Dec 6 05:17:42 localhost nova_compute[282193]: 2025-12-06 10:17:42.570 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:42 localhost nova_compute[282193]: 2025-12-06 10:17:42.583 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:42 localhost nova_compute[282193]: 2025-12-06 10:17:42.617 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:42 localhost ovn_controller[154851]: 2025-12-06T10:17:42Z|00233|binding|INFO|Setting lport 
f557e6c6-d34f-468a-a9fd-a253f0fb196d up in Southbound Dec 6 05:17:42 localhost ovn_metadata_agent[160504]: 2025-12-06 10:17:42.638 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '34a17eee71de4bac8b71972a4b7b506c', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82d32fcc-fa15-458b-9d3c-0c87258ef71d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=f557e6c6-d34f-468a-a9fd-a253f0fb196d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:17:42 localhost ovn_metadata_agent[160504]: 2025-12-06 10:17:42.641 160509 INFO neutron.agent.ovn.metadata.agent [-] Port f557e6c6-d34f-468a-a9fd-a253f0fb196d in datapath 43883dce-1590-48c4-987c-a21b63b82a1c bound to our chassis#033[00m Dec 6 05:17:42 localhost ovn_metadata_agent[160504]: 2025-12-06 10:17:42.642 160509 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 43883dce-1590-48c4-987c-a21b63b82a1c or it has no MAC or IP addresses configured, tearing the 
namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 6 05:17:42 localhost ovn_metadata_agent[160504]: 2025-12-06 10:17:42.643 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[4e6794a5-8325-41c6-9c93-00d0346c609a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:17:42 localhost nova_compute[282193]: 2025-12-06 10:17:42.653 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:42 localhost podman[318960]: 2025-12-06 10:17:42.949864752 +0000 UTC m=+0.088343880 container kill dd70518caed84fde347015325d6b35481123abc4a39df99f8bc9422c2468322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3) Dec 6 05:17:42 localhost dnsmasq[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 1 addresses Dec 6 05:17:42 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:17:42 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:17:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6. 
Dec 6 05:17:43 localhost podman[318979]: 2025-12-06 10:17:43.087132375 +0000 UTC m=+0.102454108 container health_status 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd) Dec 6 05:17:43 localhost podman[318979]: 2025-12-06 10:17:43.09553084 +0000 UTC m=+0.110852603 container exec_died 
44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0) Dec 6 05:17:43 localhost systemd[1]: 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6.service: Deactivated successfully. 
Dec 6 05:17:43 localhost neutron_sriov_agent[256690]: 2025-12-06 10:17:43.230 2 INFO neutron.agent.securitygroups_rpc [None req-74f6711f-47e9-487d-bd32-5a2f1bba6efe a6a9256bca1441629c18003b71ba1c6f 34a17eee71de4bac8b71972a4b7b506c - - default default] Security group member updated ['d618a097-5989-47aa-9263-1c8a114ad269']#033[00m Dec 6 05:17:43 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Dec 6 05:17:43 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1439659764' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Dec 6 05:17:43 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Dec 6 05:17:43 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1439659764' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Dec 6 05:17:43 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:17:43 localhost dnsmasq[318368]: exiting on receipt of SIGTERM Dec 6 05:17:43 localhost podman[319032]: 2025-12-06 10:17:43.419852917 +0000 UTC m=+0.061823456 container kill c4c8ff188d2142423ad18d025ac974ba3c2dd4c9e1be9e72713cefc79062138c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fb8c7162-302b-4277-a437-7090f604bfc2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:17:43 localhost 
systemd[1]: libpod-c4c8ff188d2142423ad18d025ac974ba3c2dd4c9e1be9e72713cefc79062138c.scope: Deactivated successfully. Dec 6 05:17:43 localhost podman[319044]: 2025-12-06 10:17:43.481625581 +0000 UTC m=+0.048560564 container died c4c8ff188d2142423ad18d025ac974ba3c2dd4c9e1be9e72713cefc79062138c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fb8c7162-302b-4277-a437-7090f604bfc2, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3) Dec 6 05:17:43 localhost podman[319044]: 2025-12-06 10:17:43.507436005 +0000 UTC m=+0.074370968 container cleanup c4c8ff188d2142423ad18d025ac974ba3c2dd4c9e1be9e72713cefc79062138c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fb8c7162-302b-4277-a437-7090f604bfc2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Dec 6 05:17:43 localhost systemd[1]: libpod-conmon-c4c8ff188d2142423ad18d025ac974ba3c2dd4c9e1be9e72713cefc79062138c.scope: Deactivated successfully. 
Dec 6 05:17:43 localhost podman[319046]: 2025-12-06 10:17:43.527993848 +0000 UTC m=+0.083451213 container remove c4c8ff188d2142423ad18d025ac974ba3c2dd4c9e1be9e72713cefc79062138c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fb8c7162-302b-4277-a437-7090f604bfc2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS) Dec 6 05:17:43 localhost nova_compute[282193]: 2025-12-06 10:17:43.574 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:43 localhost kernel: device tapfd998f59-dd left promiscuous mode Dec 6 05:17:43 localhost ovn_controller[154851]: 2025-12-06T10:17:43Z|00234|binding|INFO|Releasing lport fd998f59-ddde-4bfa-95a4-6f61b1679474 from this chassis (sb_readonly=0) Dec 6 05:17:43 localhost ovn_controller[154851]: 2025-12-06T10:17:43Z|00235|binding|INFO|Setting lport fd998f59-ddde-4bfa-95a4-6f61b1679474 down in Southbound Dec 6 05:17:43 localhost ovn_metadata_agent[160504]: 2025-12-06 10:17:43.587 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-fb8c7162-302b-4277-a437-7090f604bfc2', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 
'neutron:network_name': 'neutron-fb8c7162-302b-4277-a437-7090f604bfc2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '23fdd860878442e1b8fc77e4ae3ef271', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005548789.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=23107f01-722b-406d-a1a5-a58a3fd6433e, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=fd998f59-ddde-4bfa-95a4-6f61b1679474) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:17:43 localhost ovn_metadata_agent[160504]: 2025-12-06 10:17:43.590 160509 INFO neutron.agent.ovn.metadata.agent [-] Port fd998f59-ddde-4bfa-95a4-6f61b1679474 in datapath fb8c7162-302b-4277-a437-7090f604bfc2 unbound from our chassis#033[00m Dec 6 05:17:43 localhost ovn_metadata_agent[160504]: 2025-12-06 10:17:43.593 160509 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fb8c7162-302b-4277-a437-7090f604bfc2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 6 05:17:43 localhost ovn_metadata_agent[160504]: 2025-12-06 10:17:43.593 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[2eb5eeea-1a40-4be9-8bc6-3ab9b66c28dd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:17:43 localhost nova_compute[282193]: 2025-12-06 10:17:43.597 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:43 localhost nova_compute[282193]: 2025-12-06 10:17:43.599 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:43 localhost podman[319097]: Dec 6 05:17:43 localhost podman[319097]: 2025-12-06 10:17:43.679992498 +0000 UTC m=+0.068620602 container create 934aed5f8aed9367cf7dbffb3f3484b6785027ae3e6c11154898bf341f681568 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Dec 6 05:17:43 localhost systemd[1]: Started libpod-conmon-934aed5f8aed9367cf7dbffb3f3484b6785027ae3e6c11154898bf341f681568.scope. Dec 6 05:17:43 localhost systemd[1]: Started libcrun container. Dec 6 05:17:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5875013fd97a751f2abbcbe86d70812a3b1c1a55826b088cbe7bf4bd86e9f25/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 6 05:17:43 localhost podman[319097]: 2025-12-06 10:17:43.740971918 +0000 UTC m=+0.129600022 container init 934aed5f8aed9367cf7dbffb3f3484b6785027ae3e6c11154898bf341f681568 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:17:43 localhost podman[319097]: 2025-12-06 10:17:43.647018518 +0000 UTC m=+0.035646642 image pull 
quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 6 05:17:43 localhost podman[319097]: 2025-12-06 10:17:43.750375643 +0000 UTC m=+0.139003747 container start 934aed5f8aed9367cf7dbffb3f3484b6785027ae3e6c11154898bf341f681568 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3) Dec 6 05:17:43 localhost dnsmasq[319116]: started, version 2.85 cachesize 150 Dec 6 05:17:43 localhost dnsmasq[319116]: DNS service limited to local subnets Dec 6 05:17:43 localhost dnsmasq[319116]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 6 05:17:43 localhost dnsmasq[319116]: warning: no upstream servers configured Dec 6 05:17:43 localhost dnsmasq[319116]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/addn_hosts - 0 addresses Dec 6 05:17:43 localhost nova_compute[282193]: 2025-12-06 10:17:43.869 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:43 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:17:43.886 263652 INFO neutron.agent.dhcp.agent [None req-22992f25-fb05-4247-84cc-2880863fe345 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:17:43 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:17:43.888 263652 INFO neutron.agent.dhcp.agent [None req-22992f25-fb05-4247-84cc-2880863fe345 - - - - - -] Network not present, action: clean_devices, action_kwargs: 
{}#033[00m Dec 6 05:17:43 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:17:43.915 263652 INFO neutron.agent.dhcp.agent [None req-e5d2d496-bab8-4581-9a7e-de9a58c8a0dc - - - - - -] DHCP configuration for ports {'687d7abb-e6aa-4047-aa26-552c962fcc91'} is completed#033[00m Dec 6 05:17:43 localhost systemd[1]: tmp-crun.wW3soN.mount: Deactivated successfully. Dec 6 05:17:43 localhost systemd[1]: var-lib-containers-storage-overlay-4e8ffbe4ba68729562762de2b34e92b94090dcca76e856e5cd87d912c117f07d-merged.mount: Deactivated successfully. Dec 6 05:17:43 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c4c8ff188d2142423ad18d025ac974ba3c2dd4c9e1be9e72713cefc79062138c-userdata-shm.mount: Deactivated successfully. Dec 6 05:17:43 localhost systemd[1]: run-netns-qdhcp\x2dfb8c7162\x2d302b\x2d4277\x2da437\x2d7090f604bfc2.mount: Deactivated successfully. Dec 6 05:17:44 localhost dnsmasq[319116]: exiting on receipt of SIGTERM Dec 6 05:17:44 localhost systemd[1]: libpod-934aed5f8aed9367cf7dbffb3f3484b6785027ae3e6c11154898bf341f681568.scope: Deactivated successfully. 
Dec 6 05:17:44 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:17:44.090 263652 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:17:44 localhost podman[319134]: 2025-12-06 10:17:44.090713636 +0000 UTC m=+0.067861909 container kill 934aed5f8aed9367cf7dbffb3f3484b6785027ae3e6c11154898bf341f681568 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true) Dec 6 05:17:44 localhost podman[319149]: 2025-12-06 10:17:44.140380462 +0000 UTC m=+0.043505390 container died 934aed5f8aed9367cf7dbffb3f3484b6785027ae3e6c11154898bf341f681568 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3) Dec 6 05:17:44 localhost podman[319149]: 2025-12-06 10:17:44.1871338 +0000 UTC m=+0.090258698 container cleanup 934aed5f8aed9367cf7dbffb3f3484b6785027ae3e6c11154898bf341f681568 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, 
tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Dec 6 05:17:44 localhost systemd[1]: libpod-conmon-934aed5f8aed9367cf7dbffb3f3484b6785027ae3e6c11154898bf341f681568.scope: Deactivated successfully. Dec 6 05:17:44 localhost podman[319156]: 2025-12-06 10:17:44.259228797 +0000 UTC m=+0.142369239 container remove 934aed5f8aed9367cf7dbffb3f3484b6785027ae3e6c11154898bf341f681568 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true) Dec 6 05:17:44 localhost ovn_controller[154851]: 2025-12-06T10:17:44Z|00236|binding|INFO|Releasing lport f557e6c6-d34f-468a-a9fd-a253f0fb196d from this chassis (sb_readonly=0) Dec 6 05:17:44 localhost nova_compute[282193]: 2025-12-06 10:17:44.273 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:44 localhost ovn_controller[154851]: 2025-12-06T10:17:44Z|00237|binding|INFO|Setting lport f557e6c6-d34f-468a-a9fd-a253f0fb196d down in Southbound Dec 6 05:17:44 localhost kernel: device tapf557e6c6-d3 left promiscuous mode Dec 6 05:17:44 localhost ovn_metadata_agent[160504]: 2025-12-06 10:17:44.283 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], 
up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '34a17eee71de4bac8b71972a4b7b506c', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005548789.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82d32fcc-fa15-458b-9d3c-0c87258ef71d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=f557e6c6-d34f-468a-a9fd-a253f0fb196d) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:17:44 localhost ovn_metadata_agent[160504]: 2025-12-06 10:17:44.285 160509 INFO neutron.agent.ovn.metadata.agent [-] Port f557e6c6-d34f-468a-a9fd-a253f0fb196d in datapath 43883dce-1590-48c4-987c-a21b63b82a1c unbound from our chassis#033[00m Dec 6 05:17:44 localhost ovn_metadata_agent[160504]: 2025-12-06 10:17:44.286 160509 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 43883dce-1590-48c4-987c-a21b63b82a1c or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 6 05:17:44 localhost ovn_metadata_agent[160504]: 2025-12-06 10:17:44.287 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[a4e0b169-cdb9-4192-92e6-23993b3c3344]: (4, False) _call_back 
/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:17:44 localhost nova_compute[282193]: 2025-12-06 10:17:44.313 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:44 localhost ovn_controller[154851]: 2025-12-06T10:17:44Z|00238|binding|INFO|Releasing lport 4fb81ffd-e198-4628-9bd0-0c0f0c89c33a from this chassis (sb_readonly=0) Dec 6 05:17:44 localhost nova_compute[282193]: 2025-12-06 10:17:44.435 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:44 localhost systemd[1]: tmp-crun.EeSq4G.mount: Deactivated successfully. Dec 6 05:17:44 localhost systemd[1]: var-lib-containers-storage-overlay-b5875013fd97a751f2abbcbe86d70812a3b1c1a55826b088cbe7bf4bd86e9f25-merged.mount: Deactivated successfully. Dec 6 05:17:44 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-934aed5f8aed9367cf7dbffb3f3484b6785027ae3e6c11154898bf341f681568-userdata-shm.mount: Deactivated successfully. Dec 6 05:17:44 localhost systemd[1]: run-netns-qdhcp\x2d43883dce\x2d1590\x2d48c4\x2d987c\x2da21b63b82a1c.mount: Deactivated successfully. 
Dec 6 05:17:45 localhost neutron_sriov_agent[256690]: 2025-12-06 10:17:45.567 2 INFO neutron.agent.securitygroups_rpc [None req-a9308ef0-170e-430a-9f5f-6439b979faf7 a6a9256bca1441629c18003b71ba1c6f 34a17eee71de4bac8b71972a4b7b506c - - default default] Security group member updated ['d618a097-5989-47aa-9263-1c8a114ad269']#033[00m Dec 6 05:17:45 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:17:45.791 263652 INFO neutron.agent.linux.ip_lib [None req-14a0282a-9c8b-4b61-9054-18eb98946d63 - - - - - -] Device tap42f6d111-d5 cannot be used as it has no MAC address#033[00m Dec 6 05:17:45 localhost nova_compute[282193]: 2025-12-06 10:17:45.822 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:45 localhost kernel: device tap42f6d111-d5 entered promiscuous mode Dec 6 05:17:45 localhost ovn_controller[154851]: 2025-12-06T10:17:45Z|00239|binding|INFO|Claiming lport 42f6d111-d5b7-4fe3-8ff2-29ff3ea79b8d for this chassis. Dec 6 05:17:45 localhost NetworkManager[5973]: [1765016265.8326] manager: (tap42f6d111-d5): new Generic device (/org/freedesktop/NetworkManager/Devices/41) Dec 6 05:17:45 localhost nova_compute[282193]: 2025-12-06 10:17:45.833 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:45 localhost systemd-udevd[319187]: Network interface NamePolicy= disabled on kernel command line. 
Dec 6 05:17:45 localhost ovn_controller[154851]: 2025-12-06T10:17:45Z|00240|binding|INFO|42f6d111-d5b7-4fe3-8ff2-29ff3ea79b8d: Claiming unknown Dec 6 05:17:45 localhost nova_compute[282193]: 2025-12-06 10:17:45.877 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:45 localhost ovn_controller[154851]: 2025-12-06T10:17:45Z|00241|binding|INFO|Setting lport 42f6d111-d5b7-4fe3-8ff2-29ff3ea79b8d ovn-installed in OVS Dec 6 05:17:45 localhost nova_compute[282193]: 2025-12-06 10:17:45.883 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:45 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:17:45.893 263652 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:17:45Z, description=, device_id=e1d0435f-41a7-4a3a-9168-d8b2d102536f, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=2d5e1755-43eb-417b-85bd-7bf4bf92c7f1, ip_allocation=immediate, mac_address=fa:16:3e:e7:6e:9c, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T08:43:55Z, description=, dns_domain=, id=8e238f59-5792-4ff4-95af-f993c8e9e14f, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=3d603431c0bb4967bafc7a0aa6108bfe, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['be7ff830-a3e2-49cd-a05f-63bc528d6d1c'], tags=[], 
tenant_id=3d603431c0bb4967bafc7a0aa6108bfe, updated_at=2025-12-06T08:44:01Z, vlan_transparent=None, network_id=8e238f59-5792-4ff4-95af-f993c8e9e14f, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1541, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-06T10:17:45Z on network 8e238f59-5792-4ff4-95af-f993c8e9e14f#033[00m Dec 6 05:17:45 localhost nova_compute[282193]: 2025-12-06 10:17:45.930 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:45 localhost ovn_controller[154851]: 2025-12-06T10:17:45Z|00242|binding|INFO|Setting lport 42f6d111-d5b7-4fe3-8ff2-29ff3ea79b8d up in Southbound Dec 6 05:17:45 localhost ovn_metadata_agent[160504]: 2025-12-06 10:17:45.934 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '34a17eee71de4bac8b71972a4b7b506c', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], 
datapath=82d32fcc-fa15-458b-9d3c-0c87258ef71d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=42f6d111-d5b7-4fe3-8ff2-29ff3ea79b8d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:17:45 localhost ovn_metadata_agent[160504]: 2025-12-06 10:17:45.936 160509 INFO neutron.agent.ovn.metadata.agent [-] Port 42f6d111-d5b7-4fe3-8ff2-29ff3ea79b8d in datapath 43883dce-1590-48c4-987c-a21b63b82a1c bound to our chassis#033[00m Dec 6 05:17:45 localhost ovn_metadata_agent[160504]: 2025-12-06 10:17:45.937 160509 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 43883dce-1590-48c4-987c-a21b63b82a1c or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 6 05:17:45 localhost ovn_metadata_agent[160504]: 2025-12-06 10:17:45.938 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[a127cf2c-f315-4f4e-a5a7-463cd3cbaa45]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:17:45 localhost nova_compute[282193]: 2025-12-06 10:17:45.966 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:46 localhost dnsmasq[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 2 addresses Dec 6 05:17:46 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:17:46 localhost podman[319216]: 2025-12-06 10:17:46.199624924 +0000 UTC m=+0.070233122 container kill dd70518caed84fde347015325d6b35481123abc4a39df99f8bc9422c2468322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, org.label-schema.build-date=20251125, 
org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:17:46 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:17:46 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:17:46.409 263652 INFO neutron.agent.dhcp.agent [None req-2484307c-bf12-4411-bd5e-aaac47468baf - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:17:45Z, description=, device_id=ef42a7b7-856f-4d93-83fd-eafb16254770, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=77624924-0bc3-409e-b98a-f90d3ca2c4ea, ip_allocation=immediate, mac_address=fa:16:3e:f5:bc:1b, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T08:43:55Z, description=, dns_domain=, id=8e238f59-5792-4ff4-95af-f993c8e9e14f, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=3d603431c0bb4967bafc7a0aa6108bfe, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['be7ff830-a3e2-49cd-a05f-63bc528d6d1c'], tags=[], tenant_id=3d603431c0bb4967bafc7a0aa6108bfe, updated_at=2025-12-06T08:44:01Z, vlan_transparent=None, network_id=8e238f59-5792-4ff4-95af-f993c8e9e14f, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, 
revision_number=1, security_groups=[], standard_attr_id=1542, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-06T10:17:45Z on network 8e238f59-5792-4ff4-95af-f993c8e9e14f#033[00m Dec 6 05:17:46 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:17:46.499 263652 INFO neutron.agent.dhcp.agent [None req-174298ce-5597-453c-800d-f95742b6876d - - - - - -] DHCP configuration for ports {'2d5e1755-43eb-417b-85bd-7bf4bf92c7f1'} is completed#033[00m Dec 6 05:17:46 localhost neutron_sriov_agent[256690]: 2025-12-06 10:17:46.573 2 INFO neutron.agent.securitygroups_rpc [None req-38541453-b414-4a96-8c97-455c5ffb96a0 a6a9256bca1441629c18003b71ba1c6f 34a17eee71de4bac8b71972a4b7b506c - - default default] Security group member updated ['d618a097-5989-47aa-9263-1c8a114ad269']#033[00m Dec 6 05:17:46 localhost dnsmasq[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 3 addresses Dec 6 05:17:46 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:17:46 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:17:46 localhost podman[319271]: 2025-12-06 10:17:46.594446348 +0000 UTC m=+0.052485622 container kill dd70518caed84fde347015325d6b35481123abc4a39df99f8bc9422c2468322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:17:46 localhost openstack_network_exporter[243110]: ERROR 10:17:46 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 6 
05:17:46 localhost openstack_network_exporter[243110]: ERROR 10:17:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:17:46 localhost openstack_network_exporter[243110]: ERROR 10:17:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:17:46 localhost openstack_network_exporter[243110]: ERROR 10:17:46 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 6 05:17:46 localhost openstack_network_exporter[243110]: Dec 6 05:17:46 localhost openstack_network_exporter[243110]: ERROR 10:17:46 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 6 05:17:46 localhost openstack_network_exporter[243110]: Dec 6 05:17:46 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:17:46.866 263652 INFO neutron.agent.dhcp.agent [None req-2659614a-559b-44b0-9a11-954524623b26 - - - - - -] DHCP configuration for ports {'77624924-0bc3-409e-b98a-f90d3ca2c4ea'} is completed#033[00m Dec 6 05:17:46 localhost podman[319315]: Dec 6 05:17:46 localhost podman[319315]: 2025-12-06 10:17:46.952248721 +0000 UTC m=+0.079224774 container create d8b7e2ea67f9833a438449d4214ac95d50cd723c09fe787e3726ac96d044b217 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0) Dec 6 05:17:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538. 
Dec 6 05:17:46 localhost systemd[1]: Started libpod-conmon-d8b7e2ea67f9833a438449d4214ac95d50cd723c09fe787e3726ac96d044b217.scope. Dec 6 05:17:47 localhost systemd[1]: Started libcrun container. Dec 6 05:17:47 localhost podman[319315]: 2025-12-06 10:17:46.910812294 +0000 UTC m=+0.037788337 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 6 05:17:47 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1970bf154652cc87df1e1e2e4619407439f842cdcb4c780ba34358f189fc220/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 6 05:17:47 localhost podman[319315]: 2025-12-06 10:17:47.027093262 +0000 UTC m=+0.154069285 container init d8b7e2ea67f9833a438449d4214ac95d50cd723c09fe787e3726ac96d044b217 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2) Dec 6 05:17:47 localhost dnsmasq[319346]: started, version 2.85 cachesize 150 Dec 6 05:17:47 localhost dnsmasq[319346]: DNS service limited to local subnets Dec 6 05:17:47 localhost dnsmasq[319346]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 6 05:17:47 localhost dnsmasq[319346]: warning: no upstream servers configured Dec 6 05:17:47 localhost dnsmasq-dhcp[319346]: DHCPv6, static leases only on 2001:db8::, lease time 1d Dec 6 05:17:47 localhost dnsmasq[319346]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/addn_hosts - 0 addresses Dec 6 05:17:47 localhost dnsmasq-dhcp[319346]: 
read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/host Dec 6 05:17:47 localhost dnsmasq-dhcp[319346]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/opts Dec 6 05:17:47 localhost podman[319329]: 2025-12-06 10:17:47.074082327 +0000 UTC m=+0.077656727 container health_status d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 6 05:17:47 localhost podman[319315]: 2025-12-06 10:17:47.087846675 +0000 UTC m=+0.214822728 container start d8b7e2ea67f9833a438449d4214ac95d50cd723c09fe787e3726ac96d044b217 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, 
name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125) Dec 6 05:17:47 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e137 e137: 6 total, 6 up, 6 in Dec 6 05:17:47 localhost podman[319329]: 2025-12-06 10:17:47.111654856 +0000 UTC m=+0.115229266 container exec_died d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) 
Dec 6 05:17:47 localhost systemd[1]: d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538.service: Deactivated successfully. Dec 6 05:17:47 localhost nova_compute[282193]: 2025-12-06 10:17:47.133 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:17:47.307 160509 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 05:17:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:17:47.307 160509 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 05:17:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:17:47.308 160509 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 05:17:47 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:17:47.313 263652 INFO neutron.agent.dhcp.agent [None req-b2fc75f0-40f8-46bb-a8d6-d7912a1e5e9d - - - - - -] DHCP configuration for ports {'687d7abb-e6aa-4047-aa26-552c962fcc91'} is completed#033[00m Dec 6 05:17:47 localhost dnsmasq[319346]: exiting on receipt of SIGTERM Dec 6 05:17:47 localhost podman[319374]: 2025-12-06 10:17:47.476808062 +0000 UTC m=+0.066351793 container kill d8b7e2ea67f9833a438449d4214ac95d50cd723c09fe787e3726ac96d044b217 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, 
name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Dec 6 05:17:47 localhost systemd[1]: libpod-d8b7e2ea67f9833a438449d4214ac95d50cd723c09fe787e3726ac96d044b217.scope: Deactivated successfully. Dec 6 05:17:47 localhost podman[319386]: 2025-12-06 10:17:47.553345864 +0000 UTC m=+0.062317691 container died d8b7e2ea67f9833a438449d4214ac95d50cd723c09fe787e3726ac96d044b217 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true) Dec 6 05:17:47 localhost nova_compute[282193]: 2025-12-06 10:17:47.629 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:47 localhost podman[319386]: 2025-12-06 10:17:47.634151135 +0000 UTC m=+0.143122922 container cleanup d8b7e2ea67f9833a438449d4214ac95d50cd723c09fe787e3726ac96d044b217 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, 
tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125) Dec 6 05:17:47 localhost systemd[1]: libpod-conmon-d8b7e2ea67f9833a438449d4214ac95d50cd723c09fe787e3726ac96d044b217.scope: Deactivated successfully. Dec 6 05:17:47 localhost podman[319388]: 2025-12-06 10:17:47.660944427 +0000 UTC m=+0.159132418 container remove d8b7e2ea67f9833a438449d4214ac95d50cd723c09fe787e3726ac96d044b217 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:17:47 localhost nova_compute[282193]: 2025-12-06 10:17:47.676 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:47 localhost ovn_controller[154851]: 2025-12-06T10:17:47Z|00243|binding|INFO|Releasing lport 42f6d111-d5b7-4fe3-8ff2-29ff3ea79b8d from this chassis (sb_readonly=0) Dec 6 05:17:47 localhost ovn_controller[154851]: 2025-12-06T10:17:47Z|00244|binding|INFO|Setting lport 42f6d111-d5b7-4fe3-8ff2-29ff3ea79b8d down in Southbound Dec 6 05:17:47 localhost kernel: device tap42f6d111-d5 left promiscuous mode Dec 6 05:17:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:17:47.690 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], 
requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '34a17eee71de4bac8b71972a4b7b506c', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005548789.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82d32fcc-fa15-458b-9d3c-0c87258ef71d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=42f6d111-d5b7-4fe3-8ff2-29ff3ea79b8d) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:17:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:17:47.692 160509 INFO neutron.agent.ovn.metadata.agent [-] Port 42f6d111-d5b7-4fe3-8ff2-29ff3ea79b8d in datapath 43883dce-1590-48c4-987c-a21b63b82a1c unbound from our chassis#033[00m Dec 6 05:17:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:17:47.694 160509 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 43883dce-1590-48c4-987c-a21b63b82a1c or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 6 05:17:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:17:47.695 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[95b162a2-d794-48ec-b2ab-ae499d5d37b4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:17:47 localhost nova_compute[282193]: 2025-12-06 
10:17:47.699 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:48 localhost neutron_sriov_agent[256690]: 2025-12-06 10:17:48.020 2 INFO neutron.agent.securitygroups_rpc [None req-77939ad8-3a8c-44db-b1d8-896917e1a291 a6a9256bca1441629c18003b71ba1c6f 34a17eee71de4bac8b71972a4b7b506c - - default default] Security group member updated ['d618a097-5989-47aa-9263-1c8a114ad269']#033[00m Dec 6 05:17:48 localhost systemd[1]: var-lib-containers-storage-overlay-f1970bf154652cc87df1e1e2e4619407439f842cdcb4c780ba34358f189fc220-merged.mount: Deactivated successfully. Dec 6 05:17:48 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d8b7e2ea67f9833a438449d4214ac95d50cd723c09fe787e3726ac96d044b217-userdata-shm.mount: Deactivated successfully. Dec 6 05:17:48 localhost systemd[1]: run-netns-qdhcp\x2d43883dce\x2d1590\x2d48c4\x2d987c\x2da21b63b82a1c.mount: Deactivated successfully. Dec 6 05:17:48 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:17:48 localhost nova_compute[282193]: 2025-12-06 10:17:48.896 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:49 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:17:49.096 263652 INFO neutron.agent.linux.ip_lib [None req-7eb03ef0-f215-4e63-a52e-9c8abe015ca5 - - - - - -] Device tap15e8fc8e-25 cannot be used as it has no MAC address#033[00m Dec 6 05:17:49 localhost nova_compute[282193]: 2025-12-06 10:17:49.127 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:49 localhost kernel: device tap15e8fc8e-25 entered promiscuous mode Dec 6 05:17:49 localhost NetworkManager[5973]: [1765016269.1363] manager: 
(tap15e8fc8e-25): new Generic device (/org/freedesktop/NetworkManager/Devices/42) Dec 6 05:17:49 localhost nova_compute[282193]: 2025-12-06 10:17:49.135 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:49 localhost ovn_controller[154851]: 2025-12-06T10:17:49Z|00245|binding|INFO|Claiming lport 15e8fc8e-2569-4456-89e1-7a3d1684c267 for this chassis. Dec 6 05:17:49 localhost ovn_controller[154851]: 2025-12-06T10:17:49Z|00246|binding|INFO|15e8fc8e-2569-4456-89e1-7a3d1684c267: Claiming unknown Dec 6 05:17:49 localhost systemd-udevd[319426]: Network interface NamePolicy= disabled on kernel command line. Dec 6 05:17:49 localhost ovn_controller[154851]: 2025-12-06T10:17:49Z|00247|binding|INFO|Setting lport 15e8fc8e-2569-4456-89e1-7a3d1684c267 ovn-installed in OVS Dec 6 05:17:49 localhost ovn_controller[154851]: 2025-12-06T10:17:49Z|00248|binding|INFO|Setting lport 15e8fc8e-2569-4456-89e1-7a3d1684c267 up in Southbound Dec 6 05:17:49 localhost nova_compute[282193]: 2025-12-06 10:17:49.149 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:49 localhost ovn_metadata_agent[160504]: 2025-12-06 10:17:49.148 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 
'neutron-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '34a17eee71de4bac8b71972a4b7b506c', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82d32fcc-fa15-458b-9d3c-0c87258ef71d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=15e8fc8e-2569-4456-89e1-7a3d1684c267) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:17:49 localhost ovn_metadata_agent[160504]: 2025-12-06 10:17:49.150 160509 INFO neutron.agent.ovn.metadata.agent [-] Port 15e8fc8e-2569-4456-89e1-7a3d1684c267 in datapath 43883dce-1590-48c4-987c-a21b63b82a1c bound to our chassis#033[00m Dec 6 05:17:49 localhost ovn_metadata_agent[160504]: 2025-12-06 10:17:49.152 160509 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 43883dce-1590-48c4-987c-a21b63b82a1c or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 6 05:17:49 localhost ovn_metadata_agent[160504]: 2025-12-06 10:17:49.153 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[eda05078-7701-415b-a4bf-fff626310cb4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:17:49 localhost nova_compute[282193]: 2025-12-06 10:17:49.158 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:49 localhost nova_compute[282193]: 2025-12-06 10:17:49.177 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:49 localhost nova_compute[282193]: 2025-12-06 10:17:49.229 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:49 localhost nova_compute[282193]: 2025-12-06 10:17:49.267 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:49 localhost neutron_sriov_agent[256690]: 2025-12-06 10:17:49.735 2 INFO neutron.agent.securitygroups_rpc [None req-028fe2d3-a2af-4154-9a69-d7d602ad3ddf a6a9256bca1441629c18003b71ba1c6f 34a17eee71de4bac8b71972a4b7b506c - - default default] Security group member updated ['d618a097-5989-47aa-9263-1c8a114ad269']#033[00m Dec 6 05:17:49 localhost dnsmasq[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 2 addresses Dec 6 05:17:49 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:17:49 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:17:49 localhost podman[319477]: 2025-12-06 10:17:49.912728637 +0000 UTC m=+0.062065433 container kill dd70518caed84fde347015325d6b35481123abc4a39df99f8bc9422c2468322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 6 05:17:50 localhost podman[319519]: Dec 6 05:17:50 localhost podman[319519]: 2025-12-06 10:17:50.271712096 +0000 UTC m=+0.073704617 container create 
b73c6d50186bb868d1093a23b8c34e1ae426a8ee07e6598ead72c83b7962a732 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3) Dec 6 05:17:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5. Dec 6 05:17:50 localhost systemd[1]: Started libpod-conmon-b73c6d50186bb868d1093a23b8c34e1ae426a8ee07e6598ead72c83b7962a732.scope. Dec 6 05:17:50 localhost systemd[1]: Started libcrun container. Dec 6 05:17:50 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c18ac33546be1aae3397ead48d8dd1e3f5abac98a4c4c00eeed7d61e8509d7e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 6 05:17:50 localhost podman[319519]: 2025-12-06 10:17:50.233466795 +0000 UTC m=+0.035459376 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 6 05:17:50 localhost podman[319519]: 2025-12-06 10:17:50.337640186 +0000 UTC m=+0.139632707 container init b73c6d50186bb868d1093a23b8c34e1ae426a8ee07e6598ead72c83b7962a732 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3) Dec 6 05:17:50 localhost 
podman[319519]: 2025-12-06 10:17:50.346864035 +0000 UTC m=+0.148856546 container start b73c6d50186bb868d1093a23b8c34e1ae426a8ee07e6598ead72c83b7962a732 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0) Dec 6 05:17:50 localhost dnsmasq[319546]: started, version 2.85 cachesize 150 Dec 6 05:17:50 localhost dnsmasq[319546]: DNS service limited to local subnets Dec 6 05:17:50 localhost dnsmasq[319546]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 6 05:17:50 localhost dnsmasq[319546]: warning: no upstream servers configured Dec 6 05:17:50 localhost dnsmasq-dhcp[319546]: DHCPv6, static leases only on 2001:db8::, lease time 1d Dec 6 05:17:50 localhost dnsmasq[319546]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/addn_hosts - 1 addresses Dec 6 05:17:50 localhost dnsmasq-dhcp[319546]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/host Dec 6 05:17:50 localhost dnsmasq-dhcp[319546]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/opts Dec 6 05:17:50 localhost podman[319533]: 2025-12-06 10:17:50.408468284 +0000 UTC m=+0.092993142 container health_status 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true, 
org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible) Dec 6 05:17:50 localhost podman[319533]: 2025-12-06 10:17:50.453282573 +0000 UTC m=+0.137807441 container exec_died 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller) Dec 6 05:17:50 localhost systemd[1]: 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.service: Deactivated successfully. Dec 6 05:17:50 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:17:50.523 263652 INFO neutron.agent.dhcp.agent [None req-c9dc30e2-8a6e-4d5f-bb3c-8a9245821114 - - - - - -] DHCP configuration for ports {'031e9ed7-2f9d-4794-b149-fed50ddb5365', '687d7abb-e6aa-4047-aa26-552c962fcc91'} is completed#033[00m Dec 6 05:17:50 localhost dnsmasq[319546]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/addn_hosts - 0 addresses Dec 6 05:17:50 localhost dnsmasq-dhcp[319546]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/host Dec 6 05:17:50 localhost dnsmasq-dhcp[319546]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/opts Dec 6 05:17:50 localhost podman[319582]: 2025-12-06 10:17:50.716838337 +0000 UTC m=+0.068526439 container kill b73c6d50186bb868d1093a23b8c34e1ae426a8ee07e6598ead72c83b7962a732 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base 
Image) Dec 6 05:17:50 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:17:50.918 263652 INFO neutron.agent.dhcp.agent [None req-c3044ec0-d405-49dc-90fc-0d54811d5574 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:17:34Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=4e06a687-1f49-4292-acf2-929e0eb84acf, ip_allocation=immediate, mac_address=fa:16:3e:4d:58:46, name=tempest-NetworksTestDHCPv6-462185524, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:17:28Z, description=, dns_domain=, id=43883dce-1590-48c4-987c-a21b63b82a1c, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-1975538139, port_security_enabled=True, project_id=34a17eee71de4bac8b71972a4b7b506c, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=42818, qos_policy_id=None, revision_number=6, router:external=False, shared=False, standard_attr_id=1415, status=ACTIVE, subnets=['bc7d1843-cf65-45d5-94a7-f389cac666c9'], tags=[], tenant_id=34a17eee71de4bac8b71972a4b7b506c, updated_at=2025-12-06T10:17:34Z, vlan_transparent=None, network_id=43883dce-1590-48c4-987c-a21b63b82a1c, port_security_enabled=True, project_id=34a17eee71de4bac8b71972a4b7b506c, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['d618a097-5989-47aa-9263-1c8a114ad269'], standard_attr_id=1467, status=DOWN, tags=[], tenant_id=34a17eee71de4bac8b71972a4b7b506c, updated_at=2025-12-06T10:17:34Z on network 43883dce-1590-48c4-987c-a21b63b82a1c#033[00m Dec 6 05:17:50 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:17:50.997 263652 INFO neutron.agent.dhcp.agent [None 
req-837fa81a-61f0-4ca8-a9a5-c3cf98bd60d8 - - - - - -] DHCP configuration for ports {'15e8fc8e-2569-4456-89e1-7a3d1684c267', '687d7abb-e6aa-4047-aa26-552c962fcc91'} is completed#033[00m Dec 6 05:17:51 localhost podman[319621]: 2025-12-06 10:17:51.126494402 +0000 UTC m=+0.066658363 container kill b73c6d50186bb868d1093a23b8c34e1ae426a8ee07e6598ead72c83b7962a732 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125) Dec 6 05:17:51 localhost dnsmasq[319546]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/addn_hosts - 1 addresses Dec 6 05:17:51 localhost dnsmasq-dhcp[319546]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/host Dec 6 05:17:51 localhost dnsmasq-dhcp[319546]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/opts Dec 6 05:17:51 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:17:51.293 263652 INFO neutron.agent.dhcp.agent [None req-c3044ec0-d405-49dc-90fc-0d54811d5574 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:17:37Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=9a6a53c7-1fed-40b6-9731-a522fa01a8e9, ip_allocation=immediate, mac_address=fa:16:3e:95:da:c2, name=tempest-NetworksTestDHCPv6-1157473240, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:17:28Z, description=, 
dns_domain=, id=43883dce-1590-48c4-987c-a21b63b82a1c, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-1975538139, port_security_enabled=True, project_id=34a17eee71de4bac8b71972a4b7b506c, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=42818, qos_policy_id=None, revision_number=8, router:external=False, shared=False, standard_attr_id=1415, status=ACTIVE, subnets=['ce5e9894-b404-44e0-bb7b-8eb8d1458ed9'], tags=[], tenant_id=34a17eee71de4bac8b71972a4b7b506c, updated_at=2025-12-06T10:17:37Z, vlan_transparent=None, network_id=43883dce-1590-48c4-987c-a21b63b82a1c, port_security_enabled=True, project_id=34a17eee71de4bac8b71972a4b7b506c, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['d618a097-5989-47aa-9263-1c8a114ad269'], standard_attr_id=1492, status=DOWN, tags=[], tenant_id=34a17eee71de4bac8b71972a4b7b506c, updated_at=2025-12-06T10:17:37Z on network 43883dce-1590-48c4-987c-a21b63b82a1c#033[00m Dec 6 05:17:51 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:17:51.333 263652 INFO neutron.agent.dhcp.agent [None req-bdca00a0-e797-46d6-8906-19bbae78f545 - - - - - -] DHCP configuration for ports {'4e06a687-1f49-4292-acf2-929e0eb84acf'} is completed#033[00m Dec 6 05:17:51 localhost dnsmasq[319546]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/addn_hosts - 2 addresses Dec 6 05:17:51 localhost podman[319658]: 2025-12-06 10:17:51.481304225 +0000 UTC m=+0.064300872 container kill b73c6d50186bb868d1093a23b8c34e1ae426a8ee07e6598ead72c83b7962a732 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base 
Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:17:51 localhost dnsmasq-dhcp[319546]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/host Dec 6 05:17:51 localhost dnsmasq-dhcp[319546]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/opts Dec 6 05:17:51 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:17:51.561 263652 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:17:50Z, description=, device_id=5a9249e3-4953-4808-89a5-568f69ae8159, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=4238df07-d7ae-46fc-981f-02a73e40206c, ip_allocation=immediate, mac_address=fa:16:3e:d5:08:58, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T08:43:55Z, description=, dns_domain=, id=8e238f59-5792-4ff4-95af-f993c8e9e14f, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=3d603431c0bb4967bafc7a0aa6108bfe, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['be7ff830-a3e2-49cd-a05f-63bc528d6d1c'], tags=[], tenant_id=3d603431c0bb4967bafc7a0aa6108bfe, updated_at=2025-12-06T08:44:01Z, vlan_transparent=None, network_id=8e238f59-5792-4ff4-95af-f993c8e9e14f, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1571, status=DOWN, tags=[], tenant_id=, 
updated_at=2025-12-06T10:17:51Z on network 8e238f59-5792-4ff4-95af-f993c8e9e14f#033[00m Dec 6 05:17:51 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:17:51.750 263652 INFO neutron.agent.dhcp.agent [None req-b3c01a27-6b0d-43c8-b6cb-b41f907a0986 - - - - - -] DHCP configuration for ports {'9a6a53c7-1fed-40b6-9731-a522fa01a8e9'} is completed#033[00m Dec 6 05:17:51 localhost dnsmasq[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 3 addresses Dec 6 05:17:51 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:17:51 localhost podman[319694]: 2025-12-06 10:17:51.831718503 +0000 UTC m=+0.073542861 container kill dd70518caed84fde347015325d6b35481123abc4a39df99f8bc9422c2468322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Dec 6 05:17:51 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:17:51 localhost dnsmasq[319546]: exiting on receipt of SIGTERM Dec 6 05:17:51 localhost systemd[1]: tmp-crun.qjrrlZ.mount: Deactivated successfully. 
Dec 6 05:17:51 localhost podman[319726]: 2025-12-06 10:17:51.973399701 +0000 UTC m=+0.067094956 container kill b73c6d50186bb868d1093a23b8c34e1ae426a8ee07e6598ead72c83b7962a732 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:17:51 localhost systemd[1]: libpod-b73c6d50186bb868d1093a23b8c34e1ae426a8ee07e6598ead72c83b7962a732.scope: Deactivated successfully. Dec 6 05:17:52 localhost podman[319743]: 2025-12-06 10:17:52.057385608 +0000 UTC m=+0.060149526 container died b73c6d50186bb868d1093a23b8c34e1ae426a8ee07e6598ead72c83b7962a732 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2) Dec 6 05:17:52 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b73c6d50186bb868d1093a23b8c34e1ae426a8ee07e6598ead72c83b7962a732-userdata-shm.mount: Deactivated successfully. Dec 6 05:17:52 localhost systemd[1]: var-lib-containers-storage-overlay-9c18ac33546be1aae3397ead48d8dd1e3f5abac98a4c4c00eeed7d61e8509d7e-merged.mount: Deactivated successfully. 
Dec 6 05:17:52 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:17:52.093 263652 INFO neutron.agent.dhcp.agent [None req-b6568ef1-e238-4481-95e1-b1b0d0170d6f - - - - - -] DHCP configuration for ports {'4238df07-d7ae-46fc-981f-02a73e40206c'} is completed#033[00m Dec 6 05:17:52 localhost podman[319743]: 2025-12-06 10:17:52.157533155 +0000 UTC m=+0.160297043 container remove b73c6d50186bb868d1093a23b8c34e1ae426a8ee07e6598ead72c83b7962a732 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true) Dec 6 05:17:52 localhost systemd[1]: libpod-conmon-b73c6d50186bb868d1093a23b8c34e1ae426a8ee07e6598ead72c83b7962a732.scope: Deactivated successfully. 
Dec 6 05:17:52 localhost nova_compute[282193]: 2025-12-06 10:17:52.169 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:52 localhost kernel: device tap15e8fc8e-25 left promiscuous mode Dec 6 05:17:52 localhost ovn_controller[154851]: 2025-12-06T10:17:52Z|00249|binding|INFO|Releasing lport 15e8fc8e-2569-4456-89e1-7a3d1684c267 from this chassis (sb_readonly=0) Dec 6 05:17:52 localhost ovn_controller[154851]: 2025-12-06T10:17:52Z|00250|binding|INFO|Setting lport 15e8fc8e-2569-4456-89e1-7a3d1684c267 down in Southbound Dec 6 05:17:52 localhost ovn_metadata_agent[160504]: 2025-12-06 10:17:52.186 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '34a17eee71de4bac8b71972a4b7b506c', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82d32fcc-fa15-458b-9d3c-0c87258ef71d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=15e8fc8e-2569-4456-89e1-7a3d1684c267) old=Port_Binding(up=[True], chassis=[]) matches 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:17:52 localhost ovn_metadata_agent[160504]: 2025-12-06 10:17:52.187 160509 INFO neutron.agent.ovn.metadata.agent [-] Port 15e8fc8e-2569-4456-89e1-7a3d1684c267 in datapath 43883dce-1590-48c4-987c-a21b63b82a1c unbound from our chassis#033[00m Dec 6 05:17:52 localhost ovn_metadata_agent[160504]: 2025-12-06 10:17:52.188 160509 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 43883dce-1590-48c4-987c-a21b63b82a1c or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 6 05:17:52 localhost ovn_metadata_agent[160504]: 2025-12-06 10:17:52.189 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[94aedc94-6ee6-4288-b9a3-d3de407122d3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:17:52 localhost nova_compute[282193]: 2025-12-06 10:17:52.193 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:52 localhost nova_compute[282193]: 2025-12-06 10:17:52.669 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:52 localhost systemd[1]: run-netns-qdhcp\x2d43883dce\x2d1590\x2d48c4\x2d987c\x2da21b63b82a1c.mount: Deactivated successfully. 
Dec 6 05:17:53 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:17:53.118 263652 INFO neutron.agent.linux.ip_lib [None req-8275c5f7-8320-4bbc-b1d7-7694a270cf2f - - - - - -] Device tape3c62197-6b cannot be used as it has no MAC address#033[00m Dec 6 05:17:53 localhost nova_compute[282193]: 2025-12-06 10:17:53.143 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:53 localhost kernel: device tape3c62197-6b entered promiscuous mode Dec 6 05:17:53 localhost NetworkManager[5973]: [1765016273.1507] manager: (tape3c62197-6b): new Generic device (/org/freedesktop/NetworkManager/Devices/43) Dec 6 05:17:53 localhost nova_compute[282193]: 2025-12-06 10:17:53.151 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:53 localhost ovn_controller[154851]: 2025-12-06T10:17:53Z|00251|binding|INFO|Claiming lport e3c62197-6b1b-4fe2-b169-9cfa6917af0a for this chassis. Dec 6 05:17:53 localhost ovn_controller[154851]: 2025-12-06T10:17:53Z|00252|binding|INFO|e3c62197-6b1b-4fe2-b169-9cfa6917af0a: Claiming unknown Dec 6 05:17:53 localhost systemd-udevd[319781]: Network interface NamePolicy= disabled on kernel command line. 
Dec 6 05:17:53 localhost ovn_controller[154851]: 2025-12-06T10:17:53Z|00253|binding|INFO|Setting lport e3c62197-6b1b-4fe2-b169-9cfa6917af0a up in Southbound Dec 6 05:17:53 localhost ovn_controller[154851]: 2025-12-06T10:17:53Z|00254|binding|INFO|Setting lport e3c62197-6b1b-4fe2-b169-9cfa6917af0a ovn-installed in OVS Dec 6 05:17:53 localhost ovn_metadata_agent[160504]: 2025-12-06 10:17:53.165 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '34a17eee71de4bac8b71972a4b7b506c', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82d32fcc-fa15-458b-9d3c-0c87258ef71d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=e3c62197-6b1b-4fe2-b169-9cfa6917af0a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:17:53 localhost nova_compute[282193]: 2025-12-06 10:17:53.165 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:53 localhost 
nova_compute[282193]: 2025-12-06 10:17:53.166 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:53 localhost ovn_metadata_agent[160504]: 2025-12-06 10:17:53.170 160509 INFO neutron.agent.ovn.metadata.agent [-] Port e3c62197-6b1b-4fe2-b169-9cfa6917af0a in datapath 43883dce-1590-48c4-987c-a21b63b82a1c bound to our chassis#033[00m Dec 6 05:17:53 localhost ovn_metadata_agent[160504]: 2025-12-06 10:17:53.171 160509 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 43883dce-1590-48c4-987c-a21b63b82a1c or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 6 05:17:53 localhost ovn_metadata_agent[160504]: 2025-12-06 10:17:53.172 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[f46539ba-aa3d-40b4-bc23-9b17c4803d71]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:17:53 localhost nova_compute[282193]: 2025-12-06 10:17:53.183 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:53 localhost nova_compute[282193]: 2025-12-06 10:17:53.190 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:53 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e138 e138: 6 total, 6 up, 6 in Dec 6 05:17:53 localhost nova_compute[282193]: 2025-12-06 10:17:53.224 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:53 localhost nova_compute[282193]: 2025-12-06 10:17:53.255 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:53 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:17:53 localhost nova_compute[282193]: 2025-12-06 10:17:53.751 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:53 localhost nova_compute[282193]: 2025-12-06 10:17:53.898 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:53 localhost podman[241090]: time="2025-12-06T10:17:53Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 6 05:17:53 localhost podman[241090]: @ - - [06/Dec/2025:10:17:53 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156104 "" "Go-http-client/1.1" Dec 6 05:17:53 localhost podman[241090]: @ - - [06/Dec/2025:10:17:53 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19263 "" "Go-http-client/1.1" Dec 6 05:17:53 localhost nova_compute[282193]: 2025-12-06 10:17:53.979 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:54 localhost nova_compute[282193]: 2025-12-06 10:17:54.091 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:54 localhost podman[319836]: Dec 6 05:17:54 localhost podman[319836]: 2025-12-06 10:17:54.210260428 +0000 UTC m=+0.119171245 container create fc20fcf21869e5d1157562ec291824a00698e06684a29f052ec6f97cdf9bab98 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, 
name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3) Dec 6 05:17:54 localhost podman[319836]: 2025-12-06 10:17:54.144887196 +0000 UTC m=+0.053798053 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 6 05:17:54 localhost systemd[1]: Started libpod-conmon-fc20fcf21869e5d1157562ec291824a00698e06684a29f052ec6f97cdf9bab98.scope. Dec 6 05:17:54 localhost systemd[1]: Started libcrun container. Dec 6 05:17:54 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46c21cbe908f45bcdc1b00e9cb02aecc14e9e3226ea975523b944b16c6663789/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 6 05:17:54 localhost podman[319836]: 2025-12-06 10:17:54.288204102 +0000 UTC m=+0.197114929 container init fc20fcf21869e5d1157562ec291824a00698e06684a29f052ec6f97cdf9bab98 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:17:54 localhost podman[319836]: 2025-12-06 10:17:54.296461333 +0000 UTC m=+0.205372150 container start fc20fcf21869e5d1157562ec291824a00698e06684a29f052ec6f97cdf9bab98 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, 
name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:17:54 localhost dnsmasq[319854]: started, version 2.85 cachesize 150 Dec 6 05:17:54 localhost dnsmasq[319854]: DNS service limited to local subnets Dec 6 05:17:54 localhost dnsmasq[319854]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 6 05:17:54 localhost dnsmasq[319854]: warning: no upstream servers configured Dec 6 05:17:54 localhost dnsmasq-dhcp[319854]: DHCPv6, static leases only on 2001:db8::, lease time 1d Dec 6 05:17:54 localhost dnsmasq[319854]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/addn_hosts - 0 addresses Dec 6 05:17:54 localhost dnsmasq-dhcp[319854]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/host Dec 6 05:17:54 localhost dnsmasq-dhcp[319854]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/opts Dec 6 05:17:54 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:17:54.365 263652 INFO neutron.agent.dhcp.agent [None req-8275c5f7-8320-4bbc-b1d7-7694a270cf2f - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:17:41Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=67b3547b-9d27-4643-bddb-ba71d121551d, ip_allocation=immediate, mac_address=fa:16:3e:f3:31:93, name=tempest-NetworksTestDHCPv6-1485293953, network=admin_state_up=True, 
availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:17:28Z, description=, dns_domain=, id=43883dce-1590-48c4-987c-a21b63b82a1c, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-1975538139, port_security_enabled=True, project_id=34a17eee71de4bac8b71972a4b7b506c, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=42818, qos_policy_id=None, revision_number=10, router:external=False, shared=False, standard_attr_id=1415, status=ACTIVE, subnets=['6b9e94cd-e549-4ef1-a60c-bc98bdbbee8c'], tags=[], tenant_id=34a17eee71de4bac8b71972a4b7b506c, updated_at=2025-12-06T10:17:40Z, vlan_transparent=None, network_id=43883dce-1590-48c4-987c-a21b63b82a1c, port_security_enabled=True, project_id=34a17eee71de4bac8b71972a4b7b506c, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['d618a097-5989-47aa-9263-1c8a114ad269'], standard_attr_id=1520, status=DOWN, tags=[], tenant_id=34a17eee71de4bac8b71972a4b7b506c, updated_at=2025-12-06T10:17:41Z on network 43883dce-1590-48c4-987c-a21b63b82a1c#033[00m Dec 6 05:17:54 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:17:54.479 263652 INFO neutron.agent.dhcp.agent [None req-e70cc606-5920-4c25-8fb7-1e5b3b5bc41a - - - - - -] DHCP configuration for ports {'687d7abb-e6aa-4047-aa26-552c962fcc91'} is completed#033[00m Dec 6 05:17:54 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:17:54.509 263652 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:17:54Z, description=, device_id=4756bfdd-20ae-4420-baa8-2e1807a793b3, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], 
id=39dfb9d0-ac72-4ae3-ac11-cd04485d1755, ip_allocation=immediate, mac_address=fa:16:3e:27:36:ee, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T08:43:55Z, description=, dns_domain=, id=8e238f59-5792-4ff4-95af-f993c8e9e14f, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=3d603431c0bb4967bafc7a0aa6108bfe, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['be7ff830-a3e2-49cd-a05f-63bc528d6d1c'], tags=[], tenant_id=3d603431c0bb4967bafc7a0aa6108bfe, updated_at=2025-12-06T08:44:01Z, vlan_transparent=None, network_id=8e238f59-5792-4ff4-95af-f993c8e9e14f, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1597, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-06T10:17:54Z on network 8e238f59-5792-4ff4-95af-f993c8e9e14f#033[00m Dec 6 05:17:54 localhost dnsmasq[319854]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/addn_hosts - 1 addresses Dec 6 05:17:54 localhost podman[319871]: 2025-12-06 10:17:54.582338284 +0000 UTC m=+0.083667179 container kill fc20fcf21869e5d1157562ec291824a00698e06684a29f052ec6f97cdf9bab98 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0) Dec 6 05:17:54 localhost 
dnsmasq-dhcp[319854]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/host Dec 6 05:17:54 localhost dnsmasq-dhcp[319854]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/opts Dec 6 05:17:54 localhost dnsmasq[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 4 addresses Dec 6 05:17:54 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:17:54 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:17:54 localhost podman[319904]: 2025-12-06 10:17:54.746919396 +0000 UTC m=+0.068901241 container kill dd70518caed84fde347015325d6b35481123abc4a39df99f8bc9422c2468322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Dec 6 05:17:54 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:17:54.792 263652 INFO neutron.agent.dhcp.agent [None req-8275c5f7-8320-4bbc-b1d7-7694a270cf2f - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:17:44Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=dc85b11d-4c03-4e32-ae66-316d37e2ed0c, ip_allocation=immediate, mac_address=fa:16:3e:d4:b8:56, name=tempest-NetworksTestDHCPv6-58844838, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:17:28Z, 
description=, dns_domain=, id=43883dce-1590-48c4-987c-a21b63b82a1c, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-1975538139, port_security_enabled=True, project_id=34a17eee71de4bac8b71972a4b7b506c, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=42818, qos_policy_id=None, revision_number=12, router:external=False, shared=False, standard_attr_id=1415, status=ACTIVE, subnets=['773a18d1-5b62-4bc9-af5e-7fc433180497'], tags=[], tenant_id=34a17eee71de4bac8b71972a4b7b506c, updated_at=2025-12-06T10:17:44Z, vlan_transparent=None, network_id=43883dce-1590-48c4-987c-a21b63b82a1c, port_security_enabled=True, project_id=34a17eee71de4bac8b71972a4b7b506c, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['d618a097-5989-47aa-9263-1c8a114ad269'], standard_attr_id=1538, status=DOWN, tags=[], tenant_id=34a17eee71de4bac8b71972a4b7b506c, updated_at=2025-12-06T10:17:44Z on network 43883dce-1590-48c4-987c-a21b63b82a1c#033[00m Dec 6 05:17:54 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:17:54.896 263652 INFO neutron.agent.dhcp.agent [None req-772f3f79-0e41-40f0-8b75-e46d6b3a7d1b - - - - - -] DHCP configuration for ports {'67b3547b-9d27-4643-bddb-ba71d121551d'} is completed#033[00m Dec 6 05:17:54 localhost dnsmasq[319854]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/addn_hosts - 2 addresses Dec 6 05:17:54 localhost dnsmasq-dhcp[319854]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/host Dec 6 05:17:54 localhost dnsmasq-dhcp[319854]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/opts Dec 6 05:17:54 localhost podman[319944]: 2025-12-06 10:17:54.990399271 +0000 UTC m=+0.064039553 container kill fc20fcf21869e5d1157562ec291824a00698e06684a29f052ec6f97cdf9bab98 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, 
name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.build-date=20251125, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Dec 6 05:17:55 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:17:55.030 263652 INFO neutron.agent.dhcp.agent [None req-00f6a456-e68b-467f-a5e8-6f89918f7ec4 - - - - - -] DHCP configuration for ports {'39dfb9d0-ac72-4ae3-ac11-cd04485d1755'} is completed#033[00m Dec 6 05:17:55 localhost nova_compute[282193]: 2025-12-06 10:17:55.088 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:55 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:17:55.133 263652 INFO neutron.agent.dhcp.agent [None req-8275c5f7-8320-4bbc-b1d7-7694a270cf2f - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:17:47Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=031e9ed7-2f9d-4794-b149-fed50ddb5365, ip_allocation=immediate, mac_address=fa:16:3e:a1:a1:ca, name=tempest-NetworksTestDHCPv6-1428419815, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:17:28Z, description=, dns_domain=, id=43883dce-1590-48c4-987c-a21b63b82a1c, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-1975538139, port_security_enabled=True, project_id=34a17eee71de4bac8b71972a4b7b506c, provider:network_type=geneve, provider:physical_network=None, 
provider:segmentation_id=42818, qos_policy_id=None, revision_number=14, router:external=False, shared=False, standard_attr_id=1415, status=ACTIVE, subnets=['f4c32db1-eb59-48e0-aec0-c4465a7e322c'], tags=[], tenant_id=34a17eee71de4bac8b71972a4b7b506c, updated_at=2025-12-06T10:17:47Z, vlan_transparent=None, network_id=43883dce-1590-48c4-987c-a21b63b82a1c, port_security_enabled=True, project_id=34a17eee71de4bac8b71972a4b7b506c, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['d618a097-5989-47aa-9263-1c8a114ad269'], standard_attr_id=1554, status=DOWN, tags=[], tenant_id=34a17eee71de4bac8b71972a4b7b506c, updated_at=2025-12-06T10:17:47Z on network 43883dce-1590-48c4-987c-a21b63b82a1c#033[00m Dec 6 05:17:55 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:17:55.242 263652 INFO neutron.agent.dhcp.agent [None req-36de5a5a-04cc-4adc-aefd-1294e927fe2f - - - - - -] DHCP configuration for ports {'dc85b11d-4c03-4e32-ae66-316d37e2ed0c'} is completed#033[00m Dec 6 05:17:55 localhost dnsmasq[319854]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/addn_hosts - 3 addresses Dec 6 05:17:55 localhost dnsmasq-dhcp[319854]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/host Dec 6 05:17:55 localhost dnsmasq-dhcp[319854]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/opts Dec 6 05:17:55 localhost podman[319981]: 2025-12-06 10:17:55.359557358 +0000 UTC m=+0.089111563 container kill fc20fcf21869e5d1157562ec291824a00698e06684a29f052ec6f97cdf9bab98 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, 
org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:17:55 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:17:55.556 263652 INFO neutron.agent.dhcp.agent [None req-8275c5f7-8320-4bbc-b1d7-7694a270cf2f - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:17:52Z, description=, device_id=984ba1bf-ed49-495e-9318-1b56761910e8, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=846d7e3e-31ba-499c-b8e2-0158928f1018, ip_allocation=immediate, mac_address=fa:16:3e:3f:1a:25, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:17:28Z, description=, dns_domain=, id=43883dce-1590-48c4-987c-a21b63b82a1c, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-1975538139, port_security_enabled=True, project_id=34a17eee71de4bac8b71972a4b7b506c, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=42818, qos_policy_id=None, revision_number=18, router:external=False, shared=False, standard_attr_id=1415, status=ACTIVE, subnets=['f5202084-e1f1-45f3-9585-2947d7b89bec'], tags=[], tenant_id=34a17eee71de4bac8b71972a4b7b506c, updated_at=2025-12-06T10:17:51Z, vlan_transparent=None, network_id=43883dce-1590-48c4-987c-a21b63b82a1c, port_security_enabled=False, project_id=34a17eee71de4bac8b71972a4b7b506c, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1590, status=DOWN, tags=[], tenant_id=34a17eee71de4bac8b71972a4b7b506c, updated_at=2025-12-06T10:17:52Z on network 43883dce-1590-48c4-987c-a21b63b82a1c#033[00m Dec 6 05:17:55 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:17:55.660 263652 INFO 
neutron.agent.dhcp.agent [None req-998f1619-48b0-4d21-9c7b-7258357dc7e7 - - - - - -] DHCP configuration for ports {'031e9ed7-2f9d-4794-b149-fed50ddb5365'} is completed#033[00m Dec 6 05:17:55 localhost dnsmasq[319854]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/addn_hosts - 4 addresses Dec 6 05:17:55 localhost dnsmasq-dhcp[319854]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/host Dec 6 05:17:55 localhost dnsmasq-dhcp[319854]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/opts Dec 6 05:17:55 localhost podman[320020]: 2025-12-06 10:17:55.762888302 +0000 UTC m=+0.061602780 container kill fc20fcf21869e5d1157562ec291824a00698e06684a29f052ec6f97cdf9bab98 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS) Dec 6 05:17:55 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:17:55.922 263652 INFO neutron.agent.dhcp.agent [None req-8275c5f7-8320-4bbc-b1d7-7694a270cf2f - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:17:52Z, description=, device_id=984ba1bf-ed49-495e-9318-1b56761910e8, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=846d7e3e-31ba-499c-b8e2-0158928f1018, ip_allocation=immediate, mac_address=fa:16:3e:3f:1a:25, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:17:28Z, 
description=, dns_domain=, id=43883dce-1590-48c4-987c-a21b63b82a1c, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-1975538139, port_security_enabled=True, project_id=34a17eee71de4bac8b71972a4b7b506c, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=42818, qos_policy_id=None, revision_number=18, router:external=False, shared=False, standard_attr_id=1415, status=ACTIVE, subnets=['f5202084-e1f1-45f3-9585-2947d7b89bec'], tags=[], tenant_id=34a17eee71de4bac8b71972a4b7b506c, updated_at=2025-12-06T10:17:51Z, vlan_transparent=None, network_id=43883dce-1590-48c4-987c-a21b63b82a1c, port_security_enabled=False, project_id=34a17eee71de4bac8b71972a4b7b506c, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1590, status=DOWN, tags=[], tenant_id=34a17eee71de4bac8b71972a4b7b506c, updated_at=2025-12-06T10:17:52Z on network 43883dce-1590-48c4-987c-a21b63b82a1c#033[00m Dec 6 05:17:55 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:17:55.986 263652 INFO neutron.agent.dhcp.agent [None req-973dab4b-7548-4deb-ac0e-1081184bbe7a - - - - - -] DHCP configuration for ports {'846d7e3e-31ba-499c-b8e2-0158928f1018'} is completed#033[00m Dec 6 05:17:56 localhost dnsmasq[319854]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/addn_hosts - 4 addresses Dec 6 05:17:56 localhost dnsmasq-dhcp[319854]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/host Dec 6 05:17:56 localhost dnsmasq-dhcp[319854]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/opts Dec 6 05:17:56 localhost podman[320059]: 2025-12-06 10:17:56.12615263 +0000 UTC m=+0.069614062 container kill fc20fcf21869e5d1157562ec291824a00698e06684a29f052ec6f97cdf9bab98 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, 
name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Dec 6 05:17:56 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:17:56.314 263652 INFO neutron.agent.dhcp.agent [None req-e2ce761d-886a-4efd-963a-2f20b12878a1 - - - - - -] DHCP configuration for ports {'846d7e3e-31ba-499c-b8e2-0158928f1018'} is completed#033[00m Dec 6 05:17:56 localhost systemd[1]: tmp-crun.hOqZ7c.mount: Deactivated successfully. Dec 6 05:17:56 localhost podman[320098]: 2025-12-06 10:17:56.40505601 +0000 UTC m=+0.074508811 container kill dd70518caed84fde347015325d6b35481123abc4a39df99f8bc9422c2468322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125) Dec 6 05:17:56 localhost dnsmasq[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 3 addresses Dec 6 05:17:56 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:17:56 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:17:56 localhost dnsmasq[319854]: exiting on receipt of SIGTERM Dec 6 05:17:56 localhost podman[320162]: 2025-12-06 10:17:56.495702089 +0000 UTC m=+0.044110059 container kill 
fc20fcf21869e5d1157562ec291824a00698e06684a29f052ec6f97cdf9bab98 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 6 05:17:56 localhost systemd[1]: libpod-fc20fcf21869e5d1157562ec291824a00698e06684a29f052ec6f97cdf9bab98.scope: Deactivated successfully. Dec 6 05:17:56 localhost podman[320180]: 2025-12-06 10:17:56.539200388 +0000 UTC m=+0.031578168 container died fc20fcf21869e5d1157562ec291824a00698e06684a29f052ec6f97cdf9bab98 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Dec 6 05:17:56 localhost podman[320180]: 2025-12-06 10:17:56.568222899 +0000 UTC m=+0.060600669 container cleanup fc20fcf21869e5d1157562ec291824a00698e06684a29f052ec6f97cdf9bab98 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, 
org.label-schema.build-date=20251125) Dec 6 05:17:56 localhost systemd[1]: libpod-conmon-fc20fcf21869e5d1157562ec291824a00698e06684a29f052ec6f97cdf9bab98.scope: Deactivated successfully. Dec 6 05:17:56 localhost podman[320182]: 2025-12-06 10:17:56.644970357 +0000 UTC m=+0.127771856 container remove fc20fcf21869e5d1157562ec291824a00698e06684a29f052ec6f97cdf9bab98 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true) Dec 6 05:17:56 localhost nova_compute[282193]: 2025-12-06 10:17:56.653 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:56 localhost ovn_controller[154851]: 2025-12-06T10:17:56Z|00255|binding|INFO|Releasing lport e3c62197-6b1b-4fe2-b169-9cfa6917af0a from this chassis (sb_readonly=0) Dec 6 05:17:56 localhost kernel: device tape3c62197-6b left promiscuous mode Dec 6 05:17:56 localhost ovn_controller[154851]: 2025-12-06T10:17:56Z|00256|binding|INFO|Setting lport e3c62197-6b1b-4fe2-b169-9cfa6917af0a down in Southbound Dec 6 05:17:56 localhost ovn_controller[154851]: 2025-12-06T10:17:56Z|00257|binding|INFO|Releasing lport 4fb81ffd-e198-4628-9bd0-0c0f0c89c33a from this chassis (sb_readonly=0) Dec 6 05:17:56 localhost ovn_metadata_agent[160504]: 2025-12-06 10:17:56.669 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], 
virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '34a17eee71de4bac8b71972a4b7b506c', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82d32fcc-fa15-458b-9d3c-0c87258ef71d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=e3c62197-6b1b-4fe2-b169-9cfa6917af0a) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:17:56 localhost ovn_metadata_agent[160504]: 2025-12-06 10:17:56.670 160509 INFO neutron.agent.ovn.metadata.agent [-] Port e3c62197-6b1b-4fe2-b169-9cfa6917af0a in datapath 43883dce-1590-48c4-987c-a21b63b82a1c unbound from our chassis#033[00m Dec 6 05:17:56 localhost ovn_metadata_agent[160504]: 2025-12-06 10:17:56.670 160509 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 43883dce-1590-48c4-987c-a21b63b82a1c or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 6 05:17:56 localhost ovn_metadata_agent[160504]: 2025-12-06 10:17:56.683 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[e0d24796-20bd-48e9-a798-6fec88d542ed]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m 
Dec 6 05:17:56 localhost nova_compute[282193]: 2025-12-06 10:17:56.684 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:56 localhost nova_compute[282193]: 2025-12-06 10:17:56.695 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:56 localhost nova_compute[282193]: 2025-12-06 10:17:56.698 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:56 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Dec 6 05:17:56 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1671277929' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Dec 6 05:17:56 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Dec 6 05:17:56 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1671277929' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Dec 6 05:17:57 localhost systemd[1]: var-lib-containers-storage-overlay-46c21cbe908f45bcdc1b00e9cb02aecc14e9e3226ea975523b944b16c6663789-merged.mount: Deactivated successfully. Dec 6 05:17:57 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fc20fcf21869e5d1157562ec291824a00698e06684a29f052ec6f97cdf9bab98-userdata-shm.mount: Deactivated successfully. Dec 6 05:17:57 localhost systemd[1]: run-netns-qdhcp\x2d43883dce\x2d1590\x2d48c4\x2d987c\x2da21b63b82a1c.mount: Deactivated successfully. 
Dec 6 05:17:57 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:17:57.213 263652 INFO neutron.agent.dhcp.agent [None req-7f2f6ac0-106b-4c57-a85b-ca2375d7a5ed - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:17:57 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:17:57.216 263652 INFO neutron.agent.dhcp.agent [None req-7f2f6ac0-106b-4c57-a85b-ca2375d7a5ed - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:17:57 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:17:57.216 263652 INFO neutron.agent.dhcp.agent [None req-7f2f6ac0-106b-4c57-a85b-ca2375d7a5ed - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:17:57 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:17:57.217 263652 INFO neutron.agent.dhcp.agent [None req-7f2f6ac0-106b-4c57-a85b-ca2375d7a5ed - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:17:57 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:17:57.217 263652 INFO neutron.agent.dhcp.agent [None req-7f2f6ac0-106b-4c57-a85b-ca2375d7a5ed - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:17:57 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:17:57.217 263652 INFO neutron.agent.dhcp.agent [None req-7f2f6ac0-106b-4c57-a85b-ca2375d7a5ed - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:17:57 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:17:57.218 263652 INFO neutron.agent.dhcp.agent [None req-7f2f6ac0-106b-4c57-a85b-ca2375d7a5ed - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:17:57 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:17:57.218 263652 INFO neutron.agent.dhcp.agent [None req-7f2f6ac0-106b-4c57-a85b-ca2375d7a5ed - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:17:57 localhost 
neutron_dhcp_agent[263648]: 2025-12-06 10:17:57.218 263652 INFO neutron.agent.dhcp.agent [None req-7f2f6ac0-106b-4c57-a85b-ca2375d7a5ed - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:17:57 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:17:57.218 263652 INFO neutron.agent.dhcp.agent [None req-7f2f6ac0-106b-4c57-a85b-ca2375d7a5ed - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:17:57 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:17:57.219 263652 INFO neutron.agent.dhcp.agent [None req-7f2f6ac0-106b-4c57-a85b-ca2375d7a5ed - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:17:57 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:17:57.219 263652 INFO neutron.agent.dhcp.agent [None req-7f2f6ac0-106b-4c57-a85b-ca2375d7a5ed - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:17:57 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:17:57.219 263652 INFO neutron.agent.dhcp.agent [None req-7f2f6ac0-106b-4c57-a85b-ca2375d7a5ed - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:17:57 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:17:57.219 263652 INFO neutron.agent.dhcp.agent [None req-7f2f6ac0-106b-4c57-a85b-ca2375d7a5ed - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:17:57 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:17:57.220 263652 INFO neutron.agent.dhcp.agent [None req-7f2f6ac0-106b-4c57-a85b-ca2375d7a5ed - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:17:57 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:17:57.220 263652 INFO neutron.agent.dhcp.agent [None req-7f2f6ac0-106b-4c57-a85b-ca2375d7a5ed - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:17:57 localhost neutron_dhcp_agent[263648]: 2025-12-06 
10:17:57.220 263652 INFO neutron.agent.dhcp.agent [None req-7f2f6ac0-106b-4c57-a85b-ca2375d7a5ed - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:17:57 localhost nova_compute[282193]: 2025-12-06 10:17:57.703 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:57 localhost ovn_controller[154851]: 2025-12-06T10:17:57Z|00258|binding|INFO|Releasing lport 4fb81ffd-e198-4628-9bd0-0c0f0c89c33a from this chassis (sb_readonly=0) Dec 6 05:17:57 localhost nova_compute[282193]: 2025-12-06 10:17:57.841 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:57 localhost dnsmasq[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 2 addresses Dec 6 05:17:57 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:17:57 localhost podman[320279]: 2025-12-06 10:17:57.924224009 +0000 UTC m=+0.054304219 container kill dd70518caed84fde347015325d6b35481123abc4a39df99f8bc9422c2468322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:17:57 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:17:58 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 6 05:17:58 
localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' Dec 6 05:17:58 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:17:58 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e139 e139: 6 total, 6 up, 6 in Dec 6 05:17:58 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:17:58.452 263652 INFO neutron.agent.linux.ip_lib [None req-2fb783c0-7cf0-45de-874b-b32c19755a44 - - - - - -] Device tapa88d84d5-c8 cannot be used as it has no MAC address#033[00m Dec 6 05:17:58 localhost nova_compute[282193]: 2025-12-06 10:17:58.479 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:58 localhost kernel: device tapa88d84d5-c8 entered promiscuous mode Dec 6 05:17:58 localhost NetworkManager[5973]: [1765016278.4896] manager: (tapa88d84d5-c8): new Generic device (/org/freedesktop/NetworkManager/Devices/44) Dec 6 05:17:58 localhost nova_compute[282193]: 2025-12-06 10:17:58.489 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:58 localhost ovn_controller[154851]: 2025-12-06T10:17:58Z|00259|binding|INFO|Claiming lport a88d84d5-c856-402e-975d-7a0db34028a3 for this chassis. Dec 6 05:17:58 localhost ovn_controller[154851]: 2025-12-06T10:17:58Z|00260|binding|INFO|a88d84d5-c856-402e-975d-7a0db34028a3: Claiming unknown Dec 6 05:17:58 localhost systemd-udevd[320310]: Network interface NamePolicy= disabled on kernel command line. 
Dec 6 05:17:58 localhost ovn_metadata_agent[160504]: 2025-12-06 10:17:58.504 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '34a17eee71de4bac8b71972a4b7b506c', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82d32fcc-fa15-458b-9d3c-0c87258ef71d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=a88d84d5-c856-402e-975d-7a0db34028a3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:17:58 localhost ovn_metadata_agent[160504]: 2025-12-06 10:17:58.506 160509 INFO neutron.agent.ovn.metadata.agent [-] Port a88d84d5-c856-402e-975d-7a0db34028a3 in datapath 43883dce-1590-48c4-987c-a21b63b82a1c bound to our chassis#033[00m Dec 6 05:17:58 localhost ovn_metadata_agent[160504]: 2025-12-06 10:17:58.507 160509 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 43883dce-1590-48c4-987c-a21b63b82a1c or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 6 05:17:58 localhost ovn_metadata_agent[160504]: 2025-12-06 10:17:58.508 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[5cf731fb-de35-4086-96fc-b89e9928af0f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:17:58 localhost ovn_controller[154851]: 2025-12-06T10:17:58Z|00261|binding|INFO|Setting lport a88d84d5-c856-402e-975d-7a0db34028a3 ovn-installed in OVS Dec 6 05:17:58 localhost ovn_controller[154851]: 2025-12-06T10:17:58Z|00262|binding|INFO|Setting lport a88d84d5-c856-402e-975d-7a0db34028a3 up in Southbound Dec 6 05:17:58 localhost journal[230404]: ethtool ioctl error on tapa88d84d5-c8: No such device Dec 6 05:17:58 localhost nova_compute[282193]: 2025-12-06 10:17:58.534 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:58 localhost journal[230404]: ethtool ioctl error on tapa88d84d5-c8: No such device Dec 6 05:17:58 localhost journal[230404]: ethtool ioctl error on tapa88d84d5-c8: No such device Dec 6 05:17:58 localhost journal[230404]: ethtool ioctl error on tapa88d84d5-c8: No such device Dec 6 05:17:58 localhost journal[230404]: ethtool ioctl error on tapa88d84d5-c8: No such device Dec 6 05:17:58 localhost journal[230404]: ethtool ioctl error on tapa88d84d5-c8: No such device Dec 6 05:17:58 localhost journal[230404]: ethtool ioctl error on tapa88d84d5-c8: No such device Dec 6 05:17:58 localhost journal[230404]: ethtool ioctl error on tapa88d84d5-c8: No such device Dec 6 05:17:58 localhost nova_compute[282193]: 2025-12-06 10:17:58.582 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:58 localhost nova_compute[282193]: 2025-12-06 10:17:58.612 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:58 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:17:58.615 263652 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:17:58Z, description=, device_id=f9d47455-4f4d-4051-9259-2dd1238f7b5a, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=5653c267-2244-44fb-bd63-9f60854c0945, ip_allocation=immediate, mac_address=fa:16:3e:a5:e7:e5, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T08:43:55Z, description=, dns_domain=, id=8e238f59-5792-4ff4-95af-f993c8e9e14f, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=3d603431c0bb4967bafc7a0aa6108bfe, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['be7ff830-a3e2-49cd-a05f-63bc528d6d1c'], tags=[], tenant_id=3d603431c0bb4967bafc7a0aa6108bfe, updated_at=2025-12-06T08:44:01Z, vlan_transparent=None, network_id=8e238f59-5792-4ff4-95af-f993c8e9e14f, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1631, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-06T10:17:58Z on network 8e238f59-5792-4ff4-95af-f993c8e9e14f#033[00m Dec 6 05:17:58 localhost podman[320359]: 2025-12-06 10:17:58.830418494 +0000 UTC m=+0.049894114 container kill dd70518caed84fde347015325d6b35481123abc4a39df99f8bc9422c2468322f 
(image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 6 05:17:58 localhost dnsmasq[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 3 addresses Dec 6 05:17:58 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:17:58 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:17:58 localhost nova_compute[282193]: 2025-12-06 10:17:58.900 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:17:59 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:17:59.316 263652 INFO neutron.agent.dhcp.agent [None req-730aaf8d-f0b4-43a0-8b8a-87c5e3888346 - - - - - -] DHCP configuration for ports {'5653c267-2244-44fb-bd63-9f60854c0945'} is completed#033[00m Dec 6 05:17:59 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:17:59.343 263652 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:17:58Z, description=, device_id=6d77d769-2432-46ea-81cb-7c9efbed3186, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=50973850-abb3-4347-9d48-675f44e4821a, ip_allocation=immediate, mac_address=fa:16:3e:24:dc:28, name=, network=admin_state_up=True, 
availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T08:43:55Z, description=, dns_domain=, id=8e238f59-5792-4ff4-95af-f993c8e9e14f, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=3d603431c0bb4967bafc7a0aa6108bfe, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['be7ff830-a3e2-49cd-a05f-63bc528d6d1c'], tags=[], tenant_id=3d603431c0bb4967bafc7a0aa6108bfe, updated_at=2025-12-06T08:44:01Z, vlan_transparent=None, network_id=8e238f59-5792-4ff4-95af-f993c8e9e14f, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1634, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-06T10:17:59Z on network 8e238f59-5792-4ff4-95af-f993c8e9e14f#033[00m Dec 6 05:17:59 localhost podman[320433]: Dec 6 05:17:59 localhost podman[320433]: 2025-12-06 10:17:59.582565789 +0000 UTC m=+0.058486655 container create 57b346441c3abd133ae032fa76a3084c226ca485c6132fd718ec2e896677be9a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:17:59 localhost dnsmasq[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 4 addresses Dec 6 05:17:59 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host 
Dec 6 05:17:59 localhost podman[320447]: 2025-12-06 10:17:59.622736827 +0000 UTC m=+0.061868708 container kill dd70518caed84fde347015325d6b35481123abc4a39df99f8bc9422c2468322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true) Dec 6 05:17:59 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:17:59 localhost systemd[1]: Started libpod-conmon-57b346441c3abd133ae032fa76a3084c226ca485c6132fd718ec2e896677be9a.scope. Dec 6 05:17:59 localhost systemd[1]: Started libcrun container. Dec 6 05:17:59 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/271f045096e962713e97fa807cab1b588a56da9a1b0e67641acbd6559ec28410/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 6 05:17:59 localhost podman[320433]: 2025-12-06 10:17:59.551313001 +0000 UTC m=+0.027233897 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 6 05:17:59 localhost podman[320433]: 2025-12-06 10:17:59.651505129 +0000 UTC m=+0.127426035 container init 57b346441c3abd133ae032fa76a3084c226ca485c6132fd718ec2e896677be9a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.build-date=20251125, io.buildah.version=1.41.3) Dec 6 05:17:59 localhost podman[320433]: 2025-12-06 10:17:59.667214056 +0000 UTC m=+0.143134952 container start 57b346441c3abd133ae032fa76a3084c226ca485c6132fd718ec2e896677be9a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:17:59 localhost dnsmasq[320470]: started, version 2.85 cachesize 150 Dec 6 05:17:59 localhost dnsmasq[320470]: DNS service limited to local subnets Dec 6 05:17:59 localhost dnsmasq[320470]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 6 05:17:59 localhost dnsmasq[320470]: warning: no upstream servers configured Dec 6 05:17:59 localhost dnsmasq[320470]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/addn_hosts - 0 addresses Dec 6 05:17:59 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:17:59.723 263652 INFO neutron.agent.dhcp.agent [None req-2fb783c0-7cf0-45de-874b-b32c19755a44 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:17:58Z, description=, device_id=65c1a743-e3fe-40a2-b51b-1d247b2883ed, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=c3106dac-02c2-4639-a185-038f65c0f50b, ip_allocation=immediate, mac_address=fa:16:3e:ab:06:e2, name=, 
network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:17:28Z, description=, dns_domain=, id=43883dce-1590-48c4-987c-a21b63b82a1c, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-1975538139, port_security_enabled=True, project_id=34a17eee71de4bac8b71972a4b7b506c, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=42818, qos_policy_id=None, revision_number=20, router:external=False, shared=False, standard_attr_id=1415, status=ACTIVE, subnets=['10c3ac68-1998-4b91-9b6f-10a0e5a37ad1'], tags=[], tenant_id=34a17eee71de4bac8b71972a4b7b506c, updated_at=2025-12-06T10:17:57Z, vlan_transparent=None, network_id=43883dce-1590-48c4-987c-a21b63b82a1c, port_security_enabled=False, project_id=34a17eee71de4bac8b71972a4b7b506c, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1630, status=DOWN, tags=[], tenant_id=34a17eee71de4bac8b71972a4b7b506c, updated_at=2025-12-06T10:17:58Z on network 43883dce-1590-48c4-987c-a21b63b82a1c#033[00m Dec 6 05:17:59 localhost dnsmasq[320470]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/addn_hosts - 1 addresses Dec 6 05:17:59 localhost podman[320492]: 2025-12-06 10:17:59.912917079 +0000 UTC m=+0.061792895 container kill 57b346441c3abd133ae032fa76a3084c226ca485c6132fd718ec2e896677be9a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125) Dec 6 05:17:59 localhost 
neutron_dhcp_agent[263648]: 2025-12-06 10:17:59.955 263652 INFO neutron.agent.dhcp.agent [None req-ef6dfc4f-d6b1-49e0-a129-db86020f3578 - - - - - -] DHCP configuration for ports {'687d7abb-e6aa-4047-aa26-552c962fcc91', '50973850-abb3-4347-9d48-675f44e4821a'} is completed#033[00m Dec 6 05:18:00 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:18:00.063 263652 INFO neutron.agent.dhcp.agent [None req-2fb783c0-7cf0-45de-874b-b32c19755a44 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:17:58Z, description=, device_id=65c1a743-e3fe-40a2-b51b-1d247b2883ed, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=c3106dac-02c2-4639-a185-038f65c0f50b, ip_allocation=immediate, mac_address=fa:16:3e:ab:06:e2, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:17:28Z, description=, dns_domain=, id=43883dce-1590-48c4-987c-a21b63b82a1c, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-1975538139, port_security_enabled=True, project_id=34a17eee71de4bac8b71972a4b7b506c, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=42818, qos_policy_id=None, revision_number=20, router:external=False, shared=False, standard_attr_id=1415, status=ACTIVE, subnets=['10c3ac68-1998-4b91-9b6f-10a0e5a37ad1'], tags=[], tenant_id=34a17eee71de4bac8b71972a4b7b506c, updated_at=2025-12-06T10:17:57Z, vlan_transparent=None, network_id=43883dce-1590-48c4-987c-a21b63b82a1c, port_security_enabled=False, project_id=34a17eee71de4bac8b71972a4b7b506c, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1630, status=DOWN, tags=[], 
tenant_id=34a17eee71de4bac8b71972a4b7b506c, updated_at=2025-12-06T10:17:58Z on network 43883dce-1590-48c4-987c-a21b63b82a1c#033[00m Dec 6 05:18:00 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Dec 6 05:18:00 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1856326074' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Dec 6 05:18:00 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Dec 6 05:18:00 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1856326074' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Dec 6 05:18:00 localhost nova_compute[282193]: 2025-12-06 10:18:00.158 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:00 localhost dnsmasq[320470]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/addn_hosts - 1 addresses Dec 6 05:18:00 localhost podman[320532]: 2025-12-06 10:18:00.273573277 +0000 UTC m=+0.063405843 container kill 57b346441c3abd133ae032fa76a3084c226ca485c6132fd718ec2e896677be9a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2) Dec 6 05:18:00 localhost systemd[1]: tmp-crun.0xyTwN.mount: Deactivated successfully. 
Dec 6 05:18:00 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:18:00.614 263652 INFO neutron.agent.dhcp.agent [None req-dc1e3993-e255-4e3c-ae44-d123ce14b3be - - - - - -] DHCP configuration for ports {'c3106dac-02c2-4639-a185-038f65c0f50b'} is completed#033[00m Dec 6 05:18:00 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:18:00.784 263652 INFO neutron.agent.dhcp.agent [None req-123f9a4d-72c8-4ebf-a4cf-57cc7b111bea - - - - - -] DHCP configuration for ports {'c3106dac-02c2-4639-a185-038f65c0f50b'} is completed#033[00m Dec 6 05:18:00 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Dec 6 05:18:00 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2389509646' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Dec 6 05:18:00 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Dec 6 05:18:00 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2389509646' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Dec 6 05:18:01 localhost nova_compute[282193]: 2025-12-06 10:18:01.148 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999. Dec 6 05:18:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941. 
Dec 6 05:18:01 localhost podman[320555]: 2025-12-06 10:18:01.922846963 +0000 UTC m=+0.074396937 container health_status b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 6 05:18:01 localhost podman[320555]: 2025-12-06 10:18:01.931196736 +0000 UTC m=+0.082746730 container exec_died b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': 
['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Dec 6 05:18:01 localhost systemd[1]: b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941.service: Deactivated successfully. Dec 6 05:18:01 localhost podman[320554]: 2025-12-06 10:18:01.981786021 +0000 UTC m=+0.140666948 container health_status 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
managed_by=edpm_ansible, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2) Dec 6 05:18:01 localhost podman[320554]: 2025-12-06 10:18:01.98804649 +0000 UTC m=+0.146927367 container exec_died 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, 
org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:18:02 localhost systemd[1]: 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.service: Deactivated successfully. Dec 6 05:18:02 localhost dnsmasq[320470]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/addn_hosts - 0 addresses Dec 6 05:18:02 localhost podman[320612]: 2025-12-06 10:18:02.403943806 +0000 UTC m=+0.071772648 container kill 57b346441c3abd133ae032fa76a3084c226ca485c6132fd718ec2e896677be9a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:18:02 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' Dec 6 05:18:02 localhost nova_compute[282193]: 2025-12-06 10:18:02.681 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:02 localhost ovn_controller[154851]: 2025-12-06T10:18:02Z|00263|binding|INFO|Releasing lport a88d84d5-c856-402e-975d-7a0db34028a3 from this chassis (sb_readonly=0) Dec 6 05:18:02 localhost ovn_controller[154851]: 2025-12-06T10:18:02Z|00264|binding|INFO|Setting lport a88d84d5-c856-402e-975d-7a0db34028a3 down in Southbound Dec 6 05:18:02 localhost kernel: device tapa88d84d5-c8 left promiscuous mode Dec 6 05:18:02 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:02.692 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, 
old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '34a17eee71de4bac8b71972a4b7b506c', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005548789.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82d32fcc-fa15-458b-9d3c-0c87258ef71d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=a88d84d5-c856-402e-975d-7a0db34028a3) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:18:02 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:02.694 160509 INFO neutron.agent.ovn.metadata.agent [-] Port a88d84d5-c856-402e-975d-7a0db34028a3 in datapath 43883dce-1590-48c4-987c-a21b63b82a1c unbound from our chassis#033[00m Dec 6 05:18:02 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:02.696 160509 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 43883dce-1590-48c4-987c-a21b63b82a1c or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 6 05:18:02 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:02.697 160674 DEBUG 
oslo.privsep.daemon [-] privsep: reply[cdabcf84-34ab-48a4-ba08-4ba22e66dc41]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:18:02 localhost nova_compute[282193]: 2025-12-06 10:18:02.702 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:02 localhost nova_compute[282193]: 2025-12-06 10:18:02.705 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:02 localhost systemd[1]: tmp-crun.qfS4S5.mount: Deactivated successfully. Dec 6 05:18:03 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:18:03 localhost podman[320651]: 2025-12-06 10:18:03.313923677 +0000 UTC m=+0.061042152 container kill 57b346441c3abd133ae032fa76a3084c226ca485c6132fd718ec2e896677be9a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0) Dec 6 05:18:03 localhost dnsmasq[320470]: exiting on receipt of SIGTERM Dec 6 05:18:03 localhost systemd[1]: libpod-57b346441c3abd133ae032fa76a3084c226ca485c6132fd718ec2e896677be9a.scope: Deactivated successfully. 
Dec 6 05:18:03 localhost podman[320664]: 2025-12-06 10:18:03.391610533 +0000 UTC m=+0.059138535 container died 57b346441c3abd133ae032fa76a3084c226ca485c6132fd718ec2e896677be9a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3) Dec 6 05:18:03 localhost ovn_controller[154851]: 2025-12-06T10:18:03Z|00265|binding|INFO|Releasing lport 4fb81ffd-e198-4628-9bd0-0c0f0c89c33a from this chassis (sb_readonly=0) Dec 6 05:18:03 localhost podman[320664]: 2025-12-06 10:18:03.428919335 +0000 UTC m=+0.096447297 container cleanup 57b346441c3abd133ae032fa76a3084c226ca485c6132fd718ec2e896677be9a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:18:03 localhost systemd[1]: libpod-conmon-57b346441c3abd133ae032fa76a3084c226ca485c6132fd718ec2e896677be9a.scope: Deactivated successfully. 
Dec 6 05:18:03 localhost nova_compute[282193]: 2025-12-06 10:18:03.453 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:03 localhost podman[320665]: 2025-12-06 10:18:03.527097123 +0000 UTC m=+0.190744016 container remove 57b346441c3abd133ae032fa76a3084c226ca485c6132fd718ec2e896677be9a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Dec 6 05:18:03 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:18:03.824 263652 INFO neutron.agent.dhcp.agent [None req-af7bf473-a0e6-4493-960a-d20f0832d732 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:18:03 localhost nova_compute[282193]: 2025-12-06 10:18:03.936 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:03 localhost systemd[1]: var-lib-containers-storage-overlay-271f045096e962713e97fa807cab1b588a56da9a1b0e67641acbd6559ec28410-merged.mount: Deactivated successfully. Dec 6 05:18:03 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-57b346441c3abd133ae032fa76a3084c226ca485c6132fd718ec2e896677be9a-userdata-shm.mount: Deactivated successfully. Dec 6 05:18:03 localhost systemd[1]: run-netns-qdhcp\x2d43883dce\x2d1590\x2d48c4\x2d987c\x2da21b63b82a1c.mount: Deactivated successfully. 
Dec 6 05:18:04 localhost neutron_sriov_agent[256690]: 2025-12-06 10:18:04.287 2 INFO neutron.agent.securitygroups_rpc [None req-a84ff9e7-4dda-4f24-9c52-73179c1374d1 4c6008178bdc445aa99fb1b726f87b45 a2aaeadee6f14b78a73f8886be99b671 - - default default] Security group member updated ['13551db8-e8e0-43b0-89a9-b0d8423e74c9']#033[00m Dec 6 05:18:04 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:18:04.735 263652 INFO neutron.agent.linux.ip_lib [None req-c817246b-9af4-4a9c-bfad-8300a0140231 - - - - - -] Device tap795909b0-e9 cannot be used as it has no MAC address#033[00m Dec 6 05:18:04 localhost nova_compute[282193]: 2025-12-06 10:18:04.763 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:04 localhost kernel: device tap795909b0-e9 entered promiscuous mode Dec 6 05:18:04 localhost ovn_controller[154851]: 2025-12-06T10:18:04Z|00266|binding|INFO|Claiming lport 795909b0-e9c1-4d84-850f-e878bfa3090c for this chassis. Dec 6 05:18:04 localhost ovn_controller[154851]: 2025-12-06T10:18:04Z|00267|binding|INFO|795909b0-e9c1-4d84-850f-e878bfa3090c: Claiming unknown Dec 6 05:18:04 localhost nova_compute[282193]: 2025-12-06 10:18:04.773 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:04 localhost NetworkManager[5973]: [1765016284.7746] manager: (tap795909b0-e9): new Generic device (/org/freedesktop/NetworkManager/Devices/45) Dec 6 05:18:04 localhost systemd-udevd[320702]: Network interface NamePolicy= disabled on kernel command line. 
Dec 6 05:18:04 localhost nova_compute[282193]: 2025-12-06 10:18:04.790 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:04 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:04.807 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fea3:f6bb/64', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-aacf8ef2-726e-4b97-b5f2-032a84aa6e97', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aacf8ef2-726e-4b97-b5f2-032a84aa6e97', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a2aaeadee6f14b78a73f8886be99b671', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=51d66027-c066-482f-93f1-6217163f6b22, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=795909b0-e9c1-4d84-850f-e878bfa3090c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:18:04 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:04.808 160509 INFO neutron.agent.ovn.metadata.agent [-] Port 795909b0-e9c1-4d84-850f-e878bfa3090c in datapath aacf8ef2-726e-4b97-b5f2-032a84aa6e97 bound to our chassis#033[00m Dec 6 05:18:04 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:04.810 160509 DEBUG 
neutron.agent.ovn.metadata.agent [-] Port ab0a787e-d042-495b-a77f-e65096e28c65 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Dec 6 05:18:04 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:04.810 160509 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network aacf8ef2-726e-4b97-b5f2-032a84aa6e97, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 6 05:18:04 localhost journal[230404]: ethtool ioctl error on tap795909b0-e9: No such device Dec 6 05:18:04 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:04.811 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[5e956e4e-65d0-4f77-b058-f53cbd2b86d2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:18:04 localhost ovn_controller[154851]: 2025-12-06T10:18:04Z|00268|binding|INFO|Setting lport 795909b0-e9c1-4d84-850f-e878bfa3090c ovn-installed in OVS Dec 6 05:18:04 localhost ovn_controller[154851]: 2025-12-06T10:18:04Z|00269|binding|INFO|Setting lport 795909b0-e9c1-4d84-850f-e878bfa3090c up in Southbound Dec 6 05:18:04 localhost nova_compute[282193]: 2025-12-06 10:18:04.817 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:04 localhost journal[230404]: ethtool ioctl error on tap795909b0-e9: No such device Dec 6 05:18:04 localhost journal[230404]: ethtool ioctl error on tap795909b0-e9: No such device Dec 6 05:18:04 localhost journal[230404]: ethtool ioctl error on tap795909b0-e9: No such device Dec 6 05:18:04 localhost journal[230404]: ethtool ioctl error on tap795909b0-e9: No such device Dec 6 05:18:04 localhost journal[230404]: ethtool ioctl error on tap795909b0-e9: No such device Dec 6 05:18:04 localhost 
journal[230404]: ethtool ioctl error on tap795909b0-e9: No such device Dec 6 05:18:04 localhost journal[230404]: ethtool ioctl error on tap795909b0-e9: No such device Dec 6 05:18:04 localhost nova_compute[282193]: 2025-12-06 10:18:04.875 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:04 localhost nova_compute[282193]: 2025-12-06 10:18:04.901 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:05 localhost systemd[1]: tmp-crun.lziCyZ.mount: Deactivated successfully. Dec 6 05:18:05 localhost dnsmasq[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 3 addresses Dec 6 05:18:05 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:18:05 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:18:05 localhost podman[320749]: 2025-12-06 10:18:05.071785835 +0000 UTC m=+0.072792709 container kill dd70518caed84fde347015325d6b35481123abc4a39df99f8bc9422c2468322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3) Dec 6 05:18:05 localhost podman[320812]: Dec 6 05:18:05 localhost podman[320812]: 2025-12-06 10:18:05.912986991 +0000 UTC m=+0.093823667 container create ee0cfafc2cf0bb1386540f40a224a2f3bfd7d908989bd2f85b43129d15f457fd 
(image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-aacf8ef2-726e-4b97-b5f2-032a84aa6e97, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS) Dec 6 05:18:05 localhost systemd[1]: Started libpod-conmon-ee0cfafc2cf0bb1386540f40a224a2f3bfd7d908989bd2f85b43129d15f457fd.scope. Dec 6 05:18:05 localhost podman[320812]: 2025-12-06 10:18:05.869568273 +0000 UTC m=+0.050404949 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 6 05:18:05 localhost systemd[1]: Started libcrun container. Dec 6 05:18:05 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d313493fde52dfd539ad4b8ab9379c14b9f143306ec1d068c13748552a2ec796/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 6 05:18:05 localhost podman[320812]: 2025-12-06 10:18:05.996480173 +0000 UTC m=+0.177316849 container init ee0cfafc2cf0bb1386540f40a224a2f3bfd7d908989bd2f85b43129d15f457fd (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-aacf8ef2-726e-4b97-b5f2-032a84aa6e97, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true) Dec 6 05:18:06 localhost podman[320812]: 2025-12-06 10:18:06.006210438 +0000 UTC m=+0.187047114 container start ee0cfafc2cf0bb1386540f40a224a2f3bfd7d908989bd2f85b43129d15f457fd 
(image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-aacf8ef2-726e-4b97-b5f2-032a84aa6e97, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0) Dec 6 05:18:06 localhost dnsmasq[320831]: started, version 2.85 cachesize 150 Dec 6 05:18:06 localhost dnsmasq[320831]: DNS service limited to local subnets Dec 6 05:18:06 localhost dnsmasq[320831]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 6 05:18:06 localhost dnsmasq[320831]: warning: no upstream servers configured Dec 6 05:18:06 localhost dnsmasq[320831]: read /var/lib/neutron/dhcp/aacf8ef2-726e-4b97-b5f2-032a84aa6e97/addn_hosts - 0 addresses Dec 6 05:18:06 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:18:06.201 263652 INFO neutron.agent.dhcp.agent [None req-978a36a2-91c1-42bb-86d7-97571eab2243 - - - - - -] DHCP configuration for ports {'ddf6de7e-486c-44c8-8a7f-842446c78589'} is completed#033[00m Dec 6 05:18:06 localhost dnsmasq[320831]: exiting on receipt of SIGTERM Dec 6 05:18:06 localhost podman[320847]: 2025-12-06 10:18:06.331185065 +0000 UTC m=+0.066691274 container kill ee0cfafc2cf0bb1386540f40a224a2f3bfd7d908989bd2f85b43129d15f457fd (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-aacf8ef2-726e-4b97-b5f2-032a84aa6e97, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes 
Operator team, org.label-schema.build-date=20251125) Dec 6 05:18:06 localhost systemd[1]: libpod-ee0cfafc2cf0bb1386540f40a224a2f3bfd7d908989bd2f85b43129d15f457fd.scope: Deactivated successfully. Dec 6 05:18:06 localhost podman[320860]: 2025-12-06 10:18:06.414882944 +0000 UTC m=+0.061848417 container died ee0cfafc2cf0bb1386540f40a224a2f3bfd7d908989bd2f85b43129d15f457fd (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-aacf8ef2-726e-4b97-b5f2-032a84aa6e97, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:18:06 localhost podman[320860]: 2025-12-06 10:18:06.452608188 +0000 UTC m=+0.099573621 container cleanup ee0cfafc2cf0bb1386540f40a224a2f3bfd7d908989bd2f85b43129d15f457fd (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-aacf8ef2-726e-4b97-b5f2-032a84aa6e97, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:18:06 localhost systemd[1]: libpod-conmon-ee0cfafc2cf0bb1386540f40a224a2f3bfd7d908989bd2f85b43129d15f457fd.scope: Deactivated successfully. 
Dec 6 05:18:06 localhost podman[320861]: 2025-12-06 10:18:06.49190774 +0000 UTC m=+0.136553293 container remove ee0cfafc2cf0bb1386540f40a224a2f3bfd7d908989bd2f85b43129d15f457fd (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-aacf8ef2-726e-4b97-b5f2-032a84aa6e97, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS) Dec 6 05:18:06 localhost nova_compute[282193]: 2025-12-06 10:18:06.506 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:06 localhost ovn_controller[154851]: 2025-12-06T10:18:06Z|00270|binding|INFO|Releasing lport 795909b0-e9c1-4d84-850f-e878bfa3090c from this chassis (sb_readonly=0) Dec 6 05:18:06 localhost kernel: device tap795909b0-e9 left promiscuous mode Dec 6 05:18:06 localhost ovn_controller[154851]: 2025-12-06T10:18:06Z|00271|binding|INFO|Setting lport 795909b0-e9c1-4d84-850f-e878bfa3090c down in Southbound Dec 6 05:18:06 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:06.518 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fea3:f6bb/64', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-aacf8ef2-726e-4b97-b5f2-032a84aa6e97', 'neutron:device_owner': 'network:dhcp', 
'neutron:mtu': '', 'neutron:network_name': 'neutron-aacf8ef2-726e-4b97-b5f2-032a84aa6e97', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a2aaeadee6f14b78a73f8886be99b671', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=51d66027-c066-482f-93f1-6217163f6b22, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=795909b0-e9c1-4d84-850f-e878bfa3090c) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:18:06 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:06.520 160509 INFO neutron.agent.ovn.metadata.agent [-] Port 795909b0-e9c1-4d84-850f-e878bfa3090c in datapath aacf8ef2-726e-4b97-b5f2-032a84aa6e97 unbound from our chassis#033[00m Dec 6 05:18:06 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:06.523 160509 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network aacf8ef2-726e-4b97-b5f2-032a84aa6e97, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 6 05:18:06 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:06.524 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[f0b0e7f2-858a-4fbf-93f8-9a498978e07c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:18:06 localhost nova_compute[282193]: 2025-12-06 10:18:06.527 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:06 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:18:06.848 263652 INFO neutron.agent.dhcp.agent [None req-85939946-bc92-414a-acbc-b30bab3c85ba - - - - - -] 
Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:18:07 localhost systemd[1]: var-lib-containers-storage-overlay-d313493fde52dfd539ad4b8ab9379c14b9f143306ec1d068c13748552a2ec796-merged.mount: Deactivated successfully. Dec 6 05:18:07 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ee0cfafc2cf0bb1386540f40a224a2f3bfd7d908989bd2f85b43129d15f457fd-userdata-shm.mount: Deactivated successfully. Dec 6 05:18:07 localhost systemd[1]: run-netns-qdhcp\x2daacf8ef2\x2d726e\x2d4b97\x2db5f2\x2d032a84aa6e97.mount: Deactivated successfully. Dec 6 05:18:07 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e140 e140: 6 total, 6 up, 6 in Dec 6 05:18:07 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:18:07.608 263652 INFO neutron.agent.linux.ip_lib [None req-1d7f5d7d-a7d8-4cb5-9f7d-4b35e873013c - - - - - -] Device tape6df781f-3c cannot be used as it has no MAC address#033[00m Dec 6 05:18:07 localhost nova_compute[282193]: 2025-12-06 10:18:07.672 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:07 localhost kernel: device tape6df781f-3c entered promiscuous mode Dec 6 05:18:07 localhost ovn_controller[154851]: 2025-12-06T10:18:07Z|00272|binding|INFO|Claiming lport e6df781f-3c99-4041-b79d-84bfb7ba881e for this chassis. Dec 6 05:18:07 localhost ovn_controller[154851]: 2025-12-06T10:18:07Z|00273|binding|INFO|e6df781f-3c99-4041-b79d-84bfb7ba881e: Claiming unknown Dec 6 05:18:07 localhost NetworkManager[5973]: [1765016287.6804] manager: (tape6df781f-3c): new Generic device (/org/freedesktop/NetworkManager/Devices/46) Dec 6 05:18:07 localhost systemd-udevd[320705]: Network interface NamePolicy= disabled on kernel command line. 
Dec 6 05:18:07 localhost nova_compute[282193]: 2025-12-06 10:18:07.682 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:07 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:07.700 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe84:1b9/64', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '34a17eee71de4bac8b71972a4b7b506c', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82d32fcc-fa15-458b-9d3c-0c87258ef71d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=e6df781f-3c99-4041-b79d-84bfb7ba881e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:18:07 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:07.703 160509 INFO neutron.agent.ovn.metadata.agent [-] Port e6df781f-3c99-4041-b79d-84bfb7ba881e in datapath 43883dce-1590-48c4-987c-a21b63b82a1c bound to our chassis#033[00m Dec 6 05:18:07 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:07.706 160509 DEBUG 
neutron.agent.ovn.metadata.agent [-] Port 1059294c-cfcd-41d2-879b-e9dc313613f9 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Dec 6 05:18:07 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:07.706 160509 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 43883dce-1590-48c4-987c-a21b63b82a1c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 6 05:18:07 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:07.707 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[5e2cc02a-4c0f-4f60-81e0-594a9820dc3b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:18:07 localhost nova_compute[282193]: 2025-12-06 10:18:07.713 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:07 localhost journal[230404]: ethtool ioctl error on tape6df781f-3c: No such device Dec 6 05:18:07 localhost ovn_controller[154851]: 2025-12-06T10:18:07Z|00274|binding|INFO|Setting lport e6df781f-3c99-4041-b79d-84bfb7ba881e ovn-installed in OVS Dec 6 05:18:07 localhost ovn_controller[154851]: 2025-12-06T10:18:07Z|00275|binding|INFO|Setting lport e6df781f-3c99-4041-b79d-84bfb7ba881e up in Southbound Dec 6 05:18:07 localhost nova_compute[282193]: 2025-12-06 10:18:07.720 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:07 localhost nova_compute[282193]: 2025-12-06 10:18:07.721 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:07 localhost journal[230404]: ethtool ioctl error on tape6df781f-3c: No such device Dec 
6 05:18:07 localhost journal[230404]: ethtool ioctl error on tape6df781f-3c: No such device Dec 6 05:18:07 localhost journal[230404]: ethtool ioctl error on tape6df781f-3c: No such device Dec 6 05:18:07 localhost journal[230404]: ethtool ioctl error on tape6df781f-3c: No such device Dec 6 05:18:07 localhost journal[230404]: ethtool ioctl error on tape6df781f-3c: No such device Dec 6 05:18:07 localhost journal[230404]: ethtool ioctl error on tape6df781f-3c: No such device Dec 6 05:18:07 localhost journal[230404]: ethtool ioctl error on tape6df781f-3c: No such device Dec 6 05:18:07 localhost nova_compute[282193]: 2025-12-06 10:18:07.770 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:07 localhost nova_compute[282193]: 2025-12-06 10:18:07.806 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:07 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:18:07.977 263652 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:18:07Z, description=, device_id=1217a843-f657-48ae-9649-70dee34aefa0, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=b37b54ab-181e-4c3f-84f8-dee38b4d66be, ip_allocation=immediate, mac_address=fa:16:3e:d1:0b:4f, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T08:43:55Z, description=, dns_domain=, id=8e238f59-5792-4ff4-95af-f993c8e9e14f, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=3d603431c0bb4967bafc7a0aa6108bfe, 
provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['be7ff830-a3e2-49cd-a05f-63bc528d6d1c'], tags=[], tenant_id=3d603431c0bb4967bafc7a0aa6108bfe, updated_at=2025-12-06T08:44:01Z, vlan_transparent=None, network_id=8e238f59-5792-4ff4-95af-f993c8e9e14f, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1680, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-06T10:18:07Z on network 8e238f59-5792-4ff4-95af-f993c8e9e14f#033[00m Dec 6 05:18:08 localhost podman[320952]: 2025-12-06 10:18:08.208707804 +0000 UTC m=+0.065605941 container kill dd70518caed84fde347015325d6b35481123abc4a39df99f8bc9422c2468322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true) Dec 6 05:18:08 localhost dnsmasq[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 4 addresses Dec 6 05:18:08 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:18:08 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:18:08 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:18:08 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:18:08.600 
263652 INFO neutron.agent.dhcp.agent [None req-1f545d09-ee9f-4b4e-9006-a512ebe7de32 - - - - - -] DHCP configuration for ports {'b37b54ab-181e-4c3f-84f8-dee38b4d66be'} is completed#033[00m Dec 6 05:18:08 localhost neutron_sriov_agent[256690]: 2025-12-06 10:18:08.686 2 INFO neutron.agent.securitygroups_rpc [None req-2cd445e7-be6d-4272-b78a-eedc8c1ca774 4c6008178bdc445aa99fb1b726f87b45 a2aaeadee6f14b78a73f8886be99b671 - - default default] Security group member updated ['13551db8-e8e0-43b0-89a9-b0d8423e74c9']#033[00m Dec 6 05:18:08 localhost podman[321007]: Dec 6 05:18:08 localhost podman[321007]: 2025-12-06 10:18:08.807365412 +0000 UTC m=+0.099043346 container create 07ecf95c17dd1c0fe167c3b563f02c09bc22bd14b2d45508afdb9207cd0a77c0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:18:08 localhost systemd[1]: Started libpod-conmon-07ecf95c17dd1c0fe167c3b563f02c09bc22bd14b2d45508afdb9207cd0a77c0.scope. Dec 6 05:18:08 localhost podman[321007]: 2025-12-06 10:18:08.759244793 +0000 UTC m=+0.050922757 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 6 05:18:08 localhost systemd[1]: Started libcrun container. 
Dec 6 05:18:08 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d10b16333c71c329bbfb79d2b80e4350b7ea74cfc2d42251c0bd5d1c1cd279f2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 6 05:18:08 localhost podman[321007]: 2025-12-06 10:18:08.885944896 +0000 UTC m=+0.177622820 container init 07ecf95c17dd1c0fe167c3b563f02c09bc22bd14b2d45508afdb9207cd0a77c0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:18:08 localhost podman[321007]: 2025-12-06 10:18:08.895474024 +0000 UTC m=+0.187151948 container start 07ecf95c17dd1c0fe167c3b563f02c09bc22bd14b2d45508afdb9207cd0a77c0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:18:08 localhost dnsmasq[321026]: started, version 2.85 cachesize 150 Dec 6 05:18:08 localhost dnsmasq[321026]: DNS service limited to local subnets Dec 6 05:18:08 localhost dnsmasq[321026]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 6 05:18:08 localhost dnsmasq[321026]: warning: no upstream servers configured Dec 
6 05:18:08 localhost dnsmasq[321026]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/addn_hosts - 0 addresses Dec 6 05:18:08 localhost nova_compute[282193]: 2025-12-06 10:18:08.976 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:09 localhost nova_compute[282193]: 2025-12-06 10:18:09.012 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:09 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:18:09.075 263652 INFO neutron.agent.dhcp.agent [None req-0ae67413-d3cc-4ad8-bc4e-3a0105829b79 - - - - - -] DHCP configuration for ports {'687d7abb-e6aa-4047-aa26-552c962fcc91'} is completed#033[00m Dec 6 05:18:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e. Dec 6 05:18:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094. 
Dec 6 05:18:09 localhost podman[321032]: 2025-12-06 10:18:09.192340329 +0000 UTC m=+0.091150955 container health_status bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, tcib_managed=true) Dec 6 05:18:09 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:09.225 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: 
PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3f:fa:29 10.100.0.2 2001:db8::f816:3eff:fe3f:fa29'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe3f:fa29/64', 'neutron:device_id': 'ovnmeta-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '34a17eee71de4bac8b71972a4b7b506c', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82d32fcc-fa15-458b-9d3c-0c87258ef71d, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=687d7abb-e6aa-4047-aa26-552c962fcc91) old=Port_Binding(mac=['fa:16:3e:3f:fa:29 2001:db8::f816:3eff:fe3f:fa29'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe3f:fa29/64', 'neutron:device_id': 'ovnmeta-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '34a17eee71de4bac8b71972a4b7b506c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:18:09 localhost ovn_metadata_agent[160504]: 
2025-12-06 10:18:09.226 160509 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 687d7abb-e6aa-4047-aa26-552c962fcc91 in datapath 43883dce-1590-48c4-987c-a21b63b82a1c updated#033[00m Dec 6 05:18:09 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:09.228 160509 DEBUG neutron.agent.ovn.metadata.agent [-] Port 1059294c-cfcd-41d2-879b-e9dc313613f9 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Dec 6 05:18:09 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:09.228 160509 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 43883dce-1590-48c4-987c-a21b63b82a1c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 6 05:18:09 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:09.229 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[e555dd53-c23e-4e3f-a9ab-a32a218e03a2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:18:09 localhost podman[321030]: 2025-12-06 10:18:09.250327208 +0000 UTC m=+0.151790985 container health_status 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, distribution-scope=public, io.openshift.tags=minimal rhel9, name=ubi9-minimal, container_name=openstack_network_exporter, io.openshift.expose-services=, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, 
url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, release=1755695350, vendor=Red Hat, Inc., io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) 
Dec 6 05:18:09 localhost podman[321032]: 2025-12-06 10:18:09.313648448 +0000 UTC m=+0.212459064 container exec_died bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, container_name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Dec 6 05:18:09 localhost systemd[1]: bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094.service: Deactivated successfully. 
Dec 6 05:18:09 localhost dnsmasq[321026]: exiting on receipt of SIGTERM Dec 6 05:18:09 localhost podman[321083]: 2025-12-06 10:18:09.365023337 +0000 UTC m=+0.063651582 container kill 07ecf95c17dd1c0fe167c3b563f02c09bc22bd14b2d45508afdb9207cd0a77c0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2) Dec 6 05:18:09 localhost systemd[1]: libpod-07ecf95c17dd1c0fe167c3b563f02c09bc22bd14b2d45508afdb9207cd0a77c0.scope: Deactivated successfully. Dec 6 05:18:09 localhost podman[321030]: 2025-12-06 10:18:09.366858352 +0000 UTC m=+0.268322169 container exec_died 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, config_id=edpm, version=9.6, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, release=1755695350, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.) Dec 6 05:18:09 localhost systemd[1]: 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e.service: Deactivated successfully. 
Dec 6 05:18:09 localhost podman[321095]: 2025-12-06 10:18:09.437048951 +0000 UTC m=+0.060277469 container died 07ecf95c17dd1c0fe167c3b563f02c09bc22bd14b2d45508afdb9207cd0a77c0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Dec 6 05:18:09 localhost podman[321095]: 2025-12-06 10:18:09.521556085 +0000 UTC m=+0.144784653 container cleanup 07ecf95c17dd1c0fe167c3b563f02c09bc22bd14b2d45508afdb9207cd0a77c0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS) Dec 6 05:18:09 localhost systemd[1]: libpod-conmon-07ecf95c17dd1c0fe167c3b563f02c09bc22bd14b2d45508afdb9207cd0a77c0.scope: Deactivated successfully. 
Dec 6 05:18:09 localhost podman[321102]: 2025-12-06 10:18:09.55174266 +0000 UTC m=+0.162034175 container remove 07ecf95c17dd1c0fe167c3b563f02c09bc22bd14b2d45508afdb9207cd0a77c0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:18:10 localhost neutron_sriov_agent[256690]: 2025-12-06 10:18:10.061 2 INFO neutron.agent.securitygroups_rpc [None req-36813505-8d2e-42b4-bcdd-400a4500589a a6a9256bca1441629c18003b71ba1c6f 34a17eee71de4bac8b71972a4b7b506c - - default default] Security group member updated ['d618a097-5989-47aa-9263-1c8a114ad269']#033[00m Dec 6 05:18:10 localhost systemd[1]: var-lib-containers-storage-overlay-d10b16333c71c329bbfb79d2b80e4350b7ea74cfc2d42251c0bd5d1c1cd279f2-merged.mount: Deactivated successfully. Dec 6 05:18:10 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-07ecf95c17dd1c0fe167c3b563f02c09bc22bd14b2d45508afdb9207cd0a77c0-userdata-shm.mount: Deactivated successfully. 
Dec 6 05:18:10 localhost neutron_sriov_agent[256690]: 2025-12-06 10:18:10.939 2 INFO neutron.agent.securitygroups_rpc [None req-809d6155-5d31-4aee-97b1-907b0d1ee5ee a6a9256bca1441629c18003b71ba1c6f 34a17eee71de4bac8b71972a4b7b506c - - default default] Security group member updated ['d618a097-5989-47aa-9263-1c8a114ad269']#033[00m Dec 6 05:18:11 localhost dnsmasq[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 3 addresses Dec 6 05:18:11 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:18:11 localhost podman[321142]: 2025-12-06 10:18:11.018255812 +0000 UTC m=+0.072260343 container kill dd70518caed84fde347015325d6b35481123abc4a39df99f8bc9422c2468322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2) Dec 6 05:18:11 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:18:11 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e141 e141: 6 total, 6 up, 6 in Dec 6 05:18:11 localhost nova_compute[282193]: 2025-12-06 10:18:11.928 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:11 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:11.929 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, ssl=[], 
options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:6c:02', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:a8:2f:0c:cb:a1'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:18:11 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:11.931 160509 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Dec 6 05:18:12 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e142 e142: 6 total, 6 up, 6 in Dec 6 05:18:12 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Dec 6 05:18:12 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2958126727' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Dec 6 05:18:12 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Dec 6 05:18:12 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/2958126727' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Dec 6 05:18:12 localhost podman[321214]: Dec 6 05:18:12 localhost podman[321214]: 2025-12-06 10:18:12.282792807 +0000 UTC m=+0.104837571 container create 3c95f8107be33efc6b62126a494e6c2cd6457b38c83e6f5347b359ffdaf46e6a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125) Dec 6 05:18:12 localhost podman[321214]: 2025-12-06 10:18:12.231901093 +0000 UTC m=+0.053945857 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 6 05:18:12 localhost systemd[1]: Started libpod-conmon-3c95f8107be33efc6b62126a494e6c2cd6457b38c83e6f5347b359ffdaf46e6a.scope. Dec 6 05:18:12 localhost systemd[1]: Started libcrun container. 
Dec 6 05:18:12 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96c1ac57ed0addf4968a8a3f8a0b5b5b6a27c910ab64d54d14bea32ff62ad6b9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 6 05:18:12 localhost podman[321214]: 2025-12-06 10:18:12.363949508 +0000 UTC m=+0.185994242 container init 3c95f8107be33efc6b62126a494e6c2cd6457b38c83e6f5347b359ffdaf46e6a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0) Dec 6 05:18:12 localhost podman[321214]: 2025-12-06 10:18:12.372816197 +0000 UTC m=+0.194860931 container start 3c95f8107be33efc6b62126a494e6c2cd6457b38c83e6f5347b359ffdaf46e6a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true) Dec 6 05:18:12 localhost dnsmasq[321232]: started, version 2.85 cachesize 150 Dec 6 05:18:12 localhost dnsmasq[321232]: DNS service limited to local subnets Dec 6 05:18:12 localhost dnsmasq[321232]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 6 05:18:12 localhost dnsmasq[321232]: warning: no upstream servers configured Dec 
6 05:18:12 localhost dnsmasq-dhcp[321232]: DHCP, static leases only on 10.100.0.0, lease time 1d Dec 6 05:18:12 localhost dnsmasq[321232]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/addn_hosts - 0 addresses Dec 6 05:18:12 localhost dnsmasq-dhcp[321232]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/host Dec 6 05:18:12 localhost dnsmasq-dhcp[321232]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/opts Dec 6 05:18:12 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:18:12.680 263652 INFO neutron.agent.dhcp.agent [None req-c568e7b2-f4d6-414c-9171-b170262f466d - - - - - -] DHCP configuration for ports {'687d7abb-e6aa-4047-aa26-552c962fcc91', 'e6df781f-3c99-4041-b79d-84bfb7ba881e'} is completed#033[00m Dec 6 05:18:12 localhost nova_compute[282193]: 2025-12-06 10:18:12.724 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:12 localhost dnsmasq[321232]: exiting on receipt of SIGTERM Dec 6 05:18:12 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:18:12.747 263652 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:18:12Z, description=, device_id=0e94edaf-39e7-4c44-b823-1518a09d8708, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=5fe0748d-ad57-4cf2-ab87-16200f623579, ip_allocation=immediate, mac_address=fa:16:3e:e9:d9:ee, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T08:43:55Z, description=, dns_domain=, id=8e238f59-5792-4ff4-95af-f993c8e9e14f, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, 
project_id=3d603431c0bb4967bafc7a0aa6108bfe, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['be7ff830-a3e2-49cd-a05f-63bc528d6d1c'], tags=[], tenant_id=3d603431c0bb4967bafc7a0aa6108bfe, updated_at=2025-12-06T08:44:01Z, vlan_transparent=None, network_id=8e238f59-5792-4ff4-95af-f993c8e9e14f, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1702, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-06T10:18:12Z on network 8e238f59-5792-4ff4-95af-f993c8e9e14f#033[00m Dec 6 05:18:12 localhost podman[321249]: 2025-12-06 10:18:12.748116901 +0000 UTC m=+0.092886509 container kill 3c95f8107be33efc6b62126a494e6c2cd6457b38c83e6f5347b359ffdaf46e6a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Dec 6 05:18:12 localhost systemd[1]: libpod-3c95f8107be33efc6b62126a494e6c2cd6457b38c83e6f5347b359ffdaf46e6a.scope: Deactivated successfully. 
Dec 6 05:18:12 localhost podman[321263]: 2025-12-06 10:18:12.828933082 +0000 UTC m=+0.064726054 container died 3c95f8107be33efc6b62126a494e6c2cd6457b38c83e6f5347b359ffdaf46e6a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:18:12 localhost podman[321263]: 2025-12-06 10:18:12.860168769 +0000 UTC m=+0.095961701 container cleanup 3c95f8107be33efc6b62126a494e6c2cd6457b38c83e6f5347b359ffdaf46e6a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:18:12 localhost systemd[1]: libpod-conmon-3c95f8107be33efc6b62126a494e6c2cd6457b38c83e6f5347b359ffdaf46e6a.scope: Deactivated successfully. 
Dec 6 05:18:12 localhost podman[321265]: 2025-12-06 10:18:12.913045073 +0000 UTC m=+0.141019338 container remove 3c95f8107be33efc6b62126a494e6c2cd6457b38c83e6f5347b359ffdaf46e6a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125) Dec 6 05:18:12 localhost nova_compute[282193]: 2025-12-06 10:18:12.932 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:12 localhost kernel: device tape6df781f-3c left promiscuous mode Dec 6 05:18:12 localhost ovn_controller[154851]: 2025-12-06T10:18:12Z|00276|binding|INFO|Releasing lport e6df781f-3c99-4041-b79d-84bfb7ba881e from this chassis (sb_readonly=0) Dec 6 05:18:12 localhost ovn_controller[154851]: 2025-12-06T10:18:12Z|00277|binding|INFO|Setting lport e6df781f-3c99-4041-b79d-84bfb7ba881e down in Southbound Dec 6 05:18:12 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:12.934 160509 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b142a5ef-fbed-4e92-aa78-e3ad080c6370, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 6 05:18:12 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:12.947 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to 
row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28 2001:db8::f816:3eff:fe84:1b9/64', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '34a17eee71de4bac8b71972a4b7b506c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005548789.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82d32fcc-fa15-458b-9d3c-0c87258ef71d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=e6df781f-3c99-4041-b79d-84bfb7ba881e) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:18:12 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:12.949 160509 INFO neutron.agent.ovn.metadata.agent [-] Port e6df781f-3c99-4041-b79d-84bfb7ba881e in datapath 43883dce-1590-48c4-987c-a21b63b82a1c unbound from our chassis#033[00m Dec 6 05:18:12 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:12.955 160509 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 43883dce-1590-48c4-987c-a21b63b82a1c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 6 05:18:12 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:12.957 160674 DEBUG oslo.privsep.daemon [-] privsep: 
reply[1090082a-9344-4079-901b-66a5a4bbd573]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:18:12 localhost nova_compute[282193]: 2025-12-06 10:18:12.958 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:13 localhost dnsmasq[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 4 addresses Dec 6 05:18:13 localhost podman[321311]: 2025-12-06 10:18:13.051368509 +0000 UTC m=+0.063018903 container kill dd70518caed84fde347015325d6b35481123abc4a39df99f8bc9422c2468322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Dec 6 05:18:13 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:18:13 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:18:13 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:18:13.171 263652 INFO neutron.agent.dhcp.agent [None req-b3df4455-9686-4af5-8abc-1b26a04e7e36 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:18:13 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:18:13.172 263652 INFO neutron.agent.dhcp.agent [None req-b3df4455-9686-4af5-8abc-1b26a04e7e36 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:18:13 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 
full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:18:13 localhost systemd[1]: var-lib-containers-storage-overlay-96c1ac57ed0addf4968a8a3f8a0b5b5b6a27c910ab64d54d14bea32ff62ad6b9-merged.mount: Deactivated successfully. Dec 6 05:18:13 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3c95f8107be33efc6b62126a494e6c2cd6457b38c83e6f5347b359ffdaf46e6a-userdata-shm.mount: Deactivated successfully. Dec 6 05:18:13 localhost systemd[1]: run-netns-qdhcp\x2d43883dce\x2d1590\x2d48c4\x2d987c\x2da21b63b82a1c.mount: Deactivated successfully. Dec 6 05:18:13 localhost sshd[321333]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:18:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6. Dec 6 05:18:13 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:18:13.324 263652 INFO neutron.agent.dhcp.agent [None req-1e452ab4-e004-4251-8405-42bff3df5140 - - - - - -] DHCP configuration for ports {'5fe0748d-ad57-4cf2-ab87-16200f623579'} is completed#033[00m Dec 6 05:18:13 localhost podman[321335]: 2025-12-06 10:18:13.40087144 +0000 UTC m=+0.082592086 container health_status 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:18:13 localhost podman[321335]: 2025-12-06 10:18:13.415091472 +0000 UTC m=+0.096812168 container exec_died 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Dec 6 05:18:13 localhost systemd[1]: 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6.service: Deactivated successfully. Dec 6 05:18:13 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:13.756 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3f:fa:29 2001:db8::f816:3eff:fe3f:fa29'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe3f:fa29/64', 'neutron:device_id': 'ovnmeta-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '34a17eee71de4bac8b71972a4b7b506c', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 
'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82d32fcc-fa15-458b-9d3c-0c87258ef71d, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=687d7abb-e6aa-4047-aa26-552c962fcc91) old=Port_Binding(mac=['fa:16:3e:3f:fa:29 10.100.0.2 2001:db8::f816:3eff:fe3f:fa29'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe3f:fa29/64', 'neutron:device_id': 'ovnmeta-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '34a17eee71de4bac8b71972a4b7b506c', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:18:13 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:13.757 160509 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 687d7abb-e6aa-4047-aa26-552c962fcc91 in datapath 43883dce-1590-48c4-987c-a21b63b82a1c updated#033[00m Dec 6 05:18:13 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:13.760 160509 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 43883dce-1590-48c4-987c-a21b63b82a1c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 6 05:18:13 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:13.761 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[04a65137-6c24-40e2-969b-71eab511667b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:18:13 localhost nova_compute[282193]: 2025-12-06 10:18:13.980 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 
__log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:14 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e143 e143: 6 total, 6 up, 6 in Dec 6 05:18:14 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:18:14.624 263652 INFO neutron.agent.linux.ip_lib [None req-b0be1b0b-5514-4abf-b284-a4ef25d427c9 - - - - - -] Device tap7bbbac24-f9 cannot be used as it has no MAC address#033[00m Dec 6 05:18:14 localhost nova_compute[282193]: 2025-12-06 10:18:14.643 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:14 localhost kernel: device tap7bbbac24-f9 entered promiscuous mode Dec 6 05:18:14 localhost NetworkManager[5973]: [1765016294.6530] manager: (tap7bbbac24-f9): new Generic device (/org/freedesktop/NetworkManager/Devices/47) Dec 6 05:18:14 localhost ovn_controller[154851]: 2025-12-06T10:18:14Z|00278|binding|INFO|Claiming lport 7bbbac24-f9f6-48a3-8929-680a1ce4ebea for this chassis. Dec 6 05:18:14 localhost ovn_controller[154851]: 2025-12-06T10:18:14Z|00279|binding|INFO|7bbbac24-f9f6-48a3-8929-680a1ce4ebea: Claiming unknown Dec 6 05:18:14 localhost nova_compute[282193]: 2025-12-06 10:18:14.655 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:14 localhost systemd-udevd[321365]: Network interface NamePolicy= disabled on kernel command line. 
Dec 6 05:18:14 localhost ovn_controller[154851]: 2025-12-06T10:18:14Z|00280|binding|INFO|Setting lport 7bbbac24-f9f6-48a3-8929-680a1ce4ebea up in Southbound Dec 6 05:18:14 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:14.662 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fedb:c901/64', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '34a17eee71de4bac8b71972a4b7b506c', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82d32fcc-fa15-458b-9d3c-0c87258ef71d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=7bbbac24-f9f6-48a3-8929-680a1ce4ebea) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:18:14 localhost ovn_controller[154851]: 2025-12-06T10:18:14Z|00281|binding|INFO|Setting lport 7bbbac24-f9f6-48a3-8929-680a1ce4ebea ovn-installed in OVS Dec 6 05:18:14 localhost nova_compute[282193]: 2025-12-06 10:18:14.664 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:14 
localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:14.666 160509 INFO neutron.agent.ovn.metadata.agent [-] Port 7bbbac24-f9f6-48a3-8929-680a1ce4ebea in datapath 43883dce-1590-48c4-987c-a21b63b82a1c bound to our chassis#033[00m Dec 6 05:18:14 localhost nova_compute[282193]: 2025-12-06 10:18:14.668 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:14 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:14.669 160509 DEBUG neutron.agent.ovn.metadata.agent [-] Port 07d15fbe-03f7-4926-82da-8e475fa08c52 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Dec 6 05:18:14 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:14.670 160509 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 43883dce-1590-48c4-987c-a21b63b82a1c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 6 05:18:14 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:14.671 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[a6741d9f-3a55-42f4-9f73-958d19b5a1c5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:18:14 localhost journal[230404]: ethtool ioctl error on tap7bbbac24-f9: No such device Dec 6 05:18:14 localhost journal[230404]: ethtool ioctl error on tap7bbbac24-f9: No such device Dec 6 05:18:14 localhost nova_compute[282193]: 2025-12-06 10:18:14.693 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:14 localhost journal[230404]: ethtool ioctl error on tap7bbbac24-f9: No such device Dec 6 05:18:14 localhost journal[230404]: ethtool ioctl error on tap7bbbac24-f9: No such device Dec 6 05:18:14 
localhost journal[230404]: ethtool ioctl error on tap7bbbac24-f9: No such device Dec 6 05:18:14 localhost journal[230404]: ethtool ioctl error on tap7bbbac24-f9: No such device Dec 6 05:18:14 localhost journal[230404]: ethtool ioctl error on tap7bbbac24-f9: No such device Dec 6 05:18:14 localhost journal[230404]: ethtool ioctl error on tap7bbbac24-f9: No such device Dec 6 05:18:14 localhost nova_compute[282193]: 2025-12-06 10:18:14.732 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:14 localhost nova_compute[282193]: 2025-12-06 10:18:14.761 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:14 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:14.987 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3f:fa:29 10.100.0.2 2001:db8::f816:3eff:fe3f:fa29'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe3f:fa29/64', 'neutron:device_id': 'ovnmeta-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '34a17eee71de4bac8b71972a4b7b506c', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], 
datapath=82d32fcc-fa15-458b-9d3c-0c87258ef71d, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=687d7abb-e6aa-4047-aa26-552c962fcc91) old=Port_Binding(mac=['fa:16:3e:3f:fa:29 2001:db8::f816:3eff:fe3f:fa29'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe3f:fa29/64', 'neutron:device_id': 'ovnmeta-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '34a17eee71de4bac8b71972a4b7b506c', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:18:14 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:14.989 160509 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 687d7abb-e6aa-4047-aa26-552c962fcc91 in datapath 43883dce-1590-48c4-987c-a21b63b82a1c updated#033[00m Dec 6 05:18:14 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:14.992 160509 DEBUG neutron.agent.ovn.metadata.agent [-] Port 07d15fbe-03f7-4926-82da-8e475fa08c52 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Dec 6 05:18:14 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:14.992 160509 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 43883dce-1590-48c4-987c-a21b63b82a1c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 6 05:18:14 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:14.993 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[988f650d-c8a6-4115-9fd9-079325638be8]: (4, 
False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:18:15 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e144 e144: 6 total, 6 up, 6 in Dec 6 05:18:15 localhost dnsmasq[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 3 addresses Dec 6 05:18:15 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:18:15 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:18:15 localhost podman[321448]: 2025-12-06 10:18:15.542735196 +0000 UTC m=+0.104236842 container kill dd70518caed84fde347015325d6b35481123abc4a39df99f8bc9422c2468322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3) Dec 6 05:18:15 localhost podman[321464]: Dec 6 05:18:15 localhost podman[321464]: 2025-12-06 10:18:15.630511768 +0000 UTC m=+0.111782691 container create 8b713c67d98072f47ed508c4de3d2c936900013066e24e6e09f965416b3728ba (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:18:15 localhost podman[321464]: 2025-12-06 10:18:15.577848902 +0000 UTC 
m=+0.059119895 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 6 05:18:15 localhost systemd[1]: Started libpod-conmon-8b713c67d98072f47ed508c4de3d2c936900013066e24e6e09f965416b3728ba.scope. Dec 6 05:18:15 localhost systemd[1]: Started libcrun container. Dec 6 05:18:15 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5ec4c3a71df60d42de27b2a8fedf4f4c1b76c95442542d42d577d962b012c1d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 6 05:18:15 localhost podman[321464]: 2025-12-06 10:18:15.737550306 +0000 UTC m=+0.218821229 container init 8b713c67d98072f47ed508c4de3d2c936900013066e24e6e09f965416b3728ba (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:18:15 localhost podman[321464]: 2025-12-06 10:18:15.748228889 +0000 UTC m=+0.229499812 container start 8b713c67d98072f47ed508c4de3d2c936900013066e24e6e09f965416b3728ba (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:18:15 localhost dnsmasq[321493]: started, version 2.85 cachesize 150 Dec 6 05:18:15 localhost dnsmasq[321493]: DNS service limited to 
local subnets Dec 6 05:18:15 localhost dnsmasq[321493]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 6 05:18:15 localhost dnsmasq[321493]: warning: no upstream servers configured Dec 6 05:18:15 localhost dnsmasq-dhcp[321493]: DHCPv6, static leases only on 2001:db8::, lease time 1d Dec 6 05:18:15 localhost dnsmasq[321493]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/addn_hosts - 0 addresses Dec 6 05:18:15 localhost dnsmasq-dhcp[321493]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/host Dec 6 05:18:15 localhost dnsmasq-dhcp[321493]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/opts Dec 6 05:18:15 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:18:15.930 263652 INFO neutron.agent.dhcp.agent [None req-05519599-9fb8-4aab-919f-2dfe40f1eb8c - - - - - -] DHCP configuration for ports {'687d7abb-e6aa-4047-aa26-552c962fcc91'} is completed#033[00m Dec 6 05:18:16 localhost dnsmasq[321493]: exiting on receipt of SIGTERM Dec 6 05:18:16 localhost podman[321511]: 2025-12-06 10:18:16.096894384 +0000 UTC m=+0.062604041 container kill 8b713c67d98072f47ed508c4de3d2c936900013066e24e6e09f965416b3728ba (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Dec 6 05:18:16 localhost systemd[1]: libpod-8b713c67d98072f47ed508c4de3d2c936900013066e24e6e09f965416b3728ba.scope: Deactivated successfully. 
Dec 6 05:18:16 localhost neutron_sriov_agent[256690]: 2025-12-06 10:18:16.111 2 INFO neutron.agent.securitygroups_rpc [None req-6fa383fb-a4a1-4db9-8964-14f7246d83c2 a6a9256bca1441629c18003b71ba1c6f 34a17eee71de4bac8b71972a4b7b506c - - default default] Security group member updated ['d618a097-5989-47aa-9263-1c8a114ad269']#033[00m Dec 6 05:18:16 localhost podman[321525]: 2025-12-06 10:18:16.170299761 +0000 UTC m=+0.055633839 container died 8b713c67d98072f47ed508c4de3d2c936900013066e24e6e09f965416b3728ba (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Dec 6 05:18:16 localhost podman[321525]: 2025-12-06 10:18:16.258678352 +0000 UTC m=+0.144012390 container cleanup 8b713c67d98072f47ed508c4de3d2c936900013066e24e6e09f965416b3728ba (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0) Dec 6 05:18:16 localhost systemd[1]: libpod-conmon-8b713c67d98072f47ed508c4de3d2c936900013066e24e6e09f965416b3728ba.scope: Deactivated successfully. 
Dec 6 05:18:16 localhost podman[321527]: 2025-12-06 10:18:16.285586237 +0000 UTC m=+0.154485006 container remove 8b713c67d98072f47ed508c4de3d2c936900013066e24e6e09f965416b3728ba (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0) Dec 6 05:18:16 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e145 e145: 6 total, 6 up, 6 in Dec 6 05:18:16 localhost systemd[1]: var-lib-containers-storage-overlay-f5ec4c3a71df60d42de27b2a8fedf4f4c1b76c95442542d42d577d962b012c1d-merged.mount: Deactivated successfully. Dec 6 05:18:16 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8b713c67d98072f47ed508c4de3d2c936900013066e24e6e09f965416b3728ba-userdata-shm.mount: Deactivated successfully. 
Dec 6 05:18:16 localhost openstack_network_exporter[243110]: ERROR 10:18:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:18:16 localhost openstack_network_exporter[243110]: ERROR 10:18:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:18:16 localhost openstack_network_exporter[243110]: ERROR 10:18:16 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 6 05:18:16 localhost openstack_network_exporter[243110]: ERROR 10:18:16 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 6 05:18:16 localhost openstack_network_exporter[243110]: Dec 6 05:18:16 localhost openstack_network_exporter[243110]: ERROR 10:18:16 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 6 05:18:16 localhost openstack_network_exporter[243110]: Dec 6 05:18:17 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:18:17.010 263652 INFO neutron.agent.linux.ip_lib [None req-133b9a21-ee6e-4861-b739-4ff976c8ae20 - - - - - -] Device tap002e4ba5-7f cannot be used as it has no MAC address#033[00m Dec 6 05:18:17 localhost nova_compute[282193]: 2025-12-06 10:18:17.045 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:17 localhost kernel: device tap002e4ba5-7f entered promiscuous mode Dec 6 05:18:17 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:18:17.053 263652 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:18:16Z, description=, device_id=d43c2188-cd4b-4c96-8093-8bdb70fa0d41, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, 
extra_dhcp_opts=[], fixed_ips=[], id=200c81a7-f7c4-4ce3-a4d6-6f1963f32326, ip_allocation=immediate, mac_address=fa:16:3e:ed:29:39, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T08:43:55Z, description=, dns_domain=, id=8e238f59-5792-4ff4-95af-f993c8e9e14f, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=3d603431c0bb4967bafc7a0aa6108bfe, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['be7ff830-a3e2-49cd-a05f-63bc528d6d1c'], tags=[], tenant_id=3d603431c0bb4967bafc7a0aa6108bfe, updated_at=2025-12-06T08:44:01Z, vlan_transparent=None, network_id=8e238f59-5792-4ff4-95af-f993c8e9e14f, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1733, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-06T10:18:16Z on network 8e238f59-5792-4ff4-95af-f993c8e9e14f#033[00m Dec 6 05:18:17 localhost systemd-udevd[321367]: Network interface NamePolicy= disabled on kernel command line. Dec 6 05:18:17 localhost NetworkManager[5973]: [1765016297.0571] manager: (tap002e4ba5-7f): new Generic device (/org/freedesktop/NetworkManager/Devices/48) Dec 6 05:18:17 localhost ovn_controller[154851]: 2025-12-06T10:18:17Z|00282|binding|INFO|Claiming lport 002e4ba5-7f51-4c5d-93ec-79ed9f1d16ea for this chassis. 
Dec 6 05:18:17 localhost ovn_controller[154851]: 2025-12-06T10:18:17Z|00283|binding|INFO|002e4ba5-7f51-4c5d-93ec-79ed9f1d16ea: Claiming unknown Dec 6 05:18:17 localhost nova_compute[282193]: 2025-12-06 10:18:17.063 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:17 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:17.078 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-9e18bc76-c51e-4fe0-a47b-eaa50620189c', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9e18bc76-c51e-4fe0-a47b-eaa50620189c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a2aaeadee6f14b78a73f8886be99b671', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ed2c9190-6e13-4a60-9837-0f4d9edea65e, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=002e4ba5-7f51-4c5d-93ec-79ed9f1d16ea) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:18:17 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:17.080 160509 INFO neutron.agent.ovn.metadata.agent [-] Port 002e4ba5-7f51-4c5d-93ec-79ed9f1d16ea in datapath 
9e18bc76-c51e-4fe0-a47b-eaa50620189c bound to our chassis#033[00m Dec 6 05:18:17 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:17.081 160509 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 9e18bc76-c51e-4fe0-a47b-eaa50620189c or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 6 05:18:17 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:17.082 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[480b0e13-7e0a-4603-aaf1-7629d21838fd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:18:17 localhost ovn_controller[154851]: 2025-12-06T10:18:17Z|00284|binding|INFO|Setting lport 002e4ba5-7f51-4c5d-93ec-79ed9f1d16ea ovn-installed in OVS Dec 6 05:18:17 localhost ovn_controller[154851]: 2025-12-06T10:18:17Z|00285|binding|INFO|Setting lport 002e4ba5-7f51-4c5d-93ec-79ed9f1d16ea up in Southbound Dec 6 05:18:17 localhost nova_compute[282193]: 2025-12-06 10:18:17.117 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:17 localhost nova_compute[282193]: 2025-12-06 10:18:17.168 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:17 localhost nova_compute[282193]: 2025-12-06 10:18:17.203 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:17 localhost neutron_sriov_agent[256690]: 2025-12-06 10:18:17.332 2 INFO neutron.agent.securitygroups_rpc [None req-034cc1e4-4fb9-4793-8ac5-168cd3b3cb7e a6a9256bca1441629c18003b71ba1c6f 34a17eee71de4bac8b71972a4b7b506c - - default default] Security group member updated ['d618a097-5989-47aa-9263-1c8a114ad269']#033[00m Dec 6 
05:18:17 localhost systemd[1]: tmp-crun.wOtBh1.mount: Deactivated successfully. Dec 6 05:18:17 localhost dnsmasq[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 4 addresses Dec 6 05:18:17 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:18:17 localhost podman[321597]: 2025-12-06 10:18:17.350421235 +0000 UTC m=+0.062355422 container kill dd70518caed84fde347015325d6b35481123abc4a39df99f8bc9422c2468322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125) Dec 6 05:18:17 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:18:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538. 
Dec 6 05:18:17 localhost podman[321617]: 2025-12-06 10:18:17.451894964 +0000 UTC m=+0.082591137 container health_status d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Dec 6 05:18:17 localhost podman[321617]: 2025-12-06 10:18:17.490338899 +0000 UTC m=+0.121035032 container exec_died d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Dec 6 05:18:17 localhost systemd[1]: d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538.service: Deactivated successfully. 
Dec 6 05:18:17 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:18:17.700 263652 INFO neutron.agent.dhcp.agent [None req-985ad0b2-c243-4936-9ec1-4e6314977268 - - - - - -] DHCP configuration for ports {'200c81a7-f7c4-4ce3-a4d6-6f1963f32326'} is completed#033[00m Dec 6 05:18:17 localhost nova_compute[282193]: 2025-12-06 10:18:17.745 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:18 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:18:18 localhost podman[321709]: Dec 6 05:18:18 localhost podman[321709]: 2025-12-06 10:18:18.366244107 +0000 UTC m=+0.097782137 container create 5512667bb0d7e6b1e3aa8412a868107815b9331685eff6b7cf20ec820030910c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9e18bc76-c51e-4fe0-a47b-eaa50620189c, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true) Dec 6 05:18:18 localhost systemd[1]: Started libpod-conmon-5512667bb0d7e6b1e3aa8412a868107815b9331685eff6b7cf20ec820030910c.scope. Dec 6 05:18:18 localhost podman[321709]: 2025-12-06 10:18:18.32021226 +0000 UTC m=+0.051750330 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 6 05:18:18 localhost systemd[1]: tmp-crun.hXXi6C.mount: Deactivated successfully. 
Dec 6 05:18:18 localhost ovn_controller[154851]: 2025-12-06T10:18:18Z|00286|binding|INFO|Removing iface tap002e4ba5-7f ovn-installed in OVS Dec 6 05:18:18 localhost ovn_controller[154851]: 2025-12-06T10:18:18Z|00287|binding|INFO|Removing lport 002e4ba5-7f51-4c5d-93ec-79ed9f1d16ea ovn-installed in OVS Dec 6 05:18:18 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:18.429 160509 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port bb22d0e7-f4bb-48d5-bb17-8b3da91582dc with type ""#033[00m Dec 6 05:18:18 localhost nova_compute[282193]: 2025-12-06 10:18:18.431 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:18 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:18.431 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-9e18bc76-c51e-4fe0-a47b-eaa50620189c', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9e18bc76-c51e-4fe0-a47b-eaa50620189c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a2aaeadee6f14b78a73f8886be99b671', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ed2c9190-6e13-4a60-9837-0f4d9edea65e, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], 
logical_port=002e4ba5-7f51-4c5d-93ec-79ed9f1d16ea) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:18:18 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:18.433 160509 INFO neutron.agent.ovn.metadata.agent [-] Port 002e4ba5-7f51-4c5d-93ec-79ed9f1d16ea in datapath 9e18bc76-c51e-4fe0-a47b-eaa50620189c unbound from our chassis#033[00m Dec 6 05:18:18 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:18.435 160509 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 9e18bc76-c51e-4fe0-a47b-eaa50620189c or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 6 05:18:18 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:18.436 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[b177746e-9478-4adf-8978-0c36143b882b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:18:18 localhost nova_compute[282193]: 2025-12-06 10:18:18.439 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:18 localhost systemd[1]: Started libcrun container. 
Dec 6 05:18:18 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7dd30d0fe6598c25e15c1ef77eb62e6473b8372e3dcc8c8e6af67f90141ace93/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 6 05:18:18 localhost podman[321709]: 2025-12-06 10:18:18.456378421 +0000 UTC m=+0.187916441 container init 5512667bb0d7e6b1e3aa8412a868107815b9331685eff6b7cf20ec820030910c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9e18bc76-c51e-4fe0-a47b-eaa50620189c, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:18:18 localhost podman[321709]: 2025-12-06 10:18:18.465492197 +0000 UTC m=+0.197030217 container start 5512667bb0d7e6b1e3aa8412a868107815b9331685eff6b7cf20ec820030910c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9e18bc76-c51e-4fe0-a47b-eaa50620189c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2) Dec 6 05:18:18 localhost dnsmasq[321744]: started, version 2.85 cachesize 150 Dec 6 05:18:18 localhost dnsmasq[321744]: DNS service limited to local subnets Dec 6 05:18:18 localhost dnsmasq[321744]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 6 05:18:18 localhost dnsmasq[321744]: warning: no upstream servers configured Dec 
6 05:18:18 localhost dnsmasq-dhcp[321744]: DHCPv6, static leases only on 2001:db8::, lease time 1d Dec 6 05:18:18 localhost dnsmasq[321744]: read /var/lib/neutron/dhcp/9e18bc76-c51e-4fe0-a47b-eaa50620189c/addn_hosts - 0 addresses Dec 6 05:18:18 localhost dnsmasq-dhcp[321744]: read /var/lib/neutron/dhcp/9e18bc76-c51e-4fe0-a47b-eaa50620189c/host Dec 6 05:18:18 localhost dnsmasq-dhcp[321744]: read /var/lib/neutron/dhcp/9e18bc76-c51e-4fe0-a47b-eaa50620189c/opts Dec 6 05:18:18 localhost dnsmasq[321744]: exiting on receipt of SIGTERM Dec 6 05:18:18 localhost systemd[1]: libpod-5512667bb0d7e6b1e3aa8412a868107815b9331685eff6b7cf20ec820030910c.scope: Deactivated successfully. Dec 6 05:18:18 localhost nova_compute[282193]: 2025-12-06 10:18:18.582 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:18 localhost kernel: device tap002e4ba5-7f left promiscuous mode Dec 6 05:18:18 localhost podman[321751]: 2025-12-06 10:18:18.584284471 +0000 UTC m=+0.086073112 container died 5512667bb0d7e6b1e3aa8412a868107815b9331685eff6b7cf20ec820030910c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9e18bc76-c51e-4fe0-a47b-eaa50620189c, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Dec 6 05:18:18 localhost systemd[1]: tmp-crun.0MvI4F.mount: Deactivated successfully. 
Dec 6 05:18:18 localhost nova_compute[282193]: 2025-12-06 10:18:18.601 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:18 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:18:18.617 263652 INFO neutron.agent.dhcp.agent [None req-7af9cf4c-4fdd-471b-8d91-b1e3b3d5e68a - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:18:18 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:18:18.618 263652 INFO neutron.agent.dhcp.agent [None req-7af9cf4c-4fdd-471b-8d91-b1e3b3d5e68a - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:18:18 localhost podman[321751]: 2025-12-06 10:18:18.624924503 +0000 UTC m=+0.126713154 container cleanup 5512667bb0d7e6b1e3aa8412a868107815b9331685eff6b7cf20ec820030910c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9e18bc76-c51e-4fe0-a47b-eaa50620189c, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:18:18 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:18:18.628 263652 INFO neutron.agent.dhcp.agent [None req-5c14d042-7d42-49c3-84f6-0ec4bc9aebb2 - - - - - -] DHCP configuration for ports {'34a236c4-68d2-4892-b791-a5726cf64064'} is completed#033[00m Dec 6 05:18:18 localhost podman[321765]: 2025-12-06 10:18:18.650969854 +0000 UTC m=+0.066343094 container cleanup 5512667bb0d7e6b1e3aa8412a868107815b9331685eff6b7cf20ec820030910c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9e18bc76-c51e-4fe0-a47b-eaa50620189c, org.label-schema.name=CentOS Stream 9 
Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS) Dec 6 05:18:18 localhost systemd[1]: libpod-conmon-5512667bb0d7e6b1e3aa8412a868107815b9331685eff6b7cf20ec820030910c.scope: Deactivated successfully. Dec 6 05:18:18 localhost podman[321778]: 2025-12-06 10:18:18.704136016 +0000 UTC m=+0.065769796 container remove 5512667bb0d7e6b1e3aa8412a868107815b9331685eff6b7cf20ec820030910c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9e18bc76-c51e-4fe0-a47b-eaa50620189c, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2) Dec 6 05:18:18 localhost ovn_controller[154851]: 2025-12-06T10:18:18Z|00288|binding|INFO|Releasing lport 4fb81ffd-e198-4628-9bd0-0c0f0c89c33a from this chassis (sb_readonly=0) Dec 6 05:18:18 localhost nova_compute[282193]: 2025-12-06 10:18:18.749 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:18 localhost podman[321799]: Dec 6 05:18:18 localhost podman[321799]: 2025-12-06 10:18:18.795213938 +0000 UTC m=+0.070278262 container create 38e9370e7a974e755d0fff89f0bd50a79088f2b6d8f8c5ccddffe487b048ce0a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, 
org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125) Dec 6 05:18:18 localhost systemd[1]: Started libpod-conmon-38e9370e7a974e755d0fff89f0bd50a79088f2b6d8f8c5ccddffe487b048ce0a.scope. Dec 6 05:18:18 localhost systemd[1]: Started libcrun container. Dec 6 05:18:18 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6de4ae044bb1f768d5915fea78d8879f29c2071e4a65c276f65b84bb4ebc306/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 6 05:18:18 localhost podman[321799]: 2025-12-06 10:18:18.761229828 +0000 UTC m=+0.036294132 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 6 05:18:18 localhost podman[321799]: 2025-12-06 10:18:18.865912673 +0000 UTC m=+0.140976987 container init 38e9370e7a974e755d0fff89f0bd50a79088f2b6d8f8c5ccddffe487b048ce0a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3) Dec 6 05:18:18 localhost podman[321799]: 2025-12-06 10:18:18.876280767 +0000 UTC m=+0.151345091 container start 38e9370e7a974e755d0fff89f0bd50a79088f2b6d8f8c5ccddffe487b048ce0a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, 
org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:18:18 localhost dnsmasq[321817]: started, version 2.85 cachesize 150 Dec 6 05:18:18 localhost dnsmasq[321817]: DNS service limited to local subnets Dec 6 05:18:18 localhost dnsmasq[321817]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 6 05:18:18 localhost dnsmasq[321817]: warning: no upstream servers configured Dec 6 05:18:18 localhost dnsmasq-dhcp[321817]: DHCP, static leases only on 10.100.0.0, lease time 1d Dec 6 05:18:18 localhost dnsmasq-dhcp[321817]: DHCPv6, static leases only on 2001:db8::, lease time 1d Dec 6 05:18:18 localhost dnsmasq[321817]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/addn_hosts - 0 addresses Dec 6 05:18:18 localhost dnsmasq-dhcp[321817]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/host Dec 6 05:18:18 localhost dnsmasq-dhcp[321817]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/opts Dec 6 05:18:19 localhost nova_compute[282193]: 2025-12-06 10:18:19.026 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:19 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:18:19.191 263652 INFO neutron.agent.dhcp.agent [None req-ed662b1f-eabc-4e24-9924-1e4959744cf8 - - - - - -] DHCP configuration for ports {'7bbbac24-f9f6-48a3-8929-680a1ce4ebea', '687d7abb-e6aa-4047-aa26-552c962fcc91'} is completed#033[00m Dec 6 05:18:19 localhost dnsmasq[321817]: exiting on receipt of SIGTERM Dec 6 05:18:19 localhost podman[321833]: 2025-12-06 10:18:19.300060771 +0000 UTC m=+0.074189761 container kill 38e9370e7a974e755d0fff89f0bd50a79088f2b6d8f8c5ccddffe487b048ce0a 
(image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true) Dec 6 05:18:19 localhost systemd[1]: libpod-38e9370e7a974e755d0fff89f0bd50a79088f2b6d8f8c5ccddffe487b048ce0a.scope: Deactivated successfully. Dec 6 05:18:19 localhost podman[321848]: 2025-12-06 10:18:19.376668365 +0000 UTC m=+0.063651942 container died 38e9370e7a974e755d0fff89f0bd50a79088f2b6d8f8c5ccddffe487b048ce0a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Dec 6 05:18:19 localhost podman[321848]: 2025-12-06 10:18:19.409625744 +0000 UTC m=+0.096609291 container cleanup 38e9370e7a974e755d0fff89f0bd50a79088f2b6d8f8c5ccddffe487b048ce0a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:18:19 localhost systemd[1]: 
libpod-conmon-38e9370e7a974e755d0fff89f0bd50a79088f2b6d8f8c5ccddffe487b048ce0a.scope: Deactivated successfully. Dec 6 05:18:19 localhost podman[321855]: 2025-12-06 10:18:19.464002854 +0000 UTC m=+0.131672975 container remove 38e9370e7a974e755d0fff89f0bd50a79088f2b6d8f8c5ccddffe487b048ce0a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3) Dec 6 05:18:19 localhost systemd[1]: var-lib-containers-storage-overlay-7dd30d0fe6598c25e15c1ef77eb62e6473b8372e3dcc8c8e6af67f90141ace93-merged.mount: Deactivated successfully. Dec 6 05:18:19 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5512667bb0d7e6b1e3aa8412a868107815b9331685eff6b7cf20ec820030910c-userdata-shm.mount: Deactivated successfully. Dec 6 05:18:19 localhost systemd[1]: run-netns-qdhcp\x2d9e18bc76\x2dc51e\x2d4fe0\x2da47b\x2deaa50620189c.mount: Deactivated successfully. 
Dec 6 05:18:19 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:19.718 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3f:fa:29 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '34a17eee71de4bac8b71972a4b7b506c', 'neutron:revision_number': '10', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82d32fcc-fa15-458b-9d3c-0c87258ef71d, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=687d7abb-e6aa-4047-aa26-552c962fcc91) old=Port_Binding(mac=['fa:16:3e:3f:fa:29 10.100.0.2 2001:db8::f816:3eff:fe3f:fa29'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe3f:fa29/64', 'neutron:device_id': 'ovnmeta-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '34a17eee71de4bac8b71972a4b7b506c', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:18:19 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:19.720 160509 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 687d7abb-e6aa-4047-aa26-552c962fcc91 in datapath 43883dce-1590-48c4-987c-a21b63b82a1c updated#033[00m Dec 6 05:18:19 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:19.724 160509 DEBUG neutron.agent.ovn.metadata.agent [-] Port 07d15fbe-03f7-4926-82da-8e475fa08c52 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Dec 6 05:18:19 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:19.725 160509 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 43883dce-1590-48c4-987c-a21b63b82a1c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 6 05:18:19 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:19.726 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[42696e4f-aff9-49c4-93e0-7ccf143c389b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:18:19 localhost dnsmasq[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 3 addresses Dec 6 05:18:19 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:18:19 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:18:19 localhost podman[321893]: 2025-12-06 10:18:19.872074621 +0000 UTC m=+0.066464347 container kill dd70518caed84fde347015325d6b35481123abc4a39df99f8bc9422c2468322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, org.label-schema.build-date=20251125, 
org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3) Dec 6 05:18:20 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:18:20.752 263652 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:18:20Z, description=, device_id=0424f01c-36e4-4cfa-bf1a-e61c35c7ef48, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=4ae4a0b2-e3d7-4f1a-b9de-013e26a569c6, ip_allocation=immediate, mac_address=fa:16:3e:2e:e2:eb, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T08:43:55Z, description=, dns_domain=, id=8e238f59-5792-4ff4-95af-f993c8e9e14f, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=3d603431c0bb4967bafc7a0aa6108bfe, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['be7ff830-a3e2-49cd-a05f-63bc528d6d1c'], tags=[], tenant_id=3d603431c0bb4967bafc7a0aa6108bfe, updated_at=2025-12-06T08:44:01Z, vlan_transparent=None, network_id=8e238f59-5792-4ff4-95af-f993c8e9e14f, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1741, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-06T10:18:20Z on network 8e238f59-5792-4ff4-95af-f993c8e9e14f#033[00m Dec 6 
05:18:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5. Dec 6 05:18:20 localhost systemd[1]: tmp-crun.DuGCiy.mount: Deactivated successfully. Dec 6 05:18:20 localhost podman[321946]: 2025-12-06 10:18:20.950083779 +0000 UTC m=+0.100170530 container health_status 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2) Dec 6 05:18:21 localhost podman[321946]: 2025-12-06 10:18:21.018323519 +0000 UTC m=+0.168410320 container exec_died 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, 
org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Dec 6 05:18:21 localhost systemd[1]: 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.service: Deactivated successfully. 
Dec 6 05:18:21 localhost dnsmasq[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 4 addresses Dec 6 05:18:21 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:18:21 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:18:21 localhost podman[321994]: 2025-12-06 10:18:21.069979965 +0000 UTC m=+0.070985833 container kill dd70518caed84fde347015325d6b35481123abc4a39df99f8bc9422c2468322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0) Dec 6 05:18:21 localhost podman[322007]: Dec 6 05:18:21 localhost podman[322007]: 2025-12-06 10:18:21.156073547 +0000 UTC m=+0.126872750 container create dc4154f63878f1821bb554dd4e9f27396ecfb6b8486bc7cc4668a4fe1cdf5ced (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125) Dec 6 05:18:21 localhost systemd[1]: Started libpod-conmon-dc4154f63878f1821bb554dd4e9f27396ecfb6b8486bc7cc4668a4fe1cdf5ced.scope. 
Dec 6 05:18:21 localhost podman[322007]: 2025-12-06 10:18:21.117062994 +0000 UTC m=+0.087862287 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 6 05:18:21 localhost systemd[1]: Started libcrun container. Dec 6 05:18:21 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0ed46f155af315b60d8e77e1ead0cf54960d92c9fb9020019f6aab4123a0dc9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 6 05:18:21 localhost podman[322007]: 2025-12-06 10:18:21.231927078 +0000 UTC m=+0.202726351 container init dc4154f63878f1821bb554dd4e9f27396ecfb6b8486bc7cc4668a4fe1cdf5ced (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:18:21 localhost podman[322007]: 2025-12-06 10:18:21.241169258 +0000 UTC m=+0.211968481 container start dc4154f63878f1821bb554dd4e9f27396ecfb6b8486bc7cc4668a4fe1cdf5ced (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true) Dec 6 05:18:21 localhost dnsmasq[322038]: started, version 2.85 cachesize 150 Dec 6 05:18:21 localhost dnsmasq[322038]: DNS service limited to local subnets Dec 6 05:18:21 localhost 
dnsmasq[322038]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 6 05:18:21 localhost dnsmasq[322038]: warning: no upstream servers configured Dec 6 05:18:21 localhost dnsmasq-dhcp[322038]: DHCP, static leases only on 10.100.0.0, lease time 1d Dec 6 05:18:21 localhost dnsmasq[322038]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/addn_hosts - 0 addresses Dec 6 05:18:21 localhost dnsmasq-dhcp[322038]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/host Dec 6 05:18:21 localhost dnsmasq-dhcp[322038]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/opts Dec 6 05:18:21 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:18:21.304 263652 INFO neutron.agent.dhcp.agent [None req-2b814532-aeea-4377-8df9-a338ba3f8a08 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:18:15Z, description=, device_id=, device_owner=, dns_assignment=[, ], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[, ], id=0a97c207-b259-4dd6-97a0-5e53d9dcfae9, ip_allocation=immediate, mac_address=fa:16:3e:8a:27:09, name=tempest-NetworksTestDHCPv6-1066377207, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:17:28Z, description=, dns_domain=, id=43883dce-1590-48c4-987c-a21b63b82a1c, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-1975538139, port_security_enabled=True, project_id=34a17eee71de4bac8b71972a4b7b506c, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=42818, qos_policy_id=None, revision_number=27, router:external=False, shared=False, standard_attr_id=1415, status=ACTIVE, 
subnets=['e373677f-5620-4e92-a6c1-ef2cc27d6d54', 'ee1b67d1-08dd-4780-a445-d29a810260e7'], tags=[], tenant_id=34a17eee71de4bac8b71972a4b7b506c, updated_at=2025-12-06T10:18:14Z, vlan_transparent=None, network_id=43883dce-1590-48c4-987c-a21b63b82a1c, port_security_enabled=True, project_id=34a17eee71de4bac8b71972a4b7b506c, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['d618a097-5989-47aa-9263-1c8a114ad269'], standard_attr_id=1730, status=DOWN, tags=[], tenant_id=34a17eee71de4bac8b71972a4b7b506c, updated_at=2025-12-06T10:18:15Z on network 43883dce-1590-48c4-987c-a21b63b82a1c#033[00m Dec 6 05:18:21 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e146 e146: 6 total, 6 up, 6 in Dec 6 05:18:21 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:18:21.475 263652 INFO neutron.agent.dhcp.agent [None req-00188759-231a-44b0-8e6b-3e98cd0b4267 - - - - - -] DHCP configuration for ports {'4ae4a0b2-e3d7-4f1a-b9de-013e26a569c6', '7bbbac24-f9f6-48a3-8929-680a1ce4ebea', '687d7abb-e6aa-4047-aa26-552c962fcc91'} is completed#033[00m Dec 6 05:18:21 localhost dnsmasq[322038]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/addn_hosts - 2 addresses Dec 6 05:18:21 localhost dnsmasq-dhcp[322038]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/host Dec 6 05:18:21 localhost dnsmasq-dhcp[322038]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/opts Dec 6 05:18:21 localhost podman[322056]: 2025-12-06 10:18:21.508894719 +0000 UTC m=+0.040234542 container kill dc4154f63878f1821bb554dd4e9f27396ecfb6b8486bc7cc4668a4fe1cdf5ced (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, 
org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:18:21 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:18:21.716 263652 INFO neutron.agent.dhcp.agent [None req-a4c74881-40bf-46d0-9fb5-bf6e396e9ac3 - - - - - -] DHCP configuration for ports {'0a97c207-b259-4dd6-97a0-5e53d9dcfae9'} is completed#033[00m Dec 6 05:18:21 localhost dnsmasq[322038]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/addn_hosts - 0 addresses Dec 6 05:18:21 localhost dnsmasq-dhcp[322038]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/host Dec 6 05:18:21 localhost dnsmasq-dhcp[322038]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/opts Dec 6 05:18:21 localhost podman[322093]: 2025-12-06 10:18:21.898689022 +0000 UTC m=+0.063409075 container kill dc4154f63878f1821bb554dd4e9f27396ecfb6b8486bc7cc4668a4fe1cdf5ced (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125) Dec 6 05:18:22 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e147 e147: 6 total, 6 up, 6 in Dec 6 05:18:22 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:22.476 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3f:fa:29 10.100.0.2 2001:db8::f816:3eff:fe3f:fa29'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, 
parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe3f:fa29/64', 'neutron:device_id': 'ovnmeta-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '34a17eee71de4bac8b71972a4b7b506c', 'neutron:revision_number': '11', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82d32fcc-fa15-458b-9d3c-0c87258ef71d, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=687d7abb-e6aa-4047-aa26-552c962fcc91) old=Port_Binding(mac=['fa:16:3e:3f:fa:29 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '34a17eee71de4bac8b71972a4b7b506c', 'neutron:revision_number': '10', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:18:22 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:22.478 160509 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 687d7abb-e6aa-4047-aa26-552c962fcc91 in datapath 43883dce-1590-48c4-987c-a21b63b82a1c updated#033[00m Dec 6 05:18:22 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:22.481 160509 DEBUG neutron.agent.ovn.metadata.agent [-] Port 07d15fbe-03f7-4926-82da-8e475fa08c52 IP 
addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Dec 6 05:18:22 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:22.481 160509 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 43883dce-1590-48c4-987c-a21b63b82a1c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 6 05:18:22 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:22.483 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[0c2dac71-60ee-4c1a-a8cc-c1cbd3057bc4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:18:22 localhost nova_compute[282193]: 2025-12-06 10:18:22.788 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:23 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:18:23 localhost dnsmasq[322038]: exiting on receipt of SIGTERM Dec 6 05:18:23 localhost podman[322131]: 2025-12-06 10:18:23.448748558 +0000 UTC m=+0.068456757 container kill dc4154f63878f1821bb554dd4e9f27396ecfb6b8486bc7cc4668a4fe1cdf5ced (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Dec 6 05:18:23 localhost systemd[1]: 
libpod-dc4154f63878f1821bb554dd4e9f27396ecfb6b8486bc7cc4668a4fe1cdf5ced.scope: Deactivated successfully. Dec 6 05:18:23 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e148 e148: 6 total, 6 up, 6 in Dec 6 05:18:23 localhost sshd[322163]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:18:23 localhost podman[322144]: 2025-12-06 10:18:23.54277454 +0000 UTC m=+0.080915346 container died dc4154f63878f1821bb554dd4e9f27396ecfb6b8486bc7cc4668a4fe1cdf5ced (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:18:23 localhost podman[322144]: 2025-12-06 10:18:23.580532765 +0000 UTC m=+0.118673500 container cleanup dc4154f63878f1821bb554dd4e9f27396ecfb6b8486bc7cc4668a4fe1cdf5ced (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Dec 6 05:18:23 localhost systemd[1]: libpod-conmon-dc4154f63878f1821bb554dd4e9f27396ecfb6b8486bc7cc4668a4fe1cdf5ced.scope: Deactivated successfully. 
Dec 6 05:18:23 localhost podman[322151]: 2025-12-06 10:18:23.609716221 +0000 UTC m=+0.134082458 container remove dc4154f63878f1821bb554dd4e9f27396ecfb6b8486bc7cc4668a4fe1cdf5ced (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Dec 6 05:18:23 localhost neutron_sriov_agent[256690]: 2025-12-06 10:18:23.641 2 INFO neutron.agent.securitygroups_rpc [None req-b79a01a3-8e64-4889-8420-e298cffcfc58 a6a9256bca1441629c18003b71ba1c6f 34a17eee71de4bac8b71972a4b7b506c - - default default] Security group member updated ['d618a097-5989-47aa-9263-1c8a114ad269']#033[00m Dec 6 05:18:23 localhost neutron_sriov_agent[256690]: 2025-12-06 10:18:23.727 2 INFO neutron.agent.securitygroups_rpc [None req-9f63fce7-8a34-4731-bfa7-9d45ada3f54e 4c6008178bdc445aa99fb1b726f87b45 a2aaeadee6f14b78a73f8886be99b671 - - default default] Security group member updated ['13551db8-e8e0-43b0-89a9-b0d8423e74c9']#033[00m Dec 6 05:18:23 localhost dnsmasq[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 3 addresses Dec 6 05:18:23 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:18:23 localhost podman[322203]: 2025-12-06 10:18:23.886918419 +0000 UTC m=+0.085186225 container kill dd70518caed84fde347015325d6b35481123abc4a39df99f8bc9422c2468322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, 
org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true) Dec 6 05:18:23 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:18:23 localhost podman[241090]: time="2025-12-06T10:18:23Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 6 05:18:23 localhost podman[241090]: @ - - [06/Dec/2025:10:18:23 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156104 "" "Go-http-client/1.1" Dec 6 05:18:23 localhost podman[241090]: @ - - [06/Dec/2025:10:18:23 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19268 "" "Go-http-client/1.1" Dec 6 05:18:24 localhost nova_compute[282193]: 2025-12-06 10:18:24.028 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:24 localhost systemd[1]: var-lib-containers-storage-overlay-a0ed46f155af315b60d8e77e1ead0cf54960d92c9fb9020019f6aab4123a0dc9-merged.mount: Deactivated successfully. Dec 6 05:18:24 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-dc4154f63878f1821bb554dd4e9f27396ecfb6b8486bc7cc4668a4fe1cdf5ced-userdata-shm.mount: Deactivated successfully. Dec 6 05:18:24 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e149 e149: 6 total, 6 up, 6 in Dec 6 05:18:24 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Dec 6 05:18:24 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/1581079548' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Dec 6 05:18:24 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Dec 6 05:18:24 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1581079548' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Dec 6 05:18:24 localhost neutron_sriov_agent[256690]: 2025-12-06 10:18:24.635 2 INFO neutron.agent.securitygroups_rpc [None req-366a0057-fc3f-46e6-9a84-ba466e35126f a6a9256bca1441629c18003b71ba1c6f 34a17eee71de4bac8b71972a4b7b506c - - default default] Security group member updated ['d618a097-5989-47aa-9263-1c8a114ad269']#033[00m Dec 6 05:18:24 localhost podman[322265]: Dec 6 05:18:24 localhost podman[322265]: 2025-12-06 10:18:24.7158147 +0000 UTC m=+0.114271386 container create 96403699a1358f1a4db9d2adf9ab3c07936dd48c6b352f0d937390988fa3463c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125) Dec 6 05:18:24 localhost podman[322265]: 2025-12-06 10:18:24.658309766 +0000 UTC m=+0.056766482 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 6 05:18:24 localhost systemd[1]: Started libpod-conmon-96403699a1358f1a4db9d2adf9ab3c07936dd48c6b352f0d937390988fa3463c.scope. Dec 6 05:18:24 localhost systemd[1]: tmp-crun.oJmBM7.mount: Deactivated successfully. Dec 6 05:18:24 localhost systemd[1]: Started libcrun container. 
Dec 6 05:18:24 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/817b9bf6c12c6bcbb693e0f772f7a483fc0ca562d40fce2f6f132e4c3d88dc33/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 6 05:18:24 localhost podman[322265]: 2025-12-06 10:18:24.822216537 +0000 UTC m=+0.220673183 container init 96403699a1358f1a4db9d2adf9ab3c07936dd48c6b352f0d937390988fa3463c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Dec 6 05:18:24 localhost podman[322265]: 2025-12-06 10:18:24.833151269 +0000 UTC m=+0.231607915 container start 96403699a1358f1a4db9d2adf9ab3c07936dd48c6b352f0d937390988fa3463c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3) Dec 6 05:18:24 localhost dnsmasq[322283]: started, version 2.85 cachesize 150 Dec 6 05:18:24 localhost dnsmasq[322283]: DNS service limited to local subnets Dec 6 05:18:24 localhost dnsmasq[322283]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 6 05:18:24 localhost dnsmasq[322283]: warning: no upstream servers configured Dec 
6 05:18:24 localhost dnsmasq-dhcp[322283]: DHCP, static leases only on 10.100.0.0, lease time 1d Dec 6 05:18:24 localhost dnsmasq[322283]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/addn_hosts - 0 addresses Dec 6 05:18:24 localhost dnsmasq-dhcp[322283]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/host Dec 6 05:18:24 localhost dnsmasq-dhcp[322283]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/opts Dec 6 05:18:24 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:18:24.911 263652 INFO neutron.agent.dhcp.agent [None req-9d612b54-4b84-4f39-818a-dcaba74301cb - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:18:23Z, description=, device_id=, device_owner=, dns_assignment=[, ], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[, ], id=b4fda620-9f24-4002-924f-2a076f0e31f0, ip_allocation=immediate, mac_address=fa:16:3e:9d:7e:fa, name=tempest-NetworksTestDHCPv6-1726515961, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:17:28Z, description=, dns_domain=, id=43883dce-1590-48c4-987c-a21b63b82a1c, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-1975538139, port_security_enabled=True, project_id=34a17eee71de4bac8b71972a4b7b506c, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=42818, qos_policy_id=None, revision_number=31, router:external=False, shared=False, standard_attr_id=1415, status=ACTIVE, subnets=['259cd01c-9daa-4d41-93b5-27fb2cf38ff6', 'cc5199f7-ffc7-4584-9436-5644a8e4cb87'], tags=[], tenant_id=34a17eee71de4bac8b71972a4b7b506c, updated_at=2025-12-06T10:18:20Z, vlan_transparent=None, network_id=43883dce-1590-48c4-987c-a21b63b82a1c, port_security_enabled=True, 
project_id=34a17eee71de4bac8b71972a4b7b506c, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['d618a097-5989-47aa-9263-1c8a114ad269'], standard_attr_id=1748, status=DOWN, tags=[], tenant_id=34a17eee71de4bac8b71972a4b7b506c, updated_at=2025-12-06T10:18:23Z on network 43883dce-1590-48c4-987c-a21b63b82a1c#033[00m Dec 6 05:18:25 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:18:25.080 263652 INFO neutron.agent.dhcp.agent [None req-ffa0c331-58ad-49c7-ad5e-1f7cc23c0727 - - - - - -] DHCP configuration for ports {'7bbbac24-f9f6-48a3-8929-680a1ce4ebea', '687d7abb-e6aa-4047-aa26-552c962fcc91'} is completed#033[00m Dec 6 05:18:25 localhost dnsmasq[322283]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/addn_hosts - 2 addresses Dec 6 05:18:25 localhost dnsmasq-dhcp[322283]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/host Dec 6 05:18:25 localhost podman[322302]: 2025-12-06 10:18:25.155019512 +0000 UTC m=+0.067905861 container kill 96403699a1358f1a4db9d2adf9ab3c07936dd48c6b352f0d937390988fa3463c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS) Dec 6 05:18:25 localhost dnsmasq-dhcp[322283]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/opts Dec 6 05:18:25 localhost neutron_sriov_agent[256690]: 2025-12-06 10:18:25.373 2 INFO neutron.agent.securitygroups_rpc [None req-9f7062a2-5eeb-4deb-87a1-858e2e900cdd 4c6008178bdc445aa99fb1b726f87b45 a2aaeadee6f14b78a73f8886be99b671 - - default default] Security group member updated 
['13551db8-e8e0-43b0-89a9-b0d8423e74c9']#033[00m Dec 6 05:18:25 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:18:25.404 263652 INFO neutron.agent.dhcp.agent [None req-ce980b1a-31d9-441e-a6f1-debe43d8e15e - - - - - -] DHCP configuration for ports {'b4fda620-9f24-4002-924f-2a076f0e31f0'} is completed#033[00m Dec 6 05:18:25 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:18:25.420 263652 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:18:25 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e150 e150: 6 total, 6 up, 6 in Dec 6 05:18:25 localhost dnsmasq[322283]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/addn_hosts - 0 addresses Dec 6 05:18:25 localhost podman[322340]: 2025-12-06 10:18:25.651235543 +0000 UTC m=+0.073616444 container kill 96403699a1358f1a4db9d2adf9ab3c07936dd48c6b352f0d937390988fa3463c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2) Dec 6 05:18:25 localhost dnsmasq-dhcp[322283]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/host Dec 6 05:18:25 localhost dnsmasq-dhcp[322283]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/opts Dec 6 05:18:25 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:18:25.843 263652 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:18:26 localhost dnsmasq[322283]: exiting on receipt of SIGTERM Dec 6 05:18:26 localhost systemd[1]: 
libpod-96403699a1358f1a4db9d2adf9ab3c07936dd48c6b352f0d937390988fa3463c.scope: Deactivated successfully. Dec 6 05:18:26 localhost podman[322380]: 2025-12-06 10:18:26.142887996 +0000 UTC m=+0.076809721 container kill 96403699a1358f1a4db9d2adf9ab3c07936dd48c6b352f0d937390988fa3463c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true) Dec 6 05:18:26 localhost nova_compute[282193]: 2025-12-06 10:18:26.182 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:18:26 localhost nova_compute[282193]: 2025-12-06 10:18:26.182 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m Dec 6 05:18:26 localhost nova_compute[282193]: 2025-12-06 10:18:26.217 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m Dec 6 05:18:26 localhost podman[322394]: 2025-12-06 10:18:26.223618065 +0000 UTC m=+0.053819964 container died 96403699a1358f1a4db9d2adf9ab3c07936dd48c6b352f0d937390988fa3463c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, 
tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3) Dec 6 05:18:26 localhost podman[322394]: 2025-12-06 10:18:26.275972732 +0000 UTC m=+0.106174581 container remove 96403699a1358f1a4db9d2adf9ab3c07936dd48c6b352f0d937390988fa3463c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2) Dec 6 05:18:26 localhost systemd[1]: libpod-conmon-96403699a1358f1a4db9d2adf9ab3c07936dd48c6b352f0d937390988fa3463c.scope: Deactivated successfully. 
Dec 6 05:18:26 localhost kernel: device tap7bbbac24-f9 left promiscuous mode Dec 6 05:18:26 localhost nova_compute[282193]: 2025-12-06 10:18:26.335 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:26 localhost ovn_controller[154851]: 2025-12-06T10:18:26Z|00289|binding|INFO|Releasing lport 7bbbac24-f9f6-48a3-8929-680a1ce4ebea from this chassis (sb_readonly=0) Dec 6 05:18:26 localhost ovn_controller[154851]: 2025-12-06T10:18:26Z|00290|binding|INFO|Setting lport 7bbbac24-f9f6-48a3-8929-680a1ce4ebea down in Southbound Dec 6 05:18:26 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:26.346 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28 2001:db8::f816:3eff:fedb:c901/64', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '34a17eee71de4bac8b71972a4b7b506c', 'neutron:revision_number': '8', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005548789.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82d32fcc-fa15-458b-9d3c-0c87258ef71d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], 
logical_port=7bbbac24-f9f6-48a3-8929-680a1ce4ebea) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:18:26 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:26.348 160509 INFO neutron.agent.ovn.metadata.agent [-] Port 7bbbac24-f9f6-48a3-8929-680a1ce4ebea in datapath 43883dce-1590-48c4-987c-a21b63b82a1c unbound from our chassis#033[00m Dec 6 05:18:26 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:26.350 160509 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 43883dce-1590-48c4-987c-a21b63b82a1c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 6 05:18:26 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:26.351 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[cc25167d-fe91-4f8d-9036-e773a1c01055]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:18:26 localhost nova_compute[282193]: 2025-12-06 10:18:26.354 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:26 localhost nova_compute[282193]: 2025-12-06 10:18:26.356 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:26 localhost systemd[1]: var-lib-containers-storage-overlay-817b9bf6c12c6bcbb693e0f772f7a483fc0ca562d40fce2f6f132e4c3d88dc33-merged.mount: Deactivated successfully. Dec 6 05:18:26 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-96403699a1358f1a4db9d2adf9ab3c07936dd48c6b352f0d937390988fa3463c-userdata-shm.mount: Deactivated successfully. 
Dec 6 05:18:26 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e151 e151: 6 total, 6 up, 6 in Dec 6 05:18:26 localhost systemd[1]: run-netns-qdhcp\x2d43883dce\x2d1590\x2d48c4\x2d987c\x2da21b63b82a1c.mount: Deactivated successfully. Dec 6 05:18:27 localhost podman[322437]: 2025-12-06 10:18:27.059400485 +0000 UTC m=+0.065718184 container kill dd70518caed84fde347015325d6b35481123abc4a39df99f8bc9422c2468322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Dec 6 05:18:27 localhost dnsmasq[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 2 addresses Dec 6 05:18:27 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:18:27 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:18:27 localhost ovn_controller[154851]: 2025-12-06T10:18:27Z|00291|binding|INFO|Releasing lport 4fb81ffd-e198-4628-9bd0-0c0f0c89c33a from this chassis (sb_readonly=0) Dec 6 05:18:27 localhost sshd[322451]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:18:27 localhost nova_compute[282193]: 2025-12-06 10:18:27.111 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:27 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:27.167 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to 
row=Port_Binding(mac=['fa:16:3e:3f:fa:29 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '34a17eee71de4bac8b71972a4b7b506c', 'neutron:revision_number': '14', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82d32fcc-fa15-458b-9d3c-0c87258ef71d, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=687d7abb-e6aa-4047-aa26-552c962fcc91) old=Port_Binding(mac=['fa:16:3e:3f:fa:29 10.100.0.2 2001:db8::f816:3eff:fe3f:fa29'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe3f:fa29/64', 'neutron:device_id': 'ovnmeta-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '34a17eee71de4bac8b71972a4b7b506c', 'neutron:revision_number': '11', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:18:27 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:27.169 160509 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 687d7abb-e6aa-4047-aa26-552c962fcc91 in datapath 
43883dce-1590-48c4-987c-a21b63b82a1c updated#033[00m Dec 6 05:18:27 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:27.171 160509 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 43883dce-1590-48c4-987c-a21b63b82a1c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 6 05:18:27 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:27.172 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[f7518ce9-d499-431a-af19-957509bff87b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:18:27 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e152 e152: 6 total, 6 up, 6 in Dec 6 05:18:27 localhost nova_compute[282193]: 2025-12-06 10:18:27.823 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:28 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:18:28.152 263652 INFO neutron.agent.linux.ip_lib [None req-f161c0a0-8298-408a-9c36-5da35e09fec6 - - - - - -] Device tapb3029020-e8 cannot be used as it has no MAC address#033[00m Dec 6 05:18:28 localhost nova_compute[282193]: 2025-12-06 10:18:28.185 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:28 localhost kernel: device tapb3029020-e8 entered promiscuous mode Dec 6 05:18:28 localhost NetworkManager[5973]: [1765016308.1958] manager: (tapb3029020-e8): new Generic device (/org/freedesktop/NetworkManager/Devices/49) Dec 6 05:18:28 localhost nova_compute[282193]: 2025-12-06 10:18:28.196 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:28 localhost ovn_controller[154851]: 2025-12-06T10:18:28Z|00292|binding|INFO|Claiming lport 
b3029020-e85b-4668-b0f6-5a9b030e9618 for this chassis. Dec 6 05:18:28 localhost ovn_controller[154851]: 2025-12-06T10:18:28Z|00293|binding|INFO|b3029020-e85b-4668-b0f6-5a9b030e9618: Claiming unknown Dec 6 05:18:28 localhost systemd-udevd[322471]: Network interface NamePolicy= disabled on kernel command line. Dec 6 05:18:28 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:28.210 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '34a17eee71de4bac8b71972a4b7b506c', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82d32fcc-fa15-458b-9d3c-0c87258ef71d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=b3029020-e85b-4668-b0f6-5a9b030e9618) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:18:28 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:28.212 160509 INFO neutron.agent.ovn.metadata.agent [-] Port b3029020-e85b-4668-b0f6-5a9b030e9618 in datapath 43883dce-1590-48c4-987c-a21b63b82a1c bound to our chassis#033[00m Dec 6 
05:18:28 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:28.214 160509 DEBUG neutron.agent.ovn.metadata.agent [-] Port 11e42bea-b512-4655-8b5d-68b257966ab4 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Dec 6 05:18:28 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:28.215 160509 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 43883dce-1590-48c4-987c-a21b63b82a1c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 6 05:18:28 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:28.216 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[395b2e66-de7d-471a-a278-7873b15eba20]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:18:28 localhost journal[230404]: ethtool ioctl error on tapb3029020-e8: No such device Dec 6 05:18:28 localhost ovn_controller[154851]: 2025-12-06T10:18:28Z|00294|binding|INFO|Setting lport b3029020-e85b-4668-b0f6-5a9b030e9618 ovn-installed in OVS Dec 6 05:18:28 localhost ovn_controller[154851]: 2025-12-06T10:18:28Z|00295|binding|INFO|Setting lport b3029020-e85b-4668-b0f6-5a9b030e9618 up in Southbound Dec 6 05:18:28 localhost nova_compute[282193]: 2025-12-06 10:18:28.240 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:28 localhost journal[230404]: ethtool ioctl error on tapb3029020-e8: No such device Dec 6 05:18:28 localhost journal[230404]: ethtool ioctl error on tapb3029020-e8: No such device Dec 6 05:18:28 localhost journal[230404]: ethtool ioctl error on tapb3029020-e8: No such device Dec 6 05:18:28 localhost journal[230404]: ethtool ioctl error on tapb3029020-e8: No such device Dec 6 05:18:28 localhost journal[230404]: ethtool 
ioctl error on tapb3029020-e8: No such device Dec 6 05:18:28 localhost journal[230404]: ethtool ioctl error on tapb3029020-e8: No such device Dec 6 05:18:28 localhost journal[230404]: ethtool ioctl error on tapb3029020-e8: No such device Dec 6 05:18:28 localhost nova_compute[282193]: 2025-12-06 10:18:28.281 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:28 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:18:28 localhost nova_compute[282193]: 2025-12-06 10:18:28.318 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:28 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:28.656 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3f:fa:29 10.100.0.2 2001:db8::f816:3eff:fe3f:fa29'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe3f:fa29/64', 'neutron:device_id': 'ovnmeta-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '34a17eee71de4bac8b71972a4b7b506c', 'neutron:revision_number': '15', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], 
additional_encap=[], encap=[], mirror_rules=[], datapath=82d32fcc-fa15-458b-9d3c-0c87258ef71d, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=687d7abb-e6aa-4047-aa26-552c962fcc91) old=Port_Binding(mac=['fa:16:3e:3f:fa:29 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '34a17eee71de4bac8b71972a4b7b506c', 'neutron:revision_number': '14', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:18:28 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:28.658 160509 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 687d7abb-e6aa-4047-aa26-552c962fcc91 in datapath 43883dce-1590-48c4-987c-a21b63b82a1c updated#033[00m Dec 6 05:18:28 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:28.661 160509 DEBUG neutron.agent.ovn.metadata.agent [-] Port 11e42bea-b512-4655-8b5d-68b257966ab4 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Dec 6 05:18:28 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:28.661 160509 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 43883dce-1590-48c4-987c-a21b63b82a1c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 6 05:18:28 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:28.662 160674 DEBUG oslo.privsep.daemon [-] privsep: 
reply[29291bd4-9862-4f55-9606-ec80e8723c09]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:18:29 localhost nova_compute[282193]: 2025-12-06 10:18:29.031 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:29 localhost podman[322542]: Dec 6 05:18:29 localhost podman[322542]: 2025-12-06 10:18:29.305301146 +0000 UTC m=+0.097263801 container create a3cc50ea41d601a7d39e4271e5834a77ca0be189e992511b75f8bb5186dca4c8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3) Dec 6 05:18:29 localhost systemd[1]: Started libpod-conmon-a3cc50ea41d601a7d39e4271e5834a77ca0be189e992511b75f8bb5186dca4c8.scope. Dec 6 05:18:29 localhost podman[322542]: 2025-12-06 10:18:29.258832836 +0000 UTC m=+0.050795531 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 6 05:18:29 localhost systemd[1]: tmp-crun.jfdhv0.mount: Deactivated successfully. Dec 6 05:18:29 localhost systemd[1]: Started libcrun container. 
Dec 6 05:18:29 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78e52e6fef944f14e3e41e1d4fa967fb083ae195089e656baef3c3bf80ae5494/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 6 05:18:29 localhost podman[322542]: 2025-12-06 10:18:29.38617632 +0000 UTC m=+0.178138975 container init a3cc50ea41d601a7d39e4271e5834a77ca0be189e992511b75f8bb5186dca4c8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS) Dec 6 05:18:29 localhost podman[322542]: 2025-12-06 10:18:29.395134171 +0000 UTC m=+0.187096826 container start a3cc50ea41d601a7d39e4271e5834a77ca0be189e992511b75f8bb5186dca4c8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS) Dec 6 05:18:29 localhost dnsmasq[322560]: started, version 2.85 cachesize 150 Dec 6 05:18:29 localhost dnsmasq[322560]: DNS service limited to local subnets Dec 6 05:18:29 localhost dnsmasq[322560]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 6 05:18:29 localhost dnsmasq[322560]: warning: no upstream servers configured Dec 
6 05:18:29 localhost dnsmasq-dhcp[322560]: DHCP, static leases only on 10.100.0.0, lease time 1d Dec 6 05:18:29 localhost dnsmasq[322560]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/addn_hosts - 0 addresses Dec 6 05:18:29 localhost dnsmasq-dhcp[322560]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/host Dec 6 05:18:29 localhost dnsmasq-dhcp[322560]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/opts Dec 6 05:18:29 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:18:29.605 263652 INFO neutron.agent.dhcp.agent [None req-36684ac6-7a56-44dd-916a-4c88129a7c55 - - - - - -] DHCP configuration for ports {'687d7abb-e6aa-4047-aa26-552c962fcc91'} is completed#033[00m Dec 6 05:18:29 localhost dnsmasq[322560]: exiting on receipt of SIGTERM Dec 6 05:18:29 localhost podman[322577]: 2025-12-06 10:18:29.715796467 +0000 UTC m=+0.054185945 container kill a3cc50ea41d601a7d39e4271e5834a77ca0be189e992511b75f8bb5186dca4c8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Dec 6 05:18:29 localhost systemd[1]: libpod-a3cc50ea41d601a7d39e4271e5834a77ca0be189e992511b75f8bb5186dca4c8.scope: Deactivated successfully. 
Dec 6 05:18:29 localhost podman[322590]: 2025-12-06 10:18:29.775521419 +0000 UTC m=+0.045139960 container died a3cc50ea41d601a7d39e4271e5834a77ca0be189e992511b75f8bb5186dca4c8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:18:29 localhost podman[322590]: 2025-12-06 10:18:29.813719877 +0000 UTC m=+0.083338388 container cleanup a3cc50ea41d601a7d39e4271e5834a77ca0be189e992511b75f8bb5186dca4c8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:18:29 localhost systemd[1]: libpod-conmon-a3cc50ea41d601a7d39e4271e5834a77ca0be189e992511b75f8bb5186dca4c8.scope: Deactivated successfully. 
Dec 6 05:18:29 localhost podman[322592]: 2025-12-06 10:18:29.88635739 +0000 UTC m=+0.146742491 container remove a3cc50ea41d601a7d39e4271e5834a77ca0be189e992511b75f8bb5186dca4c8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true) Dec 6 05:18:29 localhost neutron_sriov_agent[256690]: 2025-12-06 10:18:29.905 2 INFO neutron.agent.securitygroups_rpc [None req-cc7e06ae-2215-4c85-8ca6-e56c13503fc8 a6a9256bca1441629c18003b71ba1c6f 34a17eee71de4bac8b71972a4b7b506c - - default default] Security group member updated ['d618a097-5989-47aa-9263-1c8a114ad269']#033[00m Dec 6 05:18:30 localhost systemd[1]: var-lib-containers-storage-overlay-78e52e6fef944f14e3e41e1d4fa967fb083ae195089e656baef3c3bf80ae5494-merged.mount: Deactivated successfully. Dec 6 05:18:30 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a3cc50ea41d601a7d39e4271e5834a77ca0be189e992511b75f8bb5186dca4c8-userdata-shm.mount: Deactivated successfully. 
Dec 6 05:18:30 localhost neutron_sriov_agent[256690]: 2025-12-06 10:18:30.759 2 INFO neutron.agent.securitygroups_rpc [None req-e8307117-28c2-4262-9c6e-dc24bf4a796c a6a9256bca1441629c18003b71ba1c6f 34a17eee71de4bac8b71972a4b7b506c - - default default] Security group member updated ['d618a097-5989-47aa-9263-1c8a114ad269']#033[00m Dec 6 05:18:30 localhost podman[322668]: Dec 6 05:18:30 localhost podman[322668]: 2025-12-06 10:18:30.957686286 +0000 UTC m=+0.101700137 container create 18164ad664db41d65c729673ff53a331d1f1fc059dcd4257511254b0405a2045 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true) Dec 6 05:18:30 localhost systemd[1]: Started libpod-conmon-18164ad664db41d65c729673ff53a331d1f1fc059dcd4257511254b0405a2045.scope. Dec 6 05:18:31 localhost podman[322668]: 2025-12-06 10:18:30.908599387 +0000 UTC m=+0.052613298 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 6 05:18:31 localhost systemd[1]: Started libcrun container. 
Dec 6 05:18:31 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89e42998c88cb974d7c2b22853155a2a789a8b224fcbfc3b35a404588475f19e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 6 05:18:31 localhost podman[322668]: 2025-12-06 10:18:31.028648538 +0000 UTC m=+0.172662369 container init 18164ad664db41d65c729673ff53a331d1f1fc059dcd4257511254b0405a2045 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Dec 6 05:18:31 localhost podman[322668]: 2025-12-06 10:18:31.03891264 +0000 UTC m=+0.182926491 container start 18164ad664db41d65c729673ff53a331d1f1fc059dcd4257511254b0405a2045 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0) Dec 6 05:18:31 localhost dnsmasq[322686]: started, version 2.85 cachesize 150 Dec 6 05:18:31 localhost dnsmasq[322686]: DNS service limited to local subnets Dec 6 05:18:31 localhost dnsmasq[322686]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 6 05:18:31 localhost dnsmasq[322686]: warning: no upstream servers configured Dec 
6 05:18:31 localhost dnsmasq-dhcp[322686]: DHCP, static leases only on 10.100.0.0, lease time 1d Dec 6 05:18:31 localhost dnsmasq-dhcp[322686]: DHCPv6, static leases only on 2001:db8::, lease time 1d Dec 6 05:18:31 localhost dnsmasq[322686]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/addn_hosts - 0 addresses Dec 6 05:18:31 localhost dnsmasq-dhcp[322686]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/host Dec 6 05:18:31 localhost dnsmasq-dhcp[322686]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/opts Dec 6 05:18:31 localhost ovn_controller[154851]: 2025-12-06T10:18:31Z|00296|binding|INFO|Releasing lport 4fb81ffd-e198-4628-9bd0-0c0f0c89c33a from this chassis (sb_readonly=0) Dec 6 05:18:31 localhost nova_compute[282193]: 2025-12-06 10:18:31.202 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:31 localhost dnsmasq[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 1 addresses Dec 6 05:18:31 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:18:31 localhost podman[322704]: 2025-12-06 10:18:31.257325354 +0000 UTC m=+0.075063018 container kill dd70518caed84fde347015325d6b35481123abc4a39df99f8bc9422c2468322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2) Dec 6 05:18:31 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:18:31 localhost 
neutron_dhcp_agent[263648]: 2025-12-06 10:18:31.270 263652 INFO neutron.agent.dhcp.agent [None req-17010f5e-7f00-4e31-8484-75a4746e68e8 - - - - - -] DHCP configuration for ports {'687d7abb-e6aa-4047-aa26-552c962fcc91', 'b3029020-e85b-4668-b0f6-5a9b030e9618'} is completed#033[00m Dec 6 05:18:31 localhost dnsmasq[322686]: exiting on receipt of SIGTERM Dec 6 05:18:31 localhost systemd[1]: libpod-18164ad664db41d65c729673ff53a331d1f1fc059dcd4257511254b0405a2045.scope: Deactivated successfully. Dec 6 05:18:31 localhost podman[322739]: 2025-12-06 10:18:31.434153248 +0000 UTC m=+0.080212374 container kill 18164ad664db41d65c729673ff53a331d1f1fc059dcd4257511254b0405a2045 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:18:31 localhost podman[322757]: 2025-12-06 10:18:31.498671775 +0000 UTC m=+0.053461543 container died 18164ad664db41d65c729673ff53a331d1f1fc059dcd4257511254b0405a2045 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:18:31 localhost systemd[1]: tmp-crun.tFc2sd.mount: Deactivated successfully. 
Dec 6 05:18:31 localhost podman[322757]: 2025-12-06 10:18:31.538134072 +0000 UTC m=+0.092923810 container cleanup 18164ad664db41d65c729673ff53a331d1f1fc059dcd4257511254b0405a2045 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:18:31 localhost systemd[1]: libpod-conmon-18164ad664db41d65c729673ff53a331d1f1fc059dcd4257511254b0405a2045.scope: Deactivated successfully. Dec 6 05:18:31 localhost podman[322765]: 2025-12-06 10:18:31.563012726 +0000 UTC m=+0.106507841 container remove 18164ad664db41d65c729673ff53a331d1f1fc059dcd4257511254b0405a2045 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:18:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999. Dec 6 05:18:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941. 
Dec 6 05:18:32 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e153 e153: 6 total, 6 up, 6 in Dec 6 05:18:32 localhost podman[322812]: 2025-12-06 10:18:32.194915262 +0000 UTC m=+0.093192897 container health_status 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack 
Kubernetes Operator team) Dec 6 05:18:32 localhost podman[322812]: 2025-12-06 10:18:32.204086641 +0000 UTC m=+0.102364196 container exec_died 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125) Dec 6 05:18:32 localhost systemd[1]: 
5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.service: Deactivated successfully. Dec 6 05:18:32 localhost podman[322813]: 2025-12-06 10:18:32.298784153 +0000 UTC m=+0.194658565 container health_status b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Dec 6 05:18:32 localhost systemd[1]: var-lib-containers-storage-overlay-89e42998c88cb974d7c2b22853155a2a789a8b224fcbfc3b35a404588475f19e-merged.mount: Deactivated successfully. Dec 6 05:18:32 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-18164ad664db41d65c729673ff53a331d1f1fc059dcd4257511254b0405a2045-userdata-shm.mount: Deactivated successfully. 
Dec 6 05:18:32 localhost podman[322813]: 2025-12-06 10:18:32.313322695 +0000 UTC m=+0.209197047 container exec_died b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Dec 6 05:18:32 localhost systemd[1]: b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941.service: Deactivated successfully. 
Dec 6 05:18:32 localhost podman[322876]: Dec 6 05:18:32 localhost podman[322876]: 2025-12-06 10:18:32.444529414 +0000 UTC m=+0.076201743 container create 3035dfe2999c811d0b2491162142d5fb209c1a29b600736554ddc43f5a334c6f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:18:32 localhost systemd[1]: Started libpod-conmon-3035dfe2999c811d0b2491162142d5fb209c1a29b600736554ddc43f5a334c6f.scope. Dec 6 05:18:32 localhost podman[322876]: 2025-12-06 10:18:32.403396597 +0000 UTC m=+0.035068976 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 6 05:18:32 localhost systemd[1]: Started libcrun container. 
Dec 6 05:18:32 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9d73a2fdfb03ffbcd30162cfb7ffe49e8b3eded892e7281f4a620f84c192ffd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 6 05:18:32 localhost podman[322876]: 2025-12-06 10:18:32.519398775 +0000 UTC m=+0.151071114 container init 3035dfe2999c811d0b2491162142d5fb209c1a29b600736554ddc43f5a334c6f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true) Dec 6 05:18:32 localhost podman[322876]: 2025-12-06 10:18:32.528329075 +0000 UTC m=+0.160001414 container start 3035dfe2999c811d0b2491162142d5fb209c1a29b600736554ddc43f5a334c6f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3) Dec 6 05:18:32 localhost dnsmasq[322895]: started, version 2.85 cachesize 150 Dec 6 05:18:32 localhost dnsmasq[322895]: DNS service limited to local subnets Dec 6 05:18:32 localhost dnsmasq[322895]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 6 05:18:32 localhost dnsmasq[322895]: warning: no upstream servers configured Dec 
6 05:18:32 localhost dnsmasq-dhcp[322895]: DHCPv6, static leases only on 2001:db8::, lease time 1d Dec 6 05:18:32 localhost dnsmasq[322895]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/addn_hosts - 0 addresses Dec 6 05:18:32 localhost dnsmasq-dhcp[322895]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/host Dec 6 05:18:32 localhost dnsmasq-dhcp[322895]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/opts Dec 6 05:18:32 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:18:32.773 263652 INFO neutron.agent.dhcp.agent [None req-b6712de3-87d8-4c48-96f2-e44ea4e39d7f - - - - - -] DHCP configuration for ports {'687d7abb-e6aa-4047-aa26-552c962fcc91', 'b3029020-e85b-4668-b0f6-5a9b030e9618'} is completed#033[00m Dec 6 05:18:32 localhost nova_compute[282193]: 2025-12-06 10:18:32.893 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:32 localhost dnsmasq[322895]: exiting on receipt of SIGTERM Dec 6 05:18:32 localhost podman[322911]: 2025-12-06 10:18:32.917549401 +0000 UTC m=+0.124533988 container kill 3035dfe2999c811d0b2491162142d5fb209c1a29b600736554ddc43f5a334c6f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:18:32 localhost systemd[1]: libpod-3035dfe2999c811d0b2491162142d5fb209c1a29b600736554ddc43f5a334c6f.scope: Deactivated successfully. 
Dec 6 05:18:32 localhost podman[322924]: 2025-12-06 10:18:32.989433322 +0000 UTC m=+0.060331682 container died 3035dfe2999c811d0b2491162142d5fb209c1a29b600736554ddc43f5a334c6f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0) Dec 6 05:18:33 localhost podman[322924]: 2025-12-06 10:18:33.025118894 +0000 UTC m=+0.096017214 container cleanup 3035dfe2999c811d0b2491162142d5fb209c1a29b600736554ddc43f5a334c6f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:18:33 localhost systemd[1]: libpod-conmon-3035dfe2999c811d0b2491162142d5fb209c1a29b600736554ddc43f5a334c6f.scope: Deactivated successfully. 
Dec 6 05:18:33 localhost podman[322926]: 2025-12-06 10:18:33.07872985 +0000 UTC m=+0.140447371 container remove 3035dfe2999c811d0b2491162142d5fb209c1a29b600736554ddc43f5a334c6f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125) Dec 6 05:18:33 localhost nova_compute[282193]: 2025-12-06 10:18:33.089 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:33 localhost kernel: device tapb3029020-e8 left promiscuous mode Dec 6 05:18:33 localhost ovn_controller[154851]: 2025-12-06T10:18:33Z|00297|binding|INFO|Releasing lport b3029020-e85b-4668-b0f6-5a9b030e9618 from this chassis (sb_readonly=0) Dec 6 05:18:33 localhost ovn_controller[154851]: 2025-12-06T10:18:33Z|00298|binding|INFO|Setting lport b3029020-e85b-4668-b0f6-5a9b030e9618 down in Southbound Dec 6 05:18:33 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:33.098 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28 2001:db8::f816:3eff:fed7:1d9d/64', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:device_owner': 
'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '34a17eee71de4bac8b71972a4b7b506c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005548789.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82d32fcc-fa15-458b-9d3c-0c87258ef71d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=b3029020-e85b-4668-b0f6-5a9b030e9618) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:18:33 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:33.099 160509 INFO neutron.agent.ovn.metadata.agent [-] Port b3029020-e85b-4668-b0f6-5a9b030e9618 in datapath 43883dce-1590-48c4-987c-a21b63b82a1c unbound from our chassis#033[00m Dec 6 05:18:33 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:33.101 160509 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 43883dce-1590-48c4-987c-a21b63b82a1c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 6 05:18:33 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:33.102 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[11dffdbb-9fdc-4c0d-923d-88758d29ec30]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:18:33 localhost nova_compute[282193]: 2025-12-06 10:18:33.109 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:33 localhost nova_compute[282193]: 2025-12-06 10:18:33.110 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 
[POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:33 localhost nova_compute[282193]: 2025-12-06 10:18:33.217 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:18:33 localhost nova_compute[282193]: 2025-12-06 10:18:33.234 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 05:18:33 localhost nova_compute[282193]: 2025-12-06 10:18:33.234 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 05:18:33 localhost nova_compute[282193]: 2025-12-06 10:18:33.234 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 05:18:33 localhost nova_compute[282193]: 2025-12-06 10:18:33.234 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Auditing locally available compute resources for np0005548789.localdomain (node: np0005548789.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 6 05:18:33 localhost nova_compute[282193]: 2025-12-06 10:18:33.234 
282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 6 05:18:33 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:18:33 localhost systemd[1]: var-lib-containers-storage-overlay-a9d73a2fdfb03ffbcd30162cfb7ffe49e8b3eded892e7281f4a620f84c192ffd-merged.mount: Deactivated successfully. Dec 6 05:18:33 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3035dfe2999c811d0b2491162142d5fb209c1a29b600736554ddc43f5a334c6f-userdata-shm.mount: Deactivated successfully. Dec 6 05:18:33 localhost systemd[1]: run-netns-qdhcp\x2d43883dce\x2d1590\x2d48c4\x2d987c\x2da21b63b82a1c.mount: Deactivated successfully. Dec 6 05:18:33 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:18:33.442 263652 INFO neutron.agent.dhcp.agent [None req-9586a7e6-b676-4fc8-b691-b754c5e5f7ec - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:18:33 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:18:33.443 263652 INFO neutron.agent.dhcp.agent [None req-9586a7e6-b676-4fc8-b691-b754c5e5f7ec - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:18:33 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:18:33.443 263652 INFO neutron.agent.dhcp.agent [None req-9586a7e6-b676-4fc8-b691-b754c5e5f7ec - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:18:33 localhost neutron_sriov_agent[256690]: 2025-12-06 10:18:33.525 2 INFO neutron.agent.securitygroups_rpc [None req-dd900d05-ceed-4a76-8792-94f73f7d9bdc b9f4d254b45d482eab7cfb178c231d9a 5f8e1c4c589749b99178bbc7c2bea3f0 - - default default] Security group member updated 
['cd56abe4-204c-4363-ad64-0a6840260727']#033[00m Dec 6 05:18:33 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 6 05:18:33 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/2082246135' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 6 05:18:33 localhost nova_compute[282193]: 2025-12-06 10:18:33.695 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 6 05:18:33 localhost nova_compute[282193]: 2025-12-06 10:18:33.797 282197 DEBUG nova.virt.libvirt.driver [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 6 05:18:33 localhost nova_compute[282193]: 2025-12-06 10:18:33.797 282197 DEBUG nova.virt.libvirt.driver [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 6 05:18:33 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:33.808 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3f:fa:29 2001:db8::f816:3eff:fe3f:fa29'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': 
'2001:db8::f816:3eff:fe3f:fa29/64', 'neutron:device_id': 'ovnmeta-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '34a17eee71de4bac8b71972a4b7b506c', 'neutron:revision_number': '18', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82d32fcc-fa15-458b-9d3c-0c87258ef71d, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=687d7abb-e6aa-4047-aa26-552c962fcc91) old=Port_Binding(mac=['fa:16:3e:3f:fa:29 10.100.0.2 2001:db8::f816:3eff:fe3f:fa29'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe3f:fa29/64', 'neutron:device_id': 'ovnmeta-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '34a17eee71de4bac8b71972a4b7b506c', 'neutron:revision_number': '15', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:18:33 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:33.810 160509 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 687d7abb-e6aa-4047-aa26-552c962fcc91 in datapath 43883dce-1590-48c4-987c-a21b63b82a1c updated#033[00m Dec 6 05:18:33 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:33.812 160509 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 43883dce-1590-48c4-987c-a21b63b82a1c, tearing the 
namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 6 05:18:33 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:33.813 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[6520fdae-ab35-4375-b812-0f5a3613639b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:18:33 localhost nova_compute[282193]: 2025-12-06 10:18:33.996 282197 WARNING nova.virt.libvirt.driver [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 6 05:18:33 localhost nova_compute[282193]: 2025-12-06 10:18:33.997 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Hypervisor/Node resource view: name=np0005548789.localdomain free_ram=11250MB free_disk=41.83699035644531GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", 
"dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 6 05:18:33 localhost nova_compute[282193]: 2025-12-06 10:18:33.998 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 05:18:33 localhost nova_compute[282193]: 2025-12-06 10:18:33.998 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 05:18:34 localhost nova_compute[282193]: 2025-12-06 10:18:34.070 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:34 localhost 
nova_compute[282193]: 2025-12-06 10:18:34.275 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Instance b7ed0a2e-9350-4933-9334-4e5e08d3e6aa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 6 05:18:34 localhost nova_compute[282193]: 2025-12-06 10:18:34.276 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 6 05:18:34 localhost nova_compute[282193]: 2025-12-06 10:18:34.276 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Final resource view: name=np0005548789.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 6 05:18:34 localhost neutron_sriov_agent[256690]: 2025-12-06 10:18:34.362 2 INFO neutron.agent.securitygroups_rpc [None req-155fc4a8-22cb-4d06-82dd-8cfd5b79a9e9 8705da02a69e4c3281916dd7bc9ac6d1 851f2bb5c4164322946aa41fe266eb66 - - default default] Security group member updated ['6607cea2-9b0f-45af-9864-1af2923eb94b']#033[00m Dec 6 05:18:34 localhost nova_compute[282193]: 2025-12-06 10:18:34.562 282197 DEBUG nova.scheduler.client.report [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Refreshing inventories for resource provider 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Dec 6 05:18:34 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:18:34.590 263652 INFO 
neutron.agent.linux.ip_lib [None req-0a7ab7d8-06a3-4d20-b52b-e59c9a3a54a4 - - - - - -] Device tap1e7eac16-04 cannot be used as it has no MAC address#033[00m Dec 6 05:18:34 localhost nova_compute[282193]: 2025-12-06 10:18:34.615 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:34 localhost kernel: device tap1e7eac16-04 entered promiscuous mode Dec 6 05:18:34 localhost nova_compute[282193]: 2025-12-06 10:18:34.626 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:34 localhost ovn_controller[154851]: 2025-12-06T10:18:34Z|00299|binding|INFO|Claiming lport 1e7eac16-0466-4938-b460-d43f8a6e5320 for this chassis. Dec 6 05:18:34 localhost ovn_controller[154851]: 2025-12-06T10:18:34Z|00300|binding|INFO|1e7eac16-0466-4938-b460-d43f8a6e5320: Claiming unknown Dec 6 05:18:34 localhost systemd-udevd[322988]: Network interface NamePolicy= disabled on kernel command line. 
Dec 6 05:18:34 localhost NetworkManager[5973]: [1765016314.6300] manager: (tap1e7eac16-04): new Generic device (/org/freedesktop/NetworkManager/Devices/50) Dec 6 05:18:34 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:34.636 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe5d:d6f0/64', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '34a17eee71de4bac8b71972a4b7b506c', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82d32fcc-fa15-458b-9d3c-0c87258ef71d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=1e7eac16-0466-4938-b460-d43f8a6e5320) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:18:34 localhost ovn_controller[154851]: 2025-12-06T10:18:34Z|00301|binding|INFO|Setting lport 1e7eac16-0466-4938-b460-d43f8a6e5320 ovn-installed in OVS Dec 6 05:18:34 localhost ovn_controller[154851]: 2025-12-06T10:18:34Z|00302|binding|INFO|Setting lport 1e7eac16-0466-4938-b460-d43f8a6e5320 up in Southbound Dec 6 05:18:34 localhost ovn_metadata_agent[160504]: 2025-12-06 
10:18:34.639 160509 INFO neutron.agent.ovn.metadata.agent [-] Port 1e7eac16-0466-4938-b460-d43f8a6e5320 in datapath 43883dce-1590-48c4-987c-a21b63b82a1c bound to our chassis#033[00m Dec 6 05:18:34 localhost nova_compute[282193]: 2025-12-06 10:18:34.639 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:34 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:34.641 160509 DEBUG neutron.agent.ovn.metadata.agent [-] Port 8e499ad3-7646-4ccf-b8fd-bda079c71614 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Dec 6 05:18:34 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:34.641 160509 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 43883dce-1590-48c4-987c-a21b63b82a1c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 6 05:18:34 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:34.642 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[2160f273-e785-4c4b-90fc-15f7b03671cf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:18:34 localhost journal[230404]: ethtool ioctl error on tap1e7eac16-04: No such device Dec 6 05:18:34 localhost nova_compute[282193]: 2025-12-06 10:18:34.660 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:34 localhost journal[230404]: ethtool ioctl error on tap1e7eac16-04: No such device Dec 6 05:18:34 localhost journal[230404]: ethtool ioctl error on tap1e7eac16-04: No such device Dec 6 05:18:34 localhost journal[230404]: ethtool ioctl error on tap1e7eac16-04: No such device Dec 6 05:18:34 localhost journal[230404]: ethtool ioctl error on 
tap1e7eac16-04: No such device Dec 6 05:18:34 localhost journal[230404]: ethtool ioctl error on tap1e7eac16-04: No such device Dec 6 05:18:34 localhost journal[230404]: ethtool ioctl error on tap1e7eac16-04: No such device Dec 6 05:18:34 localhost journal[230404]: ethtool ioctl error on tap1e7eac16-04: No such device Dec 6 05:18:34 localhost nova_compute[282193]: 2025-12-06 10:18:34.704 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:34 localhost nova_compute[282193]: 2025-12-06 10:18:34.735 282197 DEBUG nova.scheduler.client.report [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Updating ProviderTree inventory for provider 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Dec 6 05:18:34 localhost nova_compute[282193]: 2025-12-06 10:18:34.736 282197 DEBUG nova.compute.provider_tree [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Updating inventory in ProviderTree for provider 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Dec 6 
05:18:34 localhost nova_compute[282193]: 2025-12-06 10:18:34.739 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:34 localhost nova_compute[282193]: 2025-12-06 10:18:34.752 282197 DEBUG nova.scheduler.client.report [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Refreshing aggregate associations for resource provider 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Dec 6 05:18:34 localhost nova_compute[282193]: 2025-12-06 10:18:34.770 282197 DEBUG nova.scheduler.client.report [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Refreshing trait associations for resource provider 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad, traits: HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_FDC,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_RESCUE_BFV,HW_CPU_X86_AVX2,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SHA,HW_CPU_X86_BMI2,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_CLMUL,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSSE3,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_AVX,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE4A,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AMD_SVM,HW_CPU_X86_FMA3,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_A
MI,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_F16C,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_ABM,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_VIRTIO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Dec 6 05:18:34 localhost nova_compute[282193]: 2025-12-06 10:18:34.807 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 6 05:18:34 localhost neutron_sriov_agent[256690]: 2025-12-06 10:18:34.831 2 INFO neutron.agent.securitygroups_rpc [None req-d5c62043-5321-4cf5-baec-1c1605bc1cd9 a6a9256bca1441629c18003b71ba1c6f 34a17eee71de4bac8b71972a4b7b506c - - default default] Security group member updated ['d618a097-5989-47aa-9263-1c8a114ad269']#033[00m Dec 6 05:18:35 localhost neutron_sriov_agent[256690]: 2025-12-06 10:18:35.046 2 INFO neutron.agent.securitygroups_rpc [None req-f9e16e57-d76f-4e49-8c54-adacc8516f8a b9f4d254b45d482eab7cfb178c231d9a 5f8e1c4c589749b99178bbc7c2bea3f0 - - default default] Security group member updated ['cd56abe4-204c-4363-ad64-0a6840260727']#033[00m Dec 6 05:18:35 localhost neutron_sriov_agent[256690]: 2025-12-06 10:18:35.145 2 INFO neutron.agent.securitygroups_rpc [None req-fe503d20-8e49-4871-94e0-efabb011ed42 8705da02a69e4c3281916dd7bc9ac6d1 851f2bb5c4164322946aa41fe266eb66 - - default default] Security group member updated ['6607cea2-9b0f-45af-9864-1af2923eb94b']#033[00m Dec 6 05:18:35 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 6 05:18:35 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.107:0/1644367828' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 6 05:18:35 localhost nova_compute[282193]: 2025-12-06 10:18:35.272 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 6 05:18:35 localhost nova_compute[282193]: 2025-12-06 10:18:35.278 282197 DEBUG nova.compute.provider_tree [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Inventory has not changed in ProviderTree for provider: 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 6 05:18:35 localhost nova_compute[282193]: 2025-12-06 10:18:35.297 282197 DEBUG nova.scheduler.client.report [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Inventory has not changed for provider 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 6 05:18:35 localhost nova_compute[282193]: 2025-12-06 10:18:35.299 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Compute_service record updated for np0005548789.localdomain:np0005548789.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 6 05:18:35 localhost nova_compute[282193]: 2025-12-06 10:18:35.300 282197 DEBUG 
oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.301s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 05:18:35 localhost sshd[323059]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:18:35 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:18:35.361 263652 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:18:35 localhost neutron_sriov_agent[256690]: 2025-12-06 10:18:35.423 2 INFO neutron.agent.securitygroups_rpc [None req-f9e16e57-d76f-4e49-8c54-adacc8516f8a b9f4d254b45d482eab7cfb178c231d9a 5f8e1c4c589749b99178bbc7c2bea3f0 - - default default] Security group member updated ['cd56abe4-204c-4363-ad64-0a6840260727']#033[00m Dec 6 05:18:35 localhost podman[323083]: Dec 6 05:18:35 localhost podman[323083]: 2025-12-06 10:18:35.602718257 +0000 UTC m=+0.098817409 container create 6a583563545dc7b7ec5c05c29bb95187c6f71b3276ad300d24bae1097cf7b6fc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 6 05:18:35 localhost systemd[1]: Started libpod-conmon-6a583563545dc7b7ec5c05c29bb95187c6f71b3276ad300d24bae1097cf7b6fc.scope. 
Dec 6 05:18:35 localhost podman[323083]: 2025-12-06 10:18:35.558153015 +0000 UTC m=+0.054252157 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 6 05:18:35 localhost systemd[1]: tmp-crun.3QH2a9.mount: Deactivated successfully. Dec 6 05:18:35 localhost systemd[1]: Started libcrun container. Dec 6 05:18:35 localhost neutron_sriov_agent[256690]: 2025-12-06 10:18:35.687 2 INFO neutron.agent.securitygroups_rpc [None req-4e477950-abaa-4886-9df8-9dd5bb5175a4 a6a9256bca1441629c18003b71ba1c6f 34a17eee71de4bac8b71972a4b7b506c - - default default] Security group member updated ['d618a097-5989-47aa-9263-1c8a114ad269']#033[00m Dec 6 05:18:35 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbc77840f75508555765143db8d6e2013f2107a72b75c59be1877c317e703b54/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 6 05:18:35 localhost podman[323083]: 2025-12-06 10:18:35.701504753 +0000 UTC m=+0.197603895 container init 6a583563545dc7b7ec5c05c29bb95187c6f71b3276ad300d24bae1097cf7b6fc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:18:35 localhost podman[323083]: 2025-12-06 10:18:35.717325184 +0000 UTC m=+0.213424336 container start 6a583563545dc7b7ec5c05c29bb95187c6f71b3276ad300d24bae1097cf7b6fc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, 
org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Dec 6 05:18:35 localhost dnsmasq[323102]: started, version 2.85 cachesize 150 Dec 6 05:18:35 localhost dnsmasq[323102]: DNS service limited to local subnets Dec 6 05:18:35 localhost dnsmasq[323102]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 6 05:18:35 localhost dnsmasq[323102]: warning: no upstream servers configured Dec 6 05:18:35 localhost dnsmasq[323102]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/addn_hosts - 0 addresses Dec 6 05:18:35 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:18:35.786 263652 INFO neutron.agent.dhcp.agent [None req-0a7ab7d8-06a3-4d20-b52b-e59c9a3a54a4 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:18:34Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=317f9495-2ed2-45a2-afeb-683254f0b250, ip_allocation=immediate, mac_address=fa:16:3e:0b:59:61, name=tempest-NetworksTestDHCPv6-1599368332, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:17:28Z, description=, dns_domain=, id=43883dce-1590-48c4-987c-a21b63b82a1c, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-1975538139, port_security_enabled=True, project_id=34a17eee71de4bac8b71972a4b7b506c, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=42818, qos_policy_id=None, revision_number=38, 
router:external=False, shared=False, standard_attr_id=1415, status=ACTIVE, subnets=['d6c65525-ce8f-4af5-8cc0-7d2130f263c9'], tags=[], tenant_id=34a17eee71de4bac8b71972a4b7b506c, updated_at=2025-12-06T10:18:32Z, vlan_transparent=None, network_id=43883dce-1590-48c4-987c-a21b63b82a1c, port_security_enabled=True, project_id=34a17eee71de4bac8b71972a4b7b506c, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['d618a097-5989-47aa-9263-1c8a114ad269'], standard_attr_id=1833, status=DOWN, tags=[], tenant_id=34a17eee71de4bac8b71972a4b7b506c, updated_at=2025-12-06T10:18:34Z on network 43883dce-1590-48c4-987c-a21b63b82a1c#033[00m Dec 6 05:18:35 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:18:35.851 263652 INFO neutron.agent.dhcp.agent [None req-dba23f51-a783-4249-adb5-3f64dfb318ca - - - - - -] DHCP configuration for ports {'687d7abb-e6aa-4047-aa26-552c962fcc91'} is completed#033[00m Dec 6 05:18:35 localhost dnsmasq[323102]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/addn_hosts - 1 addresses Dec 6 05:18:35 localhost podman[323121]: 2025-12-06 10:18:35.996964235 +0000 UTC m=+0.062545208 container kill 6a583563545dc7b7ec5c05c29bb95187c6f71b3276ad300d24bae1097cf7b6fc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:18:36 localhost neutron_sriov_agent[256690]: 2025-12-06 10:18:36.116 2 INFO neutron.agent.securitygroups_rpc [None req-14f85306-cb54-46c0-a6f6-e09d3e175b2a b9f4d254b45d482eab7cfb178c231d9a 5f8e1c4c589749b99178bbc7c2bea3f0 - - default default] Security 
group member updated ['cd56abe4-204c-4363-ad64-0a6840260727']#033[00m Dec 6 05:18:36 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:18:36.269 263652 INFO neutron.agent.dhcp.agent [None req-800a9886-b6d8-4c93-8c12-50ec2af162b9 - - - - - -] DHCP configuration for ports {'317f9495-2ed2-45a2-afeb-683254f0b250'} is completed#033[00m Dec 6 05:18:36 localhost dnsmasq[323102]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/addn_hosts - 0 addresses Dec 6 05:18:36 localhost podman[323158]: 2025-12-06 10:18:36.338599267 +0000 UTC m=+0.062994821 container kill 6a583563545dc7b7ec5c05c29bb95187c6f71b3276ad300d24bae1097cf7b6fc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Dec 6 05:18:36 localhost systemd[1]: tmp-crun.zFfWmF.mount: Deactivated successfully. Dec 6 05:18:36 localhost neutron_sriov_agent[256690]: 2025-12-06 10:18:36.651 2 INFO neutron.agent.securitygroups_rpc [None req-af2f65bf-97b7-4bb0-b9fe-3c28224c3c96 b9f4d254b45d482eab7cfb178c231d9a 5f8e1c4c589749b99178bbc7c2bea3f0 - - default default] Security group member updated ['cd56abe4-204c-4363-ad64-0a6840260727']#033[00m Dec 6 05:18:36 localhost dnsmasq[323102]: exiting on receipt of SIGTERM Dec 6 05:18:36 localhost systemd[1]: libpod-6a583563545dc7b7ec5c05c29bb95187c6f71b3276ad300d24bae1097cf7b6fc.scope: Deactivated successfully. 
Dec 6 05:18:36 localhost podman[323195]: 2025-12-06 10:18:36.852614519 +0000 UTC m=+0.067258991 container kill 6a583563545dc7b7ec5c05c29bb95187c6f71b3276ad300d24bae1097cf7b6fc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:18:36 localhost podman[323209]: 2025-12-06 10:18:36.931386798 +0000 UTC m=+0.066567641 container died 6a583563545dc7b7ec5c05c29bb95187c6f71b3276ad300d24bae1097cf7b6fc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 6 05:18:36 localhost podman[323209]: 2025-12-06 10:18:36.962369897 +0000 UTC m=+0.097550710 container cleanup 6a583563545dc7b7ec5c05c29bb95187c6f71b3276ad300d24bae1097cf7b6fc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125) Dec 6 05:18:36 
localhost systemd[1]: libpod-conmon-6a583563545dc7b7ec5c05c29bb95187c6f71b3276ad300d24bae1097cf7b6fc.scope: Deactivated successfully. Dec 6 05:18:37 localhost podman[323216]: 2025-12-06 10:18:37.025226805 +0000 UTC m=+0.146064283 container remove 6a583563545dc7b7ec5c05c29bb95187c6f71b3276ad300d24bae1097cf7b6fc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:18:37 localhost nova_compute[282193]: 2025-12-06 10:18:37.039 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:37 localhost ovn_controller[154851]: 2025-12-06T10:18:37Z|00303|binding|INFO|Releasing lport 1e7eac16-0466-4938-b460-d43f8a6e5320 from this chassis (sb_readonly=0) Dec 6 05:18:37 localhost ovn_controller[154851]: 2025-12-06T10:18:37Z|00304|binding|INFO|Setting lport 1e7eac16-0466-4938-b460-d43f8a6e5320 down in Southbound Dec 6 05:18:37 localhost kernel: device tap1e7eac16-04 left promiscuous mode Dec 6 05:18:37 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:37.048 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe5d:d6f0/64', 
'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '34a17eee71de4bac8b71972a4b7b506c', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005548789.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82d32fcc-fa15-458b-9d3c-0c87258ef71d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=1e7eac16-0466-4938-b460-d43f8a6e5320) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:18:37 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:37.050 160509 INFO neutron.agent.ovn.metadata.agent [-] Port 1e7eac16-0466-4938-b460-d43f8a6e5320 in datapath 43883dce-1590-48c4-987c-a21b63b82a1c unbound from our chassis#033[00m Dec 6 05:18:37 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:37.052 160509 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 43883dce-1590-48c4-987c-a21b63b82a1c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 6 05:18:37 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:37.053 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[a0d90bc6-ba93-43c0-b7dc-b11d552f9c45]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:18:37 localhost nova_compute[282193]: 2025-12-06 10:18:37.060 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:37 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:18:37.364 263652 INFO neutron.agent.dhcp.agent [None req-9204d6d4-cd0f-4bea-8a38-f041253991b6 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:18:37 localhost systemd[1]: var-lib-containers-storage-overlay-fbc77840f75508555765143db8d6e2013f2107a72b75c59be1877c317e703b54-merged.mount: Deactivated successfully. Dec 6 05:18:37 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6a583563545dc7b7ec5c05c29bb95187c6f71b3276ad300d24bae1097cf7b6fc-userdata-shm.mount: Deactivated successfully. Dec 6 05:18:37 localhost systemd[1]: run-netns-qdhcp\x2d43883dce\x2d1590\x2d48c4\x2d987c\x2da21b63b82a1c.mount: Deactivated successfully. Dec 6 05:18:37 localhost nova_compute[282193]: 2025-12-06 10:18:37.926 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:38 localhost neutron_sriov_agent[256690]: 2025-12-06 10:18:38.026 2 INFO neutron.agent.securitygroups_rpc [None req-1f216153-8df9-4f5f-9520-bf151df27051 a6a9256bca1441629c18003b71ba1c6f 34a17eee71de4bac8b71972a4b7b506c - - default default] Security group member updated ['d618a097-5989-47aa-9263-1c8a114ad269']#033[00m Dec 6 05:18:38 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:18:38.095 263652 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:18:38 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e154 e154: 6 total, 6 up, 6 in Dec 6 05:18:38 localhost nova_compute[282193]: 2025-12-06 10:18:38.181 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:18:38 
localhost nova_compute[282193]: 2025-12-06 10:18:38.182 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:18:38 localhost nova_compute[282193]: 2025-12-06 10:18:38.183 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:18:38 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:18:38.246 263652 INFO neutron.agent.linux.ip_lib [None req-fe796f91-c16f-44c5-93a2-1c017a79f914 - - - - - -] Device tap2b709350-c2 cannot be used as it has no MAC address#033[00m Dec 6 05:18:38 localhost nova_compute[282193]: 2025-12-06 10:18:38.275 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:38 localhost kernel: device tap2b709350-c2 entered promiscuous mode Dec 6 05:18:38 localhost ovn_controller[154851]: 2025-12-06T10:18:38Z|00305|binding|INFO|Claiming lport 2b709350-c211-4254-8281-b64bea0c6f41 for this chassis. Dec 6 05:18:38 localhost ovn_controller[154851]: 2025-12-06T10:18:38Z|00306|binding|INFO|2b709350-c211-4254-8281-b64bea0c6f41: Claiming unknown Dec 6 05:18:38 localhost nova_compute[282193]: 2025-12-06 10:18:38.283 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:38 localhost NetworkManager[5973]: [1765016318.2840] manager: (tap2b709350-c2): new Generic device (/org/freedesktop/NetworkManager/Devices/51) Dec 6 05:18:38 localhost systemd-udevd[323251]: Network interface NamePolicy= disabled on kernel command line. 
Dec 6 05:18:38 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:18:38 localhost ovn_controller[154851]: 2025-12-06T10:18:38Z|00307|binding|INFO|Setting lport 2b709350-c211-4254-8281-b64bea0c6f41 ovn-installed in OVS Dec 6 05:18:38 localhost nova_compute[282193]: 2025-12-06 10:18:38.293 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:38 localhost ovn_controller[154851]: 2025-12-06T10:18:38Z|00308|binding|INFO|Setting lport 2b709350-c211-4254-8281-b64bea0c6f41 up in Southbound Dec 6 05:18:38 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:38.299 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fef5:2012/64', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '34a17eee71de4bac8b71972a4b7b506c', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82d32fcc-fa15-458b-9d3c-0c87258ef71d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], 
logical_port=2b709350-c211-4254-8281-b64bea0c6f41) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:18:38 localhost nova_compute[282193]: 2025-12-06 10:18:38.302 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:38 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:38.303 160509 INFO neutron.agent.ovn.metadata.agent [-] Port 2b709350-c211-4254-8281-b64bea0c6f41 in datapath 43883dce-1590-48c4-987c-a21b63b82a1c bound to our chassis#033[00m Dec 6 05:18:38 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:38.311 160509 DEBUG neutron.agent.ovn.metadata.agent [-] Port 160bb911-7f10-4f65-8dc2-37031a049d2c IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Dec 6 05:18:38 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:38.312 160509 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 43883dce-1590-48c4-987c-a21b63b82a1c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 6 05:18:38 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:38.313 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[ac010a82-b0c6-4e99-89ca-47c35a1025ed]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:18:38 localhost journal[230404]: ethtool ioctl error on tap2b709350-c2: No such device Dec 6 05:18:38 localhost journal[230404]: ethtool ioctl error on tap2b709350-c2: No such device Dec 6 05:18:38 localhost nova_compute[282193]: 2025-12-06 10:18:38.322 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:38 localhost 
journal[230404]: ethtool ioctl error on tap2b709350-c2: No such device Dec 6 05:18:38 localhost journal[230404]: ethtool ioctl error on tap2b709350-c2: No such device Dec 6 05:18:38 localhost journal[230404]: ethtool ioctl error on tap2b709350-c2: No such device Dec 6 05:18:38 localhost journal[230404]: ethtool ioctl error on tap2b709350-c2: No such device Dec 6 05:18:38 localhost journal[230404]: ethtool ioctl error on tap2b709350-c2: No such device Dec 6 05:18:38 localhost journal[230404]: ethtool ioctl error on tap2b709350-c2: No such device Dec 6 05:18:38 localhost nova_compute[282193]: 2025-12-06 10:18:38.357 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:38 localhost nova_compute[282193]: 2025-12-06 10:18:38.387 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:38 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Dec 6 05:18:38 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1177396961' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Dec 6 05:18:38 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Dec 6 05:18:38 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/1177396961' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Dec 6 05:18:39 localhost neutron_sriov_agent[256690]: 2025-12-06 10:18:39.029 2 INFO neutron.agent.securitygroups_rpc [None req-df5d0c63-3dbc-41ec-8a7e-d627e1beca42 a6a9256bca1441629c18003b71ba1c6f 34a17eee71de4bac8b71972a4b7b506c - - default default] Security group member updated ['d618a097-5989-47aa-9263-1c8a114ad269']#033[00m Dec 6 05:18:39 localhost nova_compute[282193]: 2025-12-06 10:18:39.106 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:39 localhost nova_compute[282193]: 2025-12-06 10:18:39.202 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:18:39 localhost nova_compute[282193]: 2025-12-06 10:18:39.202 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 6 05:18:39 localhost nova_compute[282193]: 2025-12-06 10:18:39.203 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 6 05:18:39 localhost podman[323322]: Dec 6 05:18:39 localhost podman[323322]: 2025-12-06 10:18:39.21652849 +0000 UTC m=+0.092030003 container create 27452c8dc971f1931cc48233d860a5a4dae6b3dc752ed26dd6692132ab8623c4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, 
org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125) Dec 6 05:18:39 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:18:39.254 263652 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:18:39 localhost systemd[1]: Started libpod-conmon-27452c8dc971f1931cc48233d860a5a4dae6b3dc752ed26dd6692132ab8623c4.scope. Dec 6 05:18:39 localhost podman[323322]: 2025-12-06 10:18:39.172992879 +0000 UTC m=+0.048494442 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 6 05:18:39 localhost systemd[1]: tmp-crun.kYTSBP.mount: Deactivated successfully. Dec 6 05:18:39 localhost systemd[1]: Started libcrun container. Dec 6 05:18:39 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b7531a1e7d0325b8ef534d223b499f4804e3bafec7ac5d3faa5dc377932e096/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 6 05:18:39 localhost podman[323322]: 2025-12-06 10:18:39.301384203 +0000 UTC m=+0.176885716 container init 27452c8dc971f1931cc48233d860a5a4dae6b3dc752ed26dd6692132ab8623c4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Dec 6 05:18:39 localhost podman[323322]: 2025-12-06 10:18:39.313416258 +0000 UTC m=+0.188917781 container start 
27452c8dc971f1931cc48233d860a5a4dae6b3dc752ed26dd6692132ab8623c4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3) Dec 6 05:18:39 localhost dnsmasq[323340]: started, version 2.85 cachesize 150 Dec 6 05:18:39 localhost dnsmasq[323340]: DNS service limited to local subnets Dec 6 05:18:39 localhost dnsmasq[323340]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 6 05:18:39 localhost dnsmasq[323340]: warning: no upstream servers configured Dec 6 05:18:39 localhost dnsmasq-dhcp[323340]: DHCPv6, static leases only on 2001:db8::, lease time 1d Dec 6 05:18:39 localhost dnsmasq[323340]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/addn_hosts - 0 addresses Dec 6 05:18:39 localhost dnsmasq-dhcp[323340]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/host Dec 6 05:18:39 localhost dnsmasq-dhcp[323340]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/opts Dec 6 05:18:39 localhost nova_compute[282193]: 2025-12-06 10:18:39.326 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 6 05:18:39 localhost nova_compute[282193]: 2025-12-06 10:18:39.326 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquired lock 
"refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 6 05:18:39 localhost nova_compute[282193]: 2025-12-06 10:18:39.327 282197 DEBUG nova.network.neutron [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 6 05:18:39 localhost nova_compute[282193]: 2025-12-06 10:18:39.327 282197 DEBUG nova.objects.instance [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b7ed0a2e-9350-4933-9334-4e5e08d3e6aa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 6 05:18:39 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:18:39.378 263652 INFO neutron.agent.dhcp.agent [None req-fe796f91-c16f-44c5-93a2-1c017a79f914 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:18:37Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=f67fb8ff-e5d2-4a19-a60a-1bd7d5cb713a, ip_allocation=immediate, mac_address=fa:16:3e:a2:18:23, name=tempest-NetworksTestDHCPv6-1097551674, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:17:28Z, description=, dns_domain=, id=43883dce-1590-48c4-987c-a21b63b82a1c, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-1975538139, port_security_enabled=True, project_id=34a17eee71de4bac8b71972a4b7b506c, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=42818, qos_policy_id=None, 
revision_number=40, router:external=False, shared=False, standard_attr_id=1415, status=ACTIVE, subnets=['2817cd55-6d79-47cf-850d-829aa44b7048'], tags=[], tenant_id=34a17eee71de4bac8b71972a4b7b506c, updated_at=2025-12-06T10:18:36Z, vlan_transparent=None, network_id=43883dce-1590-48c4-987c-a21b63b82a1c, port_security_enabled=True, project_id=34a17eee71de4bac8b71972a4b7b506c, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['d618a097-5989-47aa-9263-1c8a114ad269'], standard_attr_id=1857, status=DOWN, tags=[], tenant_id=34a17eee71de4bac8b71972a4b7b506c, updated_at=2025-12-06T10:18:37Z on network 43883dce-1590-48c4-987c-a21b63b82a1c#033[00m Dec 6 05:18:39 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:18:39.489 263652 INFO neutron.agent.dhcp.agent [None req-f8aae058-451a-4fb2-a7a9-6f6b2ff0cb84 - - - - - -] DHCP configuration for ports {'687d7abb-e6aa-4047-aa26-552c962fcc91'} is completed#033[00m Dec 6 05:18:39 localhost podman[323360]: 2025-12-06 10:18:39.588196183 +0000 UTC m=+0.062710173 container kill 27452c8dc971f1931cc48233d860a5a4dae6b3dc752ed26dd6692132ab8623c4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:18:39 localhost dnsmasq[323340]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/addn_hosts - 1 addresses Dec 6 05:18:39 localhost dnsmasq-dhcp[323340]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/host Dec 6 05:18:39 localhost dnsmasq-dhcp[323340]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/opts Dec 6 
05:18:39 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:18:39.824 263652 INFO neutron.agent.dhcp.agent [None req-ebf14e78-44ca-4922-971d-95494b8ec3df - - - - - -] DHCP configuration for ports {'f67fb8ff-e5d2-4a19-a60a-1bd7d5cb713a'} is completed#033[00m Dec 6 05:18:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e. Dec 6 05:18:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094. Dec 6 05:18:39 localhost neutron_sriov_agent[256690]: 2025-12-06 10:18:39.881 2 INFO neutron.agent.securitygroups_rpc [None req-57bddb58-8e7b-4200-a14c-4d9431ae075f b9f4d254b45d482eab7cfb178c231d9a 5f8e1c4c589749b99178bbc7c2bea3f0 - - default default] Security group member updated ['cd56abe4-204c-4363-ad64-0a6840260727']#033[00m Dec 6 05:18:39 localhost podman[323396]: 2025-12-06 10:18:39.949563343 +0000 UTC m=+0.104027986 container health_status bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm) Dec 6 05:18:39 localhost podman[323395]: 2025-12-06 10:18:39.915574823 +0000 UTC m=+0.077068318 container health_status 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, name=ubi9-minimal, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, build-date=2025-08-20T13:12:41, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible) Dec 6 05:18:39 localhost podman[323396]: 2025-12-06 10:18:39.997138856 +0000 UTC m=+0.151603439 container exec_died bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS) Dec 6 05:18:40 
localhost systemd[1]: bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094.service: Deactivated successfully. Dec 6 05:18:40 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:18:40.016 263652 INFO neutron.agent.linux.ip_lib [None req-69fc1829-97fc-48f8-9b00-6782cc4cdde0 - - - - - -] Device tapdfa03f50-39 cannot be used as it has no MAC address#033[00m Dec 6 05:18:40 localhost dnsmasq[323340]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/addn_hosts - 0 addresses Dec 6 05:18:40 localhost dnsmasq-dhcp[323340]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/host Dec 6 05:18:40 localhost dnsmasq-dhcp[323340]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/opts Dec 6 05:18:40 localhost podman[323421]: 2025-12-06 10:18:40.040573264 +0000 UTC m=+0.141639557 container kill 27452c8dc971f1931cc48233d860a5a4dae6b3dc752ed26dd6692132ab8623c4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3) Dec 6 05:18:40 localhost podman[323395]: 2025-12-06 10:18:40.04737298 +0000 UTC m=+0.208866495 container exec_died 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, name=ubi9-minimal, vendor=Red Hat, Inc., release=1755695350, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a 
stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, distribution-scope=public, 
url=https://catalog.redhat.com/en/search?searchType=containers) Dec 6 05:18:40 localhost nova_compute[282193]: 2025-12-06 10:18:40.046 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:40 localhost kernel: device tapdfa03f50-39 entered promiscuous mode Dec 6 05:18:40 localhost ovn_controller[154851]: 2025-12-06T10:18:40Z|00309|binding|INFO|Claiming lport dfa03f50-3905-4292-9cae-c03579192e4f for this chassis. Dec 6 05:18:40 localhost ovn_controller[154851]: 2025-12-06T10:18:40Z|00310|binding|INFO|dfa03f50-3905-4292-9cae-c03579192e4f: Claiming unknown Dec 6 05:18:40 localhost NetworkManager[5973]: [1765016320.0558] manager: (tapdfa03f50-39): new Generic device (/org/freedesktop/NetworkManager/Devices/52) Dec 6 05:18:40 localhost nova_compute[282193]: 2025-12-06 10:18:40.056 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:40 localhost systemd-udevd[323254]: Network interface NamePolicy= disabled on kernel command line. Dec 6 05:18:40 localhost systemd[1]: 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e.service: Deactivated successfully. 
Dec 6 05:18:40 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:40.070 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-5d90c1d5-74b2-4b5c-9bf8-25a818641550', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5d90c1d5-74b2-4b5c-9bf8-25a818641550', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b1d664fab0f4b7f87439c153244cdc1', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=554a12c4-a3a9-4583-a7ca-9f004018b224, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=dfa03f50-3905-4292-9cae-c03579192e4f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:18:40 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:40.072 160509 INFO neutron.agent.ovn.metadata.agent [-] Port dfa03f50-3905-4292-9cae-c03579192e4f in datapath 5d90c1d5-74b2-4b5c-9bf8-25a818641550 bound to our chassis#033[00m Dec 6 05:18:40 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:40.074 160509 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 5d90c1d5-74b2-4b5c-9bf8-25a818641550 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 6 05:18:40 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:40.075 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[2877f908-f092-4181-8a4e-eb699fd468ef]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:18:40 localhost ovn_controller[154851]: 2025-12-06T10:18:40Z|00311|binding|INFO|Setting lport dfa03f50-3905-4292-9cae-c03579192e4f ovn-installed in OVS Dec 6 05:18:40 localhost ovn_controller[154851]: 2025-12-06T10:18:40Z|00312|binding|INFO|Setting lport dfa03f50-3905-4292-9cae-c03579192e4f up in Southbound Dec 6 05:18:40 localhost nova_compute[282193]: 2025-12-06 10:18:40.109 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:40 localhost nova_compute[282193]: 2025-12-06 10:18:40.162 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:40 localhost nova_compute[282193]: 2025-12-06 10:18:40.191 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:40 localhost systemd[1]: tmp-crun.lKkAOH.mount: Deactivated successfully. 
Dec 6 05:18:40 localhost neutron_sriov_agent[256690]: 2025-12-06 10:18:40.271 2 INFO neutron.agent.securitygroups_rpc [None req-dae126c9-280d-4a4d-ad9b-17df376d8729 9b1422e7ba894ba7b8e14df8e50e50d0 2b1d664fab0f4b7f87439c153244cdc1 - - default default] Security group member updated ['ec3ea7b0-8bb2-49ca-ac1e-7fc51bb6b4e0']#033[00m Dec 6 05:18:40 localhost neutron_sriov_agent[256690]: 2025-12-06 10:18:40.351 2 INFO neutron.agent.securitygroups_rpc [None req-e4bf1752-f0a3-4484-84a7-e670337a989c b9f4d254b45d482eab7cfb178c231d9a 5f8e1c4c589749b99178bbc7c2bea3f0 - - default default] Security group member updated ['cd56abe4-204c-4363-ad64-0a6840260727']#033[00m Dec 6 05:18:40 localhost dnsmasq[323340]: exiting on receipt of SIGTERM Dec 6 05:18:40 localhost podman[323505]: 2025-12-06 10:18:40.571088525 +0000 UTC m=+0.075872022 container kill 27452c8dc971f1931cc48233d860a5a4dae6b3dc752ed26dd6692132ab8623c4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS) Dec 6 05:18:40 localhost systemd[1]: libpod-27452c8dc971f1931cc48233d860a5a4dae6b3dc752ed26dd6692132ab8623c4.scope: Deactivated successfully. 
Dec 6 05:18:40 localhost nova_compute[282193]: 2025-12-06 10:18:40.635 282197 DEBUG nova.network.neutron [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Updating instance_info_cache with network_info: [{"id": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "address": "fa:16:3e:64:77:f3", "network": {"id": "652b6bdc-40ce-45b7-8aa5-3bca79987993", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.162", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "3d603431c0bb4967bafc7a0aa6108bfe", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86fc0b7a-fb", "ovs_interfaceid": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 6 05:18:40 localhost podman[323522]: 2025-12-06 10:18:40.646724399 +0000 UTC m=+0.060525956 container died 27452c8dc971f1931cc48233d860a5a4dae6b3dc752ed26dd6692132ab8623c4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, 
org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Dec 6 05:18:40 localhost nova_compute[282193]: 2025-12-06 10:18:40.649 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Releasing lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 6 05:18:40 localhost nova_compute[282193]: 2025-12-06 10:18:40.650 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 6 05:18:40 localhost nova_compute[282193]: 2025-12-06 10:18:40.650 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:18:40 localhost nova_compute[282193]: 2025-12-06 10:18:40.650 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:18:40 localhost nova_compute[282193]: 2025-12-06 10:18:40.651 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:18:40 localhost nova_compute[282193]: 2025-12-06 10:18:40.651 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 6 05:18:40 localhost podman[323522]: 2025-12-06 10:18:40.685524357 +0000 UTC m=+0.099325914 container cleanup 27452c8dc971f1931cc48233d860a5a4dae6b3dc752ed26dd6692132ab8623c4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true) Dec 6 05:18:40 localhost systemd[1]: libpod-conmon-27452c8dc971f1931cc48233d860a5a4dae6b3dc752ed26dd6692132ab8623c4.scope: Deactivated successfully. Dec 6 05:18:40 localhost podman[323524]: 2025-12-06 10:18:40.74000835 +0000 UTC m=+0.144425192 container remove 27452c8dc971f1931cc48233d860a5a4dae6b3dc752ed26dd6692132ab8623c4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Dec 6 05:18:40 localhost nova_compute[282193]: 2025-12-06 10:18:40.754 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:40 localhost ovn_controller[154851]: 2025-12-06T10:18:40Z|00313|binding|INFO|Releasing lport 2b709350-c211-4254-8281-b64bea0c6f41 from this chassis (sb_readonly=0) Dec 6 05:18:40 localhost 
ovn_controller[154851]: 2025-12-06T10:18:40Z|00314|binding|INFO|Setting lport 2b709350-c211-4254-8281-b64bea0c6f41 down in Southbound Dec 6 05:18:40 localhost kernel: device tap2b709350-c2 left promiscuous mode Dec 6 05:18:40 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:40.763 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fef5:2012/64', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '34a17eee71de4bac8b71972a4b7b506c', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82d32fcc-fa15-458b-9d3c-0c87258ef71d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=2b709350-c211-4254-8281-b64bea0c6f41) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:18:40 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:40.765 160509 INFO neutron.agent.ovn.metadata.agent [-] Port 2b709350-c211-4254-8281-b64bea0c6f41 in datapath 43883dce-1590-48c4-987c-a21b63b82a1c unbound from our chassis#033[00m Dec 6 05:18:40 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:40.767 
160509 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 43883dce-1590-48c4-987c-a21b63b82a1c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 6 05:18:40 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:40.768 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[3cfa9158-5d43-4ba7-bc04-feb4bbd06c17]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:18:40 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e155 e155: 6 total, 6 up, 6 in Dec 6 05:18:40 localhost nova_compute[282193]: 2025-12-06 10:18:40.779 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:41 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:18:41.070 263652 INFO neutron.agent.dhcp.agent [None req-9245f6cc-a833-4e02-b7b5-df057860c471 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:18:41 localhost podman[323582]: Dec 6 05:18:41 localhost podman[323582]: 2025-12-06 10:18:41.164858815 +0000 UTC m=+0.068805407 container create b5a7ec053a2387ad1aeced549328179848c9c55d34e92e143426da0a9e07bdb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5d90c1d5-74b2-4b5c-9bf8-25a818641550, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125) Dec 6 05:18:41 localhost nova_compute[282193]: 2025-12-06 10:18:41.181 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task 
ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:18:41 localhost systemd[1]: Started libpod-conmon-b5a7ec053a2387ad1aeced549328179848c9c55d34e92e143426da0a9e07bdb0.scope. Dec 6 05:18:41 localhost systemd[1]: Started libcrun container. Dec 6 05:18:41 localhost systemd[1]: var-lib-containers-storage-overlay-8b7531a1e7d0325b8ef534d223b499f4804e3bafec7ac5d3faa5dc377932e096-merged.mount: Deactivated successfully. Dec 6 05:18:41 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-27452c8dc971f1931cc48233d860a5a4dae6b3dc752ed26dd6692132ab8623c4-userdata-shm.mount: Deactivated successfully. Dec 6 05:18:41 localhost systemd[1]: run-netns-qdhcp\x2d43883dce\x2d1590\x2d48c4\x2d987c\x2da21b63b82a1c.mount: Deactivated successfully. Dec 6 05:18:41 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e45d66b660b9cd6a8b001d4fbf1754f68fedf39a7ac1ddac57f6571d258512a4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 6 05:18:41 localhost podman[323582]: 2025-12-06 10:18:41.129365939 +0000 UTC m=+0.033312581 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 6 05:18:41 localhost podman[323582]: 2025-12-06 10:18:41.234985282 +0000 UTC m=+0.138931904 container init b5a7ec053a2387ad1aeced549328179848c9c55d34e92e143426da0a9e07bdb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5d90c1d5-74b2-4b5c-9bf8-25a818641550, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:18:41 localhost podman[323582]: 2025-12-06 10:18:41.245155731 +0000 
UTC m=+0.149102353 container start b5a7ec053a2387ad1aeced549328179848c9c55d34e92e143426da0a9e07bdb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5d90c1d5-74b2-4b5c-9bf8-25a818641550, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Dec 6 05:18:41 localhost dnsmasq[323601]: started, version 2.85 cachesize 150 Dec 6 05:18:41 localhost dnsmasq[323601]: DNS service limited to local subnets Dec 6 05:18:41 localhost dnsmasq[323601]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 6 05:18:41 localhost dnsmasq[323601]: warning: no upstream servers configured Dec 6 05:18:41 localhost dnsmasq-dhcp[323601]: DHCPv6, static leases only on 2001:db8::, lease time 1d Dec 6 05:18:41 localhost dnsmasq[323601]: read /var/lib/neutron/dhcp/5d90c1d5-74b2-4b5c-9bf8-25a818641550/addn_hosts - 0 addresses Dec 6 05:18:41 localhost dnsmasq-dhcp[323601]: read /var/lib/neutron/dhcp/5d90c1d5-74b2-4b5c-9bf8-25a818641550/host Dec 6 05:18:41 localhost dnsmasq-dhcp[323601]: read /var/lib/neutron/dhcp/5d90c1d5-74b2-4b5c-9bf8-25a818641550/opts Dec 6 05:18:41 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:18:41.302 263652 INFO neutron.agent.dhcp.agent [None req-69fc1829-97fc-48f8-9b00-6782cc4cdde0 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:18:39Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], 
fixed_ips=[], id=8dd2142c-4669-4128-a7be-8660c8e4419c, ip_allocation=immediate, mac_address=fa:16:3e:f2:97:89, name=tempest-AllowedAddressPairIpV6TestJSON-1440071982, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:18:37Z, description=, dns_domain=, id=5d90c1d5-74b2-4b5c-9bf8-25a818641550, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-AllowedAddressPairIpV6TestJSON-test-network-1904552762, port_security_enabled=True, project_id=2b1d664fab0f4b7f87439c153244cdc1, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=27994, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1858, status=ACTIVE, subnets=['cff46a29-a5f7-45a0-9023-17533db086b9'], tags=[], tenant_id=2b1d664fab0f4b7f87439c153244cdc1, updated_at=2025-12-06T10:18:38Z, vlan_transparent=None, network_id=5d90c1d5-74b2-4b5c-9bf8-25a818641550, port_security_enabled=True, project_id=2b1d664fab0f4b7f87439c153244cdc1, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['ec3ea7b0-8bb2-49ca-ac1e-7fc51bb6b4e0'], standard_attr_id=1871, status=DOWN, tags=[], tenant_id=2b1d664fab0f4b7f87439c153244cdc1, updated_at=2025-12-06T10:18:39Z on network 5d90c1d5-74b2-4b5c-9bf8-25a818641550#033[00m Dec 6 05:18:41 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:18:41.398 263652 INFO neutron.agent.dhcp.agent [None req-0b466d18-97be-4edf-aa1c-6b63c631ffde - - - - - -] DHCP configuration for ports {'b48170d1-76fd-43fa-87cb-3654efb179b3'} is completed#033[00m Dec 6 05:18:41 localhost dnsmasq[323601]: read /var/lib/neutron/dhcp/5d90c1d5-74b2-4b5c-9bf8-25a818641550/addn_hosts - 1 addresses Dec 6 05:18:41 localhost podman[323620]: 2025-12-06 10:18:41.469880857 +0000 UTC m=+0.049266414 container kill b5a7ec053a2387ad1aeced549328179848c9c55d34e92e143426da0a9e07bdb0 
(image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5d90c1d5-74b2-4b5c-9bf8-25a818641550, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true) Dec 6 05:18:41 localhost dnsmasq-dhcp[323601]: read /var/lib/neutron/dhcp/5d90c1d5-74b2-4b5c-9bf8-25a818641550/host Dec 6 05:18:41 localhost dnsmasq-dhcp[323601]: read /var/lib/neutron/dhcp/5d90c1d5-74b2-4b5c-9bf8-25a818641550/opts Dec 6 05:18:41 localhost neutron_sriov_agent[256690]: 2025-12-06 10:18:41.533 2 INFO neutron.agent.securitygroups_rpc [None req-652b9b12-ec43-4a29-b268-053f1f58f2a3 9b1422e7ba894ba7b8e14df8e50e50d0 2b1d664fab0f4b7f87439c153244cdc1 - - default default] Security group member updated ['ec3ea7b0-8bb2-49ca-ac1e-7fc51bb6b4e0']#033[00m Dec 6 05:18:41 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:18:41.661 263652 INFO neutron.agent.dhcp.agent [None req-69fc1829-97fc-48f8-9b00-6782cc4cdde0 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:18:41Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=d4d6e3c8-5f61-4270-981b-d1cb751d801d, ip_allocation=immediate, mac_address=fa:16:3e:36:19:e3, name=tempest-AllowedAddressPairIpV6TestJSON-1440647520, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:18:37Z, description=, dns_domain=, id=5d90c1d5-74b2-4b5c-9bf8-25a818641550, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, 
name=tempest-AllowedAddressPairIpV6TestJSON-test-network-1904552762, port_security_enabled=True, project_id=2b1d664fab0f4b7f87439c153244cdc1, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=27994, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1858, status=ACTIVE, subnets=['cff46a29-a5f7-45a0-9023-17533db086b9'], tags=[], tenant_id=2b1d664fab0f4b7f87439c153244cdc1, updated_at=2025-12-06T10:18:38Z, vlan_transparent=None, network_id=5d90c1d5-74b2-4b5c-9bf8-25a818641550, port_security_enabled=True, project_id=2b1d664fab0f4b7f87439c153244cdc1, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['ec3ea7b0-8bb2-49ca-ac1e-7fc51bb6b4e0'], standard_attr_id=1881, status=DOWN, tags=[], tenant_id=2b1d664fab0f4b7f87439c153244cdc1, updated_at=2025-12-06T10:18:41Z on network 5d90c1d5-74b2-4b5c-9bf8-25a818641550#033[00m Dec 6 05:18:41 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:18:41.746 263652 INFO neutron.agent.dhcp.agent [None req-21430afc-6124-45c3-bb3a-71c4ca114888 - - - - - -] DHCP configuration for ports {'8dd2142c-4669-4128-a7be-8660c8e4419c'} is completed#033[00m Dec 6 05:18:41 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e156 e156: 6 total, 6 up, 6 in Dec 6 05:18:41 localhost dnsmasq[323601]: read /var/lib/neutron/dhcp/5d90c1d5-74b2-4b5c-9bf8-25a818641550/addn_hosts - 2 addresses Dec 6 05:18:41 localhost dnsmasq-dhcp[323601]: read /var/lib/neutron/dhcp/5d90c1d5-74b2-4b5c-9bf8-25a818641550/host Dec 6 05:18:41 localhost dnsmasq-dhcp[323601]: read /var/lib/neutron/dhcp/5d90c1d5-74b2-4b5c-9bf8-25a818641550/opts Dec 6 05:18:41 localhost podman[323658]: 2025-12-06 10:18:41.892383723 +0000 UTC m=+0.076123920 container kill b5a7ec053a2387ad1aeced549328179848c9c55d34e92e143426da0a9e07bdb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, 
name=neutron-dnsmasq-qdhcp-5d90c1d5-74b2-4b5c-9bf8-25a818641550, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:18:42 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:18:42.023 263652 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:18:41Z, description=, device_id=a193f2ef-cf4d-4f20-be3b-f48f023f218a, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=c2313164-caaf-42ef-9d7f-0859c1ed319e, ip_allocation=immediate, mac_address=fa:16:3e:62:bd:78, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T08:43:55Z, description=, dns_domain=, id=8e238f59-5792-4ff4-95af-f993c8e9e14f, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=3d603431c0bb4967bafc7a0aa6108bfe, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['be7ff830-a3e2-49cd-a05f-63bc528d6d1c'], tags=[], tenant_id=3d603431c0bb4967bafc7a0aa6108bfe, updated_at=2025-12-06T08:44:01Z, vlan_transparent=None, network_id=8e238f59-5792-4ff4-95af-f993c8e9e14f, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1888, status=DOWN, tags=[], 
tenant_id=, updated_at=2025-12-06T10:18:41Z on network 8e238f59-5792-4ff4-95af-f993c8e9e14f#033[00m Dec 6 05:18:42 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:18:42.209 263652 INFO neutron.agent.dhcp.agent [None req-bc5fd4a8-f087-4927-9652-e510a2ff81ad - - - - - -] DHCP configuration for ports {'d4d6e3c8-5f61-4270-981b-d1cb751d801d'} is completed#033[00m Dec 6 05:18:42 localhost dnsmasq[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 2 addresses Dec 6 05:18:42 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:18:42 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:18:42 localhost podman[323697]: 2025-12-06 10:18:42.259779726 +0000 UTC m=+0.068098016 container kill dd70518caed84fde347015325d6b35481123abc4a39df99f8bc9422c2468322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:18:42 localhost neutron_sriov_agent[256690]: 2025-12-06 10:18:42.322 2 INFO neutron.agent.securitygroups_rpc [None req-b75aa7a3-5af1-4cd3-b1b1-cfd423d7e2ab 9b1422e7ba894ba7b8e14df8e50e50d0 2b1d664fab0f4b7f87439c153244cdc1 - - default default] Security group member updated ['ec3ea7b0-8bb2-49ca-ac1e-7fc51bb6b4e0']#033[00m Dec 6 05:18:42 localhost podman[323737]: 2025-12-06 10:18:42.586690902 +0000 UTC m=+0.074038307 container kill b5a7ec053a2387ad1aeced549328179848c9c55d34e92e143426da0a9e07bdb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, 
name=neutron-dnsmasq-qdhcp-5d90c1d5-74b2-4b5c-9bf8-25a818641550, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 6 05:18:42 localhost neutron_sriov_agent[256690]: 2025-12-06 10:18:42.587 2 INFO neutron.agent.securitygroups_rpc [None req-f4287026-ab49-4456-bd2b-fbcf52c0630e a6a9256bca1441629c18003b71ba1c6f 34a17eee71de4bac8b71972a4b7b506c - - default default] Security group member updated ['d618a097-5989-47aa-9263-1c8a114ad269']#033[00m Dec 6 05:18:42 localhost dnsmasq[323601]: read /var/lib/neutron/dhcp/5d90c1d5-74b2-4b5c-9bf8-25a818641550/addn_hosts - 1 addresses Dec 6 05:18:42 localhost dnsmasq-dhcp[323601]: read /var/lib/neutron/dhcp/5d90c1d5-74b2-4b5c-9bf8-25a818641550/host Dec 6 05:18:42 localhost dnsmasq-dhcp[323601]: read /var/lib/neutron/dhcp/5d90c1d5-74b2-4b5c-9bf8-25a818641550/opts Dec 6 05:18:42 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:18:42.600 263652 INFO neutron.agent.dhcp.agent [None req-e9c9e6ea-11c3-461e-9133-d3211ce5e3e2 - - - - - -] DHCP configuration for ports {'c2313164-caaf-42ef-9d7f-0859c1ed319e'} is completed#033[00m Dec 6 05:18:42 localhost nova_compute[282193]: 2025-12-06 10:18:42.963 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:43 localhost neutron_sriov_agent[256690]: 2025-12-06 10:18:43.023 2 INFO neutron.agent.securitygroups_rpc [None req-55643603-9572-45b2-ae71-f445e3294506 9b1422e7ba894ba7b8e14df8e50e50d0 2b1d664fab0f4b7f87439c153244cdc1 - - default default] Security group member updated ['ec3ea7b0-8bb2-49ca-ac1e-7fc51bb6b4e0']#033[00m Dec 6 05:18:43 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:18:43.073 263652 INFO 
neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:18:42Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=13eda6cd-ab50-4961-a56d-6f8d6f8094f4, ip_allocation=immediate, mac_address=fa:16:3e:c8:5e:55, name=tempest-AllowedAddressPairIpV6TestJSON-484857425, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:18:37Z, description=, dns_domain=, id=5d90c1d5-74b2-4b5c-9bf8-25a818641550, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-AllowedAddressPairIpV6TestJSON-test-network-1904552762, port_security_enabled=True, project_id=2b1d664fab0f4b7f87439c153244cdc1, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=27994, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1858, status=ACTIVE, subnets=['cff46a29-a5f7-45a0-9023-17533db086b9'], tags=[], tenant_id=2b1d664fab0f4b7f87439c153244cdc1, updated_at=2025-12-06T10:18:38Z, vlan_transparent=None, network_id=5d90c1d5-74b2-4b5c-9bf8-25a818641550, port_security_enabled=True, project_id=2b1d664fab0f4b7f87439c153244cdc1, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['ec3ea7b0-8bb2-49ca-ac1e-7fc51bb6b4e0'], standard_attr_id=1897, status=DOWN, tags=[], tenant_id=2b1d664fab0f4b7f87439c153244cdc1, updated_at=2025-12-06T10:18:42Z on network 5d90c1d5-74b2-4b5c-9bf8-25a818641550#033[00m Dec 6 05:18:43 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:18:43.113 263652 INFO neutron.agent.linux.ip_lib [None req-80824c13-7455-4cc5-9ad8-0a2dcaae04fd - - - - - -] Device tape81dd3d5-52 cannot be used as it has no MAC address#033[00m Dec 6 
05:18:43 localhost nova_compute[282193]: 2025-12-06 10:18:43.148 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:43 localhost kernel: device tape81dd3d5-52 entered promiscuous mode Dec 6 05:18:43 localhost NetworkManager[5973]: [1765016323.1592] manager: (tape81dd3d5-52): new Generic device (/org/freedesktop/NetworkManager/Devices/53) Dec 6 05:18:43 localhost nova_compute[282193]: 2025-12-06 10:18:43.163 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:43 localhost ovn_controller[154851]: 2025-12-06T10:18:43Z|00315|binding|INFO|Claiming lport e81dd3d5-5246-4ac5-a2a9-4f11cf2d2ff6 for this chassis. Dec 6 05:18:43 localhost ovn_controller[154851]: 2025-12-06T10:18:43Z|00316|binding|INFO|e81dd3d5-5246-4ac5-a2a9-4f11cf2d2ff6: Claiming unknown Dec 6 05:18:43 localhost systemd-udevd[323783]: Network interface NamePolicy= disabled on kernel command line. 
Dec 6 05:18:43 localhost ovn_controller[154851]: 2025-12-06T10:18:43Z|00317|binding|INFO|Setting lport e81dd3d5-5246-4ac5-a2a9-4f11cf2d2ff6 up in Southbound Dec 6 05:18:43 localhost ovn_controller[154851]: 2025-12-06T10:18:43Z|00318|binding|INFO|Setting lport e81dd3d5-5246-4ac5-a2a9-4f11cf2d2ff6 ovn-installed in OVS Dec 6 05:18:43 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:43.177 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fee6:65a3/64', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '34a17eee71de4bac8b71972a4b7b506c', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82d32fcc-fa15-458b-9d3c-0c87258ef71d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=e81dd3d5-5246-4ac5-a2a9-4f11cf2d2ff6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:18:43 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:43.179 160509 INFO neutron.agent.ovn.metadata.agent [-] Port e81dd3d5-5246-4ac5-a2a9-4f11cf2d2ff6 in datapath 43883dce-1590-48c4-987c-a21b63b82a1c bound to our 
chassis#033[00m Dec 6 05:18:43 localhost nova_compute[282193]: 2025-12-06 10:18:43.181 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:43 localhost nova_compute[282193]: 2025-12-06 10:18:43.183 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:18:43 localhost nova_compute[282193]: 2025-12-06 10:18:43.184 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:43 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:43.185 160509 DEBUG neutron.agent.ovn.metadata.agent [-] Port 94117367-0e38-4e81-8f6f-3ab8f4ef7f72 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Dec 6 05:18:43 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:43.186 160509 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 43883dce-1590-48c4-987c-a21b63b82a1c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 6 05:18:43 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:43.187 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[ff25a746-24bd-480f-b56c-f4826fcbc3d5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:18:43 localhost nova_compute[282193]: 2025-12-06 10:18:43.215 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:43 localhost nova_compute[282193]: 2025-12-06 10:18:43.262 282197 
DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:43 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:18:43 localhost nova_compute[282193]: 2025-12-06 10:18:43.294 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:43 localhost dnsmasq[323601]: read /var/lib/neutron/dhcp/5d90c1d5-74b2-4b5c-9bf8-25a818641550/addn_hosts - 2 addresses Dec 6 05:18:43 localhost podman[323795]: 2025-12-06 10:18:43.31888266 +0000 UTC m=+0.072978894 container kill b5a7ec053a2387ad1aeced549328179848c9c55d34e92e143426da0a9e07bdb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5d90c1d5-74b2-4b5c-9bf8-25a818641550, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Dec 6 05:18:43 localhost dnsmasq-dhcp[323601]: read /var/lib/neutron/dhcp/5d90c1d5-74b2-4b5c-9bf8-25a818641550/host Dec 6 05:18:43 localhost dnsmasq-dhcp[323601]: read /var/lib/neutron/dhcp/5d90c1d5-74b2-4b5c-9bf8-25a818641550/opts Dec 6 05:18:43 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:18:43.707 263652 INFO neutron.agent.dhcp.agent [None req-8432ed46-288b-4870-95ba-d6a625fdb94d - - - - - -] DHCP configuration for ports {'13eda6cd-ab50-4961-a56d-6f8d6f8094f4'} is completed#033[00m Dec 6 05:18:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6. 
Dec 6 05:18:43 localhost podman[323842]: 2025-12-06 10:18:43.965306578 +0000 UTC m=+0.093389564 container health_status 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0) Dec 6 05:18:43 localhost podman[323842]: 2025-12-06 10:18:43.980149338 +0000 UTC m=+0.108232344 container exec_died 
44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:18:43 localhost systemd[1]: 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6.service: Deactivated successfully. 
Dec 6 05:18:44 localhost neutron_sriov_agent[256690]: 2025-12-06 10:18:44.059 2 INFO neutron.agent.securitygroups_rpc [None req-255af1be-4d8f-48ce-b409-88074fe3f28a a6a9256bca1441629c18003b71ba1c6f 34a17eee71de4bac8b71972a4b7b506c - - default default] Security group member updated ['d618a097-5989-47aa-9263-1c8a114ad269']#033[00m Dec 6 05:18:44 localhost nova_compute[282193]: 2025-12-06 10:18:44.137 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:44 localhost podman[323886]: Dec 6 05:18:44 localhost podman[323886]: 2025-12-06 10:18:44.323522993 +0000 UTC m=+0.102481170 container create de7254f971a2477af23af1e54a0d0065537b2127e25260365f151fd088e6b93b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true) Dec 6 05:18:44 localhost systemd[1]: Started libpod-conmon-de7254f971a2477af23af1e54a0d0065537b2127e25260365f151fd088e6b93b.scope. Dec 6 05:18:44 localhost podman[323886]: 2025-12-06 10:18:44.276506587 +0000 UTC m=+0.055464764 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 6 05:18:44 localhost systemd[1]: Started libcrun container. 
Dec 6 05:18:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6081e04002876701154dea738aa910ba591c2cf9e99dc2fa391f57536d1f8d4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 6 05:18:44 localhost podman[323886]: 2025-12-06 10:18:44.416592846 +0000 UTC m=+0.195551063 container init de7254f971a2477af23af1e54a0d0065537b2127e25260365f151fd088e6b93b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0) Dec 6 05:18:44 localhost podman[323886]: 2025-12-06 10:18:44.433374205 +0000 UTC m=+0.212332382 container start de7254f971a2477af23af1e54a0d0065537b2127e25260365f151fd088e6b93b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3) Dec 6 05:18:44 localhost dnsmasq[323903]: started, version 2.85 cachesize 150 Dec 6 05:18:44 localhost dnsmasq[323903]: DNS service limited to local subnets Dec 6 05:18:44 localhost dnsmasq[323903]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 6 05:18:44 localhost dnsmasq[323903]: warning: no upstream servers configured Dec 
6 05:18:44 localhost dnsmasq[323903]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/addn_hosts - 0 addresses Dec 6 05:18:44 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:18:44.503 263652 INFO neutron.agent.dhcp.agent [None req-80824c13-7455-4cc5-9ad8-0a2dcaae04fd - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:18:42Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=fc6df7b3-ec8e-4358-8cef-d1710c9514a1, ip_allocation=immediate, mac_address=fa:16:3e:6e:ef:c6, name=tempest-NetworksTestDHCPv6-461980463, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:17:28Z, description=, dns_domain=, id=43883dce-1590-48c4-987c-a21b63b82a1c, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-1975538139, port_security_enabled=True, project_id=34a17eee71de4bac8b71972a4b7b506c, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=42818, qos_policy_id=None, revision_number=42, router:external=False, shared=False, standard_attr_id=1415, status=ACTIVE, subnets=['4ba92dd9-8cae-472b-acec-9fa2369c51eb'], tags=[], tenant_id=34a17eee71de4bac8b71972a4b7b506c, updated_at=2025-12-06T10:18:41Z, vlan_transparent=None, network_id=43883dce-1590-48c4-987c-a21b63b82a1c, port_security_enabled=True, project_id=34a17eee71de4bac8b71972a4b7b506c, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['d618a097-5989-47aa-9263-1c8a114ad269'], standard_attr_id=1890, status=DOWN, tags=[], tenant_id=34a17eee71de4bac8b71972a4b7b506c, updated_at=2025-12-06T10:18:42Z on network 43883dce-1590-48c4-987c-a21b63b82a1c#033[00m Dec 6 
05:18:44 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:18:44.617 263652 INFO neutron.agent.dhcp.agent [None req-e8cfefeb-67b3-4d91-a17e-b985596e9ec1 - - - - - -] DHCP configuration for ports {'687d7abb-e6aa-4047-aa26-552c962fcc91'} is completed#033[00m Dec 6 05:18:44 localhost dnsmasq[323903]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/addn_hosts - 1 addresses Dec 6 05:18:44 localhost podman[323920]: 2025-12-06 10:18:44.72334556 +0000 UTC m=+0.065727935 container kill de7254f971a2477af23af1e54a0d0065537b2127e25260365f151fd088e6b93b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2) Dec 6 05:18:44 localhost neutron_sriov_agent[256690]: 2025-12-06 10:18:44.760 2 INFO neutron.agent.securitygroups_rpc [None req-f9323177-d4e4-4dce-bd8f-2cc985b7b1dc 9b1422e7ba894ba7b8e14df8e50e50d0 2b1d664fab0f4b7f87439c153244cdc1 - - default default] Security group member updated ['ec3ea7b0-8bb2-49ca-ac1e-7fc51bb6b4e0']#033[00m Dec 6 05:18:45 localhost dnsmasq[323601]: read /var/lib/neutron/dhcp/5d90c1d5-74b2-4b5c-9bf8-25a818641550/addn_hosts - 1 addresses Dec 6 05:18:45 localhost podman[323959]: 2025-12-06 10:18:45.022508684 +0000 UTC m=+0.061151116 container kill b5a7ec053a2387ad1aeced549328179848c9c55d34e92e143426da0a9e07bdb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5d90c1d5-74b2-4b5c-9bf8-25a818641550, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, 
org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:18:45 localhost dnsmasq-dhcp[323601]: read /var/lib/neutron/dhcp/5d90c1d5-74b2-4b5c-9bf8-25a818641550/host Dec 6 05:18:45 localhost dnsmasq-dhcp[323601]: read /var/lib/neutron/dhcp/5d90c1d5-74b2-4b5c-9bf8-25a818641550/opts Dec 6 05:18:45 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:18:45.062 263652 INFO neutron.agent.dhcp.agent [None req-d3d5c886-4c06-4e86-9ecb-f0a4eee6bfa8 - - - - - -] DHCP configuration for ports {'fc6df7b3-ec8e-4358-8cef-d1710c9514a1'} is completed#033[00m Dec 6 05:18:45 localhost nova_compute[282193]: 2025-12-06 10:18:45.181 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:18:45 localhost nova_compute[282193]: 2025-12-06 10:18:45.182 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m Dec 6 05:18:45 localhost systemd[1]: tmp-crun.rmsrpE.mount: Deactivated successfully. Dec 6 05:18:45 localhost neutron_sriov_agent[256690]: 2025-12-06 10:18:45.342 2 INFO neutron.agent.securitygroups_rpc [None req-ec83bf10-c909-4c2a-a1a4-0827521eece8 9b1422e7ba894ba7b8e14df8e50e50d0 2b1d664fab0f4b7f87439c153244cdc1 - - default default] Security group member updated ['ec3ea7b0-8bb2-49ca-ac1e-7fc51bb6b4e0']#033[00m Dec 6 05:18:45 localhost systemd[1]: tmp-crun.ubZWop.mount: Deactivated successfully. 
Dec 6 05:18:45 localhost dnsmasq[323903]: exiting on receipt of SIGTERM Dec 6 05:18:45 localhost systemd[1]: libpod-de7254f971a2477af23af1e54a0d0065537b2127e25260365f151fd088e6b93b.scope: Deactivated successfully. Dec 6 05:18:45 localhost podman[323997]: 2025-12-06 10:18:45.354536965 +0000 UTC m=+0.078802211 container kill de7254f971a2477af23af1e54a0d0065537b2127e25260365f151fd088e6b93b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125) Dec 6 05:18:45 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:18:45.362 263652 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:18:45Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=cf17c12b-b261-4ebe-bdee-84820cc74501, ip_allocation=immediate, mac_address=fa:16:3e:50:03:6f, name=tempest-AllowedAddressPairIpV6TestJSON-192075535, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:18:37Z, description=, dns_domain=, id=5d90c1d5-74b2-4b5c-9bf8-25a818641550, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-AllowedAddressPairIpV6TestJSON-test-network-1904552762, port_security_enabled=True, project_id=2b1d664fab0f4b7f87439c153244cdc1, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=27994, 
qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1858, status=ACTIVE, subnets=['cff46a29-a5f7-45a0-9023-17533db086b9'], tags=[], tenant_id=2b1d664fab0f4b7f87439c153244cdc1, updated_at=2025-12-06T10:18:38Z, vlan_transparent=None, network_id=5d90c1d5-74b2-4b5c-9bf8-25a818641550, port_security_enabled=True, project_id=2b1d664fab0f4b7f87439c153244cdc1, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['ec3ea7b0-8bb2-49ca-ac1e-7fc51bb6b4e0'], standard_attr_id=1913, status=DOWN, tags=[], tenant_id=2b1d664fab0f4b7f87439c153244cdc1, updated_at=2025-12-06T10:18:45Z on network 5d90c1d5-74b2-4b5c-9bf8-25a818641550#033[00m Dec 6 05:18:45 localhost podman[324009]: 2025-12-06 10:18:45.424104366 +0000 UTC m=+0.054542236 container died de7254f971a2477af23af1e54a0d0065537b2127e25260365f151fd088e6b93b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Dec 6 05:18:45 localhost podman[324009]: 2025-12-06 10:18:45.562472462 +0000 UTC m=+0.192910292 container cleanup de7254f971a2477af23af1e54a0d0065537b2127e25260365f151fd088e6b93b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, 
tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:18:45 localhost systemd[1]: libpod-conmon-de7254f971a2477af23af1e54a0d0065537b2127e25260365f151fd088e6b93b.scope: Deactivated successfully. Dec 6 05:18:45 localhost podman[324011]: 2025-12-06 10:18:45.587076969 +0000 UTC m=+0.208415904 container remove de7254f971a2477af23af1e54a0d0065537b2127e25260365f151fd088e6b93b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125) Dec 6 05:18:45 localhost nova_compute[282193]: 2025-12-06 10:18:45.606 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:45 localhost kernel: device tape81dd3d5-52 left promiscuous mode Dec 6 05:18:45 localhost ovn_controller[154851]: 2025-12-06T10:18:45Z|00319|binding|INFO|Releasing lport e81dd3d5-5246-4ac5-a2a9-4f11cf2d2ff6 from this chassis (sb_readonly=0) Dec 6 05:18:45 localhost ovn_controller[154851]: 2025-12-06T10:18:45Z|00320|binding|INFO|Setting lport e81dd3d5-5246-4ac5-a2a9-4f11cf2d2ff6 down in Southbound Dec 6 05:18:45 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:45.619 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], 
external_ids={'neutron:cidrs': '', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '34a17eee71de4bac8b71972a4b7b506c', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005548789.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82d32fcc-fa15-458b-9d3c-0c87258ef71d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=e81dd3d5-5246-4ac5-a2a9-4f11cf2d2ff6) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:18:45 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:45.621 160509 INFO neutron.agent.ovn.metadata.agent [-] Port e81dd3d5-5246-4ac5-a2a9-4f11cf2d2ff6 in datapath 43883dce-1590-48c4-987c-a21b63b82a1c unbound from our chassis#033[00m Dec 6 05:18:45 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:45.624 160509 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 43883dce-1590-48c4-987c-a21b63b82a1c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 6 05:18:45 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:45.625 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[88537893-366a-4bd5-9d1b-0ef0a580c20b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:18:45 localhost nova_compute[282193]: 2025-12-06 10:18:45.631 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:45 localhost dnsmasq[323601]: read /var/lib/neutron/dhcp/5d90c1d5-74b2-4b5c-9bf8-25a818641550/addn_hosts - 2 addresses Dec 6 05:18:45 localhost dnsmasq-dhcp[323601]: read /var/lib/neutron/dhcp/5d90c1d5-74b2-4b5c-9bf8-25a818641550/host Dec 6 05:18:45 localhost dnsmasq-dhcp[323601]: read /var/lib/neutron/dhcp/5d90c1d5-74b2-4b5c-9bf8-25a818641550/opts Dec 6 05:18:45 localhost podman[324054]: 2025-12-06 10:18:45.656895456 +0000 UTC m=+0.122933830 container kill b5a7ec053a2387ad1aeced549328179848c9c55d34e92e143426da0a9e07bdb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5d90c1d5-74b2-4b5c-9bf8-25a818641550, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:18:45 localhost dnsmasq[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 1 addresses Dec 6 05:18:45 localhost podman[324084]: 2025-12-06 10:18:45.731038625 +0000 UTC m=+0.065567500 container kill dd70518caed84fde347015325d6b35481123abc4a39df99f8bc9422c2468322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:18:45 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 
05:18:45 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:18:45 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:18:45.935 263652 INFO neutron.agent.dhcp.agent [None req-ce87d88a-68dd-48b6-823b-2e5c390614cb - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:18:45 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:18:45.937 263652 INFO neutron.agent.dhcp.agent [None req-ce87d88a-68dd-48b6-823b-2e5c390614cb - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:18:46 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:18:46.017 263652 INFO neutron.agent.dhcp.agent [None req-33f79474-9ca9-4963-91c5-f8c6e4b145a6 - - - - - -] DHCP configuration for ports {'cf17c12b-b261-4ebe-bdee-84820cc74501'} is completed#033[00m Dec 6 05:18:46 localhost nova_compute[282193]: 2025-12-06 10:18:46.203 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:18:46 localhost systemd[1]: var-lib-containers-storage-overlay-d6081e04002876701154dea738aa910ba591c2cf9e99dc2fa391f57536d1f8d4-merged.mount: Deactivated successfully. Dec 6 05:18:46 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-de7254f971a2477af23af1e54a0d0065537b2127e25260365f151fd088e6b93b-userdata-shm.mount: Deactivated successfully. Dec 6 05:18:46 localhost systemd[1]: run-netns-qdhcp\x2d43883dce\x2d1590\x2d48c4\x2d987c\x2da21b63b82a1c.mount: Deactivated successfully. 
Dec 6 05:18:46 localhost neutron_sriov_agent[256690]: 2025-12-06 10:18:46.531 2 INFO neutron.agent.securitygroups_rpc [None req-28a20db1-a3bb-47de-ad39-5c494614c36d 9b1422e7ba894ba7b8e14df8e50e50d0 2b1d664fab0f4b7f87439c153244cdc1 - - default default] Security group member updated ['ec3ea7b0-8bb2-49ca-ac1e-7fc51bb6b4e0']#033[00m Dec 6 05:18:46 localhost openstack_network_exporter[243110]: ERROR 10:18:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:18:46 localhost openstack_network_exporter[243110]: ERROR 10:18:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:18:46 localhost openstack_network_exporter[243110]: ERROR 10:18:46 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 6 05:18:46 localhost openstack_network_exporter[243110]: ERROR 10:18:46 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 6 05:18:46 localhost openstack_network_exporter[243110]: Dec 6 05:18:46 localhost openstack_network_exporter[243110]: ERROR 10:18:46 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 6 05:18:46 localhost openstack_network_exporter[243110]: Dec 6 05:18:46 localhost dnsmasq[323601]: read /var/lib/neutron/dhcp/5d90c1d5-74b2-4b5c-9bf8-25a818641550/addn_hosts - 1 addresses Dec 6 05:18:46 localhost podman[324130]: 2025-12-06 10:18:46.825379148 +0000 UTC m=+0.067357734 container kill b5a7ec053a2387ad1aeced549328179848c9c55d34e92e143426da0a9e07bdb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5d90c1d5-74b2-4b5c-9bf8-25a818641550, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, 
tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:18:46 localhost dnsmasq-dhcp[323601]: read /var/lib/neutron/dhcp/5d90c1d5-74b2-4b5c-9bf8-25a818641550/host Dec 6 05:18:46 localhost dnsmasq-dhcp[323601]: read /var/lib/neutron/dhcp/5d90c1d5-74b2-4b5c-9bf8-25a818641550/opts Dec 6 05:18:46 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:18:46.903 263652 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:18:46Z, description=, device_id=29ab744a-d31c-4586-ae5e-41341709a166, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=b23af29b-8104-471f-94ba-80e710a74404, ip_allocation=immediate, mac_address=fa:16:3e:24:1e:64, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T08:43:55Z, description=, dns_domain=, id=8e238f59-5792-4ff4-95af-f993c8e9e14f, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=3d603431c0bb4967bafc7a0aa6108bfe, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['be7ff830-a3e2-49cd-a05f-63bc528d6d1c'], tags=[], tenant_id=3d603431c0bb4967bafc7a0aa6108bfe, updated_at=2025-12-06T08:44:01Z, vlan_transparent=None, network_id=8e238f59-5792-4ff4-95af-f993c8e9e14f, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1921, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-06T10:18:46Z 
on network 8e238f59-5792-4ff4-95af-f993c8e9e14f#033[00m Dec 6 05:18:47 localhost dnsmasq[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 2 addresses Dec 6 05:18:47 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:18:47 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:18:47 localhost podman[324166]: 2025-12-06 10:18:47.163489564 +0000 UTC m=+0.076140622 container kill dd70518caed84fde347015325d6b35481123abc4a39df99f8bc9422c2468322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:18:47 localhost systemd[1]: tmp-crun.PQejOh.mount: Deactivated successfully. 
Dec 6 05:18:47 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e157 e157: 6 total, 6 up, 6 in Dec 6 05:18:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:47.307 160509 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 05:18:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:47.308 160509 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 05:18:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:47.309 160509 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 05:18:47 localhost neutron_sriov_agent[256690]: 2025-12-06 10:18:47.313 2 INFO neutron.agent.securitygroups_rpc [None req-e6ce568b-b382-4ede-9132-8960f2608a77 9b1422e7ba894ba7b8e14df8e50e50d0 2b1d664fab0f4b7f87439c153244cdc1 - - default default] Security group member updated ['ec3ea7b0-8bb2-49ca-ac1e-7fc51bb6b4e0']#033[00m Dec 6 05:18:47 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:18:47.349 263652 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:18:46Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=14568f10-d8d5-4f60-9847-c06c6ffea82d, ip_allocation=immediate, mac_address=fa:16:3e:fc:46:f8, 
name=tempest-AllowedAddressPairIpV6TestJSON-1644300393, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:18:37Z, description=, dns_domain=, id=5d90c1d5-74b2-4b5c-9bf8-25a818641550, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-AllowedAddressPairIpV6TestJSON-test-network-1904552762, port_security_enabled=True, project_id=2b1d664fab0f4b7f87439c153244cdc1, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=27994, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1858, status=ACTIVE, subnets=['cff46a29-a5f7-45a0-9023-17533db086b9'], tags=[], tenant_id=2b1d664fab0f4b7f87439c153244cdc1, updated_at=2025-12-06T10:18:38Z, vlan_transparent=None, network_id=5d90c1d5-74b2-4b5c-9bf8-25a818641550, port_security_enabled=True, project_id=2b1d664fab0f4b7f87439c153244cdc1, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['ec3ea7b0-8bb2-49ca-ac1e-7fc51bb6b4e0'], standard_attr_id=1927, status=DOWN, tags=[], tenant_id=2b1d664fab0f4b7f87439c153244cdc1, updated_at=2025-12-06T10:18:46Z on network 5d90c1d5-74b2-4b5c-9bf8-25a818641550#033[00m Dec 6 05:18:47 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:18:47.499 263652 INFO neutron.agent.linux.ip_lib [None req-9de59c5c-4f9e-45fd-b701-32de4ffd1a2a - - - - - -] Device tapfca03880-79 cannot be used as it has no MAC address#033[00m Dec 6 05:18:47 localhost nova_compute[282193]: 2025-12-06 10:18:47.537 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:47 localhost kernel: device tapfca03880-79 entered promiscuous mode Dec 6 05:18:47 localhost nova_compute[282193]: 2025-12-06 10:18:47.545 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:47 localhost ovn_controller[154851]: 2025-12-06T10:18:47Z|00321|binding|INFO|Claiming lport fca03880-79ab-46f3-909b-a19baa4b2eea for this chassis. Dec 6 05:18:47 localhost ovn_controller[154851]: 2025-12-06T10:18:47Z|00322|binding|INFO|fca03880-79ab-46f3-909b-a19baa4b2eea: Claiming unknown Dec 6 05:18:47 localhost NetworkManager[5973]: [1765016327.5469] manager: (tapfca03880-79): new Generic device (/org/freedesktop/NetworkManager/Devices/54) Dec 6 05:18:47 localhost systemd-udevd[324224]: Network interface NamePolicy= disabled on kernel command line. Dec 6 05:18:47 localhost ovn_controller[154851]: 2025-12-06T10:18:47Z|00323|binding|INFO|Setting lport fca03880-79ab-46f3-909b-a19baa4b2eea ovn-installed in OVS Dec 6 05:18:47 localhost nova_compute[282193]: 2025-12-06 10:18:47.553 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:47 localhost ovn_controller[154851]: 2025-12-06T10:18:47Z|00324|binding|INFO|Setting lport fca03880-79ab-46f3-909b-a19baa4b2eea up in Southbound Dec 6 05:18:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:47.556 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe35:2078/64', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:port_capabilities': '', 
'neutron:port_name': '', 'neutron:project_id': '34a17eee71de4bac8b71972a4b7b506c', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82d32fcc-fa15-458b-9d3c-0c87258ef71d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=fca03880-79ab-46f3-909b-a19baa4b2eea) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:18:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:47.558 160509 INFO neutron.agent.ovn.metadata.agent [-] Port fca03880-79ab-46f3-909b-a19baa4b2eea in datapath 43883dce-1590-48c4-987c-a21b63b82a1c bound to our chassis#033[00m Dec 6 05:18:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:47.564 160509 DEBUG neutron.agent.ovn.metadata.agent [-] Port 05d71170-47d2-4bec-b410-aaa8f7634d28 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Dec 6 05:18:47 localhost nova_compute[282193]: 2025-12-06 10:18:47.567 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:47.565 160509 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 43883dce-1590-48c4-987c-a21b63b82a1c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 6 05:18:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:47.568 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[173466d8-9b6a-4a7a-a511-e22ca27fc8de]: (4, False) _call_back 
/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:18:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538. Dec 6 05:18:47 localhost podman[324210]: 2025-12-06 10:18:47.580841122 +0000 UTC m=+0.076186312 container kill b5a7ec053a2387ad1aeced549328179848c9c55d34e92e143426da0a9e07bdb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5d90c1d5-74b2-4b5c-9bf8-25a818641550, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true) Dec 6 05:18:47 localhost dnsmasq[323601]: read /var/lib/neutron/dhcp/5d90c1d5-74b2-4b5c-9bf8-25a818641550/addn_hosts - 2 addresses Dec 6 05:18:47 localhost dnsmasq-dhcp[323601]: read /var/lib/neutron/dhcp/5d90c1d5-74b2-4b5c-9bf8-25a818641550/host Dec 6 05:18:47 localhost dnsmasq-dhcp[323601]: read /var/lib/neutron/dhcp/5d90c1d5-74b2-4b5c-9bf8-25a818641550/opts Dec 6 05:18:47 localhost nova_compute[282193]: 2025-12-06 10:18:47.589 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:47 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:18:47.595 263652 INFO neutron.agent.dhcp.agent [None req-aa15b631-6ac2-4c35-8ca0-3b26e595ee64 - - - - - -] DHCP configuration for ports {'b23af29b-8104-471f-94ba-80e710a74404'} is completed#033[00m Dec 6 05:18:47 localhost nova_compute[282193]: 2025-12-06 10:18:47.637 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:47 localhost podman[324228]: 2025-12-06 10:18:47.674996258 +0000 UTC 
m=+0.086574617 container health_status d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Dec 6 05:18:47 localhost nova_compute[282193]: 2025-12-06 10:18:47.678 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:47 localhost neutron_sriov_agent[256690]: 2025-12-06 10:18:47.682 2 INFO neutron.agent.securitygroups_rpc [None req-7750bef0-0e9a-45d8-b031-72812daa7ba7 a6a9256bca1441629c18003b71ba1c6f 34a17eee71de4bac8b71972a4b7b506c - - default default] Security group member updated ['d618a097-5989-47aa-9263-1c8a114ad269']#033[00m Dec 6 
05:18:47 localhost podman[324228]: 2025-12-06 10:18:47.714249378 +0000 UTC m=+0.125827697 container exec_died d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 6 05:18:47 localhost systemd[1]: d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538.service: Deactivated successfully. 
Dec 6 05:18:47 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:18:47.954 263652 INFO neutron.agent.dhcp.agent [None req-6fc97f4b-a09e-4296-9bf1-3d8361758acd - - - - - -] DHCP configuration for ports {'14568f10-d8d5-4f60-9847-c06c6ffea82d'} is completed#033[00m Dec 6 05:18:47 localhost nova_compute[282193]: 2025-12-06 10:18:47.995 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:48 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:18:48 localhost neutron_sriov_agent[256690]: 2025-12-06 10:18:48.427 2 INFO neutron.agent.securitygroups_rpc [None req-ca0f0e3b-d5e2-4383-9ed8-27fd62359bae 9b1422e7ba894ba7b8e14df8e50e50d0 2b1d664fab0f4b7f87439c153244cdc1 - - default default] Security group member updated ['ec3ea7b0-8bb2-49ca-ac1e-7fc51bb6b4e0']#033[00m Dec 6 05:18:48 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:18:48.515 263652 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:18:47Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=f99905b4-4fb8-4a99-9e15-56d731748b6a, ip_allocation=immediate, mac_address=fa:16:3e:99:fd:f6, name=tempest-AllowedAddressPairIpV6TestJSON-1427594491, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:18:37Z, description=, dns_domain=, id=5d90c1d5-74b2-4b5c-9bf8-25a818641550, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-AllowedAddressPairIpV6TestJSON-test-network-1904552762, port_security_enabled=True, 
project_id=2b1d664fab0f4b7f87439c153244cdc1, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=27994, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1858, status=ACTIVE, subnets=['cff46a29-a5f7-45a0-9023-17533db086b9'], tags=[], tenant_id=2b1d664fab0f4b7f87439c153244cdc1, updated_at=2025-12-06T10:18:38Z, vlan_transparent=None, network_id=5d90c1d5-74b2-4b5c-9bf8-25a818641550, port_security_enabled=True, project_id=2b1d664fab0f4b7f87439c153244cdc1, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['ec3ea7b0-8bb2-49ca-ac1e-7fc51bb6b4e0'], standard_attr_id=1930, status=DOWN, tags=[], tenant_id=2b1d664fab0f4b7f87439c153244cdc1, updated_at=2025-12-06T10:18:47Z on network 5d90c1d5-74b2-4b5c-9bf8-25a818641550#033[00m Dec 6 05:18:48 localhost podman[324311]: Dec 6 05:18:48 localhost podman[324311]: 2025-12-06 10:18:48.611901116 +0000 UTC m=+0.097960882 container create 10ef478f22f60a15bbd047555d12625c31619ccf5f16a21eb30af56364adeca9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3) Dec 6 05:18:48 localhost systemd[1]: Started libpod-conmon-10ef478f22f60a15bbd047555d12625c31619ccf5f16a21eb30af56364adeca9.scope. Dec 6 05:18:48 localhost systemd[1]: Started libcrun container. 
Dec 6 05:18:48 localhost podman[324311]: 2025-12-06 10:18:48.565946992 +0000 UTC m=+0.052006798 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 6 05:18:48 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f37c9dc86603901ba2225ec8af68b9f38ff4f01b21be5bbf89c6cec69efffdc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 6 05:18:48 localhost podman[324311]: 2025-12-06 10:18:48.680581439 +0000 UTC m=+0.166641215 container init 10ef478f22f60a15bbd047555d12625c31619ccf5f16a21eb30af56364adeca9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:18:48 localhost podman[324311]: 2025-12-06 10:18:48.689542961 +0000 UTC m=+0.175602737 container start 10ef478f22f60a15bbd047555d12625c31619ccf5f16a21eb30af56364adeca9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true) Dec 6 05:18:48 localhost dnsmasq[324353]: started, version 2.85 cachesize 150 Dec 6 05:18:48 localhost dnsmasq[324353]: DNS service limited to local subnets Dec 6 05:18:48 localhost dnsmasq[324353]: compile time options: IPv6 GNU-getopt DBus no-UBus 
no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 6 05:18:48 localhost dnsmasq[324353]: warning: no upstream servers configured Dec 6 05:18:48 localhost dnsmasq-dhcp[324353]: DHCPv6, static leases only on 2001:db8::, lease time 1d Dec 6 05:18:48 localhost dnsmasq[324353]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/addn_hosts - 0 addresses Dec 6 05:18:48 localhost dnsmasq-dhcp[324353]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/host Dec 6 05:18:48 localhost dnsmasq-dhcp[324353]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/opts Dec 6 05:18:48 localhost dnsmasq[323601]: read /var/lib/neutron/dhcp/5d90c1d5-74b2-4b5c-9bf8-25a818641550/addn_hosts - 3 addresses Dec 6 05:18:48 localhost dnsmasq-dhcp[323601]: read /var/lib/neutron/dhcp/5d90c1d5-74b2-4b5c-9bf8-25a818641550/host Dec 6 05:18:48 localhost dnsmasq-dhcp[323601]: read /var/lib/neutron/dhcp/5d90c1d5-74b2-4b5c-9bf8-25a818641550/opts Dec 6 05:18:48 localhost podman[324346]: 2025-12-06 10:18:48.750004005 +0000 UTC m=+0.073930944 container kill b5a7ec053a2387ad1aeced549328179848c9c55d34e92e143426da0a9e07bdb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5d90c1d5-74b2-4b5c-9bf8-25a818641550, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 6 05:18:48 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:18:48.755 263652 INFO neutron.agent.dhcp.agent [None req-9de59c5c-4f9e-45fd-b701-32de4ffd1a2a - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, 
binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:18:47Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=69f92907-09fa-434d-8c15-91951719363e, ip_allocation=immediate, mac_address=fa:16:3e:be:7d:f4, name=tempest-NetworksTestDHCPv6-554797201, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:17:28Z, description=, dns_domain=, id=43883dce-1590-48c4-987c-a21b63b82a1c, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-1975538139, port_security_enabled=True, project_id=34a17eee71de4bac8b71972a4b7b506c, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=42818, qos_policy_id=None, revision_number=44, router:external=False, shared=False, standard_attr_id=1415, status=ACTIVE, subnets=['18b1569e-7ab6-4e66-a6b6-7c0f09d7e1be'], tags=[], tenant_id=34a17eee71de4bac8b71972a4b7b506c, updated_at=2025-12-06T10:18:45Z, vlan_transparent=None, network_id=43883dce-1590-48c4-987c-a21b63b82a1c, port_security_enabled=True, project_id=34a17eee71de4bac8b71972a4b7b506c, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['d618a097-5989-47aa-9263-1c8a114ad269'], standard_attr_id=1929, status=DOWN, tags=[], tenant_id=34a17eee71de4bac8b71972a4b7b506c, updated_at=2025-12-06T10:18:47Z on network 43883dce-1590-48c4-987c-a21b63b82a1c#033[00m Dec 6 05:18:48 localhost neutron_sriov_agent[256690]: 2025-12-06 10:18:48.810 2 INFO neutron.agent.securitygroups_rpc [None req-18dbc826-4a96-46e3-9a3f-9599a1372c95 a6a9256bca1441629c18003b71ba1c6f 34a17eee71de4bac8b71972a4b7b506c - - default default] Security group member updated ['d618a097-5989-47aa-9263-1c8a114ad269']#033[00m Dec 6 05:18:48 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:18:48.816 263652 INFO neutron.agent.dhcp.agent [None 
req-f417bb3a-9f20-4ad4-9c9e-06b88ce4ba31 - - - - - -] DHCP configuration for ports {'687d7abb-e6aa-4047-aa26-552c962fcc91'} is completed#033[00m Dec 6 05:18:48 localhost dnsmasq[324353]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/addn_hosts - 1 addresses Dec 6 05:18:48 localhost dnsmasq-dhcp[324353]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/host Dec 6 05:18:48 localhost dnsmasq-dhcp[324353]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/opts Dec 6 05:18:48 localhost podman[324385]: 2025-12-06 10:18:48.983448235 +0000 UTC m=+0.068473947 container kill 10ef478f22f60a15bbd047555d12625c31619ccf5f16a21eb30af56364adeca9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Dec 6 05:18:49 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:18:49.075 263652 INFO neutron.agent.dhcp.agent [None req-058db5ee-ef5a-4464-87e3-4f6af86c6840 - - - - - -] DHCP configuration for ports {'f99905b4-4fb8-4a99-9e15-56d731748b6a'} is completed#033[00m Dec 6 05:18:49 localhost nova_compute[282193]: 2025-12-06 10:18:49.137 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:49 localhost nova_compute[282193]: 2025-12-06 10:18:49.139 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:49 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:18:49.296 263652 INFO neutron.agent.dhcp.agent [None 
req-4cfc3920-e4cf-4f6b-a719-8d75342b3d22 - - - - - -] DHCP configuration for ports {'69f92907-09fa-434d-8c15-91951719363e'} is completed#033[00m Dec 6 05:18:49 localhost dnsmasq[324353]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/addn_hosts - 0 addresses Dec 6 05:18:49 localhost podman[324424]: 2025-12-06 10:18:49.407846999 +0000 UTC m=+0.064520048 container kill 10ef478f22f60a15bbd047555d12625c31619ccf5f16a21eb30af56364adeca9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Dec 6 05:18:49 localhost dnsmasq-dhcp[324353]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/host Dec 6 05:18:49 localhost dnsmasq-dhcp[324353]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/opts Dec 6 05:18:49 localhost dnsmasq[324353]: exiting on receipt of SIGTERM Dec 6 05:18:49 localhost podman[324460]: 2025-12-06 10:18:49.887621191 +0000 UTC m=+0.064000793 container kill 10ef478f22f60a15bbd047555d12625c31619ccf5f16a21eb30af56364adeca9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS) Dec 6 05:18:49 localhost systemd[1]: 
libpod-10ef478f22f60a15bbd047555d12625c31619ccf5f16a21eb30af56364adeca9.scope: Deactivated successfully. Dec 6 05:18:49 localhost podman[324474]: 2025-12-06 10:18:49.956669495 +0000 UTC m=+0.056344771 container died 10ef478f22f60a15bbd047555d12625c31619ccf5f16a21eb30af56364adeca9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Dec 6 05:18:50 localhost podman[324474]: 2025-12-06 10:18:50.045287263 +0000 UTC m=+0.144962499 container cleanup 10ef478f22f60a15bbd047555d12625c31619ccf5f16a21eb30af56364adeca9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Dec 6 05:18:50 localhost systemd[1]: libpod-conmon-10ef478f22f60a15bbd047555d12625c31619ccf5f16a21eb30af56364adeca9.scope: Deactivated successfully. 
Dec 6 05:18:50 localhost podman[324481]: 2025-12-06 10:18:50.070375514 +0000 UTC m=+0.156774997 container remove 10ef478f22f60a15bbd047555d12625c31619ccf5f16a21eb30af56364adeca9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Dec 6 05:18:50 localhost ovn_controller[154851]: 2025-12-06T10:18:50Z|00325|binding|INFO|Releasing lport fca03880-79ab-46f3-909b-a19baa4b2eea from this chassis (sb_readonly=0) Dec 6 05:18:50 localhost nova_compute[282193]: 2025-12-06 10:18:50.083 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:50 localhost ovn_controller[154851]: 2025-12-06T10:18:50Z|00326|binding|INFO|Setting lport fca03880-79ab-46f3-909b-a19baa4b2eea down in Southbound Dec 6 05:18:50 localhost kernel: device tapfca03880-79 left promiscuous mode Dec 6 05:18:50 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:50.093 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 
'neutron:network_name': 'neutron-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '34a17eee71de4bac8b71972a4b7b506c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82d32fcc-fa15-458b-9d3c-0c87258ef71d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=fca03880-79ab-46f3-909b-a19baa4b2eea) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:18:50 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:50.095 160509 INFO neutron.agent.ovn.metadata.agent [-] Port fca03880-79ab-46f3-909b-a19baa4b2eea in datapath 43883dce-1590-48c4-987c-a21b63b82a1c unbound from our chassis#033[00m Dec 6 05:18:50 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:50.098 160509 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 43883dce-1590-48c4-987c-a21b63b82a1c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 6 05:18:50 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:50.099 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[53d3ed5f-5ebb-461c-94fc-5cd17ea26ba4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:18:50 localhost nova_compute[282193]: 2025-12-06 10:18:50.108 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:50 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:18:50.334 263652 INFO neutron.agent.dhcp.agent [None req-34d19e8a-49b1-4bde-bfa5-07750f222db2 - - - - - -] Network not present, 
action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:18:50 localhost systemd[1]: var-lib-containers-storage-overlay-0f37c9dc86603901ba2225ec8af68b9f38ff4f01b21be5bbf89c6cec69efffdc-merged.mount: Deactivated successfully. Dec 6 05:18:50 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-10ef478f22f60a15bbd047555d12625c31619ccf5f16a21eb30af56364adeca9-userdata-shm.mount: Deactivated successfully. Dec 6 05:18:50 localhost systemd[1]: run-netns-qdhcp\x2d43883dce\x2d1590\x2d48c4\x2d987c\x2da21b63b82a1c.mount: Deactivated successfully. Dec 6 05:18:50 localhost neutron_sriov_agent[256690]: 2025-12-06 10:18:50.837 2 INFO neutron.agent.securitygroups_rpc [None req-a73968c5-9b01-4d0f-920e-d4625a021612 9b1422e7ba894ba7b8e14df8e50e50d0 2b1d664fab0f4b7f87439c153244cdc1 - - default default] Security group member updated ['ec3ea7b0-8bb2-49ca-ac1e-7fc51bb6b4e0']#033[00m Dec 6 05:18:51 localhost dnsmasq[323601]: read /var/lib/neutron/dhcp/5d90c1d5-74b2-4b5c-9bf8-25a818641550/addn_hosts - 2 addresses Dec 6 05:18:51 localhost dnsmasq-dhcp[323601]: read /var/lib/neutron/dhcp/5d90c1d5-74b2-4b5c-9bf8-25a818641550/host Dec 6 05:18:51 localhost dnsmasq-dhcp[323601]: read /var/lib/neutron/dhcp/5d90c1d5-74b2-4b5c-9bf8-25a818641550/opts Dec 6 05:18:51 localhost podman[324518]: 2025-12-06 10:18:51.109160631 +0000 UTC m=+0.062823836 container kill b5a7ec053a2387ad1aeced549328179848c9c55d34e92e143426da0a9e07bdb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5d90c1d5-74b2-4b5c-9bf8-25a818641550, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:18:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5. Dec 6 05:18:51 localhost podman[324532]: 2025-12-06 10:18:51.219047464 +0000 UTC m=+0.081058800 container health_status 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:18:51 localhost podman[324532]: 2025-12-06 10:18:51.262631396 +0000 UTC m=+0.124642742 container exec_died 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, managed_by=edpm_ansible) Dec 6 05:18:51 localhost systemd[1]: 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.service: Deactivated successfully. 
Dec 6 05:18:51 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:18:51.361 263652 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:18:50Z, description=, device_id=0c9cf27e-a1eb-4ff5-a93a-5a27f1ed1aa2, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=1f968fe0-9266-48e7-b2ab-afd298c2d69e, ip_allocation=immediate, mac_address=fa:16:3e:50:84:2a, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T08:43:55Z, description=, dns_domain=, id=8e238f59-5792-4ff4-95af-f993c8e9e14f, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=3d603431c0bb4967bafc7a0aa6108bfe, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['be7ff830-a3e2-49cd-a05f-63bc528d6d1c'], tags=[], tenant_id=3d603431c0bb4967bafc7a0aa6108bfe, updated_at=2025-12-06T08:44:01Z, vlan_transparent=None, network_id=8e238f59-5792-4ff4-95af-f993c8e9e14f, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1938, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-06T10:18:50Z on network 8e238f59-5792-4ff4-95af-f993c8e9e14f#033[00m Dec 6 05:18:51 localhost neutron_sriov_agent[256690]: 2025-12-06 10:18:51.392 2 INFO neutron.agent.securitygroups_rpc [None req-3e522469-6174-4bd8-8fa4-46c3281a8670 b9f4d254b45d482eab7cfb178c231d9a 5f8e1c4c589749b99178bbc7c2bea3f0 - - default default] Security group member updated 
['cd56abe4-204c-4363-ad64-0a6840260727']#033[00m Dec 6 05:18:51 localhost neutron_sriov_agent[256690]: 2025-12-06 10:18:51.422 2 INFO neutron.agent.securitygroups_rpc [None req-c981ff7c-a5c4-4968-95e1-73b35c2abc32 9b1422e7ba894ba7b8e14df8e50e50d0 2b1d664fab0f4b7f87439c153244cdc1 - - default default] Security group member updated ['ec3ea7b0-8bb2-49ca-ac1e-7fc51bb6b4e0']#033[00m Dec 6 05:18:51 localhost dnsmasq[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 3 addresses Dec 6 05:18:51 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:18:51 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:18:51 localhost podman[324596]: 2025-12-06 10:18:51.620883953 +0000 UTC m=+0.069917672 container kill dd70518caed84fde347015325d6b35481123abc4a39df99f8bc9422c2468322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:18:51 localhost podman[324608]: 2025-12-06 10:18:51.657408341 +0000 UTC m=+0.065527309 container kill b5a7ec053a2387ad1aeced549328179848c9c55d34e92e143426da0a9e07bdb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5d90c1d5-74b2-4b5c-9bf8-25a818641550, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, 
io.buildah.version=1.41.3, org.label-schema.build-date=20251125) Dec 6 05:18:51 localhost dnsmasq[323601]: read /var/lib/neutron/dhcp/5d90c1d5-74b2-4b5c-9bf8-25a818641550/addn_hosts - 1 addresses Dec 6 05:18:51 localhost dnsmasq-dhcp[323601]: read /var/lib/neutron/dhcp/5d90c1d5-74b2-4b5c-9bf8-25a818641550/host Dec 6 05:18:51 localhost dnsmasq-dhcp[323601]: read /var/lib/neutron/dhcp/5d90c1d5-74b2-4b5c-9bf8-25a818641550/opts Dec 6 05:18:51 localhost neutron_sriov_agent[256690]: 2025-12-06 10:18:51.784 2 INFO neutron.agent.securitygroups_rpc [None req-57ba5b5d-0488-433d-83ba-25f85616d546 9b1422e7ba894ba7b8e14df8e50e50d0 2b1d664fab0f4b7f87439c153244cdc1 - - default default] Security group member updated ['ec3ea7b0-8bb2-49ca-ac1e-7fc51bb6b4e0']#033[00m Dec 6 05:18:51 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:18:51.864 263652 INFO neutron.agent.linux.ip_lib [None req-92ab4545-1724-4aa7-8600-d5c37e6ae29d - - - - - -] Device tap7236378f-2c cannot be used as it has no MAC address#033[00m Dec 6 05:18:51 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:18:51.875 263652 INFO neutron.agent.dhcp.agent [None req-2b36f965-b4f9-4d9e-98b6-b6817e6edcbb - - - - - -] DHCP configuration for ports {'1f968fe0-9266-48e7-b2ab-afd298c2d69e'} is completed#033[00m Dec 6 05:18:51 localhost nova_compute[282193]: 2025-12-06 10:18:51.894 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:51 localhost kernel: device tap7236378f-2c entered promiscuous mode Dec 6 05:18:51 localhost NetworkManager[5973]: [1765016331.9039] manager: (tap7236378f-2c): new Generic device (/org/freedesktop/NetworkManager/Devices/55) Dec 6 05:18:51 localhost ovn_controller[154851]: 2025-12-06T10:18:51Z|00327|binding|INFO|Claiming lport 7236378f-2c6f-4e9d-a6e5-4e6b08ae62be for this chassis. 
Dec 6 05:18:51 localhost ovn_controller[154851]: 2025-12-06T10:18:51Z|00328|binding|INFO|7236378f-2c6f-4e9d-a6e5-4e6b08ae62be: Claiming unknown Dec 6 05:18:51 localhost systemd-udevd[324649]: Network interface NamePolicy= disabled on kernel command line. Dec 6 05:18:51 localhost nova_compute[282193]: 2025-12-06 10:18:51.909 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:51 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:51.917 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '34a17eee71de4bac8b71972a4b7b506c', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82d32fcc-fa15-458b-9d3c-0c87258ef71d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=7236378f-2c6f-4e9d-a6e5-4e6b08ae62be) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:18:51 localhost ovn_controller[154851]: 2025-12-06T10:18:51Z|00329|binding|INFO|Setting lport 
7236378f-2c6f-4e9d-a6e5-4e6b08ae62be up in Southbound Dec 6 05:18:51 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:51.919 160509 INFO neutron.agent.ovn.metadata.agent [-] Port 7236378f-2c6f-4e9d-a6e5-4e6b08ae62be in datapath 43883dce-1590-48c4-987c-a21b63b82a1c bound to our chassis#033[00m Dec 6 05:18:51 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:51.921 160509 DEBUG neutron.agent.ovn.metadata.agent [-] Port cf9229a3-adf0-4ec8-9186-9b1891e2a252 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Dec 6 05:18:51 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:51.921 160509 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 43883dce-1590-48c4-987c-a21b63b82a1c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 6 05:18:51 localhost ovn_controller[154851]: 2025-12-06T10:18:51Z|00330|binding|INFO|Setting lport 7236378f-2c6f-4e9d-a6e5-4e6b08ae62be ovn-installed in OVS Dec 6 05:18:51 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:51.922 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[bc8045b4-3528-4b8e-b960-48532f4ea7d1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:18:51 localhost nova_compute[282193]: 2025-12-06 10:18:51.922 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:51 localhost nova_compute[282193]: 2025-12-06 10:18:51.925 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:51 localhost nova_compute[282193]: 2025-12-06 10:18:51.943 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:51 localhost nova_compute[282193]: 2025-12-06 10:18:51.992 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:52 localhost nova_compute[282193]: 2025-12-06 10:18:52.024 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:52 localhost podman[324676]: 2025-12-06 10:18:52.094792127 +0000 UTC m=+0.072690516 container kill b5a7ec053a2387ad1aeced549328179848c9c55d34e92e143426da0a9e07bdb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5d90c1d5-74b2-4b5c-9bf8-25a818641550, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Dec 6 05:18:52 localhost dnsmasq[323601]: read /var/lib/neutron/dhcp/5d90c1d5-74b2-4b5c-9bf8-25a818641550/addn_hosts - 0 addresses Dec 6 05:18:52 localhost dnsmasq-dhcp[323601]: read /var/lib/neutron/dhcp/5d90c1d5-74b2-4b5c-9bf8-25a818641550/host Dec 6 05:18:52 localhost dnsmasq-dhcp[323601]: read /var/lib/neutron/dhcp/5d90c1d5-74b2-4b5c-9bf8-25a818641550/opts Dec 6 05:18:52 localhost neutron_sriov_agent[256690]: 2025-12-06 10:18:52.374 2 INFO neutron.agent.securitygroups_rpc [None req-db878ba9-86fb-4d9b-8254-a77b4f9b264f a6a9256bca1441629c18003b71ba1c6f 34a17eee71de4bac8b71972a4b7b506c - - default default] Security group member updated ['d618a097-5989-47aa-9263-1c8a114ad269']#033[00m Dec 6 05:18:52 localhost ovn_controller[154851]: 2025-12-06T10:18:52Z|00331|binding|INFO|Removing iface tapdfa03f50-39 ovn-installed in OVS Dec 6 05:18:52 localhost 
ovn_controller[154851]: 2025-12-06T10:18:52Z|00332|binding|INFO|Removing lport dfa03f50-3905-4292-9cae-c03579192e4f ovn-installed in OVS Dec 6 05:18:52 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:52.432 160509 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 58ad6037-7d9f-4f1f-85d0-c2c8d7ef9544 with type ""#033[00m Dec 6 05:18:52 localhost nova_compute[282193]: 2025-12-06 10:18:52.434 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:52 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:52.434 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-5d90c1d5-74b2-4b5c-9bf8-25a818641550', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5d90c1d5-74b2-4b5c-9bf8-25a818641550', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b1d664fab0f4b7f87439c153244cdc1', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005548789.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=554a12c4-a3a9-4583-a7ca-9f004018b224, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=dfa03f50-3905-4292-9cae-c03579192e4f) old= matches 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:18:52 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:52.437 160509 INFO neutron.agent.ovn.metadata.agent [-] Port dfa03f50-3905-4292-9cae-c03579192e4f in datapath 5d90c1d5-74b2-4b5c-9bf8-25a818641550 unbound from our chassis#033[00m Dec 6 05:18:52 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:52.438 160509 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 5d90c1d5-74b2-4b5c-9bf8-25a818641550 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 6 05:18:52 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:52.439 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[dc3cc018-e45e-4520-980e-60ebcbfadceb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:18:52 localhost neutron_sriov_agent[256690]: 2025-12-06 10:18:52.443 2 INFO neutron.agent.securitygroups_rpc [None req-d89000d1-9304-4659-90ea-3fec18561423 b9f4d254b45d482eab7cfb178c231d9a 5f8e1c4c589749b99178bbc7c2bea3f0 - - default default] Security group member updated ['cd56abe4-204c-4363-ad64-0a6840260727']#033[00m Dec 6 05:18:52 localhost nova_compute[282193]: 2025-12-06 10:18:52.443 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:52 localhost dnsmasq[323601]: exiting on receipt of SIGTERM Dec 6 05:18:52 localhost podman[324732]: 2025-12-06 10:18:52.565984299 +0000 UTC m=+0.068570811 container kill b5a7ec053a2387ad1aeced549328179848c9c55d34e92e143426da0a9e07bdb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5d90c1d5-74b2-4b5c-9bf8-25a818641550, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, 
org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:18:52 localhost systemd[1]: libpod-b5a7ec053a2387ad1aeced549328179848c9c55d34e92e143426da0a9e07bdb0.scope: Deactivated successfully. Dec 6 05:18:52 localhost podman[324749]: 2025-12-06 10:18:52.643405318 +0000 UTC m=+0.064175058 container died b5a7ec053a2387ad1aeced549328179848c9c55d34e92e143426da0a9e07bdb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5d90c1d5-74b2-4b5c-9bf8-25a818641550, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:18:52 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b5a7ec053a2387ad1aeced549328179848c9c55d34e92e143426da0a9e07bdb0-userdata-shm.mount: Deactivated successfully. Dec 6 05:18:52 localhost systemd[1]: var-lib-containers-storage-overlay-e45d66b660b9cd6a8b001d4fbf1754f68fedf39a7ac1ddac57f6571d258512a4-merged.mount: Deactivated successfully. 
Dec 6 05:18:52 localhost podman[324749]: 2025-12-06 10:18:52.686504185 +0000 UTC m=+0.107273885 container cleanup b5a7ec053a2387ad1aeced549328179848c9c55d34e92e143426da0a9e07bdb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5d90c1d5-74b2-4b5c-9bf8-25a818641550, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:18:52 localhost systemd[1]: libpod-conmon-b5a7ec053a2387ad1aeced549328179848c9c55d34e92e143426da0a9e07bdb0.scope: Deactivated successfully. Dec 6 05:18:52 localhost podman[324751]: 2025-12-06 10:18:52.72360729 +0000 UTC m=+0.135786420 container remove b5a7ec053a2387ad1aeced549328179848c9c55d34e92e143426da0a9e07bdb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5d90c1d5-74b2-4b5c-9bf8-25a818641550, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Dec 6 05:18:52 localhost kernel: device tapdfa03f50-39 left promiscuous mode Dec 6 05:18:52 localhost nova_compute[282193]: 2025-12-06 10:18:52.777 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:52 localhost nova_compute[282193]: 2025-12-06 10:18:52.791 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:52 localhost 
ovn_controller[154851]: 2025-12-06T10:18:52Z|00333|binding|INFO|Releasing lport 4fb81ffd-e198-4628-9bd0-0c0f0c89c33a from this chassis (sb_readonly=0) Dec 6 05:18:52 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:18:52.812 263652 INFO neutron.agent.dhcp.agent [None req-ab2b6543-bbea-467f-9a8d-936f28379033 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:18:52 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:18:52.813 263652 INFO neutron.agent.dhcp.agent [None req-ab2b6543-bbea-467f-9a8d-936f28379033 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:18:52 localhost nova_compute[282193]: 2025-12-06 10:18:52.841 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:52 localhost podman[324797]: Dec 6 05:18:52 localhost podman[324797]: 2025-12-06 10:18:52.943962094 +0000 UTC m=+0.088049101 container create f41ba5599c7eac6f7717b71ccd15b6985b89c6a43acec1218873c0dcb2997997 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Dec 6 05:18:52 localhost systemd[1]: Started libpod-conmon-f41ba5599c7eac6f7717b71ccd15b6985b89c6a43acec1218873c0dcb2997997.scope. Dec 6 05:18:52 localhost systemd[1]: Started libcrun container. 
Dec 6 05:18:52 localhost nova_compute[282193]: 2025-12-06 10:18:52.998 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:53 localhost podman[324797]: 2025-12-06 10:18:52.900008451 +0000 UTC m=+0.044095508 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 6 05:18:53 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4343818f1678526de5c96f3f64894c1a06c47ad9faa4d29e817addbb8dbc3a52/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 6 05:18:53 localhost podman[324797]: 2025-12-06 10:18:53.013895126 +0000 UTC m=+0.157982133 container init f41ba5599c7eac6f7717b71ccd15b6985b89c6a43acec1218873c0dcb2997997 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125) Dec 6 05:18:53 localhost podman[324797]: 2025-12-06 10:18:53.022254899 +0000 UTC m=+0.166341896 container start f41ba5599c7eac6f7717b71ccd15b6985b89c6a43acec1218873c0dcb2997997 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS) Dec 6 05:18:53 localhost dnsmasq[324815]: 
started, version 2.85 cachesize 150 Dec 6 05:18:53 localhost dnsmasq[324815]: DNS service limited to local subnets Dec 6 05:18:53 localhost dnsmasq[324815]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 6 05:18:53 localhost dnsmasq[324815]: warning: no upstream servers configured Dec 6 05:18:53 localhost dnsmasq-dhcp[324815]: DHCPv6, static leases only on 2001:db8::, lease time 1d Dec 6 05:18:53 localhost dnsmasq[324815]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/addn_hosts - 0 addresses Dec 6 05:18:53 localhost dnsmasq-dhcp[324815]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/host Dec 6 05:18:53 localhost dnsmasq-dhcp[324815]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/opts Dec 6 05:18:53 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:18:53.062 263652 INFO neutron.agent.dhcp.agent [None req-92ab4545-1724-4aa7-8600-d5c37e6ae29d - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:18:51Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=1d094a5c-894f-4d7e-b6c1-2cb8c545fd07, ip_allocation=immediate, mac_address=fa:16:3e:c1:fc:7e, name=tempest-NetworksTestDHCPv6-264061737, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:17:28Z, description=, dns_domain=, id=43883dce-1590-48c4-987c-a21b63b82a1c, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-1975538139, port_security_enabled=True, project_id=34a17eee71de4bac8b71972a4b7b506c, provider:network_type=geneve, provider:physical_network=None, 
provider:segmentation_id=42818, qos_policy_id=None, revision_number=46, router:external=False, shared=False, standard_attr_id=1415, status=ACTIVE, subnets=['b7683417-f5ad-4eab-8e1d-6d4e9c90ba29'], tags=[], tenant_id=34a17eee71de4bac8b71972a4b7b506c, updated_at=2025-12-06T10:18:50Z, vlan_transparent=None, network_id=43883dce-1590-48c4-987c-a21b63b82a1c, port_security_enabled=True, project_id=34a17eee71de4bac8b71972a4b7b506c, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['d618a097-5989-47aa-9263-1c8a114ad269'], standard_attr_id=1956, status=DOWN, tags=[], tenant_id=34a17eee71de4bac8b71972a4b7b506c, updated_at=2025-12-06T10:18:51Z on network 43883dce-1590-48c4-987c-a21b63b82a1c#033[00m Dec 6 05:18:53 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:18:53.137 263652 INFO neutron.agent.dhcp.agent [None req-1c7dc463-69a7-4193-80ee-d2386385cfc3 - - - - - -] DHCP configuration for ports {'687d7abb-e6aa-4047-aa26-552c962fcc91'} is completed#033[00m Dec 6 05:18:53 localhost dnsmasq[324815]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/addn_hosts - 1 addresses Dec 6 05:18:53 localhost dnsmasq-dhcp[324815]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/host Dec 6 05:18:53 localhost dnsmasq-dhcp[324815]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/opts Dec 6 05:18:53 localhost podman[324834]: 2025-12-06 10:18:53.273965154 +0000 UTC m=+0.066110067 container kill f41ba5599c7eac6f7717b71ccd15b6985b89c6a43acec1218873c0dcb2997997 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, 
maintainer=OpenStack Kubernetes Operator team) Dec 6 05:18:53 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:18:53 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:18:53.561 263652 INFO neutron.agent.dhcp.agent [None req-39997d76-9c32-4ac3-814f-c2fb3c1592b0 - - - - - -] DHCP configuration for ports {'1d094a5c-894f-4d7e-b6c1-2cb8c545fd07'} is completed#033[00m Dec 6 05:18:53 localhost neutron_sriov_agent[256690]: 2025-12-06 10:18:53.632 2 INFO neutron.agent.securitygroups_rpc [None req-7967e84a-5cad-44b5-8ea9-0783854ccdc0 b9f4d254b45d482eab7cfb178c231d9a 5f8e1c4c589749b99178bbc7c2bea3f0 - - default default] Security group member updated ['cd56abe4-204c-4363-ad64-0a6840260727']#033[00m Dec 6 05:18:53 localhost systemd[1]: run-netns-qdhcp\x2d5d90c1d5\x2d74b2\x2d4b5c\x2d9bf8\x2d25a818641550.mount: Deactivated successfully. Dec 6 05:18:53 localhost podman[241090]: time="2025-12-06T10:18:53Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 6 05:18:53 localhost podman[241090]: @ - - [06/Dec/2025:10:18:53 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 157921 "" "Go-http-client/1.1" Dec 6 05:18:53 localhost nova_compute[282193]: 2025-12-06 10:18:53.943 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:53 localhost podman[241090]: @ - - [06/Dec/2025:10:18:53 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19736 "" "Go-http-client/1.1" Dec 6 05:18:54 localhost neutron_sriov_agent[256690]: 2025-12-06 10:18:54.133 2 INFO neutron.agent.securitygroups_rpc [None req-47656a87-41ce-4f30-afbc-1720c247f9e1 a6a9256bca1441629c18003b71ba1c6f 34a17eee71de4bac8b71972a4b7b506c - - default default] 
Security group member updated ['d618a097-5989-47aa-9263-1c8a114ad269']#033[00m Dec 6 05:18:54 localhost nova_compute[282193]: 2025-12-06 10:18:54.141 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:54 localhost dnsmasq[324815]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/addn_hosts - 0 addresses Dec 6 05:18:54 localhost dnsmasq-dhcp[324815]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/host Dec 6 05:18:54 localhost dnsmasq-dhcp[324815]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/opts Dec 6 05:18:54 localhost podman[324871]: 2025-12-06 10:18:54.369666979 +0000 UTC m=+0.060649271 container kill f41ba5599c7eac6f7717b71ccd15b6985b89c6a43acec1218873c0dcb2997997 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:18:54 localhost neutron_sriov_agent[256690]: 2025-12-06 10:18:54.902 2 INFO neutron.agent.securitygroups_rpc [None req-fc107b59-6198-4194-ae82-133aadd3bf55 b9f4d254b45d482eab7cfb178c231d9a 5f8e1c4c589749b99178bbc7c2bea3f0 - - default default] Security group member updated ['cd56abe4-204c-4363-ad64-0a6840260727']#033[00m Dec 6 05:18:55 localhost dnsmasq[324815]: exiting on receipt of SIGTERM Dec 6 05:18:55 localhost podman[324906]: 2025-12-06 10:18:55.171067825 +0000 UTC m=+0.061217618 container kill f41ba5599c7eac6f7717b71ccd15b6985b89c6a43acec1218873c0dcb2997997 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, 
name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS) Dec 6 05:18:55 localhost systemd[1]: libpod-f41ba5599c7eac6f7717b71ccd15b6985b89c6a43acec1218873c0dcb2997997.scope: Deactivated successfully. Dec 6 05:18:55 localhost podman[324919]: 2025-12-06 10:18:55.211288505 +0000 UTC m=+0.032905678 container died f41ba5599c7eac6f7717b71ccd15b6985b89c6a43acec1218873c0dcb2997997 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true) Dec 6 05:18:55 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f41ba5599c7eac6f7717b71ccd15b6985b89c6a43acec1218873c0dcb2997997-userdata-shm.mount: Deactivated successfully. Dec 6 05:18:55 localhost systemd[1]: var-lib-containers-storage-overlay-4343818f1678526de5c96f3f64894c1a06c47ad9faa4d29e817addbb8dbc3a52-merged.mount: Deactivated successfully. 
Dec 6 05:18:55 localhost podman[324919]: 2025-12-06 10:18:55.242367628 +0000 UTC m=+0.063984751 container cleanup f41ba5599c7eac6f7717b71ccd15b6985b89c6a43acec1218873c0dcb2997997 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Dec 6 05:18:55 localhost systemd[1]: libpod-conmon-f41ba5599c7eac6f7717b71ccd15b6985b89c6a43acec1218873c0dcb2997997.scope: Deactivated successfully. Dec 6 05:18:55 localhost podman[324926]: 2025-12-06 10:18:55.336076531 +0000 UTC m=+0.143041350 container remove f41ba5599c7eac6f7717b71ccd15b6985b89c6a43acec1218873c0dcb2997997 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0) Dec 6 05:18:55 localhost kernel: device tap7236378f-2c left promiscuous mode Dec 6 05:18:55 localhost ovn_controller[154851]: 2025-12-06T10:18:55Z|00334|binding|INFO|Releasing lport 7236378f-2c6f-4e9d-a6e5-4e6b08ae62be from this chassis (sb_readonly=0) Dec 6 05:18:55 localhost ovn_controller[154851]: 2025-12-06T10:18:55Z|00335|binding|INFO|Setting lport 7236378f-2c6f-4e9d-a6e5-4e6b08ae62be down in Southbound Dec 6 05:18:55 localhost nova_compute[282193]: 2025-12-06 10:18:55.351 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog 
[-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:55 localhost neutron_sriov_agent[256690]: 2025-12-06 10:18:55.353 2 INFO neutron.agent.securitygroups_rpc [None req-6d09d5a9-00b8-48ae-971d-442c4fe5ddd4 b9f4d254b45d482eab7cfb178c231d9a 5f8e1c4c589749b99178bbc7c2bea3f0 - - default default] Security group member updated ['cd56abe4-204c-4363-ad64-0a6840260727']#033[00m Dec 6 05:18:55 localhost nova_compute[282193]: 2025-12-06 10:18:55.371 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:55 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:55.371 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '34a17eee71de4bac8b71972a4b7b506c', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005548789.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82d32fcc-fa15-458b-9d3c-0c87258ef71d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], 
logical_port=7236378f-2c6f-4e9d-a6e5-4e6b08ae62be) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:18:55 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:55.374 160509 INFO neutron.agent.ovn.metadata.agent [-] Port 7236378f-2c6f-4e9d-a6e5-4e6b08ae62be in datapath 43883dce-1590-48c4-987c-a21b63b82a1c unbound from our chassis#033[00m Dec 6 05:18:55 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:55.377 160509 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 43883dce-1590-48c4-987c-a21b63b82a1c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 6 05:18:55 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:55.378 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[3d9ac1eb-4f31-41db-80eb-21ee01e4a4a7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:18:55 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:18:55.616 263652 INFO neutron.agent.dhcp.agent [None req-f072d2b9-6577-4a38-86e7-fa140192b932 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:18:55 localhost dnsmasq[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 2 addresses Dec 6 05:18:55 localhost podman[324966]: 2025-12-06 10:18:55.629902543 +0000 UTC m=+0.066527529 container kill dd70518caed84fde347015325d6b35481123abc4a39df99f8bc9422c2468322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:18:55 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:18:55 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:18:55 localhost ovn_controller[154851]: 2025-12-06T10:18:55Z|00336|binding|INFO|Releasing lport 4fb81ffd-e198-4628-9bd0-0c0f0c89c33a from this chassis (sb_readonly=0) Dec 6 05:18:55 localhost nova_compute[282193]: 2025-12-06 10:18:55.688 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:56 localhost nova_compute[282193]: 2025-12-06 10:18:56.061 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:18:56 localhost nova_compute[282193]: 2025-12-06 10:18:56.083 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Triggering sync for uuid b7ed0a2e-9350-4933-9334-4e5e08d3e6aa _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m Dec 6 05:18:56 localhost nova_compute[282193]: 2025-12-06 10:18:56.085 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" by "nova.compute.manager.ComputeManager._sync_power_states.._sync..query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 05:18:56 localhost nova_compute[282193]: 2025-12-06 10:18:56.085 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" acquired by 
"nova.compute.manager.ComputeManager._sync_power_states.._sync..query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 05:18:56 localhost nova_compute[282193]: 2025-12-06 10:18:56.113 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" "released" by "nova.compute.manager.ComputeManager._sync_power_states.._sync..query_driver_power_state_and_sync" :: held 0.028s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 05:18:56 localhost neutron_sriov_agent[256690]: 2025-12-06 10:18:56.151 2 INFO neutron.agent.securitygroups_rpc [None req-49caf4e8-1cf3-4ac4-8ed0-7c14787e9a49 b9f4d254b45d482eab7cfb178c231d9a 5f8e1c4c589749b99178bbc7c2bea3f0 - - default default] Security group member updated ['cd56abe4-204c-4363-ad64-0a6840260727']#033[00m Dec 6 05:18:56 localhost systemd[1]: run-netns-qdhcp\x2d43883dce\x2d1590\x2d48c4\x2d987c\x2da21b63b82a1c.mount: Deactivated successfully. Dec 6 05:18:56 localhost dnsmasq[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 1 addresses Dec 6 05:18:56 localhost systemd[1]: tmp-crun.jcIxUK.mount: Deactivated successfully. 
Dec 6 05:18:56 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:18:56 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:18:56 localhost podman[325004]: 2025-12-06 10:18:56.523892268 +0000 UTC m=+0.052562855 container kill dd70518caed84fde347015325d6b35481123abc4a39df99f8bc9422c2468322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:18:56 localhost ovn_controller[154851]: 2025-12-06T10:18:56Z|00337|binding|INFO|Releasing lport 4fb81ffd-e198-4628-9bd0-0c0f0c89c33a from this chassis (sb_readonly=0) Dec 6 05:18:56 localhost nova_compute[282193]: 2025-12-06 10:18:56.592 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:56 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:18:56.708 263652 INFO neutron.agent.linux.ip_lib [None req-3076b480-6a7a-4055-8fb1-ad825cedf939 - - - - - -] Device tapbb0ffdc5-89 cannot be used as it has no MAC address#033[00m Dec 6 05:18:56 localhost nova_compute[282193]: 2025-12-06 10:18:56.733 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:56 localhost kernel: device tapbb0ffdc5-89 entered promiscuous mode Dec 6 05:18:56 localhost NetworkManager[5973]: [1765016336.7413] manager: (tapbb0ffdc5-89): new Generic device (/org/freedesktop/NetworkManager/Devices/56) Dec 6 05:18:56 localhost 
ovn_controller[154851]: 2025-12-06T10:18:56Z|00338|binding|INFO|Claiming lport bb0ffdc5-896f-4862-9530-52cf0e1e0cda for this chassis. Dec 6 05:18:56 localhost ovn_controller[154851]: 2025-12-06T10:18:56Z|00339|binding|INFO|bb0ffdc5-896f-4862-9530-52cf0e1e0cda: Claiming unknown Dec 6 05:18:56 localhost nova_compute[282193]: 2025-12-06 10:18:56.742 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:56 localhost systemd-udevd[325037]: Network interface NamePolicy= disabled on kernel command line. Dec 6 05:18:56 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:56.762 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe61:486b/64', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '34a17eee71de4bac8b71972a4b7b506c', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82d32fcc-fa15-458b-9d3c-0c87258ef71d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=bb0ffdc5-896f-4862-9530-52cf0e1e0cda) old=Port_Binding(chassis=[]) matches 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:18:56 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:56.766 160509 INFO neutron.agent.ovn.metadata.agent [-] Port bb0ffdc5-896f-4862-9530-52cf0e1e0cda in datapath 43883dce-1590-48c4-987c-a21b63b82a1c bound to our chassis#033[00m Dec 6 05:18:56 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:56.769 160509 DEBUG neutron.agent.ovn.metadata.agent [-] Port 5c72d867-377b-401f-8969-dccc4aa84140 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Dec 6 05:18:56 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:56.769 160509 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 43883dce-1590-48c4-987c-a21b63b82a1c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 6 05:18:56 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:56.770 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[658e5e46-fc08-4e1e-b915-1910a57b4232]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:18:56 localhost ovn_controller[154851]: 2025-12-06T10:18:56Z|00340|binding|INFO|Setting lport bb0ffdc5-896f-4862-9530-52cf0e1e0cda ovn-installed in OVS Dec 6 05:18:56 localhost ovn_controller[154851]: 2025-12-06T10:18:56Z|00341|binding|INFO|Setting lport bb0ffdc5-896f-4862-9530-52cf0e1e0cda up in Southbound Dec 6 05:18:56 localhost nova_compute[282193]: 2025-12-06 10:18:56.799 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:56 localhost nova_compute[282193]: 2025-12-06 10:18:56.838 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:56 localhost nova_compute[282193]: 2025-12-06 10:18:56.869 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:57 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:18:57.318 263652 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:18:57Z, description=, device_id=01a5b100-c4e3-4e32-bdad-09ba7b504295, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=5e599779-403a-41e1-af63-c0b813d18ceb, ip_allocation=immediate, mac_address=fa:16:3e:e9:c7:0c, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T08:43:55Z, description=, dns_domain=, id=8e238f59-5792-4ff4-95af-f993c8e9e14f, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=3d603431c0bb4967bafc7a0aa6108bfe, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['be7ff830-a3e2-49cd-a05f-63bc528d6d1c'], tags=[], tenant_id=3d603431c0bb4967bafc7a0aa6108bfe, updated_at=2025-12-06T08:44:01Z, vlan_transparent=None, network_id=8e238f59-5792-4ff4-95af-f993c8e9e14f, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1979, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-06T10:18:57Z on network 8e238f59-5792-4ff4-95af-f993c8e9e14f#033[00m Dec 6 05:18:57 localhost 
neutron_sriov_agent[256690]: 2025-12-06 10:18:57.373 2 INFO neutron.agent.securitygroups_rpc [None req-371fcf7f-6858-4f55-8627-7d44578a7f6c a6a9256bca1441629c18003b71ba1c6f 34a17eee71de4bac8b71972a4b7b506c - - default default] Security group member updated ['d618a097-5989-47aa-9263-1c8a114ad269']#033[00m Dec 6 05:18:57 localhost dnsmasq[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 2 addresses Dec 6 05:18:57 localhost podman[325087]: 2025-12-06 10:18:57.519136445 +0000 UTC m=+0.046855452 container kill dd70518caed84fde347015325d6b35481123abc4a39df99f8bc9422c2468322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:18:57 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:18:57 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:18:57 localhost podman[325126]: Dec 6 05:18:57 localhost podman[325126]: 2025-12-06 10:18:57.711938903 +0000 UTC m=+0.097102376 container create 859e6766fde586991ea9a9ea8b31b56366fcfc45fa8fbfd4233ca17be599de96 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.schema-version=1.0) Dec 6 05:18:57 localhost systemd[1]: Started libpod-conmon-859e6766fde586991ea9a9ea8b31b56366fcfc45fa8fbfd4233ca17be599de96.scope. Dec 6 05:18:57 localhost systemd[1]: Started libcrun container. Dec 6 05:18:57 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb4226fe08368ea2d11dcba02b5011cfc19a25160fc2fe36ef2d9719be1bb1a8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 6 05:18:57 localhost podman[325126]: 2025-12-06 10:18:57.664452363 +0000 UTC m=+0.049615886 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 6 05:18:57 localhost podman[325126]: 2025-12-06 10:18:57.771420537 +0000 UTC m=+0.156584020 container init 859e6766fde586991ea9a9ea8b31b56366fcfc45fa8fbfd4233ca17be599de96 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS) Dec 6 05:18:57 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:18:57.773 263652 INFO neutron.agent.dhcp.agent [None req-e13c65ba-d601-48b5-984e-bae0d4ee9d75 - - - - - -] DHCP configuration for ports {'5e599779-403a-41e1-af63-c0b813d18ceb'} is completed#033[00m Dec 6 05:18:57 localhost podman[325126]: 2025-12-06 10:18:57.781742871 +0000 UTC m=+0.166906344 container start 859e6766fde586991ea9a9ea8b31b56366fcfc45fa8fbfd4233ca17be599de96 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, 
io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125) Dec 6 05:18:57 localhost dnsmasq[325184]: started, version 2.85 cachesize 150 Dec 6 05:18:57 localhost dnsmasq[325184]: DNS service limited to local subnets Dec 6 05:18:57 localhost dnsmasq[325184]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 6 05:18:57 localhost dnsmasq[325184]: warning: no upstream servers configured Dec 6 05:18:57 localhost dnsmasq[325184]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/addn_hosts - 0 addresses Dec 6 05:18:58 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:18:58.031 263652 INFO neutron.agent.dhcp.agent [None req-48da9735-826a-4eba-baeb-4b3cfee13085 - - - - - -] DHCP configuration for ports {'687d7abb-e6aa-4047-aa26-552c962fcc91'} is completed#033[00m Dec 6 05:18:58 localhost nova_compute[282193]: 2025-12-06 10:18:58.031 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:58 localhost dnsmasq[325184]: exiting on receipt of SIGTERM Dec 6 05:18:58 localhost podman[325217]: 2025-12-06 10:18:58.205226736 +0000 UTC m=+0.043916294 container kill 859e6766fde586991ea9a9ea8b31b56366fcfc45fa8fbfd4233ca17be599de96 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, 
tcib_managed=true, io.buildah.version=1.41.3) Dec 6 05:18:58 localhost systemd[1]: libpod-859e6766fde586991ea9a9ea8b31b56366fcfc45fa8fbfd4233ca17be599de96.scope: Deactivated successfully. Dec 6 05:18:58 localhost neutron_sriov_agent[256690]: 2025-12-06 10:18:58.223 2 INFO neutron.agent.securitygroups_rpc [None req-dcd65c97-9e9b-434b-8250-c459c3b8a42b a6a9256bca1441629c18003b71ba1c6f 34a17eee71de4bac8b71972a4b7b506c - - default default] Security group member updated ['d618a097-5989-47aa-9263-1c8a114ad269']#033[00m Dec 6 05:18:58 localhost podman[325236]: 2025-12-06 10:18:58.284151449 +0000 UTC m=+0.058635019 container died 859e6766fde586991ea9a9ea8b31b56366fcfc45fa8fbfd4233ca17be599de96 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:18:58 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:18:58 localhost podman[325236]: 2025-12-06 10:18:58.338619522 +0000 UTC m=+0.113103062 container remove 859e6766fde586991ea9a9ea8b31b56366fcfc45fa8fbfd4233ca17be599de96 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack 
Kubernetes Operator team) Dec 6 05:18:58 localhost systemd[1]: libpod-conmon-859e6766fde586991ea9a9ea8b31b56366fcfc45fa8fbfd4233ca17be599de96.scope: Deactivated successfully. Dec 6 05:18:58 localhost systemd[1]: var-lib-containers-storage-overlay-eb4226fe08368ea2d11dcba02b5011cfc19a25160fc2fe36ef2d9719be1bb1a8-merged.mount: Deactivated successfully. Dec 6 05:18:58 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-859e6766fde586991ea9a9ea8b31b56366fcfc45fa8fbfd4233ca17be599de96-userdata-shm.mount: Deactivated successfully. Dec 6 05:18:59 localhost nova_compute[282193]: 2025-12-06 10:18:59.181 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:59 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 6 05:18:59 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' Dec 6 05:18:59 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:18:59.696 263652 INFO neutron.agent.linux.ip_lib [None req-1209a318-25c3-4834-870a-ee0da04dc988 - - - - - -] Device tapf8d80242-07 cannot be used as it has no MAC address#033[00m Dec 6 05:18:59 localhost nova_compute[282193]: 2025-12-06 10:18:59.726 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:59 localhost kernel: device tapf8d80242-07 entered promiscuous mode Dec 6 05:18:59 localhost NetworkManager[5973]: [1765016339.7343] manager: (tapf8d80242-07): new Generic device (/org/freedesktop/NetworkManager/Devices/57) Dec 6 05:18:59 localhost nova_compute[282193]: 2025-12-06 10:18:59.738 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:59 localhost neutron_sriov_agent[256690]: 
2025-12-06 10:18:59.741 2 INFO neutron.agent.securitygroups_rpc [None req-39208e2c-f7c0-489d-b264-ceaa95c43793 1333c58cfc75447fad1b488a958549ce a269d8afc49848fbb8ce5cdb49ef37dc - - default default] Security group member updated ['a71b7ce5-d152-4b06-83b8-76d380ec29b6']#033[00m Dec 6 05:18:59 localhost ovn_controller[154851]: 2025-12-06T10:18:59Z|00342|binding|INFO|Claiming lport f8d80242-07b6-493f-8d5a-67d3b413e7c2 for this chassis. Dec 6 05:18:59 localhost ovn_controller[154851]: 2025-12-06T10:18:59Z|00343|binding|INFO|f8d80242-07b6-493f-8d5a-67d3b413e7c2: Claiming unknown Dec 6 05:18:59 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:59.762 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-b84b6f67-f6c6-431b-82dc-4d4f6b20b084', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b84b6f67-f6c6-431b-82dc-4d4f6b20b084', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a269d8afc49848fbb8ce5cdb49ef37dc', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4b3e9ece-98ce-425e-b1a7-ae8b3622954c, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=f8d80242-07b6-493f-8d5a-67d3b413e7c2) old=Port_Binding(chassis=[]) matches 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:18:59 localhost podman[325351]: Dec 6 05:18:59 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:59.764 160509 INFO neutron.agent.ovn.metadata.agent [-] Port f8d80242-07b6-493f-8d5a-67d3b413e7c2 in datapath b84b6f67-f6c6-431b-82dc-4d4f6b20b084 bound to our chassis#033[00m Dec 6 05:18:59 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:59.766 160509 DEBUG neutron.agent.ovn.metadata.agent [-] Port 1da81fc3-b0d7-4158-9ff3-f6eb5f1de696 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Dec 6 05:18:59 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:59.766 160509 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b84b6f67-f6c6-431b-82dc-4d4f6b20b084, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 6 05:18:59 localhost ovn_metadata_agent[160504]: 2025-12-06 10:18:59.767 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[763121e0-d652-4e64-8219-a8f2e31b16be]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:18:59 localhost journal[230404]: ethtool ioctl error on tapf8d80242-07: No such device Dec 6 05:18:59 localhost podman[325351]: 2025-12-06 10:18:59.77532294 +0000 UTC m=+0.104329076 container create 1356753d4b307eabcd1c93210192bf33b7751c5598848d4ac324142d8d20a976 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:18:59 localhost journal[230404]: ethtool ioctl error on tapf8d80242-07: No such device Dec 6 05:18:59 localhost nova_compute[282193]: 2025-12-06 10:18:59.778 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:59 localhost ovn_controller[154851]: 2025-12-06T10:18:59Z|00344|binding|INFO|Setting lport f8d80242-07b6-493f-8d5a-67d3b413e7c2 ovn-installed in OVS Dec 6 05:18:59 localhost ovn_controller[154851]: 2025-12-06T10:18:59Z|00345|binding|INFO|Setting lport f8d80242-07b6-493f-8d5a-67d3b413e7c2 up in Southbound Dec 6 05:18:59 localhost journal[230404]: ethtool ioctl error on tapf8d80242-07: No such device Dec 6 05:18:59 localhost nova_compute[282193]: 2025-12-06 10:18:59.791 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:59 localhost journal[230404]: ethtool ioctl error on tapf8d80242-07: No such device Dec 6 05:18:59 localhost journal[230404]: ethtool ioctl error on tapf8d80242-07: No such device Dec 6 05:18:59 localhost journal[230404]: ethtool ioctl error on tapf8d80242-07: No such device Dec 6 05:18:59 localhost journal[230404]: ethtool ioctl error on tapf8d80242-07: No such device Dec 6 05:18:59 localhost journal[230404]: ethtool ioctl error on tapf8d80242-07: No such device Dec 6 05:18:59 localhost nova_compute[282193]: 2025-12-06 10:18:59.829 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:59 localhost systemd[1]: Started libpod-conmon-1356753d4b307eabcd1c93210192bf33b7751c5598848d4ac324142d8d20a976.scope. 
Dec 6 05:18:59 localhost podman[325351]: 2025-12-06 10:18:59.730842381 +0000 UTC m=+0.059848517 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 6 05:18:59 localhost nova_compute[282193]: 2025-12-06 10:18:59.861 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:18:59 localhost systemd[1]: Started libcrun container. Dec 6 05:18:59 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b64405891051d8c84369c2a60215779dd9c625690bbd5ae710c835d97b69be8e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 6 05:18:59 localhost podman[325351]: 2025-12-06 10:18:59.881363006 +0000 UTC m=+0.210369122 container init 1356753d4b307eabcd1c93210192bf33b7751c5598848d4ac324142d8d20a976 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:18:59 localhost podman[325351]: 2025-12-06 10:18:59.891151223 +0000 UTC m=+0.220157339 container start 1356753d4b307eabcd1c93210192bf33b7751c5598848d4ac324142d8d20a976 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS 
Stream 9 Base Image) Dec 6 05:18:59 localhost dnsmasq[325399]: started, version 2.85 cachesize 150 Dec 6 05:18:59 localhost dnsmasq[325399]: DNS service limited to local subnets Dec 6 05:18:59 localhost dnsmasq[325399]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 6 05:18:59 localhost dnsmasq[325399]: warning: no upstream servers configured Dec 6 05:18:59 localhost dnsmasq-dhcp[325399]: DHCPv6, static leases only on 2001:db8:0:1::, lease time 1d Dec 6 05:18:59 localhost dnsmasq[325399]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/addn_hosts - 2 addresses Dec 6 05:18:59 localhost dnsmasq-dhcp[325399]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/host Dec 6 05:18:59 localhost dnsmasq-dhcp[325399]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/opts Dec 6 05:19:00 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:19:00.300 263652 INFO neutron.agent.dhcp.agent [None req-d3b0ffe0-b1fb-4593-929c-3e31bcad28f9 - - - - - -] DHCP configuration for ports {'687d7abb-e6aa-4047-aa26-552c962fcc91', '7368b3e8-d58f-4836-b822-3889974b0257', 'bb0ffdc5-896f-4862-9530-52cf0e1e0cda'} is completed#033[00m Dec 6 05:19:00 localhost dnsmasq[325399]: exiting on receipt of SIGTERM Dec 6 05:19:00 localhost podman[325435]: 2025-12-06 10:19:00.418958142 +0000 UTC m=+0.098563321 container kill 1356753d4b307eabcd1c93210192bf33b7751c5598848d4ac324142d8d20a976 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.build-date=20251125) Dec 6 05:19:00 localhost systemd[1]: libpod-1356753d4b307eabcd1c93210192bf33b7751c5598848d4ac324142d8d20a976.scope: Deactivated successfully. Dec 6 05:19:00 localhost podman[325452]: 2025-12-06 10:19:00.494238145 +0000 UTC m=+0.060738323 container died 1356753d4b307eabcd1c93210192bf33b7751c5598848d4ac324142d8d20a976 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:19:00 localhost podman[325452]: 2025-12-06 10:19:00.54318559 +0000 UTC m=+0.109685738 container cleanup 1356753d4b307eabcd1c93210192bf33b7751c5598848d4ac324142d8d20a976 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125) Dec 6 05:19:00 localhost systemd[1]: libpod-conmon-1356753d4b307eabcd1c93210192bf33b7751c5598848d4ac324142d8d20a976.scope: Deactivated successfully. 
Dec 6 05:19:00 localhost podman[325459]: 2025-12-06 10:19:00.582041528 +0000 UTC m=+0.129361624 container remove 1356753d4b307eabcd1c93210192bf33b7751c5598848d4ac324142d8d20a976 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true) Dec 6 05:19:00 localhost ovn_controller[154851]: 2025-12-06T10:19:00Z|00346|binding|INFO|Releasing lport bb0ffdc5-896f-4862-9530-52cf0e1e0cda from this chassis (sb_readonly=0) Dec 6 05:19:00 localhost kernel: device tapbb0ffdc5-89 left promiscuous mode Dec 6 05:19:00 localhost ovn_controller[154851]: 2025-12-06T10:19:00Z|00347|binding|INFO|Setting lport bb0ffdc5-896f-4862-9530-52cf0e1e0cda down in Southbound Dec 6 05:19:00 localhost nova_compute[282193]: 2025-12-06 10:19:00.593 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:19:00 localhost ovn_metadata_agent[160504]: 2025-12-06 10:19:00.601 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 
'neutron:network_name': 'neutron-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '34a17eee71de4bac8b71972a4b7b506c', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005548789.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82d32fcc-fa15-458b-9d3c-0c87258ef71d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=bb0ffdc5-896f-4862-9530-52cf0e1e0cda) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:19:00 localhost ovn_metadata_agent[160504]: 2025-12-06 10:19:00.602 160509 INFO neutron.agent.ovn.metadata.agent [-] Port bb0ffdc5-896f-4862-9530-52cf0e1e0cda in datapath 43883dce-1590-48c4-987c-a21b63b82a1c unbound from our chassis#033[00m Dec 6 05:19:00 localhost systemd[1]: var-lib-containers-storage-overlay-b64405891051d8c84369c2a60215779dd9c625690bbd5ae710c835d97b69be8e-merged.mount: Deactivated successfully. Dec 6 05:19:00 localhost ovn_metadata_agent[160504]: 2025-12-06 10:19:00.603 160509 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 43883dce-1590-48c4-987c-a21b63b82a1c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 6 05:19:00 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1356753d4b307eabcd1c93210192bf33b7751c5598848d4ac324142d8d20a976-userdata-shm.mount: Deactivated successfully. 
Dec 6 05:19:00 localhost ovn_metadata_agent[160504]: 2025-12-06 10:19:00.603 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[f65736ca-fdd7-4132-b10c-e08c90589b19]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:19:00 localhost nova_compute[282193]: 2025-12-06 10:19:00.612 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:19:00 localhost podman[325501]: Dec 6 05:19:00 localhost podman[325501]: 2025-12-06 10:19:00.742140815 +0000 UTC m=+0.067822009 container create 0816641e50d3344d04d04913a8061a709484ebf89102d281cfe008aeef60ed2f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b84b6f67-f6c6-431b-82dc-4d4f6b20b084, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0) Dec 6 05:19:00 localhost systemd[1]: Started libpod-conmon-0816641e50d3344d04d04913a8061a709484ebf89102d281cfe008aeef60ed2f.scope. Dec 6 05:19:00 localhost systemd[1]: Started libcrun container. 
Dec 6 05:19:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75d9cf4f7f0f4147182b036eddac499be359e70f248dd76b6f98dcb4d0de61ef/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 6 05:19:00 localhost podman[325501]: 2025-12-06 10:19:00.703818142 +0000 UTC m=+0.029499336 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 6 05:19:00 localhost podman[325501]: 2025-12-06 10:19:00.804050513 +0000 UTC m=+0.129731707 container init 0816641e50d3344d04d04913a8061a709484ebf89102d281cfe008aeef60ed2f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b84b6f67-f6c6-431b-82dc-4d4f6b20b084, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true) Dec 6 05:19:00 localhost podman[325501]: 2025-12-06 10:19:00.814932963 +0000 UTC m=+0.140614157 container start 0816641e50d3344d04d04913a8061a709484ebf89102d281cfe008aeef60ed2f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b84b6f67-f6c6-431b-82dc-4d4f6b20b084, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0) Dec 6 05:19:00 localhost dnsmasq[325519]: started, version 2.85 cachesize 150 Dec 6 05:19:00 localhost dnsmasq[325519]: DNS service limited to local subnets Dec 6 05:19:00 localhost dnsmasq[325519]: compile time options: IPv6 GNU-getopt DBus no-UBus 
no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 6 05:19:00 localhost dnsmasq[325519]: warning: no upstream servers configured Dec 6 05:19:00 localhost dnsmasq-dhcp[325519]: DHCP, static leases only on 10.100.0.0, lease time 1d Dec 6 05:19:00 localhost dnsmasq[325519]: read /var/lib/neutron/dhcp/b84b6f67-f6c6-431b-82dc-4d4f6b20b084/addn_hosts - 0 addresses Dec 6 05:19:00 localhost dnsmasq-dhcp[325519]: read /var/lib/neutron/dhcp/b84b6f67-f6c6-431b-82dc-4d4f6b20b084/host Dec 6 05:19:00 localhost dnsmasq-dhcp[325519]: read /var/lib/neutron/dhcp/b84b6f67-f6c6-431b-82dc-4d4f6b20b084/opts Dec 6 05:19:00 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:19:00.887 263652 INFO neutron.agent.dhcp.agent [None req-584281be-9132-4dac-b7f4-2dada17bd960 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:18:59Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=e398fe9a-c93c-418f-8af0-1f4523efdbbb, ip_allocation=immediate, mac_address=fa:16:3e:e5:dc:d9, name=tempest-ExtraDHCPOptionsTestJSON-1296193310, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:18:56Z, description=, dns_domain=, id=b84b6f67-f6c6-431b-82dc-4d4f6b20b084, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ExtraDHCPOptionsTestJSON-test-network-1729131517, port_security_enabled=True, project_id=a269d8afc49848fbb8ce5cdb49ef37dc, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=34332, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1975, status=ACTIVE, subnets=['c069e8b4-a3d2-4787-b409-2897a52a3b9a'], tags=[], 
tenant_id=a269d8afc49848fbb8ce5cdb49ef37dc, updated_at=2025-12-06T10:18:57Z, vlan_transparent=None, network_id=b84b6f67-f6c6-431b-82dc-4d4f6b20b084, port_security_enabled=True, project_id=a269d8afc49848fbb8ce5cdb49ef37dc, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['a71b7ce5-d152-4b06-83b8-76d380ec29b6'], standard_attr_id=1997, status=DOWN, tags=[], tenant_id=a269d8afc49848fbb8ce5cdb49ef37dc, updated_at=2025-12-06T10:18:59Z on network b84b6f67-f6c6-431b-82dc-4d4f6b20b084#033[00m Dec 6 05:19:00 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:19:00.962 263652 INFO neutron.agent.dhcp.agent [None req-fb636907-d79c-4e5c-b427-aa5653be8ca5 - - - - - -] DHCP configuration for ports {'bc74c544-ee9e-41e3-9bd5-023a359c4f8c'} is completed#033[00m Dec 6 05:19:01 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:19:01.003 263652 INFO neutron.agent.dhcp.agent [None req-e3f33f0f-0988-46f8-babd-9c853e433e06 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:19:01 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:19:01.004 263652 INFO neutron.agent.dhcp.agent [None req-e3f33f0f-0988-46f8-babd-9c853e433e06 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:19:01 localhost dnsmasq[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 1 addresses Dec 6 05:19:01 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:19:01 localhost podman[325539]: 2025-12-06 10:19:01.014826896 +0000 UTC m=+0.052843444 container kill dd70518caed84fde347015325d6b35481123abc4a39df99f8bc9422c2468322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Dec 6 05:19:01 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:19:01 localhost dnsmasq[325519]: read /var/lib/neutron/dhcp/b84b6f67-f6c6-431b-82dc-4d4f6b20b084/addn_hosts - 1 addresses Dec 6 05:19:01 localhost podman[325568]: 2025-12-06 10:19:01.150857012 +0000 UTC m=+0.056594708 container kill 0816641e50d3344d04d04913a8061a709484ebf89102d281cfe008aeef60ed2f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b84b6f67-f6c6-431b-82dc-4d4f6b20b084, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125) Dec 6 05:19:01 localhost dnsmasq-dhcp[325519]: read /var/lib/neutron/dhcp/b84b6f67-f6c6-431b-82dc-4d4f6b20b084/host Dec 6 05:19:01 localhost dnsmasq-dhcp[325519]: read /var/lib/neutron/dhcp/b84b6f67-f6c6-431b-82dc-4d4f6b20b084/opts Dec 6 05:19:01 localhost neutron_sriov_agent[256690]: 2025-12-06 10:19:01.285 2 INFO neutron.agent.securitygroups_rpc [None req-7d4e4684-fcf3-4893-8a23-2e4dea6b64ed 1333c58cfc75447fad1b488a958549ce a269d8afc49848fbb8ce5cdb49ef37dc - - default default] Security group member updated ['a71b7ce5-d152-4b06-83b8-76d380ec29b6']#033[00m Dec 6 05:19:01 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:19:01.358 263652 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, 
binding:vnic_type=normal, created_at=2025-12-06T10:19:00Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[, , ], fixed_ips=[], id=ff894145-754e-443e-acc5-3e6fc6144ba8, ip_allocation=immediate, mac_address=fa:16:3e:c1:22:84, name=tempest-ExtraDHCPOptionsTestJSON-1755302425, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:18:56Z, description=, dns_domain=, id=b84b6f67-f6c6-431b-82dc-4d4f6b20b084, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ExtraDHCPOptionsTestJSON-test-network-1729131517, port_security_enabled=True, project_id=a269d8afc49848fbb8ce5cdb49ef37dc, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=34332, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1975, status=ACTIVE, subnets=['c069e8b4-a3d2-4787-b409-2897a52a3b9a'], tags=[], tenant_id=a269d8afc49848fbb8ce5cdb49ef37dc, updated_at=2025-12-06T10:18:57Z, vlan_transparent=None, network_id=b84b6f67-f6c6-431b-82dc-4d4f6b20b084, port_security_enabled=True, project_id=a269d8afc49848fbb8ce5cdb49ef37dc, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['a71b7ce5-d152-4b06-83b8-76d380ec29b6'], standard_attr_id=2005, status=DOWN, tags=[], tenant_id=a269d8afc49848fbb8ce5cdb49ef37dc, updated_at=2025-12-06T10:19:00Z on network b84b6f67-f6c6-431b-82dc-4d4f6b20b084#033[00m Dec 6 05:19:01 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:19:01.415 263652 INFO neutron.agent.dhcp.agent [None req-44acbcb1-23a6-438c-b439-cbc8cb560404 - - - - - -] DHCP configuration for ports {'e398fe9a-c93c-418f-8af0-1f4523efdbbb'} is completed#033[00m Dec 6 05:19:01 localhost dnsmasq[325519]: read /var/lib/neutron/dhcp/b84b6f67-f6c6-431b-82dc-4d4f6b20b084/addn_hosts - 2 addresses Dec 6 05:19:01 localhost podman[325611]: 2025-12-06 
10:19:01.559135965 +0000 UTC m=+0.059395012 container kill 0816641e50d3344d04d04913a8061a709484ebf89102d281cfe008aeef60ed2f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b84b6f67-f6c6-431b-82dc-4d4f6b20b084, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:19:01 localhost dnsmasq-dhcp[325519]: read /var/lib/neutron/dhcp/b84b6f67-f6c6-431b-82dc-4d4f6b20b084/host Dec 6 05:19:01 localhost dnsmasq-dhcp[325519]: read /var/lib/neutron/dhcp/b84b6f67-f6c6-431b-82dc-4d4f6b20b084/opts Dec 6 05:19:01 localhost systemd[1]: run-netns-qdhcp\x2d43883dce\x2d1590\x2d48c4\x2d987c\x2da21b63b82a1c.mount: Deactivated successfully. Dec 6 05:19:01 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:19:01.857 263652 INFO neutron.agent.dhcp.agent [None req-a6daa11e-a530-47c2-be2f-dac497fef326 - - - - - -] DHCP configuration for ports {'ff894145-754e-443e-acc5-3e6fc6144ba8'} is completed#033[00m Dec 6 05:19:02 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:19:02.059 263652 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:19:01Z, description=, device_id=af23047c-2c71-405c-b139-618f66efd627, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=de89ee12-af27-4b3e-a063-945dcc47a32d, ip_allocation=immediate, mac_address=fa:16:3e:be:47:18, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T08:43:55Z, description=, 
dns_domain=, id=8e238f59-5792-4ff4-95af-f993c8e9e14f, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=3d603431c0bb4967bafc7a0aa6108bfe, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['be7ff830-a3e2-49cd-a05f-63bc528d6d1c'], tags=[], tenant_id=3d603431c0bb4967bafc7a0aa6108bfe, updated_at=2025-12-06T08:44:01Z, vlan_transparent=None, network_id=8e238f59-5792-4ff4-95af-f993c8e9e14f, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2020, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-06T10:19:01Z on network 8e238f59-5792-4ff4-95af-f993c8e9e14f#033[00m Dec 6 05:19:02 localhost dnsmasq[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 2 addresses Dec 6 05:19:02 localhost podman[325648]: 2025-12-06 10:19:02.279097643 +0000 UTC m=+0.064391514 container kill dd70518caed84fde347015325d6b35481123abc4a39df99f8bc9422c2468322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Dec 6 05:19:02 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:19:02 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:19:02 localhost 
systemd[1]: Started /usr/bin/podman healthcheck run 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999. Dec 6 05:19:02 localhost ovn_metadata_agent[160504]: 2025-12-06 10:19:02.317 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fe:3e:86 10.100.0.18 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-9beccfed-6ce7-4343-a09a-a10df412729f', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9beccfed-6ce7-4343-a09a-a10df412729f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f8e1c4c589749b99178bbc7c2bea3f0', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bd8b2850-e3e7-477f-8017-199231500400, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=5e418b23-64fb-4cc3-b4f5-351454b6f675) old=Port_Binding(mac=['fa:16:3e:fe:3e:86 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-9beccfed-6ce7-4343-a09a-a10df412729f', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9beccfed-6ce7-4343-a09a-a10df412729f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f8e1c4c589749b99178bbc7c2bea3f0', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 
'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:19:02 localhost ovn_metadata_agent[160504]: 2025-12-06 10:19:02.319 160509 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 5e418b23-64fb-4cc3-b4f5-351454b6f675 in datapath 9beccfed-6ce7-4343-a09a-a10df412729f updated#033[00m Dec 6 05:19:02 localhost ovn_metadata_agent[160504]: 2025-12-06 10:19:02.322 160509 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9beccfed-6ce7-4343-a09a-a10df412729f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 6 05:19:02 localhost ovn_metadata_agent[160504]: 2025-12-06 10:19:02.323 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[2be73684-346c-47fd-a8d5-71d0843bdd82]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:19:02 localhost podman[325661]: 2025-12-06 10:19:02.388276714 +0000 UTC m=+0.086552656 container health_status 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': 
'/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Dec 6 05:19:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941. 
Dec 6 05:19:02 localhost podman[325661]: 2025-12-06 10:19:02.419706047 +0000 UTC m=+0.117982009 container exec_died 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:19:02 localhost systemd[1]: 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.service: 
Deactivated successfully. Dec 6 05:19:02 localhost podman[325685]: 2025-12-06 10:19:02.479223053 +0000 UTC m=+0.066413225 container health_status b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Dec 6 05:19:02 localhost podman[325685]: 2025-12-06 10:19:02.488464903 +0000 UTC m=+0.075655095 container exec_died b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': 
'/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Dec 6 05:19:02 localhost systemd[1]: b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941.service: Deactivated successfully. Dec 6 05:19:02 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:19:02.548 263652 INFO neutron.agent.dhcp.agent [None req-ff7f66ba-a35f-4dd6-aee1-32b85ffe6de7 - - - - - -] DHCP configuration for ports {'de89ee12-af27-4b3e-a063-945dcc47a32d'} is completed#033[00m Dec 6 05:19:02 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' Dec 6 05:19:02 localhost neutron_sriov_agent[256690]: 2025-12-06 10:19:02.686 2 INFO neutron.agent.securitygroups_rpc [None req-84b84473-0a2d-4cb4-b946-8bdffecc1ba7 1333c58cfc75447fad1b488a958549ce a269d8afc49848fbb8ce5cdb49ef37dc - - default default] Security group member updated ['a71b7ce5-d152-4b06-83b8-76d380ec29b6']#033[00m Dec 6 05:19:02 localhost neutron_sriov_agent[256690]: 2025-12-06 10:19:02.870 2 INFO neutron.agent.securitygroups_rpc [None req-9f09c577-620d-43b3-bb16-a6c9b188fc98 b9f4d254b45d482eab7cfb178c231d9a 5f8e1c4c589749b99178bbc7c2bea3f0 - - default default] Security group member updated ['cd56abe4-204c-4363-ad64-0a6840260727']#033[00m Dec 6 05:19:03 localhost podman[325727]: 2025-12-06 10:19:03.017685446 +0000 UTC m=+0.061283910 container kill 0816641e50d3344d04d04913a8061a709484ebf89102d281cfe008aeef60ed2f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b84b6f67-f6c6-431b-82dc-4d4f6b20b084, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, 
org.label-schema.vendor=CentOS) Dec 6 05:19:03 localhost dnsmasq[325519]: read /var/lib/neutron/dhcp/b84b6f67-f6c6-431b-82dc-4d4f6b20b084/addn_hosts - 1 addresses Dec 6 05:19:03 localhost dnsmasq-dhcp[325519]: read /var/lib/neutron/dhcp/b84b6f67-f6c6-431b-82dc-4d4f6b20b084/host Dec 6 05:19:03 localhost dnsmasq-dhcp[325519]: read /var/lib/neutron/dhcp/b84b6f67-f6c6-431b-82dc-4d4f6b20b084/opts Dec 6 05:19:03 localhost nova_compute[282193]: 2025-12-06 10:19:03.070 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:19:03 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:19:03 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:19:03.366 263652 INFO neutron.agent.linux.ip_lib [None req-37bfc739-3e5f-4189-852a-3ee87f21c002 - - - - - -] Device tap06b75261-40 cannot be used as it has no MAC address#033[00m Dec 6 05:19:03 localhost nova_compute[282193]: 2025-12-06 10:19:03.503 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:19:03 localhost kernel: device tap06b75261-40 entered promiscuous mode Dec 6 05:19:03 localhost ovn_controller[154851]: 2025-12-06T10:19:03Z|00348|binding|INFO|Claiming lport 06b75261-40e0-4712-ac9e-63a2586b3a8c for this chassis. 
Dec 6 05:19:03 localhost ovn_controller[154851]: 2025-12-06T10:19:03Z|00349|binding|INFO|06b75261-40e0-4712-ac9e-63a2586b3a8c: Claiming unknown Dec 6 05:19:03 localhost NetworkManager[5973]: [1765016343.5099] manager: (tap06b75261-40): new Generic device (/org/freedesktop/NetworkManager/Devices/58) Dec 6 05:19:03 localhost nova_compute[282193]: 2025-12-06 10:19:03.511 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:19:03 localhost ovn_controller[154851]: 2025-12-06T10:19:03Z|00350|binding|INFO|Setting lport 06b75261-40e0-4712-ac9e-63a2586b3a8c ovn-installed in OVS Dec 6 05:19:03 localhost systemd-udevd[325760]: Network interface NamePolicy= disabled on kernel command line. Dec 6 05:19:03 localhost nova_compute[282193]: 2025-12-06 10:19:03.516 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:19:03 localhost nova_compute[282193]: 2025-12-06 10:19:03.518 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:19:03 localhost ovn_controller[154851]: 2025-12-06T10:19:03Z|00351|binding|INFO|Setting lport 06b75261-40e0-4712-ac9e-63a2586b3a8c up in Southbound Dec 6 05:19:03 localhost ovn_metadata_agent[160504]: 2025-12-06 10:19:03.519 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe86:a151/64', 'neutron:device_id': 
'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '34a17eee71de4bac8b71972a4b7b506c', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82d32fcc-fa15-458b-9d3c-0c87258ef71d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=06b75261-40e0-4712-ac9e-63a2586b3a8c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:19:03 localhost ovn_metadata_agent[160504]: 2025-12-06 10:19:03.521 160509 INFO neutron.agent.ovn.metadata.agent [-] Port 06b75261-40e0-4712-ac9e-63a2586b3a8c in datapath 43883dce-1590-48c4-987c-a21b63b82a1c bound to our chassis#033[00m Dec 6 05:19:03 localhost ovn_metadata_agent[160504]: 2025-12-06 10:19:03.522 160509 DEBUG neutron.agent.ovn.metadata.agent [-] Port 81d07f5f-20db-4fca-88f9-09f5f7d63de6 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Dec 6 05:19:03 localhost ovn_metadata_agent[160504]: 2025-12-06 10:19:03.522 160509 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 43883dce-1590-48c4-987c-a21b63b82a1c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 6 05:19:03 localhost ovn_metadata_agent[160504]: 2025-12-06 10:19:03.523 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[6dc0a258-5b44-42a4-b663-94c18d7f1c2b]: (4, False) _call_back 
/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:19:03 localhost journal[230404]: ethtool ioctl error on tap06b75261-40: No such device Dec 6 05:19:03 localhost journal[230404]: ethtool ioctl error on tap06b75261-40: No such device Dec 6 05:19:03 localhost nova_compute[282193]: 2025-12-06 10:19:03.545 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:19:03 localhost journal[230404]: ethtool ioctl error on tap06b75261-40: No such device Dec 6 05:19:03 localhost journal[230404]: ethtool ioctl error on tap06b75261-40: No such device Dec 6 05:19:03 localhost journal[230404]: ethtool ioctl error on tap06b75261-40: No such device Dec 6 05:19:03 localhost journal[230404]: ethtool ioctl error on tap06b75261-40: No such device Dec 6 05:19:03 localhost neutron_sriov_agent[256690]: 2025-12-06 10:19:03.570 2 INFO neutron.agent.securitygroups_rpc [None req-dcc65382-3d9e-4656-a85e-ef8650caf3cc b9f4d254b45d482eab7cfb178c231d9a 5f8e1c4c589749b99178bbc7c2bea3f0 - - default default] Security group member updated ['cd56abe4-204c-4363-ad64-0a6840260727']#033[00m Dec 6 05:19:03 localhost journal[230404]: ethtool ioctl error on tap06b75261-40: No such device Dec 6 05:19:03 localhost journal[230404]: ethtool ioctl error on tap06b75261-40: No such device Dec 6 05:19:03 localhost nova_compute[282193]: 2025-12-06 10:19:03.578 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:19:03 localhost nova_compute[282193]: 2025-12-06 10:19:03.602 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:19:03 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:19:03.783 263652 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], 
binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:18:59Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[, , ], fixed_ips=[], id=e398fe9a-c93c-418f-8af0-1f4523efdbbb, ip_allocation=immediate, mac_address=fa:16:3e:e5:dc:d9, name=tempest-new-port-name-159393482, network_id=b84b6f67-f6c6-431b-82dc-4d4f6b20b084, port_security_enabled=True, project_id=a269d8afc49848fbb8ce5cdb49ef37dc, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=2, security_groups=['a71b7ce5-d152-4b06-83b8-76d380ec29b6'], standard_attr_id=1997, status=DOWN, tags=[], tenant_id=a269d8afc49848fbb8ce5cdb49ef37dc, updated_at=2025-12-06T10:19:03Z on network b84b6f67-f6c6-431b-82dc-4d4f6b20b084#033[00m Dec 6 05:19:04 localhost systemd[1]: tmp-crun.PEEiMd.mount: Deactivated successfully. Dec 6 05:19:04 localhost podman[325819]: 2025-12-06 10:19:04.038936091 +0000 UTC m=+0.069332993 container kill 0816641e50d3344d04d04913a8061a709484ebf89102d281cfe008aeef60ed2f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b84b6f67-f6c6-431b-82dc-4d4f6b20b084, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true) Dec 6 05:19:04 localhost dnsmasq[325519]: read /var/lib/neutron/dhcp/b84b6f67-f6c6-431b-82dc-4d4f6b20b084/addn_hosts - 1 addresses Dec 6 05:19:04 localhost dnsmasq-dhcp[325519]: read /var/lib/neutron/dhcp/b84b6f67-f6c6-431b-82dc-4d4f6b20b084/host Dec 6 05:19:04 localhost dnsmasq-dhcp[325519]: read /var/lib/neutron/dhcp/b84b6f67-f6c6-431b-82dc-4d4f6b20b084/opts Dec 6 05:19:04 localhost 
nova_compute[282193]: 2025-12-06 10:19:04.212 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:19:04 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:19:04.336 263652 INFO neutron.agent.dhcp.agent [None req-e3d4edda-8596-49dc-bcb8-365e58fdd54c - - - - - -] DHCP configuration for ports {'e398fe9a-c93c-418f-8af0-1f4523efdbbb'} is completed#033[00m Dec 6 05:19:04 localhost podman[325869]: Dec 6 05:19:04 localhost podman[325869]: 2025-12-06 10:19:04.43388751 +0000 UTC m=+0.078208582 container create 00206d12079198ff6a67ddc2240431deb4867e7f4507e18861ce3653f2eafae1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:19:04 localhost neutron_sriov_agent[256690]: 2025-12-06 10:19:04.433 2 INFO neutron.agent.securitygroups_rpc [None req-debf1305-ba6e-49c2-9083-8908dd68e972 a6a9256bca1441629c18003b71ba1c6f 34a17eee71de4bac8b71972a4b7b506c - - default default] Security group member updated ['d618a097-5989-47aa-9263-1c8a114ad269']#033[00m Dec 6 05:19:04 localhost podman[325869]: 2025-12-06 10:19:04.387432962 +0000 UTC m=+0.031754044 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 6 05:19:04 localhost systemd[1]: Started libpod-conmon-00206d12079198ff6a67ddc2240431deb4867e7f4507e18861ce3653f2eafae1.scope. Dec 6 05:19:04 localhost systemd[1]: Started libcrun container. 
Dec 6 05:19:04 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/faf3f490170fa54cdf29bff10f6748300c6eee29114bdae5d18931b9dc259de4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 6 05:19:04 localhost podman[325869]: 2025-12-06 10:19:04.522695235 +0000 UTC m=+0.167016277 container init 00206d12079198ff6a67ddc2240431deb4867e7f4507e18861ce3653f2eafae1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:19:04 localhost podman[325869]: 2025-12-06 10:19:04.531847562 +0000 UTC m=+0.176168604 container start 00206d12079198ff6a67ddc2240431deb4867e7f4507e18861ce3653f2eafae1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:19:04 localhost dnsmasq[325887]: started, version 2.85 cachesize 150 Dec 6 05:19:04 localhost dnsmasq[325887]: DNS service limited to local subnets Dec 6 05:19:04 localhost dnsmasq[325887]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 6 05:19:04 localhost dnsmasq[325887]: warning: no upstream servers configured Dec 
6 05:19:04 localhost dnsmasq-dhcp[325887]: DHCPv6, static leases only on 2001:db8::, lease time 1d Dec 6 05:19:04 localhost dnsmasq[325887]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/addn_hosts - 0 addresses Dec 6 05:19:04 localhost dnsmasq-dhcp[325887]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/host Dec 6 05:19:04 localhost dnsmasq-dhcp[325887]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/opts Dec 6 05:19:04 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:19:04.704 263652 INFO neutron.agent.dhcp.agent [None req-3b43b9a3-cc0e-4969-b8ee-5f3a8d2a2048 - - - - - -] DHCP configuration for ports {'687d7abb-e6aa-4047-aa26-552c962fcc91'} is completed#033[00m Dec 6 05:19:04 localhost dnsmasq[325887]: exiting on receipt of SIGTERM Dec 6 05:19:04 localhost podman[325905]: 2025-12-06 10:19:04.93136565 +0000 UTC m=+0.069118228 container kill 00206d12079198ff6a67ddc2240431deb4867e7f4507e18861ce3653f2eafae1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3) Dec 6 05:19:04 localhost systemd[1]: libpod-00206d12079198ff6a67ddc2240431deb4867e7f4507e18861ce3653f2eafae1.scope: Deactivated successfully. 
Dec 6 05:19:05 localhost neutron_sriov_agent[256690]: 2025-12-06 10:19:05.016 2 INFO neutron.agent.securitygroups_rpc [None req-91092cc5-6d84-4cc3-b0ed-55c483b81857 b9f4d254b45d482eab7cfb178c231d9a 5f8e1c4c589749b99178bbc7c2bea3f0 - - default default] Security group member updated ['cd56abe4-204c-4363-ad64-0a6840260727']#033[00m Dec 6 05:19:05 localhost podman[325919]: 2025-12-06 10:19:05.018485403 +0000 UTC m=+0.068628182 container died 00206d12079198ff6a67ddc2240431deb4867e7f4507e18861ce3653f2eafae1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:19:05 localhost systemd[1]: tmp-crun.w6McMt.mount: Deactivated successfully. Dec 6 05:19:05 localhost podman[325919]: 2025-12-06 10:19:05.080582326 +0000 UTC m=+0.130725065 container cleanup 00206d12079198ff6a67ddc2240431deb4867e7f4507e18861ce3653f2eafae1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true) Dec 6 05:19:05 localhost systemd[1]: libpod-conmon-00206d12079198ff6a67ddc2240431deb4867e7f4507e18861ce3653f2eafae1.scope: Deactivated successfully. 
Dec 6 05:19:05 localhost podman[325921]: 2025-12-06 10:19:05.103683777 +0000 UTC m=+0.145326749 container remove 00206d12079198ff6a67ddc2240431deb4867e7f4507e18861ce3653f2eafae1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:19:05 localhost neutron_sriov_agent[256690]: 2025-12-06 10:19:05.579 2 INFO neutron.agent.securitygroups_rpc [None req-03766a42-54e7-4e6a-a01a-d12c463a6613 a6a9256bca1441629c18003b71ba1c6f 34a17eee71de4bac8b71972a4b7b506c - - default default] Security group member updated ['d618a097-5989-47aa-9263-1c8a114ad269']#033[00m Dec 6 05:19:05 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e158 e158: 6 total, 6 up, 6 in Dec 6 05:19:05 localhost neutron_sriov_agent[256690]: 2025-12-06 10:19:05.723 2 INFO neutron.agent.securitygroups_rpc [None req-9d59c4b8-d3a8-40d9-8d73-2b90f45c1e12 1333c58cfc75447fad1b488a958549ce a269d8afc49848fbb8ce5cdb49ef37dc - - default default] Security group member updated ['a71b7ce5-d152-4b06-83b8-76d380ec29b6']#033[00m Dec 6 05:19:06 localhost dnsmasq[325519]: read /var/lib/neutron/dhcp/b84b6f67-f6c6-431b-82dc-4d4f6b20b084/addn_hosts - 0 addresses Dec 6 05:19:06 localhost dnsmasq-dhcp[325519]: read /var/lib/neutron/dhcp/b84b6f67-f6c6-431b-82dc-4d4f6b20b084/host Dec 6 05:19:06 localhost dnsmasq-dhcp[325519]: read /var/lib/neutron/dhcp/b84b6f67-f6c6-431b-82dc-4d4f6b20b084/opts Dec 6 05:19:06 localhost podman[325981]: 2025-12-06 10:19:06.002647273 +0000 UTC m=+0.061324500 container kill 0816641e50d3344d04d04913a8061a709484ebf89102d281cfe008aeef60ed2f 
(image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b84b6f67-f6c6-431b-82dc-4d4f6b20b084, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true) Dec 6 05:19:06 localhost systemd[1]: var-lib-containers-storage-overlay-faf3f490170fa54cdf29bff10f6748300c6eee29114bdae5d18931b9dc259de4-merged.mount: Deactivated successfully. Dec 6 05:19:06 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-00206d12079198ff6a67ddc2240431deb4867e7f4507e18861ce3653f2eafae1-userdata-shm.mount: Deactivated successfully. Dec 6 05:19:06 localhost neutron_sriov_agent[256690]: 2025-12-06 10:19:06.283 2 INFO neutron.agent.securitygroups_rpc [None req-13d35407-bef4-4c5e-baaa-9390a0fcd613 b9f4d254b45d482eab7cfb178c231d9a 5f8e1c4c589749b99178bbc7c2bea3f0 - - default default] Security group member updated ['cd56abe4-204c-4363-ad64-0a6840260727']#033[00m Dec 6 05:19:06 localhost systemd[1]: tmp-crun.JCj6ZZ.mount: Deactivated successfully. 
Dec 6 05:19:06 localhost dnsmasq[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 1 addresses Dec 6 05:19:06 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:19:06 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:19:06 localhost podman[326026]: 2025-12-06 10:19:06.311275086 +0000 UTC m=+0.077193043 container kill dd70518caed84fde347015325d6b35481123abc4a39df99f8bc9422c2468322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true) Dec 6 05:19:06 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Dec 6 05:19:06 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3936622661' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Dec 6 05:19:06 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Dec 6 05:19:06 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/3936622661' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Dec 6 05:19:06 localhost ovn_controller[154851]: 2025-12-06T10:19:06Z|00352|binding|INFO|Removing iface tapf8d80242-07 ovn-installed in OVS Dec 6 05:19:06 localhost ovn_controller[154851]: 2025-12-06T10:19:06Z|00353|binding|INFO|Removing lport f8d80242-07b6-493f-8d5a-67d3b413e7c2 ovn-installed in OVS Dec 6 05:19:06 localhost ovn_metadata_agent[160504]: 2025-12-06 10:19:06.570 160509 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 1da81fc3-b0d7-4158-9ff3-f6eb5f1de696 with type ""#033[00m Dec 6 05:19:06 localhost ovn_metadata_agent[160504]: 2025-12-06 10:19:06.601 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-b84b6f67-f6c6-431b-82dc-4d4f6b20b084', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b84b6f67-f6c6-431b-82dc-4d4f6b20b084', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a269d8afc49848fbb8ce5cdb49ef37dc', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005548789.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4b3e9ece-98ce-425e-b1a7-ae8b3622954c, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], 
logical_port=f8d80242-07b6-493f-8d5a-67d3b413e7c2) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:19:06 localhost nova_compute[282193]: 2025-12-06 10:19:06.601 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:19:06 localhost nova_compute[282193]: 2025-12-06 10:19:06.602 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:19:06 localhost ovn_metadata_agent[160504]: 2025-12-06 10:19:06.603 160509 INFO neutron.agent.ovn.metadata.agent [-] Port f8d80242-07b6-493f-8d5a-67d3b413e7c2 in datapath b84b6f67-f6c6-431b-82dc-4d4f6b20b084 unbound from our chassis#033[00m Dec 6 05:19:06 localhost ovn_metadata_agent[160504]: 2025-12-06 10:19:06.605 160509 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b84b6f67-f6c6-431b-82dc-4d4f6b20b084, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 6 05:19:06 localhost ovn_metadata_agent[160504]: 2025-12-06 10:19:06.605 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[176f7abd-d36a-4feb-8abd-f9867d93b368]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:19:06 localhost dnsmasq[325519]: exiting on receipt of SIGTERM Dec 6 05:19:06 localhost podman[326080]: 2025-12-06 10:19:06.654110684 +0000 UTC m=+0.052875315 container kill 0816641e50d3344d04d04913a8061a709484ebf89102d281cfe008aeef60ed2f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b84b6f67-f6c6-431b-82dc-4d4f6b20b084, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, 
org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 6 05:19:06 localhost systemd[1]: libpod-0816641e50d3344d04d04913a8061a709484ebf89102d281cfe008aeef60ed2f.scope: Deactivated successfully. Dec 6 05:19:06 localhost podman[326095]: Dec 6 05:19:06 localhost podman[326095]: 2025-12-06 10:19:06.703984447 +0000 UTC m=+0.075646296 container create 23114774a6fb8cff7210fda601c8d1bd2082af42df010baf7781c360d2ff8fa6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, io.buildah.version=1.41.3) Dec 6 05:19:06 localhost podman[326113]: 2025-12-06 10:19:06.740477744 +0000 UTC m=+0.061477386 container died 0816641e50d3344d04d04913a8061a709484ebf89102d281cfe008aeef60ed2f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b84b6f67-f6c6-431b-82dc-4d4f6b20b084, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Dec 6 05:19:06 localhost systemd[1]: Started libpod-conmon-23114774a6fb8cff7210fda601c8d1bd2082af42df010baf7781c360d2ff8fa6.scope. Dec 6 05:19:06 localhost systemd[1]: Started libcrun container. 
Dec 6 05:19:06 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84a2207a38c3ce9b023fb76311185a6d9de756fe5150fd8266ec626a8df1dbd6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 6 05:19:06 localhost podman[326095]: 2025-12-06 10:19:06.666340624 +0000 UTC m=+0.038002474 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 6 05:19:06 localhost podman[326095]: 2025-12-06 10:19:06.774567228 +0000 UTC m=+0.146229087 container init 23114774a6fb8cff7210fda601c8d1bd2082af42df010baf7781c360d2ff8fa6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true) Dec 6 05:19:06 localhost podman[326095]: 2025-12-06 10:19:06.786663424 +0000 UTC m=+0.158325283 container start 23114774a6fb8cff7210fda601c8d1bd2082af42df010baf7781c360d2ff8fa6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:19:06 localhost dnsmasq[326142]: started, version 2.85 cachesize 150 Dec 6 05:19:06 localhost dnsmasq[326142]: DNS service limited to local subnets Dec 6 05:19:06 localhost dnsmasq[326142]: compile time options: IPv6 GNU-getopt DBus no-UBus 
no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 6 05:19:06 localhost dnsmasq[326142]: warning: no upstream servers configured Dec 6 05:19:06 localhost dnsmasq-dhcp[326142]: DHCPv6, static leases only on 2001:db8:0:1::, lease time 1d Dec 6 05:19:06 localhost dnsmasq-dhcp[326142]: DHCPv6, static leases only on 2001:db8::, lease time 1d Dec 6 05:19:06 localhost dnsmasq[326142]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/addn_hosts - 2 addresses Dec 6 05:19:06 localhost dnsmasq-dhcp[326142]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/host Dec 6 05:19:06 localhost dnsmasq-dhcp[326142]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/opts Dec 6 05:19:06 localhost podman[326113]: 2025-12-06 10:19:06.838486676 +0000 UTC m=+0.159486238 container remove 0816641e50d3344d04d04913a8061a709484ebf89102d281cfe008aeef60ed2f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b84b6f67-f6c6-431b-82dc-4d4f6b20b084, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:19:06 localhost systemd[1]: libpod-conmon-0816641e50d3344d04d04913a8061a709484ebf89102d281cfe008aeef60ed2f.scope: Deactivated successfully. 
Dec 6 05:19:06 localhost nova_compute[282193]: 2025-12-06 10:19:06.854 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:19:06 localhost kernel: device tapf8d80242-07 left promiscuous mode Dec 6 05:19:06 localhost nova_compute[282193]: 2025-12-06 10:19:06.867 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:19:06 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:19:06.919 263652 INFO neutron.agent.dhcp.agent [None req-85610208-1faa-4255-90a4-b8674ea410b5 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:19:06 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:19:06.921 263652 INFO neutron.agent.dhcp.agent [None req-85610208-1faa-4255-90a4-b8674ea410b5 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:19:06 localhost ovn_controller[154851]: 2025-12-06T10:19:06Z|00354|binding|INFO|Releasing lport 4fb81ffd-e198-4628-9bd0-0c0f0c89c33a from this chassis (sb_readonly=0) Dec 6 05:19:06 localhost nova_compute[282193]: 2025-12-06 10:19:06.973 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:19:07 localhost systemd[1]: tmp-crun.4KrnKN.mount: Deactivated successfully. Dec 6 05:19:07 localhost systemd[1]: var-lib-containers-storage-overlay-75d9cf4f7f0f4147182b036eddac499be359e70f248dd76b6f98dcb4d0de61ef-merged.mount: Deactivated successfully. Dec 6 05:19:07 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0816641e50d3344d04d04913a8061a709484ebf89102d281cfe008aeef60ed2f-userdata-shm.mount: Deactivated successfully. Dec 6 05:19:07 localhost systemd[1]: run-netns-qdhcp\x2db84b6f67\x2df6c6\x2d431b\x2d82dc\x2d4d4f6b20b084.mount: Deactivated successfully. 
Dec 6 05:19:07 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:19:07.072 263652 INFO neutron.agent.dhcp.agent [None req-96df3089-74ee-416a-a56a-68f56bfaa1f4 - - - - - -] DHCP configuration for ports {'06b75261-40e0-4712-ac9e-63a2586b3a8c', '687d7abb-e6aa-4047-aa26-552c962fcc91', '96643b3a-2587-4ba3-bc78-4df17e224aeb'} is completed#033[00m Dec 6 05:19:07 localhost dnsmasq[326142]: exiting on receipt of SIGTERM Dec 6 05:19:07 localhost podman[326165]: 2025-12-06 10:19:07.110085745 +0000 UTC m=+0.066858079 container kill 23114774a6fb8cff7210fda601c8d1bd2082af42df010baf7781c360d2ff8fa6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Dec 6 05:19:07 localhost systemd[1]: libpod-23114774a6fb8cff7210fda601c8d1bd2082af42df010baf7781c360d2ff8fa6.scope: Deactivated successfully. Dec 6 05:19:07 localhost podman[326179]: 2025-12-06 10:19:07.181709206 +0000 UTC m=+0.059799064 container died 23114774a6fb8cff7210fda601c8d1bd2082af42df010baf7781c360d2ff8fa6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true) Dec 6 05:19:07 localhost systemd[1]: tmp-crun.KBD7F5.mount: Deactivated successfully. 
Dec 6 05:19:07 localhost podman[326179]: 2025-12-06 10:19:07.223132953 +0000 UTC m=+0.101222771 container cleanup 23114774a6fb8cff7210fda601c8d1bd2082af42df010baf7781c360d2ff8fa6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2) Dec 6 05:19:07 localhost systemd[1]: libpod-conmon-23114774a6fb8cff7210fda601c8d1bd2082af42df010baf7781c360d2ff8fa6.scope: Deactivated successfully. Dec 6 05:19:07 localhost podman[326186]: 2025-12-06 10:19:07.268961393 +0000 UTC m=+0.135045947 container remove 23114774a6fb8cff7210fda601c8d1bd2082af42df010baf7781c360d2ff8fa6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:19:07 localhost ovn_controller[154851]: 2025-12-06T10:19:07Z|00355|binding|INFO|Releasing lport 06b75261-40e0-4712-ac9e-63a2586b3a8c from this chassis (sb_readonly=0) Dec 6 05:19:07 localhost nova_compute[282193]: 2025-12-06 10:19:07.284 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:19:07 localhost ovn_controller[154851]: 2025-12-06T10:19:07Z|00356|binding|INFO|Setting lport 06b75261-40e0-4712-ac9e-63a2586b3a8c 
down in Southbound Dec 6 05:19:07 localhost kernel: device tap06b75261-40 left promiscuous mode Dec 6 05:19:07 localhost ovn_metadata_agent[160504]: 2025-12-06 10:19:07.292 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1::2/64 2001:db8::f816:3eff:fe86:a151/64', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '34a17eee71de4bac8b71972a4b7b506c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005548789.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82d32fcc-fa15-458b-9d3c-0c87258ef71d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=06b75261-40e0-4712-ac9e-63a2586b3a8c) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:19:07 localhost ovn_metadata_agent[160504]: 2025-12-06 10:19:07.294 160509 INFO neutron.agent.ovn.metadata.agent [-] Port 06b75261-40e0-4712-ac9e-63a2586b3a8c in datapath 43883dce-1590-48c4-987c-a21b63b82a1c unbound from our chassis#033[00m Dec 6 05:19:07 localhost ovn_metadata_agent[160504]: 2025-12-06 10:19:07.297 160509 DEBUG neutron.agent.ovn.metadata.agent [-] 
No valid VIF ports were found for network 43883dce-1590-48c4-987c-a21b63b82a1c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 6 05:19:07 localhost ovn_metadata_agent[160504]: 2025-12-06 10:19:07.297 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[aa961c16-70ac-48cb-95f9-f223cf1c9d1c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:19:07 localhost nova_compute[282193]: 2025-12-06 10:19:07.307 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:19:07 localhost nova_compute[282193]: 2025-12-06 10:19:07.308 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:19:07 localhost sshd[326209]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.916 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'name': 'test', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'np0005548789.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'hostId': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228 Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.917 12 DEBUG ceilometer.polling.manager [-] Skip pollster 
network.outgoing.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.918 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.922 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.925 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'd2f9f95d-484c-4677-bb02-787c8d0e9731', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:19:07.918181', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 
'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': 'ffe96546-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12766.167552279, 'message_signature': 'f6aeacf924948555e23a2bd9f6348258df0fa8d387a23da12c00f174211240b5'}]}, 'timestamp': '2025-12-06 10:19:07.922984', '_unique_id': 'e31ecbe835604ff38f61a1061a76491a'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.925 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.925 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.925 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.925 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.925 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.925 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.925 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.925 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.925 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.925 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:19:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.925 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.925 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.925 12 ERROR oslo_messaging.notify.messaging Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.925 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.925 12 ERROR oslo_messaging.notify.messaging Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.925 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.925 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.925 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:19:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.925 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.925 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.925 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.925 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.925 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:19:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.925 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.925 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.925 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.925 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.925 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.925 12 ERROR oslo_messaging.notify.messaging 
self.gen.throw(typ, value, traceback) Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.925 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.925 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.925 12 ERROR oslo_messaging.notify.messaging Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.926 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.944 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/cpu volume: 17470000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.945 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'cfafb631-b885-424e-a8c6-d48159746871', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 17470000000, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'timestamp': '2025-12-06T10:19:07.927080', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': 'ffecbe30-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12766.193450295, 'message_signature': 'be4ae09f4ac18ce6a03f59e295fe61aefe11b2c462ceb0b24a2c5d8ba31deb96'}]}, 'timestamp': '2025-12-06 10:19:07.944816', '_unique_id': '345d8ab88cb34a51b16e3331824065ce'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.945 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in 
_reraise_as_library_errors Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.945 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.945 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.945 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.945 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.945 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.945 12 ERROR 
oslo_messaging.notify.messaging conn.connect() Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.945 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.945 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.945 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.945 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.945 12 ERROR oslo_messaging.notify.messaging Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.945 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.945 12 ERROR oslo_messaging.notify.messaging Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.945 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.945 
12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.945 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.945 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.945 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.945 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.945 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:19:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.945 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.945 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.945 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.945 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.945 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 
10:19:07.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.945 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.945 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.945 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.945 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.945 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.945 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.945 12 ERROR oslo_messaging.notify.messaging Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.947 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 
10:19:07.981 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.latency volume: 1525105336 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.981 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.latency volume: 106716064 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.983 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '339b7501-3744-41e4-b1d2-7f581ea58dbc', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1525105336, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:19:07.947429', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 
0, 'disk_name': 'vda'}, 'message_id': 'fff26092-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12766.196807337, 'message_signature': 'b8edd53c700d6f1624588cdafba775a33025062f4369dcec11e2e6d7c81854f1'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 106716064, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:19:07.947429', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'fff27712-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12766.196807337, 'message_signature': '05dd2f7aea62eb74001a6d21572e871933f37659e70f1f0e3959eedd465786eb'}]}, 'timestamp': '2025-12-06 10:19:07.982228', '_unique_id': 'b51bca0332d142f2b893758024a66a72'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.983 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.983 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.983 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.983 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.983 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.983 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.983 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.983 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.983 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.983 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.983 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.983 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:19:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.983 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.983 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.983 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.983 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.983 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.983 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.983 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.983 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.983 12 ERROR oslo_messaging.notify.messaging Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.983 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.983 12 ERROR oslo_messaging.notify.messaging Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.983 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:19:07 
localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.983 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.983 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.983 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.983 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.983 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.983 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.983 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.983 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.983 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.983 12 ERROR oslo_messaging.notify.messaging return 
rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.983 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.983 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.983 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.983 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.983 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.983 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.983 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.983 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.983 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.983 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( 
Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.983 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.983 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.983 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.983 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.983 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.983 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.983 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.983 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.983 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.983 12 ERROR oslo_messaging.notify.messaging Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.984 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters Dec 6 
05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.996 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.996 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.998 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'bd6220e1-841c-4e19-813e-3148c9b21301', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:19:07.984877', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 
512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'fff4aa96-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12766.234253903, 'message_signature': 'd03c1e3dd5be741e2d047c4704397b57c7a424a8c3b3466eeb245ec6cadc6b94'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:19:07.984877', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'fff4c120-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12766.234253903, 'message_signature': 'c3e5a564f72c900a80786c36f88accb5d88a82c8fae29a8d12249108e6509e08'}]}, 'timestamp': '2025-12-06 10:19:07.997240', '_unique_id': 'a5d9e361b2a54a9ebf055d039d2ae3b0'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.998 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.998 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.998 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:19:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.998 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.998 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.998 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.998 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in 
establish_connection Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.998 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.998 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.998 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.998 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.998 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.998 12 ERROR oslo_messaging.notify.messaging Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.998 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.998 12 ERROR oslo_messaging.notify.messaging Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.998 12 ERROR oslo_messaging.notify.messaging 
Traceback (most recent call last): Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.998 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.998 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.998 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.998 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.998 12 ERROR 
oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.998 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.998 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.998 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.998 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.998 12 ERROR 
oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.998 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.998 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.998 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.998 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.998 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.998 12 ERROR oslo_messaging.notify.messaging Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.999 12 INFO ceilometer.polling.manager [-] Polling 
pollster network.incoming.packets in the context of pollsters Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:07.999 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.incoming.packets volume: 60 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.001 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '1aa42e51-c552-4785-9d37-01cf736e54ce', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 60, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:19:07.999749', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': 
'fff53d1c-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12766.167552279, 'message_signature': 'b62af848adb52288fe9cd5bc410412f02ce16e89971bba3eca3603c337188051'}]}, 'timestamp': '2025-12-06 10:19:08.000520', '_unique_id': '378b2001b45f4246800d53a5af820644'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.001 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.001 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.001 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.001 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.001 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:19:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.001 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.001 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.001 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.001 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.001 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.001 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused 
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.001 12 ERROR oslo_messaging.notify.messaging Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.001 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.001 12 ERROR oslo_messaging.notify.messaging Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.001 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.001 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.001 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.001 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.001 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.001 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.001 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.001 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.001 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.001 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.001 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.001 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.001 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.001 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.001 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.001 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.001 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.001 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.002 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.002 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.003 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.005 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '5a919f91-9035-4fb3-81a9-3a334487b80a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:19:08.002846', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'fff5b17a-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12766.234253903, 'message_signature': '5d3b58493e19e195cf450d75496d7273f6bd83d3614ae4e9361590e4ae0fcc28'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:19:08.002846', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'fff5c656-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12766.234253903, 'message_signature': '581500f7f0f38002a189581145c4dc6110332d34b8eca8f19ac9a1367515d093'}]}, 'timestamp': '2025-12-06 10:19:08.003995', '_unique_id': '54794c3d8d77494b99d61bcfbd73a4a2'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.005 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.005 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.005 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.005 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.005 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.005 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.005 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.005 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.005 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.005 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.005 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.005 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.005 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.005 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.005 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.005 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.005 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.005 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.005 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.005 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.005 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.005 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.005 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.005 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.005 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.005 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.005 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.005 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.005 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.005 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.005 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.005 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.005 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.005 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.005 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.005 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.005 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.005 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.005 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.005 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.005 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.005 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.005 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.005 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.005 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.005 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.005 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.005 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.005 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.005 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.005 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.005 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.005 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.005 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.006 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.006 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.007 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'a5e2b0a1-e337-4498-aac8-a60b7a8c9882', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:19:08.006347', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': 'fff63a46-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12766.167552279, 'message_signature': '7cf7a5b86e8c2766e37112cd18e47ad68223426a4e406600c5a87719c9cf2e6b'}]}, 'timestamp': '2025-12-06 10:19:08.006980', '_unique_id': '613afb679a394a3996e6240c36b73583'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.007 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.007 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.007 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.007 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.007 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.007 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.007 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.007 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.007 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.007 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.007 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.007 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.007 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.007 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.007 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.007 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.007 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.007 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.007 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.007 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.007 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.007 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.007 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.007 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.007 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.007 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.007 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.007 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.007 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.007 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.007 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.009 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.009 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.latency volume: 1252245154 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.009 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.latency volume: 27668224 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.011 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'dae32c01-7911-44f5-9de6-77f3d2ffbae2', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1252245154, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:19:08.009263', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'fff6abde-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12766.196807337, 'message_signature': 'e3044fd548e398700884e1b92def3b95d22d650dace5bc95f00c53fc025b0474'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 27668224, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:19:08.009263', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'fff6c29a-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12766.196807337, 'message_signature': '4517e382ddcd91889af05c34529bf1f94874e9e063fcb4d8667aa6bcb588d15e'}]}, 'timestamp': '2025-12-06 10:19:08.010377', '_unique_id': '307fc986014843c882a9bcc7e7d458a4'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.011 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.011 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.011 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.011 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.011 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.011 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.011 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.011 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.011 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.011 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.011 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.011 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.011 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.011 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.011 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.011 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.011 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.011 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.011 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.011 12 ERROR oslo_messaging.notify.messaging File
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.011 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.011 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.011 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.011 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.011 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.011 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.011 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.011 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.011 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.011 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.011 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.011 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.011 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:19:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.011 12 ERROR oslo_messaging.notify.messaging Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.012 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.012 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.requests volume: 47 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.013 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.014 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'f67b1c54-79da-4e2f-811d-8b5e352bc629', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 47, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:19:08.012712', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'fff73464-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12766.196807337, 'message_signature': '8046d7311fcdb1525c062ab99dfec0c1f53627d092fd44695f71f020be047120'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:19:08.012712', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'fff747ba-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12766.196807337, 'message_signature': 'c22cb81ff697ec9701609e89afade7a218d7a84bf82df06988b595130d2d8d4b'}]}, 'timestamp': '2025-12-06 10:19:08.013835', '_unique_id': '822a66e13c4646308ad1f7a42f24d097'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.014 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.014 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.014 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.014 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.014 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.014 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.014 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.014 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.014 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:19:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.014 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.014 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.014 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.014 12 ERROR oslo_messaging.notify.messaging Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.014 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.014 12 ERROR oslo_messaging.notify.messaging Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.014 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.014 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.014 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.014 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.014 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.014 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.014 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:19:08.014 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.014 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.014 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.014 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.014 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.014 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.014 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.014 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.014 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.014 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.014 12 ERROR oslo_messaging.notify.messaging Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.016 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.016 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.outgoing.packets volume: 114 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.017 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '082205a4-8bf7-417c-a1e6-c2c1743ffacf', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 114, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:19:08.016316', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': 'fff7bf9c-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12766.167552279, 'message_signature': '39b2e546eacb4306c48b930ae11cf7a3c17911ee7c0f5b17805a7db20bf7fbc0'}]}, 'timestamp': '2025-12-06 10:19:08.016978', '_unique_id': '53615a78db484d34a74b52ec593bb492'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.017 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:19:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.017 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.017 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.017 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.017 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.017 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.017 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.017 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.017 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.017 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.017 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.017 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.017 12 ERROR oslo_messaging.notify.messaging Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.017 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.017 12 ERROR oslo_messaging.notify.messaging Dec 6 05:19:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.017 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.017 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.017 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.017 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.017 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 
05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.017 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.017 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.017 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.017 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.017 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:19:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.017 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.017 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.017 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.017 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.017 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.017 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.017 12 ERROR oslo_messaging.notify.messaging Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:19:08.019 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.019 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.019 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.019 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.021 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '9780f78c-a064-4c0e-94f3-d5f1290f1a0d', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:19:08.019630', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': 'fff8448a-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12766.167552279, 'message_signature': 'b95073e888c48c66143cfe6dc60c6d4116e64712f60fd250bd258ed8010e0789'}]}, 'timestamp': '2025-12-06 10:19:08.020404', '_unique_id': 'aca2592797c641bd96773fb68b0b09a5'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.021 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:19:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.021 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.021 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.021 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.021 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.021 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.021 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.021 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.021 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.021 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.021 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.021 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.021 12 ERROR oslo_messaging.notify.messaging Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.021 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.021 12 ERROR oslo_messaging.notify.messaging Dec 6 05:19:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.021 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.021 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.021 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.021 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.021 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 
05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.021 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.021 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.021 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.021 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.021 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:19:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.021 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.021 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.021 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.021 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.021 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.021 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.021 12 ERROR oslo_messaging.notify.messaging Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:19:08.022 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.022 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.024 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '2643cbad-ed83-407f-94cc-1dec9e6b31db', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:19:08.022661', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': 
{'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': 'fff8b94c-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12766.167552279, 'message_signature': '5e105f525e447ea790e5912aceea253477bad5fea49644cc27a902fbb76e6809'}]}, 'timestamp': '2025-12-06 10:19:08.023300', '_unique_id': 'd5aca57787d04639bddbce8e0f7504a5'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.024 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.024 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.024 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.024 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.024 12 ERROR 
oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.024 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.024 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.024 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.024 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.024 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 
10:19:08.024 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.024 12 ERROR oslo_messaging.notify.messaging Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.024 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.024 12 ERROR oslo_messaging.notify.messaging Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.024 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.024 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.024 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.024 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 
10:19:08.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.024 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.024 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.024 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.024 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.024 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:19:08.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.024 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.024 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.024 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.024 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.024 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 
450, in _reraise_as_library_errors Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.024 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.024 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.024 12 ERROR oslo_messaging.notify.messaging Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.025 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.025 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.outgoing.bytes volume: 9770 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.027 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '0fdf6b6b-857c-41a4-a861-aa07db9ba4bb', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 9770, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:19:08.025883', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': 'fff9357a-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12766.167552279, 'message_signature': 'f0940a554d7d9c05dee34bba90700cce57975eef3995c1bf5e53a23db0fe99f9'}]}, 'timestamp': '2025-12-06 10:19:08.026476', '_unique_id': 'a7b5cc97c153459a90f602f47eff9ded'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.027 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:19:08.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.027 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.027 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.027 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.027 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.027 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.027 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.027 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.027 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.027 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.027 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.027 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.027 12 ERROR oslo_messaging.notify.messaging Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.027 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.027 12 ERROR oslo_messaging.notify.messaging Dec 6 05:19:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.027 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.027 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.027 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.027 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.027 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 
05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.027 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.027 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.027 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.027 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.027 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:19:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.027 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.027 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.027 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.027 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.027 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.027 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.027 12 ERROR oslo_messaging.notify.messaging Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:19:08.028 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.028 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.029 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:19:08 localhost systemd[1]: var-lib-containers-storage-overlay-84a2207a38c3ce9b023fb76311185a6d9de756fe5150fd8266ec626a8df1dbd6-merged.mount: Deactivated successfully. Dec 6 05:19:08 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-23114774a6fb8cff7210fda601c8d1bd2082af42df010baf7781c360d2ff8fa6-userdata-shm.mount: Deactivated successfully. Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.030 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'f6a9d47d-8dc8-4ec9-80d8-ef52280de585', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:19:08.028728', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'fff9a5dc-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12766.234253903, 'message_signature': '2f54158307620dfd2b5bcbc62e99226fee34dd0f231a3e64909b3b231addd5be'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:19:08.028728', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 
'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'fff9b9a0-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12766.234253903, 'message_signature': 'ea135b8ba2abc8423311cdafe03f756429385b50870a66f609f67c9d0bb0bff5'}]}, 'timestamp': '2025-12-06 10:19:08.029864', '_unique_id': '27880b051d114bb99870294b17ee0eb3'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.030 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.030 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.030 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.030 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.030 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.030 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.030 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.030 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.030 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.030 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.030 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.030 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.030 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.030 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.030 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.030 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:19:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.030 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.030 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.030 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.030 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.030 12 ERROR oslo_messaging.notify.messaging Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.030 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.030 12 ERROR oslo_messaging.notify.messaging Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.030 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.030 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.030 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.030 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.030 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.030 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.030 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.030 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.030 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.030 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.030 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.030 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.030 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.030 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:19:08.030 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.030 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.030 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.030 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.030 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.030 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.030 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.030 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.030 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.030 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.030 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.030 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.030 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.030 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.030 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.030 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.030 12 ERROR oslo_messaging.notify.messaging Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.032 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.032 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.requests volume: 1272 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.032 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.034 12 ERROR oslo_messaging.notify.messaging [-] Could 
not send notification to notifications. Payload={'message_id': 'a91cdd5c-b60f-46ae-8197-e714eaf29654', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1272, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:19:08.032258', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'fffa2e1c-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12766.196807337, 'message_signature': '487b88f707e445507b54dd12e479cb061fe5bcdfe0d08c7e9b42817d2ded2770'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 124, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:19:08.032258', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 
'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'fffa450a-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12766.196807337, 'message_signature': 'b47bd163846c6296e2106715b7f15405455097eae272de5001f4b95501c64c63'}]}, 'timestamp': '2025-12-06 10:19:08.033402', '_unique_id': '8742d053de364ed68cf543756aff3c47'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.034 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.034 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.034 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.034 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.034 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.034 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.034 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.034 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.034 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:19:08 
localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.034 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.034 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.034 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.034 12 ERROR oslo_messaging.notify.messaging Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.034 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.034 12 ERROR oslo_messaging.notify.messaging Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.034 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.034 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 
10:19:08.034 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.034 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.034 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.034 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.034 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:19:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.034 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.034 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.034 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.034 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.034 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.034 12 
ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.034 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.034 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.034 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.034 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.034 12 ERROR oslo_messaging.notify.messaging Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.035 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.035 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.036 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.037 12 ERROR 
oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'f857c1b0-4c03-434f-8dfd-9da116182cf5', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:19:08.036137', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': 'fffac3cc-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12766.167552279, 'message_signature': '65ab06bd55847a85b1105251809166c735a561d0b3df43e036fd88857ecd471d'}]}, 'timestamp': '2025-12-06 10:19:08.036653', '_unique_id': 'f7c7375318a44c23a89a9529f65f70da'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.037 12 ERROR oslo_messaging.notify.messaging Traceback 
(most recent call last): Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.037 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.037 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.037 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.037 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.037 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.037 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.037 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.037 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.037 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.037 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.037 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.037 12 ERROR oslo_messaging.notify.messaging Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.037 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.037 12 ERROR oslo_messaging.notify.messaging Dec 6 05:19:08 
localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.037 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.037 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.037 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.037 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.037 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection 
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.037 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.037 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.037 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.037 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.037 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:19:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.037 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.037 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.037 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.037 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.037 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.037 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.037 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.037 12 ERROR oslo_messaging.notify.messaging Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:19:08.038 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.039 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.incoming.bytes volume: 6809 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.040 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'f11a26cb-1aac-4ee2-b0de-373d9459be1c', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 6809, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:19:08.039013', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': 
None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': 'fffb33d4-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12766.167552279, 'message_signature': 'fad12845c02294857a8b706db9e3fd73330f19de6bed16b3dfa3e0bb79d8ad9a'}]}, 'timestamp': '2025-12-06 10:19:08.039518', '_unique_id': 'e0011b0e6ed04b9b9f689ecfb792f97e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.040 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.040 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.040 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.040 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.040 12 ERROR oslo_messaging.notify.messaging self._connection = 
self._establish_connection() Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.040 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.040 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.040 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.040 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.040 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.040 12 ERROR oslo_messaging.notify.messaging 
ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.040 12 ERROR oslo_messaging.notify.messaging Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.040 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.040 12 ERROR oslo_messaging.notify.messaging Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.040 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.040 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.040 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.040 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.040 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.040 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.040 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.040 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.040 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.040 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.040 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.040 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.040 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.040 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.040 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.040 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:19:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.040 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.040 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.040 12 ERROR oslo_messaging.notify.messaging Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.041 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.041 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.bytes volume: 389120 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.041 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.042 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'bab36104-e0f1-4ad6-84a9-8450d56574ba', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 389120, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:19:08.041413', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'fffb8de8-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12766.196807337, 'message_signature': '4605e77cf9de0c4bce548fe8ef318bfc49f1561eee4c347795de099615de3a89'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:19:08.041413', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'fffb98e2-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12766.196807337, 'message_signature': '526bc678867f909d232e1d4fcafe20ee952ef0632485139eeda9ef5ae8522bcc'}]}, 'timestamp': '2025-12-06 10:19:08.041980', '_unique_id': '34a41bbb7d9b4e5993c603ec082c8bbb'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.042 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.042 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.042 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.042 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.042 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.042 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.042 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.042 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.042 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.042 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.042 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.042 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.042 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.042 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.042 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.042 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:19:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.042 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.042 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.042 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.042 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.042 12 ERROR oslo_messaging.notify.messaging Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.042 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.042 12 ERROR oslo_messaging.notify.messaging Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.042 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.042 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.042 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.042 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.042 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.042 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.042 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.042 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.042 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.042 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.042 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.042 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.042 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.042 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:19:08.042 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.042 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.042 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.042 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.042 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.042 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.042 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.042 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.042 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.042 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.042 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.042 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.042 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.042 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.042 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.042 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.042 12 ERROR oslo_messaging.notify.messaging Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.043 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.043 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.044 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '30da0d22-728c-442b-bc3d-55d6a18154bb', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:19:08.043310', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': 'fffbd758-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12766.167552279, 'message_signature': '7859ad67059875dd4e58ca26820091daf3272f9866262ac0ce3723593c9a1e90'}]}, 'timestamp': '2025-12-06 10:19:08.043603', '_unique_id': 'b2c8f406e07c462c93d86b45ac3401f6'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.044 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:19:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.044 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.044 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.044 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.044 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.044 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.044 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.044 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.044 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.044 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.044 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.044 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.044 12 ERROR oslo_messaging.notify.messaging Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.044 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.044 12 ERROR oslo_messaging.notify.messaging Dec 6 05:19:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.044 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.044 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.044 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.044 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.044 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 
05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.044 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.044 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.044 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.044 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.044 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:19:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.044 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.044 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.044 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.044 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.044 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.044 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.044 12 ERROR oslo_messaging.notify.messaging Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:19:08.044 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.044 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.bytes volume: 35560448 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.045 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.bytes volume: 2154496 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.045 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'c41677e5-26e6-4a86-8749-80e7b3bce979', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 35560448, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:19:08.044818', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 
'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'fffc1092-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12766.196807337, 'message_signature': 'd654997788eae63922a3a7bae2b4f3426b5b23669a2055ba21834daa59fd14d1'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 2154496, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:19:08.044818', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'fffc17a4-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12766.196807337, 'message_signature': 'ce825789d8545c5434c17f0d480ec4fd06f731217f69a2357d5d0fd6cc6528a6'}]}, 'timestamp': '2025-12-06 10:19:08.045187', '_unique_id': 'a570bb79dd2b45d89b2febea495e3690'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.045 12 ERROR oslo_messaging.notify.messaging 
Traceback (most recent call last): Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.045 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.045 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.045 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.045 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.045 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.045 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.045 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.045 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.045 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.045 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.045 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.045 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.045 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.045 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.045 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.045 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.045 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.045 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.045 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.045 12 ERROR oslo_messaging.notify.messaging Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.045 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.045 12 ERROR oslo_messaging.notify.messaging Dec 6 05:19:08 
localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.045 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.045 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.045 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.045 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.045 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.045 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.045 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.045 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.045 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.045 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection 
Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.045 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.045 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.045 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.045 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.045 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.045 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.045 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.045 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.045 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.045 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:19:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.045 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.045 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.045 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.045 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.045 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.045 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.045 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.045 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.045 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.045 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.045 12 ERROR oslo_messaging.notify.messaging Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:19:08.046 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.046 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/memory.usage volume: 51.80859375 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.046 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '8fdbadda-abe2-4663-901f-5521df459efc', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 51.80859375, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'timestamp': '2025-12-06T10:19:08.046123', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': 'fffc43a0-d28c-11f0-aaf2-fa163e118844', 'monotonic_time': 12766.193450295, 'message_signature': 
'f5accdfa69a366b8b0635b57529f5c5a7e389a5642c3fb03a29ed89cb9ecfdf7'}]}, 'timestamp': '2025-12-06 10:19:08.046319', '_unique_id': '8efd25b91dad4598a0a0f8c9cad54af6'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.046 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.046 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.046 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.046 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.046 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.046 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.046 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.046 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.046 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.046 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.046 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.046 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.046 12 ERROR 
oslo_messaging.notify.messaging Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.046 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.046 12 ERROR oslo_messaging.notify.messaging Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.046 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.046 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.046 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.046 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:19:08.046 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.046 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.046 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.046 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.046 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:19:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.046 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.046 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.046 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.046 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.046 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.046 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) 
from exc Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.046 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:19:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:19:08.046 12 ERROR oslo_messaging.notify.messaging Dec 6 05:19:08 localhost nova_compute[282193]: 2025-12-06 10:19:08.105 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:19:08 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:19:09 localhost nova_compute[282193]: 2025-12-06 10:19:09.252 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:19:09 localhost systemd[1]: run-netns-qdhcp\x2d43883dce\x2d1590\x2d48c4\x2d987c\x2da21b63b82a1c.mount: Deactivated successfully. 
Dec 6 05:19:09 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e159 e159: 6 total, 6 up, 6 in Dec 6 05:19:09 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:19:09.941 263652 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:19:09Z, description=, device_id=9e1f6eb1-3712-4bad-ae30-ab318957491e, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=9ebf0f4d-a252-46d9-a5a3-4880f54c157b, ip_allocation=immediate, mac_address=fa:16:3e:46:6a:0c, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T08:43:55Z, description=, dns_domain=, id=8e238f59-5792-4ff4-95af-f993c8e9e14f, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=3d603431c0bb4967bafc7a0aa6108bfe, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['be7ff830-a3e2-49cd-a05f-63bc528d6d1c'], tags=[], tenant_id=3d603431c0bb4967bafc7a0aa6108bfe, updated_at=2025-12-06T08:44:01Z, vlan_transparent=None, network_id=8e238f59-5792-4ff4-95af-f993c8e9e14f, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2038, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-06T10:19:09Z on network 8e238f59-5792-4ff4-95af-f993c8e9e14f#033[00m Dec 6 05:19:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e. 
Dec 6 05:19:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094. Dec 6 05:19:10 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:19:10.206 263652 INFO neutron.agent.linux.ip_lib [None req-b83b2478-c8f5-4578-a617-ed1d2101d359 - - - - - -] Device tap71317000-7e cannot be used as it has no MAC address#033[00m Dec 6 05:19:10 localhost nova_compute[282193]: 2025-12-06 10:19:10.231 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:19:10 localhost kernel: device tap71317000-7e entered promiscuous mode Dec 6 05:19:10 localhost NetworkManager[5973]: [1765016350.2396] manager: (tap71317000-7e): new Generic device (/org/freedesktop/NetworkManager/Devices/59) Dec 6 05:19:10 localhost ovn_controller[154851]: 2025-12-06T10:19:10Z|00357|binding|INFO|Claiming lport 71317000-7e06-4580-adc9-235e7990a2e9 for this chassis. Dec 6 05:19:10 localhost nova_compute[282193]: 2025-12-06 10:19:10.241 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:19:10 localhost ovn_controller[154851]: 2025-12-06T10:19:10Z|00358|binding|INFO|71317000-7e06-4580-adc9-235e7990a2e9: Claiming unknown Dec 6 05:19:10 localhost systemd-udevd[326263]: Network interface NamePolicy= disabled on kernel command line. 
Dec 6 05:19:10 localhost ovn_controller[154851]: 2025-12-06T10:19:10Z|00359|binding|INFO|Setting lport 71317000-7e06-4580-adc9-235e7990a2e9 up in Southbound Dec 6 05:19:10 localhost nova_compute[282193]: 2025-12-06 10:19:10.253 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:19:10 localhost ovn_controller[154851]: 2025-12-06T10:19:10Z|00360|binding|INFO|Setting lport 71317000-7e06-4580-adc9-235e7990a2e9 ovn-installed in OVS Dec 6 05:19:10 localhost nova_compute[282193]: 2025-12-06 10:19:10.255 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:19:10 localhost ovn_metadata_agent[160504]: 2025-12-06 10:19:10.250 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '34a17eee71de4bac8b71972a4b7b506c', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82d32fcc-fa15-458b-9d3c-0c87258ef71d, chassis=[], tunnel_key=2, gateway_chassis=[], 
requested_chassis=[], logical_port=71317000-7e06-4580-adc9-235e7990a2e9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:19:10 localhost ovn_metadata_agent[160504]: 2025-12-06 10:19:10.252 160509 INFO neutron.agent.ovn.metadata.agent [-] Port 71317000-7e06-4580-adc9-235e7990a2e9 in datapath 43883dce-1590-48c4-987c-a21b63b82a1c bound to our chassis#033[00m Dec 6 05:19:10 localhost ovn_metadata_agent[160504]: 2025-12-06 10:19:10.254 160509 DEBUG neutron.agent.ovn.metadata.agent [-] Port 71a44691-315d-461a-89ef-7b4a5310a873 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Dec 6 05:19:10 localhost ovn_metadata_agent[160504]: 2025-12-06 10:19:10.254 160509 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 43883dce-1590-48c4-987c-a21b63b82a1c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 6 05:19:10 localhost ovn_metadata_agent[160504]: 2025-12-06 10:19:10.255 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[93214776-a626-4728-8a2b-bb3efde2dfb8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:19:10 localhost nova_compute[282193]: 2025-12-06 10:19:10.275 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:19:10 localhost journal[230404]: ethtool ioctl error on tap71317000-7e: No such device Dec 6 05:19:10 localhost journal[230404]: ethtool ioctl error on tap71317000-7e: No such device Dec 6 05:19:10 localhost journal[230404]: ethtool ioctl error on tap71317000-7e: No such device Dec 6 05:19:10 localhost journal[230404]: ethtool ioctl error on tap71317000-7e: No such device Dec 6 05:19:10 localhost 
journal[230404]: ethtool ioctl error on tap71317000-7e: No such device Dec 6 05:19:10 localhost journal[230404]: ethtool ioctl error on tap71317000-7e: No such device Dec 6 05:19:10 localhost journal[230404]: ethtool ioctl error on tap71317000-7e: No such device Dec 6 05:19:10 localhost journal[230404]: ethtool ioctl error on tap71317000-7e: No such device Dec 6 05:19:10 localhost podman[326250]: 2025-12-06 10:19:10.310335573 +0000 UTC m=+0.114449612 container kill dd70518caed84fde347015325d6b35481123abc4a39df99f8bc9422c2468322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:19:10 localhost dnsmasq[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 2 addresses Dec 6 05:19:10 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:19:10 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:19:10 localhost podman[326222]: 2025-12-06 10:19:10.314938453 +0000 UTC m=+0.197152781 container health_status 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9) Dec 6 05:19:10 localhost podman[326223]: 2025-12-06 10:19:10.28584968 +0000 UTC m=+0.166386447 container health_status bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Dec 6 05:19:10 localhost nova_compute[282193]: 2025-12-06 10:19:10.365 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:19:10 localhost podman[326222]: 2025-12-06 10:19:10.37252694 +0000 UTC m=+0.254741238 container exec_died 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, config_id=edpm, distribution-scope=public, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, version=9.6, architecture=x86_64, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=) Dec 6 05:19:10 localhost nova_compute[282193]: 2025-12-06 10:19:10.380 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:19:10 localhost systemd[1]: 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e.service: Deactivated successfully. 
Dec 6 05:19:10 localhost podman[326223]: 2025-12-06 10:19:10.397060684 +0000 UTC m=+0.277597421 container exec_died bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Dec 6 05:19:10 localhost systemd[1]: bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094.service: Deactivated successfully. 
Dec 6 05:19:10 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:19:10.637 263652 INFO neutron.agent.dhcp.agent [None req-58133c63-44aa-4ca6-9741-dcfa357cfbae - - - - - -] DHCP configuration for ports {'9ebf0f4d-a252-46d9-a5a3-4880f54c157b'} is completed#033[00m Dec 6 05:19:10 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Dec 6 05:19:10 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1079829945' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Dec 6 05:19:10 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Dec 6 05:19:10 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1079829945' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Dec 6 05:19:11 localhost podman[326361]: Dec 6 05:19:11 localhost podman[326361]: 2025-12-06 10:19:11.244653803 +0000 UTC m=+0.083018509 container create 2fa7bd0f92c875d1bd08c0768368cc684a6096ec2ef786a9089f18e133c12dbf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0) Dec 6 05:19:11 localhost podman[326361]: 2025-12-06 10:19:11.196081849 +0000 UTC m=+0.034446595 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 6 05:19:11 localhost systemd[1]: Started 
libpod-conmon-2fa7bd0f92c875d1bd08c0768368cc684a6096ec2ef786a9089f18e133c12dbf.scope. Dec 6 05:19:11 localhost systemd[1]: Started libcrun container. Dec 6 05:19:11 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25b77508fc5e8b97d0f383ba076d5d2275da596e03d557121924e9aa163151dc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 6 05:19:11 localhost podman[326361]: 2025-12-06 10:19:11.326795684 +0000 UTC m=+0.165160400 container init 2fa7bd0f92c875d1bd08c0768368cc684a6096ec2ef786a9089f18e133c12dbf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125) Dec 6 05:19:11 localhost podman[326361]: 2025-12-06 10:19:11.337356035 +0000 UTC m=+0.175720741 container start 2fa7bd0f92c875d1bd08c0768368cc684a6096ec2ef786a9089f18e133c12dbf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:19:11 localhost dnsmasq[326379]: started, version 2.85 cachesize 150 Dec 6 05:19:11 localhost dnsmasq[326379]: DNS service limited to local subnets Dec 6 05:19:11 localhost dnsmasq[326379]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP 
no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 6 05:19:11 localhost dnsmasq[326379]: warning: no upstream servers configured Dec 6 05:19:11 localhost dnsmasq-dhcp[326379]: DHCPv6, static leases only on 2001:db8::, lease time 1d Dec 6 05:19:11 localhost dnsmasq[326379]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/addn_hosts - 0 addresses Dec 6 05:19:11 localhost dnsmasq-dhcp[326379]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/host Dec 6 05:19:11 localhost dnsmasq-dhcp[326379]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/opts Dec 6 05:19:11 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:19:11.403 263652 INFO neutron.agent.dhcp.agent [None req-b83b2478-c8f5-4578-a617-ed1d2101d359 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:19:03Z, description=, device_id=, device_owner=, dns_assignment=[, ], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[, ], id=96643b3a-2587-4ba3-bc78-4df17e224aeb, ip_allocation=immediate, mac_address=fa:16:3e:78:6a:c8, name=tempest-NetworksTestDHCPv6-422539605, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:17:28Z, description=, dns_domain=, id=43883dce-1590-48c4-987c-a21b63b82a1c, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-1975538139, port_security_enabled=True, project_id=34a17eee71de4bac8b71972a4b7b506c, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=42818, qos_policy_id=None, revision_number=53, router:external=False, shared=False, standard_attr_id=1415, status=ACTIVE, subnets=['3be9e423-7cab-42de-a208-d0a663296188', '85ecd64d-20c3-4ae8-82ef-a956d1f47bf6'], tags=[], 
tenant_id=34a17eee71de4bac8b71972a4b7b506c, updated_at=2025-12-06T10:19:02Z, vlan_transparent=None, network_id=43883dce-1590-48c4-987c-a21b63b82a1c, port_security_enabled=True, project_id=34a17eee71de4bac8b71972a4b7b506c, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['d618a097-5989-47aa-9263-1c8a114ad269'], standard_attr_id=2026, status=DOWN, tags=[], tenant_id=34a17eee71de4bac8b71972a4b7b506c, updated_at=2025-12-06T10:19:03Z on network 43883dce-1590-48c4-987c-a21b63b82a1c#033[00m Dec 6 05:19:11 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:19:11.538 263652 INFO neutron.agent.dhcp.agent [None req-b0518dc2-39d0-4260-824e-ca1a54b787ae - - - - - -] DHCP configuration for ports {'687d7abb-e6aa-4047-aa26-552c962fcc91'} is completed#033[00m Dec 6 05:19:11 localhost ovn_metadata_agent[160504]: 2025-12-06 10:19:11.586 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3f:fa:29 2001:db8:0:1:f816:3eff:fe3f:fa29'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe3f:fa29/64', 'neutron:device_id': 'ovnmeta-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '34a17eee71de4bac8b71972a4b7b506c', 'neutron:revision_number': '30', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], 
datapath=82d32fcc-fa15-458b-9d3c-0c87258ef71d, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=687d7abb-e6aa-4047-aa26-552c962fcc91) old=Port_Binding(mac=['fa:16:3e:3f:fa:29 2001:db8::f816:3eff:fe3f:fa29'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe3f:fa29/64', 'neutron:device_id': 'ovnmeta-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '34a17eee71de4bac8b71972a4b7b506c', 'neutron:revision_number': '28', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:19:11 localhost ovn_metadata_agent[160504]: 2025-12-06 10:19:11.588 160509 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 687d7abb-e6aa-4047-aa26-552c962fcc91 in datapath 43883dce-1590-48c4-987c-a21b63b82a1c updated#033[00m Dec 6 05:19:11 localhost dnsmasq[326379]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/addn_hosts - 2 addresses Dec 6 05:19:11 localhost dnsmasq-dhcp[326379]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/host Dec 6 05:19:11 localhost dnsmasq-dhcp[326379]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/opts Dec 6 05:19:11 localhost ovn_metadata_agent[160504]: 2025-12-06 10:19:11.595 160509 DEBUG neutron.agent.ovn.metadata.agent [-] Port 71a44691-315d-461a-89ef-7b4a5310a873 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Dec 6 05:19:11 localhost ovn_metadata_agent[160504]: 2025-12-06 10:19:11.595 160509 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for 
network 43883dce-1590-48c4-987c-a21b63b82a1c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 6 05:19:11 localhost podman[326397]: 2025-12-06 10:19:11.595612588 +0000 UTC m=+0.067178519 container kill 2fa7bd0f92c875d1bd08c0768368cc684a6096ec2ef786a9089f18e133c12dbf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Dec 6 05:19:11 localhost ovn_metadata_agent[160504]: 2025-12-06 10:19:11.596 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[c0ac5b7f-936d-4279-8ebf-ee77d66080e3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:19:11 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:19:11.855 263652 INFO neutron.agent.dhcp.agent [None req-80729c80-748d-478d-af43-dbc7d2f3b614 - - - - - -] DHCP configuration for ports {'96643b3a-2587-4ba3-bc78-4df17e224aeb'} is completed#033[00m Dec 6 05:19:11 localhost dnsmasq[326379]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/addn_hosts - 0 addresses Dec 6 05:19:11 localhost podman[326434]: 2025-12-06 10:19:11.927164674 +0000 UTC m=+0.055850735 container kill 2fa7bd0f92c875d1bd08c0768368cc684a6096ec2ef786a9089f18e133c12dbf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, 
tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Dec 6 05:19:11 localhost dnsmasq-dhcp[326379]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/host Dec 6 05:19:11 localhost dnsmasq-dhcp[326379]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/opts Dec 6 05:19:11 localhost neutron_sriov_agent[256690]: 2025-12-06 10:19:11.986 2 INFO neutron.agent.securitygroups_rpc [None req-cdcfd119-194b-4de8-98ff-a5e7eedce5b7 7365839d5bca455283c571ca0abd33bb 12673f85bb004c3c946338dc70e565e7 - - default default] Security group member updated ['5a014cda-2333-483a-bcd0-2243e387c412']#033[00m Dec 6 05:19:12 localhost nova_compute[282193]: 2025-12-06 10:19:12.093 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:19:12 localhost ovn_controller[154851]: 2025-12-06T10:19:12Z|00361|binding|INFO|Releasing lport 71317000-7e06-4580-adc9-235e7990a2e9 from this chassis (sb_readonly=0) Dec 6 05:19:12 localhost kernel: device tap71317000-7e left promiscuous mode Dec 6 05:19:12 localhost ovn_controller[154851]: 2025-12-06T10:19:12Z|00362|binding|INFO|Setting lport 71317000-7e06-4580-adc9-235e7990a2e9 down in Southbound Dec 6 05:19:12 localhost ovn_metadata_agent[160504]: 2025-12-06 10:19:12.105 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 
'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '34a17eee71de4bac8b71972a4b7b506c', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005548789.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82d32fcc-fa15-458b-9d3c-0c87258ef71d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=71317000-7e06-4580-adc9-235e7990a2e9) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:19:12 localhost ovn_metadata_agent[160504]: 2025-12-06 10:19:12.107 160509 INFO neutron.agent.ovn.metadata.agent [-] Port 71317000-7e06-4580-adc9-235e7990a2e9 in datapath 43883dce-1590-48c4-987c-a21b63b82a1c unbound from our chassis#033[00m Dec 6 05:19:12 localhost ovn_metadata_agent[160504]: 2025-12-06 10:19:12.112 160509 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 43883dce-1590-48c4-987c-a21b63b82a1c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 6 05:19:12 localhost ovn_metadata_agent[160504]: 2025-12-06 10:19:12.112 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[5bbd6629-5286-41e1-bd80-a057ca582f16]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:19:12 localhost nova_compute[282193]: 2025-12-06 10:19:12.122 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 
05:19:12 localhost ovn_metadata_agent[160504]: 2025-12-06 10:19:12.184 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:6c:02', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:a8:2f:0c:cb:a1'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:19:12 localhost ovn_metadata_agent[160504]: 2025-12-06 10:19:12.185 160509 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Dec 6 05:19:12 localhost nova_compute[282193]: 2025-12-06 10:19:12.185 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:19:12 localhost systemd[1]: tmp-crun.7ky53u.mount: Deactivated successfully. 
Dec 6 05:19:12 localhost dnsmasq[326379]: exiting on receipt of SIGTERM Dec 6 05:19:12 localhost podman[326474]: 2025-12-06 10:19:12.644076759 +0000 UTC m=+0.064103095 container kill 2fa7bd0f92c875d1bd08c0768368cc684a6096ec2ef786a9089f18e133c12dbf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true) Dec 6 05:19:12 localhost systemd[1]: libpod-2fa7bd0f92c875d1bd08c0768368cc684a6096ec2ef786a9089f18e133c12dbf.scope: Deactivated successfully. Dec 6 05:19:12 localhost podman[326487]: 2025-12-06 10:19:12.702722348 +0000 UTC m=+0.045798540 container died 2fa7bd0f92c875d1bd08c0768368cc684a6096ec2ef786a9089f18e133c12dbf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:19:12 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e160 e160: 6 total, 6 up, 6 in Dec 6 05:19:12 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2fa7bd0f92c875d1bd08c0768368cc684a6096ec2ef786a9089f18e133c12dbf-userdata-shm.mount: Deactivated successfully. 
Dec 6 05:19:12 localhost podman[326487]: 2025-12-06 10:19:12.792046608 +0000 UTC m=+0.135122760 container cleanup 2fa7bd0f92c875d1bd08c0768368cc684a6096ec2ef786a9089f18e133c12dbf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true) Dec 6 05:19:12 localhost systemd[1]: libpod-conmon-2fa7bd0f92c875d1bd08c0768368cc684a6096ec2ef786a9089f18e133c12dbf.scope: Deactivated successfully. Dec 6 05:19:12 localhost podman[326489]: 2025-12-06 10:19:12.82280337 +0000 UTC m=+0.155556149 container remove 2fa7bd0f92c875d1bd08c0768368cc684a6096ec2ef786a9089f18e133c12dbf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS) Dec 6 05:19:12 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:19:12.884 263652 INFO neutron.agent.linux.ip_lib [None req-71a8dcce-ead2-401d-9227-29b367ede1fa - - - - - -] Device tap71317000-7e cannot be used as it has no MAC address#033[00m Dec 6 05:19:12 localhost nova_compute[282193]: 2025-12-06 10:19:12.953 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:19:12 localhost kernel: device tap71317000-7e entered 
promiscuous mode Dec 6 05:19:12 localhost NetworkManager[5973]: [1765016352.9633] manager: (tap71317000-7e): new Generic device (/org/freedesktop/NetworkManager/Devices/60) Dec 6 05:19:12 localhost ovn_controller[154851]: 2025-12-06T10:19:12Z|00363|binding|INFO|Claiming lport 71317000-7e06-4580-adc9-235e7990a2e9 for this chassis. Dec 6 05:19:12 localhost ovn_controller[154851]: 2025-12-06T10:19:12Z|00364|binding|INFO|71317000-7e06-4580-adc9-235e7990a2e9: Claiming unknown Dec 6 05:19:12 localhost nova_compute[282193]: 2025-12-06 10:19:12.966 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:19:12 localhost ovn_metadata_agent[160504]: 2025-12-06 10:19:12.972 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe6c:348c/64 2001:db8::2/64', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '34a17eee71de4bac8b71972a4b7b506c', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82d32fcc-fa15-458b-9d3c-0c87258ef71d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], 
logical_port=71317000-7e06-4580-adc9-235e7990a2e9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:19:12 localhost ovn_metadata_agent[160504]: 2025-12-06 10:19:12.974 160509 INFO neutron.agent.ovn.metadata.agent [-] Port 71317000-7e06-4580-adc9-235e7990a2e9 in datapath 43883dce-1590-48c4-987c-a21b63b82a1c bound to our chassis#033[00m Dec 6 05:19:12 localhost ovn_controller[154851]: 2025-12-06T10:19:12Z|00365|binding|INFO|Setting lport 71317000-7e06-4580-adc9-235e7990a2e9 up in Southbound Dec 6 05:19:12 localhost ovn_controller[154851]: 2025-12-06T10:19:12Z|00366|binding|INFO|Setting lport 71317000-7e06-4580-adc9-235e7990a2e9 ovn-installed in OVS Dec 6 05:19:12 localhost nova_compute[282193]: 2025-12-06 10:19:12.976 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:19:12 localhost ovn_metadata_agent[160504]: 2025-12-06 10:19:12.976 160509 DEBUG neutron.agent.ovn.metadata.agent [-] Port 71a44691-315d-461a-89ef-7b4a5310a873 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Dec 6 05:19:12 localhost ovn_metadata_agent[160504]: 2025-12-06 10:19:12.977 160509 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 43883dce-1590-48c4-987c-a21b63b82a1c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 6 05:19:12 localhost ovn_metadata_agent[160504]: 2025-12-06 10:19:12.977 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[79484c5c-7005-4c11-a4ba-c41e157848fa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:19:13 localhost nova_compute[282193]: 2025-12-06 10:19:13.005 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog 
[-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:19:13 localhost nova_compute[282193]: 2025-12-06 10:19:13.043 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:19:13 localhost nova_compute[282193]: 2025-12-06 10:19:13.066 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:19:13 localhost nova_compute[282193]: 2025-12-06 10:19:13.108 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:19:13 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:19:13 localhost systemd[1]: var-lib-containers-storage-overlay-25b77508fc5e8b97d0f383ba076d5d2275da596e03d557121924e9aa163151dc-merged.mount: Deactivated successfully. 
Dec 6 05:19:13 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e161 e161: 6 total, 6 up, 6 in Dec 6 05:19:13 localhost podman[326573]: Dec 6 05:19:13 localhost podman[326573]: 2025-12-06 10:19:13.879873873 +0000 UTC m=+0.099904611 container create 6858356f985a82d53b77c2c185ff099d70e1233b4e3a6899db120f14fb49b087 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS) Dec 6 05:19:13 localhost systemd[1]: Started libpod-conmon-6858356f985a82d53b77c2c185ff099d70e1233b4e3a6899db120f14fb49b087.scope. Dec 6 05:19:13 localhost podman[326573]: 2025-12-06 10:19:13.835428185 +0000 UTC m=+0.055458953 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 6 05:19:13 localhost systemd[1]: Started libcrun container. 
Dec 6 05:19:13 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/516e33f92756df3ac203a82b130353d936333b23948958938909e6ece2d0c964/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 6 05:19:13 localhost podman[326573]: 2025-12-06 10:19:13.961509839 +0000 UTC m=+0.181540577 container init 6858356f985a82d53b77c2c185ff099d70e1233b4e3a6899db120f14fb49b087 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Dec 6 05:19:13 localhost podman[326573]: 2025-12-06 10:19:13.967889823 +0000 UTC m=+0.187920561 container start 6858356f985a82d53b77c2c185ff099d70e1233b4e3a6899db120f14fb49b087 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3) Dec 6 05:19:13 localhost dnsmasq[326591]: started, version 2.85 cachesize 150 Dec 6 05:19:13 localhost dnsmasq[326591]: DNS service limited to local subnets Dec 6 05:19:13 localhost dnsmasq[326591]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 6 05:19:13 localhost dnsmasq[326591]: warning: no upstream servers configured Dec 
6 05:19:13 localhost dnsmasq-dhcp[326591]: DHCPv6, static leases only on 2001:db8::, lease time 1d Dec 6 05:19:13 localhost dnsmasq[326591]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/addn_hosts - 0 addresses Dec 6 05:19:13 localhost dnsmasq-dhcp[326591]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/host Dec 6 05:19:13 localhost dnsmasq-dhcp[326591]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/opts Dec 6 05:19:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6. Dec 6 05:19:14 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:19:14.121 263652 INFO neutron.agent.dhcp.agent [None req-f2fa038d-9f03-4fd4-9bdf-2bdd29323423 - - - - - -] DHCP configuration for ports {'71317000-7e06-4580-adc9-235e7990a2e9', '687d7abb-e6aa-4047-aa26-552c962fcc91'} is completed#033[00m Dec 6 05:19:14 localhost neutron_sriov_agent[256690]: 2025-12-06 10:19:14.136 2 INFO neutron.agent.securitygroups_rpc [None req-a19ad948-85b8-4074-80f7-d1d223959ce7 a6a9256bca1441629c18003b71ba1c6f 34a17eee71de4bac8b71972a4b7b506c - - default default] Security group member updated ['d618a097-5989-47aa-9263-1c8a114ad269']#033[00m Dec 6 05:19:14 localhost podman[326592]: 2025-12-06 10:19:14.185254535 +0000 UTC m=+0.092078964 container health_status 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:19:14 localhost podman[326592]: 2025-12-06 10:19:14.197827937 +0000 UTC m=+0.104652436 container exec_died 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 
'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:19:14 localhost systemd[1]: 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6.service: Deactivated successfully. 
Dec 6 05:19:14 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:19:14.222 263652 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:19:13Z, description=, device_id=, device_owner=, dns_assignment=[, ], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[, ], id=5d1bde55-cf71-4361-9111-0d2bb82ecc07, ip_allocation=immediate, mac_address=fa:16:3e:22:52:10, name=tempest-NetworksTestDHCPv6-1010733462, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:17:28Z, description=, dns_domain=, id=43883dce-1590-48c4-987c-a21b63b82a1c, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-1975538139, port_security_enabled=True, project_id=34a17eee71de4bac8b71972a4b7b506c, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=42818, qos_policy_id=None, revision_number=57, router:external=False, shared=False, standard_attr_id=1415, status=ACTIVE, subnets=['59a88728-39f6-47b9-a901-d0a4fab9297e', '84b54cff-c31f-4590-8648-7c950796533e'], tags=[], tenant_id=34a17eee71de4bac8b71972a4b7b506c, updated_at=2025-12-06T10:19:09Z, vlan_transparent=None, network_id=43883dce-1590-48c4-987c-a21b63b82a1c, port_security_enabled=True, project_id=34a17eee71de4bac8b71972a4b7b506c, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['d618a097-5989-47aa-9263-1c8a114ad269'], standard_attr_id=2062, status=DOWN, tags=[], tenant_id=34a17eee71de4bac8b71972a4b7b506c, updated_at=2025-12-06T10:19:13Z on network 43883dce-1590-48c4-987c-a21b63b82a1c#033[00m Dec 6 05:19:14 localhost nova_compute[282193]: 2025-12-06 10:19:14.288 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 
__log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:19:14 localhost dnsmasq[326591]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/addn_hosts - 2 addresses Dec 6 05:19:14 localhost dnsmasq-dhcp[326591]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/host Dec 6 05:19:14 localhost dnsmasq-dhcp[326591]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/opts Dec 6 05:19:14 localhost podman[326629]: 2025-12-06 10:19:14.416038736 +0000 UTC m=+0.060671111 container kill 6858356f985a82d53b77c2c185ff099d70e1233b4e3a6899db120f14fb49b087 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3) Dec 6 05:19:14 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:19:14.690 263652 INFO neutron.agent.dhcp.agent [None req-67594386-b5ab-4512-8ce1-23ace02a9fb5 - - - - - -] DHCP configuration for ports {'5d1bde55-cf71-4361-9111-0d2bb82ecc07'} is completed#033[00m Dec 6 05:19:14 localhost dnsmasq[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 1 addresses Dec 6 05:19:14 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:19:14 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:19:14 localhost podman[326667]: 2025-12-06 10:19:14.698211264 +0000 UTC m=+0.056620028 container kill dd70518caed84fde347015325d6b35481123abc4a39df99f8bc9422c2468322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, 
name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:19:14 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Dec 6 05:19:14 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/932893422' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Dec 6 05:19:14 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Dec 6 05:19:14 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/932893422' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Dec 6 05:19:14 localhost neutron_sriov_agent[256690]: 2025-12-06 10:19:14.866 2 INFO neutron.agent.securitygroups_rpc [None req-83d5638e-2f4a-455b-b2d3-487dd6af4b6c a6a9256bca1441629c18003b71ba1c6f 34a17eee71de4bac8b71972a4b7b506c - - default default] Security group member updated ['d618a097-5989-47aa-9263-1c8a114ad269']#033[00m Dec 6 05:19:15 localhost dnsmasq[326591]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/addn_hosts - 0 addresses Dec 6 05:19:15 localhost dnsmasq-dhcp[326591]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/host Dec 6 05:19:15 localhost podman[326707]: 2025-12-06 10:19:15.072190128 +0000 UTC m=+0.060289150 container kill 6858356f985a82d53b77c2c185ff099d70e1233b4e3a6899db120f14fb49b087 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, 
name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Dec 6 05:19:15 localhost dnsmasq-dhcp[326591]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/opts Dec 6 05:19:15 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Dec 6 05:19:15 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1061312478' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Dec 6 05:19:15 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Dec 6 05:19:15 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1061312478' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Dec 6 05:19:15 localhost neutron_sriov_agent[256690]: 2025-12-06 10:19:15.912 2 INFO neutron.agent.securitygroups_rpc [None req-ca0b48dc-f1ce-4207-b602-f1515b9dc7e0 b9f4d254b45d482eab7cfb178c231d9a 5f8e1c4c589749b99178bbc7c2bea3f0 - - default default] Security group member updated ['cd56abe4-204c-4363-ad64-0a6840260727']#033[00m Dec 6 05:19:16 localhost systemd[1]: tmp-crun.9Tkel7.mount: Deactivated successfully. 
Dec 6 05:19:16 localhost podman[326743]: 2025-12-06 10:19:16.391859356 +0000 UTC m=+0.072457818 container kill 6858356f985a82d53b77c2c185ff099d70e1233b4e3a6899db120f14fb49b087 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:19:16 localhost dnsmasq[326591]: exiting on receipt of SIGTERM Dec 6 05:19:16 localhost systemd[1]: libpod-6858356f985a82d53b77c2c185ff099d70e1233b4e3a6899db120f14fb49b087.scope: Deactivated successfully. Dec 6 05:19:16 localhost podman[326759]: 2025-12-06 10:19:16.453343061 +0000 UTC m=+0.042225701 container died 6858356f985a82d53b77c2c185ff099d70e1233b4e3a6899db120f14fb49b087 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3) Dec 6 05:19:16 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6858356f985a82d53b77c2c185ff099d70e1233b4e3a6899db120f14fb49b087-userdata-shm.mount: Deactivated successfully. 
Dec 6 05:19:16 localhost podman[326759]: 2025-12-06 10:19:16.548621371 +0000 UTC m=+0.137504001 container remove 6858356f985a82d53b77c2c185ff099d70e1233b4e3a6899db120f14fb49b087 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125) Dec 6 05:19:16 localhost systemd[1]: libpod-conmon-6858356f985a82d53b77c2c185ff099d70e1233b4e3a6899db120f14fb49b087.scope: Deactivated successfully. Dec 6 05:19:16 localhost openstack_network_exporter[243110]: ERROR 10:19:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:19:16 localhost openstack_network_exporter[243110]: ERROR 10:19:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:19:16 localhost openstack_network_exporter[243110]: ERROR 10:19:16 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 6 05:19:16 localhost openstack_network_exporter[243110]: Dec 6 05:19:16 localhost openstack_network_exporter[243110]: ERROR 10:19:16 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 6 05:19:16 localhost openstack_network_exporter[243110]: Dec 6 05:19:16 localhost openstack_network_exporter[243110]: ERROR 10:19:16 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 6 05:19:17 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:19:17.123 263652 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, 
binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:19:16Z, description=, device_id=99ad1734-7772-49ab-8fb4-302cb49814eb, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=07d3794f-2f7e-46e2-ac9d-c21c65319152, ip_allocation=immediate, mac_address=fa:16:3e:49:5b:1d, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T08:43:55Z, description=, dns_domain=, id=8e238f59-5792-4ff4-95af-f993c8e9e14f, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=3d603431c0bb4967bafc7a0aa6108bfe, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['be7ff830-a3e2-49cd-a05f-63bc528d6d1c'], tags=[], tenant_id=3d603431c0bb4967bafc7a0aa6108bfe, updated_at=2025-12-06T08:44:01Z, vlan_transparent=None, network_id=8e238f59-5792-4ff4-95af-f993c8e9e14f, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2078, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-06T10:19:16Z on network 8e238f59-5792-4ff4-95af-f993c8e9e14f#033[00m Dec 6 05:19:17 localhost ovn_metadata_agent[160504]: 2025-12-06 10:19:17.187 160509 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b142a5ef-fbed-4e92-aa78-e3ad080c6370, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 6 05:19:17 localhost systemd[1]: 
var-lib-containers-storage-overlay-516e33f92756df3ac203a82b130353d936333b23948958938909e6ece2d0c964-merged.mount: Deactivated successfully. Dec 6 05:19:17 localhost systemd[1]: tmp-crun.CajdON.mount: Deactivated successfully. Dec 6 05:19:17 localhost dnsmasq[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 2 addresses Dec 6 05:19:17 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:19:17 localhost podman[326833]: 2025-12-06 10:19:17.405879713 +0000 UTC m=+0.135283614 container kill dd70518caed84fde347015325d6b35481123abc4a39df99f8bc9422c2468322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0) Dec 6 05:19:17 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:19:17 localhost podman[326860]: Dec 6 05:19:17 localhost podman[326860]: 2025-12-06 10:19:17.475209667 +0000 UTC m=+0.113615798 container create 1ddc436956cf71e3ab28ae1b4ad9296c7ca896a51f81c3e13b4e1305a0e2fdfc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3) Dec 6 05:19:17 localhost systemd[1]: Started 
libpod-conmon-1ddc436956cf71e3ab28ae1b4ad9296c7ca896a51f81c3e13b4e1305a0e2fdfc.scope. Dec 6 05:19:17 localhost podman[326860]: 2025-12-06 10:19:17.423833357 +0000 UTC m=+0.062239528 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 6 05:19:17 localhost systemd[1]: Started libcrun container. Dec 6 05:19:17 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82c9b7b4cd83264fe5f4adcacf7a3b93a9ed162b717afd8963740bae1eb50e45/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 6 05:19:17 localhost podman[326860]: 2025-12-06 10:19:17.549165539 +0000 UTC m=+0.187571670 container init 1ddc436956cf71e3ab28ae1b4ad9296c7ca896a51f81c3e13b4e1305a0e2fdfc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Dec 6 05:19:17 localhost podman[326860]: 2025-12-06 10:19:17.559616897 +0000 UTC m=+0.198023028 container start 1ddc436956cf71e3ab28ae1b4ad9296c7ca896a51f81c3e13b4e1305a0e2fdfc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125) Dec 6 05:19:17 localhost dnsmasq[326889]: started, version 2.85 cachesize 150 Dec 6 05:19:17 localhost 
dnsmasq[326889]: DNS service limited to local subnets Dec 6 05:19:17 localhost dnsmasq[326889]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 6 05:19:17 localhost dnsmasq[326889]: warning: no upstream servers configured Dec 6 05:19:17 localhost dnsmasq-dhcp[326889]: DHCPv6, static leases only on 2001:db8::, lease time 1d Dec 6 05:19:17 localhost dnsmasq[326889]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/addn_hosts - 0 addresses Dec 6 05:19:17 localhost dnsmasq-dhcp[326889]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/host Dec 6 05:19:17 localhost dnsmasq-dhcp[326889]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/opts Dec 6 05:19:17 localhost neutron_sriov_agent[256690]: 2025-12-06 10:19:17.660 2 INFO neutron.agent.securitygroups_rpc [None req-09283ffa-3b28-4158-b02d-0d7572bb2b32 b9f4d254b45d482eab7cfb178c231d9a 5f8e1c4c589749b99178bbc7c2bea3f0 - - default default] Security group member updated ['cd56abe4-204c-4363-ad64-0a6840260727']#033[00m Dec 6 05:19:17 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:19:17.694 263652 INFO neutron.agent.dhcp.agent [None req-1f4df94e-b656-4f75-a32e-bd7515effd96 - - - - - -] DHCP configuration for ports {'07d3794f-2f7e-46e2-ac9d-c21c65319152'} is completed#033[00m Dec 6 05:19:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538. 
Dec 6 05:19:17 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:19:17.858 263652 INFO neutron.agent.dhcp.agent [None req-521b72fe-5f63-4cbc-8de6-2fa1e9bfebea - - - - - -] DHCP configuration for ports {'71317000-7e06-4580-adc9-235e7990a2e9', '687d7abb-e6aa-4047-aa26-552c962fcc91'} is completed#033[00m Dec 6 05:19:17 localhost podman[326907]: 2025-12-06 10:19:17.918898453 +0000 UTC m=+0.084008568 container health_status d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 6 05:19:17 localhost podman[326907]: 2025-12-06 10:19:17.93121851 +0000 UTC m=+0.096328635 container exec_died 
d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 6 05:19:17 localhost systemd[1]: d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538.service: Deactivated successfully. 
Dec 6 05:19:18 localhost dnsmasq[326889]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/addn_hosts - 0 addresses Dec 6 05:19:18 localhost dnsmasq-dhcp[326889]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/host Dec 6 05:19:18 localhost podman[326911]: 2025-12-06 10:19:18.005626354 +0000 UTC m=+0.159816816 container kill 1ddc436956cf71e3ab28ae1b4ad9296c7ca896a51f81c3e13b4e1305a0e2fdfc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3) Dec 6 05:19:18 localhost dnsmasq-dhcp[326889]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/opts Dec 6 05:19:18 localhost nova_compute[282193]: 2025-12-06 10:19:18.110 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:19:18 localhost neutron_sriov_agent[256690]: 2025-12-06 10:19:18.247 2 INFO neutron.agent.securitygroups_rpc [None req-c5dcea7d-aa1b-4625-8fc4-dc86e9ad2a1a 7365839d5bca455283c571ca0abd33bb 12673f85bb004c3c946338dc70e565e7 - - default default] Security group member updated ['5a014cda-2333-483a-bcd0-2243e387c412']#033[00m Dec 6 05:19:18 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:19:18.277 263652 INFO neutron.agent.dhcp.agent [None req-6cefcc3e-a386-4a51-acd2-374cc98f4b85 - - - - - -] DHCP configuration for ports {'71317000-7e06-4580-adc9-235e7990a2e9', '687d7abb-e6aa-4047-aa26-552c962fcc91'} is completed#033[00m Dec 6 05:19:18 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 
348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:19:19 localhost neutron_sriov_agent[256690]: 2025-12-06 10:19:19.278 2 INFO neutron.agent.securitygroups_rpc [None req-7362e19c-e595-471d-b005-9585c5cb5a42 b9f4d254b45d482eab7cfb178c231d9a 5f8e1c4c589749b99178bbc7c2bea3f0 - - default default] Security group member updated ['cd56abe4-204c-4363-ad64-0a6840260727']#033[00m Dec 6 05:19:19 localhost nova_compute[282193]: 2025-12-06 10:19:19.308 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:19:20 localhost neutron_sriov_agent[256690]: 2025-12-06 10:19:20.869 2 INFO neutron.agent.securitygroups_rpc [None req-8a7361c8-fe6c-42e8-b9eb-90548f1065a0 b9f4d254b45d482eab7cfb178c231d9a 5f8e1c4c589749b99178bbc7c2bea3f0 - - default default] Security group member updated ['cd56abe4-204c-4363-ad64-0a6840260727']#033[00m Dec 6 05:19:21 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Dec 6 05:19:21 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1538185790' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Dec 6 05:19:21 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Dec 6 05:19:21 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1538185790' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Dec 6 05:19:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5. Dec 6 05:19:21 localhost systemd[1]: tmp-crun.uWiS8D.mount: Deactivated successfully. 
Dec 6 05:19:21 localhost dnsmasq[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 1 addresses Dec 6 05:19:21 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:19:21 localhost podman[326967]: 2025-12-06 10:19:21.861552432 +0000 UTC m=+0.076015746 container kill dd70518caed84fde347015325d6b35481123abc4a39df99f8bc9422c2468322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:19:21 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:19:21 localhost podman[326978]: 2025-12-06 10:19:21.928289271 +0000 UTC m=+0.085492224 container health_status 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true) Dec 6 05:19:21 localhost podman[326978]: 2025-12-06 10:19:21.999304483 +0000 UTC m=+0.156507446 container exec_died 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true) Dec 6 05:19:22 localhost systemd[1]: 
0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.service: Deactivated successfully. Dec 6 05:19:22 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e162 e162: 6 total, 6 up, 6 in Dec 6 05:19:22 localhost sshd[327014]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:19:22 localhost dnsmasq[326889]: exiting on receipt of SIGTERM Dec 6 05:19:22 localhost podman[327033]: 2025-12-06 10:19:22.937806431 +0000 UTC m=+0.063775521 container kill 1ddc436956cf71e3ab28ae1b4ad9296c7ca896a51f81c3e13b4e1305a0e2fdfc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:19:22 localhost systemd[1]: libpod-1ddc436956cf71e3ab28ae1b4ad9296c7ca896a51f81c3e13b4e1305a0e2fdfc.scope: Deactivated successfully. 
Dec 6 05:19:23 localhost podman[327047]: 2025-12-06 10:19:23.019042014 +0000 UTC m=+0.055602571 container died 1ddc436956cf71e3ab28ae1b4ad9296c7ca896a51f81c3e13b4e1305a0e2fdfc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Dec 6 05:19:23 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1ddc436956cf71e3ab28ae1b4ad9296c7ca896a51f81c3e13b4e1305a0e2fdfc-userdata-shm.mount: Deactivated successfully. Dec 6 05:19:23 localhost systemd[1]: var-lib-containers-storage-overlay-82c9b7b4cd83264fe5f4adcacf7a3b93a9ed162b717afd8963740bae1eb50e45-merged.mount: Deactivated successfully. 
Dec 6 05:19:23 localhost nova_compute[282193]: 2025-12-06 10:19:23.115 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:19:23 localhost podman[327047]: 2025-12-06 10:19:23.118399521 +0000 UTC m=+0.154960028 container remove 1ddc436956cf71e3ab28ae1b4ad9296c7ca896a51f81c3e13b4e1305a0e2fdfc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3) Dec 6 05:19:23 localhost systemd[1]: libpod-conmon-1ddc436956cf71e3ab28ae1b4ad9296c7ca896a51f81c3e13b4e1305a0e2fdfc.scope: Deactivated successfully. 
Dec 6 05:19:23 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:19:23 localhost sshd[327073]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:19:23 localhost neutron_sriov_agent[256690]: 2025-12-06 10:19:23.769 2 INFO neutron.agent.securitygroups_rpc [None req-7c634670-f27b-4241-a6ed-35c65bde0f68 a6a9256bca1441629c18003b71ba1c6f 34a17eee71de4bac8b71972a4b7b506c - - default default] Security group member updated ['d618a097-5989-47aa-9263-1c8a114ad269']#033[00m Dec 6 05:19:23 localhost podman[241090]: time="2025-12-06T10:19:23Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 6 05:19:23 localhost podman[241090]: @ - - [06/Dec/2025:10:19:23 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156104 "" "Go-http-client/1.1" Dec 6 05:19:23 localhost podman[241090]: @ - - [06/Dec/2025:10:19:23 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19268 "" "Go-http-client/1.1" Dec 6 05:19:24 localhost nova_compute[282193]: 2025-12-06 10:19:24.332 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:19:24 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:19:24.698 263652 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:19:24Z, description=, device_id=5e763889-0545-429e-afe5-cf1946a7be48, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=4a3b4a33-39ad-4a3a-af8b-c09fba1ef46f, ip_allocation=immediate, mac_address=fa:16:3e:d7:10:85, name=, 
network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T08:43:55Z, description=, dns_domain=, id=8e238f59-5792-4ff4-95af-f993c8e9e14f, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=3d603431c0bb4967bafc7a0aa6108bfe, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['be7ff830-a3e2-49cd-a05f-63bc528d6d1c'], tags=[], tenant_id=3d603431c0bb4967bafc7a0aa6108bfe, updated_at=2025-12-06T08:44:01Z, vlan_transparent=None, network_id=8e238f59-5792-4ff4-95af-f993c8e9e14f, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2100, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-06T10:19:24Z on network 8e238f59-5792-4ff4-95af-f993c8e9e14f#033[00m Dec 6 05:19:24 localhost dnsmasq[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 2 addresses Dec 6 05:19:24 localhost podman[327119]: 2025-12-06 10:19:24.927439699 +0000 UTC m=+0.053098484 container kill dd70518caed84fde347015325d6b35481123abc4a39df99f8bc9422c2468322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125) Dec 6 05:19:24 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:19:24 
localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:19:25 localhost podman[327163]: Dec 6 05:19:25 localhost podman[327163]: 2025-12-06 10:19:25.164350181 +0000 UTC m=+0.070155416 container create 2c89eb92c9d1573a87f913aba601dadadec7a0bfb9ab8a92d5709b06e4bf27c4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:19:25 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:19:25.167 263652 INFO neutron.agent.dhcp.agent [None req-cd265b96-18c3-4e54-879c-7483f74d5edc - - - - - -] DHCP configuration for ports {'4a3b4a33-39ad-4a3a-af8b-c09fba1ef46f'} is completed#033[00m Dec 6 05:19:25 localhost systemd[1]: Started libpod-conmon-2c89eb92c9d1573a87f913aba601dadadec7a0bfb9ab8a92d5709b06e4bf27c4.scope. Dec 6 05:19:25 localhost systemd[1]: Started libcrun container. 
Dec 6 05:19:25 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/058ece422527a93ea9cfae4ee7d56c460b45c10e9d7892db0a314b3ee73ae7fb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 6 05:19:25 localhost podman[327163]: 2025-12-06 10:19:25.131388073 +0000 UTC m=+0.037193308 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 6 05:19:25 localhost podman[327163]: 2025-12-06 10:19:25.239814417 +0000 UTC m=+0.145619682 container init 2c89eb92c9d1573a87f913aba601dadadec7a0bfb9ab8a92d5709b06e4bf27c4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3) Dec 6 05:19:25 localhost neutron_sriov_agent[256690]: 2025-12-06 10:19:25.243 2 INFO neutron.agent.securitygroups_rpc [None req-e1143dbb-8340-4dac-af2c-b301e23bde0e a6a9256bca1441629c18003b71ba1c6f 34a17eee71de4bac8b71972a4b7b506c - - default default] Security group member updated ['d618a097-5989-47aa-9263-1c8a114ad269']#033[00m Dec 6 05:19:25 localhost podman[327163]: 2025-12-06 10:19:25.248168692 +0000 UTC m=+0.153973927 container start 2c89eb92c9d1573a87f913aba601dadadec7a0bfb9ab8a92d5709b06e4bf27c4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, 
org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:19:25 localhost dnsmasq[327181]: started, version 2.85 cachesize 150 Dec 6 05:19:25 localhost dnsmasq[327181]: DNS service limited to local subnets Dec 6 05:19:25 localhost dnsmasq[327181]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 6 05:19:25 localhost dnsmasq[327181]: warning: no upstream servers configured Dec 6 05:19:25 localhost dnsmasq-dhcp[327181]: DHCPv6, static leases only on 2001:db8:0:1::, lease time 1d Dec 6 05:19:25 localhost dnsmasq-dhcp[327181]: DHCPv6, static leases only on 2001:db8::, lease time 1d Dec 6 05:19:25 localhost dnsmasq[327181]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/addn_hosts - 0 addresses Dec 6 05:19:25 localhost dnsmasq-dhcp[327181]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/host Dec 6 05:19:25 localhost dnsmasq-dhcp[327181]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/opts Dec 6 05:19:25 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:19:25.309 263652 INFO neutron.agent.dhcp.agent [None req-6e1ebb70-7474-4b90-b690-943c03b4357a - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:19:23Z, description=, device_id=, device_owner=, dns_assignment=[, ], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[, ], id=210a490d-79cd-4308-b6fe-935eca96b08e, ip_allocation=immediate, mac_address=fa:16:3e:4b:fd:6a, name=tempest-NetworksTestDHCPv6-825481970, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:17:28Z, description=, dns_domain=, id=43883dce-1590-48c4-987c-a21b63b82a1c, ipv4_address_scope=None, ipv6_address_scope=None, 
l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-1975538139, port_security_enabled=True, project_id=34a17eee71de4bac8b71972a4b7b506c, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=42818, qos_policy_id=None, revision_number=61, router:external=False, shared=False, standard_attr_id=1415, status=ACTIVE, subnets=['2555de70-b983-4d04-8a68-2427fd11842b', 'b1d2f2d6-9c9e-4054-996f-58f985b37644'], tags=[], tenant_id=34a17eee71de4bac8b71972a4b7b506c, updated_at=2025-12-06T10:19:18Z, vlan_transparent=None, network_id=43883dce-1590-48c4-987c-a21b63b82a1c, port_security_enabled=True, project_id=34a17eee71de4bac8b71972a4b7b506c, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['d618a097-5989-47aa-9263-1c8a114ad269'], standard_attr_id=2099, status=DOWN, tags=[], tenant_id=34a17eee71de4bac8b71972a4b7b506c, updated_at=2025-12-06T10:19:23Z on network 43883dce-1590-48c4-987c-a21b63b82a1c#033[00m Dec 6 05:19:25 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e163 e163: 6 total, 6 up, 6 in Dec 6 05:19:25 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:19:25.476 263652 INFO neutron.agent.dhcp.agent [None req-0c72838e-6c4e-4108-ba4e-af2becef5d48 - - - - - -] DHCP configuration for ports {'687d7abb-e6aa-4047-aa26-552c962fcc91', '71317000-7e06-4580-adc9-235e7990a2e9'} is completed#033[00m Dec 6 05:19:25 localhost dnsmasq[327181]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/addn_hosts - 2 addresses Dec 6 05:19:25 localhost dnsmasq-dhcp[327181]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/host Dec 6 05:19:25 localhost dnsmasq-dhcp[327181]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/opts Dec 6 05:19:25 localhost podman[327201]: 2025-12-06 10:19:25.498808965 +0000 UTC m=+0.066176715 container kill 2c89eb92c9d1573a87f913aba601dadadec7a0bfb9ab8a92d5709b06e4bf27c4 
(image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3) Dec 6 05:19:25 localhost neutron_sriov_agent[256690]: 2025-12-06 10:19:25.516 2 INFO neutron.agent.securitygroups_rpc [None req-16b19944-c36d-4221-9d13-f63b2c9f61ac b30ee2b2ade74f9e80de3f1afc291bda 29c573bcf157448abe548893ad01e3d2 - - default default] Security group member updated ['96d14d8a-5f78-4831-ba37-3f88bccdbe58']#033[00m Dec 6 05:19:25 localhost neutron_sriov_agent[256690]: 2025-12-06 10:19:25.677 2 INFO neutron.agent.securitygroups_rpc [None req-16b19944-c36d-4221-9d13-f63b2c9f61ac b30ee2b2ade74f9e80de3f1afc291bda 29c573bcf157448abe548893ad01e3d2 - - default default] Security group member updated ['96d14d8a-5f78-4831-ba37-3f88bccdbe58']#033[00m Dec 6 05:19:25 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:19:25.692 263652 INFO neutron.agent.dhcp.agent [None req-670b6bc8-d0d9-45e0-9dfa-450619cb000a - - - - - -] DHCP configuration for ports {'210a490d-79cd-4308-b6fe-935eca96b08e'} is completed#033[00m Dec 6 05:19:25 localhost dnsmasq[327181]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/addn_hosts - 0 addresses Dec 6 05:19:25 localhost dnsmasq-dhcp[327181]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/host Dec 6 05:19:25 localhost dnsmasq-dhcp[327181]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/opts Dec 6 05:19:25 localhost podman[327239]: 2025-12-06 10:19:25.799333401 +0000 UTC m=+0.060125209 container kill 2c89eb92c9d1573a87f913aba601dadadec7a0bfb9ab8a92d5709b06e4bf27c4 
(image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125) Dec 6 05:19:26 localhost dnsmasq[327181]: exiting on receipt of SIGTERM Dec 6 05:19:26 localhost podman[327279]: 2025-12-06 10:19:26.569815022 +0000 UTC m=+0.066443942 container kill 2c89eb92c9d1573a87f913aba601dadadec7a0bfb9ab8a92d5709b06e4bf27c4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Dec 6 05:19:26 localhost systemd[1]: libpod-2c89eb92c9d1573a87f913aba601dadadec7a0bfb9ab8a92d5709b06e4bf27c4.scope: Deactivated successfully. 
Dec 6 05:19:26 localhost podman[327292]: 2025-12-06 10:19:26.646133945 +0000 UTC m=+0.062823341 container died 2c89eb92c9d1573a87f913aba601dadadec7a0bfb9ab8a92d5709b06e4bf27c4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3) Dec 6 05:19:26 localhost podman[327292]: 2025-12-06 10:19:26.680017201 +0000 UTC m=+0.096706557 container cleanup 2c89eb92c9d1573a87f913aba601dadadec7a0bfb9ab8a92d5709b06e4bf27c4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true) Dec 6 05:19:26 localhost systemd[1]: libpod-conmon-2c89eb92c9d1573a87f913aba601dadadec7a0bfb9ab8a92d5709b06e4bf27c4.scope: Deactivated successfully. 
Dec 6 05:19:26 localhost podman[327299]: 2025-12-06 10:19:26.727021777 +0000 UTC m=+0.129115537 container remove 2c89eb92c9d1573a87f913aba601dadadec7a0bfb9ab8a92d5709b06e4bf27c4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 6 05:19:26 localhost neutron_sriov_agent[256690]: 2025-12-06 10:19:26.796 2 INFO neutron.agent.securitygroups_rpc [None req-6fcbbc2a-54c0-4eb0-a7e2-cb02681a4453 b30ee2b2ade74f9e80de3f1afc291bda 29c573bcf157448abe548893ad01e3d2 - - default default] Security group member updated ['96d14d8a-5f78-4831-ba37-3f88bccdbe58']#033[00m Dec 6 05:19:26 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:19:26.830 263652 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:19:26 localhost systemd[1]: var-lib-containers-storage-overlay-058ece422527a93ea9cfae4ee7d56c460b45c10e9d7892db0a314b3ee73ae7fb-merged.mount: Deactivated successfully. Dec 6 05:19:26 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2c89eb92c9d1573a87f913aba601dadadec7a0bfb9ab8a92d5709b06e4bf27c4-userdata-shm.mount: Deactivated successfully. 
Dec 6 05:19:27 localhost neutron_sriov_agent[256690]: 2025-12-06 10:19:27.176 2 INFO neutron.agent.securitygroups_rpc [None req-a0866618-9e73-4e70-a70b-4bf19bcc43ec b30ee2b2ade74f9e80de3f1afc291bda 29c573bcf157448abe548893ad01e3d2 - - default default] Security group member updated ['96d14d8a-5f78-4831-ba37-3f88bccdbe58']#033[00m Dec 6 05:19:27 localhost ceph-mon[298582]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #40. Immutable memtables: 0. Dec 6 05:19:27 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:19:27.207619) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Dec 6 05:19:27 localhost ceph-mon[298582]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 40 Dec 6 05:19:27 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016367207707, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 2235, "num_deletes": 268, "total_data_size": 4244735, "memory_usage": 4323144, "flush_reason": "Manual Compaction"} Dec 6 05:19:27 localhost ceph-mon[298582]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #41: started Dec 6 05:19:27 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016367222196, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 41, "file_size": 2755035, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24477, "largest_seqno": 26707, "table_properties": {"data_size": 2746461, "index_size": 5207, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 19127, "raw_average_key_size": 20, "raw_value_size": 2728650, "raw_average_value_size": 2991, "num_data_blocks": 
225, "num_entries": 912, "num_filter_entries": 912, "num_deletions": 268, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765016242, "oldest_key_time": 1765016242, "file_creation_time": 1765016367, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1ec2a3a6-c7d0-4f1e-9923-5a3b782725ae", "db_session_id": "JMBO5KX1IJCJ8FWC64EX", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}} Dec 6 05:19:27 localhost ceph-mon[298582]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 14619 microseconds, and 4385 cpu microseconds. Dec 6 05:19:27 localhost ceph-mon[298582]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Dec 6 05:19:27 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:19:27.222254) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #41: 2755035 bytes OK Dec 6 05:19:27 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:19:27.222282) [db/memtable_list.cc:519] [default] Level-0 commit table #41 started Dec 6 05:19:27 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:19:27.224284) [db/memtable_list.cc:722] [default] Level-0 commit table #41: memtable #1 done Dec 6 05:19:27 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:19:27.224314) EVENT_LOG_v1 {"time_micros": 1765016367224305, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Dec 6 05:19:27 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:19:27.224345) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Dec 6 05:19:27 localhost ceph-mon[298582]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 4234450, prev total WAL file size 4234450, number of live WAL files 2. Dec 6 05:19:27 localhost ceph-mon[298582]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548789/store.db/000037.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 6 05:19:27 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:19:27.225469) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0034303137' seq:72057594037927935, type:22 .. 
'6C6F676D0034323731' seq:0, type:0; will stop at (end) Dec 6 05:19:27 localhost ceph-mon[298582]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00 Dec 6 05:19:27 localhost ceph-mon[298582]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [41(2690KB)], [39(16MB)] Dec 6 05:19:27 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016367225536, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [41], "files_L6": [39], "score": -1, "input_data_size": 20324186, "oldest_snapshot_seqno": -1} Dec 6 05:19:27 localhost ceph-mon[298582]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #42: 12944 keys, 19835981 bytes, temperature: kUnknown Dec 6 05:19:27 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016367359250, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 42, "file_size": 19835981, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 19760106, "index_size": 42430, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 32389, "raw_key_size": 345814, "raw_average_key_size": 26, "raw_value_size": 19537821, "raw_average_value_size": 1509, "num_data_blocks": 1616, "num_entries": 12944, "num_filter_entries": 12944, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; 
max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765015623, "oldest_key_time": 0, "file_creation_time": 1765016367, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1ec2a3a6-c7d0-4f1e-9923-5a3b782725ae", "db_session_id": "JMBO5KX1IJCJ8FWC64EX", "orig_file_number": 42, "seqno_to_time_mapping": "N/A"}} Dec 6 05:19:27 localhost ceph-mon[298582]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Dec 6 05:19:27 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:19:27.359670) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 19835981 bytes Dec 6 05:19:27 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:19:27.362376) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 151.8 rd, 148.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.6, 16.8 +0.0 blob) out(18.9 +0.0 blob), read-write-amplify(14.6) write-amplify(7.2) OK, records in: 13495, records dropped: 551 output_compression: NoCompression Dec 6 05:19:27 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:19:27.362416) EVENT_LOG_v1 {"time_micros": 1765016367362397, "job": 22, "event": "compaction_finished", "compaction_time_micros": 133863, "compaction_time_cpu_micros": 54263, "output_level": 6, "num_output_files": 1, "total_output_size": 19835981, "num_input_records": 13495, "num_output_records": 12944, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Dec 6 05:19:27 localhost ceph-mon[298582]: rocksdb: [file/delete_scheduler.cc:74] Deleted file 
/var/lib/ceph/mon/ceph-np0005548789/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 6 05:19:27 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016367362991, "job": 22, "event": "table_file_deletion", "file_number": 41} Dec 6 05:19:27 localhost ceph-mon[298582]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548789/store.db/000039.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 6 05:19:27 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016367365818, "job": 22, "event": "table_file_deletion", "file_number": 39} Dec 6 05:19:27 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:19:27.225336) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:19:27 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:19:27.365918) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:19:27 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:19:27.365927) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:19:27 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:19:27.365931) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:19:27 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:19:27.365934) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:19:27 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:19:27.365937) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:19:27 localhost podman[327374]: Dec 6 05:19:27 localhost podman[327374]: 2025-12-06 10:19:27.581971263 +0000 UTC m=+0.094623634 container create 
bed0476a1e794dd5f1daab1ec80beaa8ead2b2df45feba8ef98898bb4e2ac0d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:19:27 localhost systemd[1]: Started libpod-conmon-bed0476a1e794dd5f1daab1ec80beaa8ead2b2df45feba8ef98898bb4e2ac0d1.scope. Dec 6 05:19:27 localhost podman[327374]: 2025-12-06 10:19:27.540105433 +0000 UTC m=+0.052757844 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 6 05:19:27 localhost systemd[1]: Started libcrun container. Dec 6 05:19:27 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08ca1b43792c51b5d171ba38429a148b3fedf4393e22cecad6459d4f9db8f88a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 6 05:19:27 localhost podman[327374]: 2025-12-06 10:19:27.67743093 +0000 UTC m=+0.190083291 container init bed0476a1e794dd5f1daab1ec80beaa8ead2b2df45feba8ef98898bb4e2ac0d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 6 05:19:27 localhost podman[327374]: 2025-12-06 10:19:27.687706924 +0000 UTC m=+0.200359295 container start 
bed0476a1e794dd5f1daab1ec80beaa8ead2b2df45feba8ef98898bb4e2ac0d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Dec 6 05:19:27 localhost dnsmasq[327393]: started, version 2.85 cachesize 150 Dec 6 05:19:27 localhost dnsmasq[327393]: DNS service limited to local subnets Dec 6 05:19:27 localhost dnsmasq[327393]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 6 05:19:27 localhost dnsmasq[327393]: warning: no upstream servers configured Dec 6 05:19:27 localhost dnsmasq-dhcp[327393]: DHCPv6, static leases only on 2001:db8:0:1::, lease time 1d Dec 6 05:19:27 localhost dnsmasq[327393]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/addn_hosts - 0 addresses Dec 6 05:19:27 localhost dnsmasq-dhcp[327393]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/host Dec 6 05:19:27 localhost dnsmasq-dhcp[327393]: read /var/lib/neutron/dhcp/43883dce-1590-48c4-987c-a21b63b82a1c/opts Dec 6 05:19:27 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Dec 6 05:19:27 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/3594078665' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Dec 6 05:19:27 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Dec 6 05:19:27 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3594078665' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Dec 6 05:19:28 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:19:28.067 263652 INFO neutron.agent.dhcp.agent [None req-69ed11c0-041e-4c0e-84e7-211f55860449 - - - - - -] DHCP configuration for ports {'71317000-7e06-4580-adc9-235e7990a2e9', '687d7abb-e6aa-4047-aa26-552c962fcc91'} is completed#033[00m Dec 6 05:19:28 localhost nova_compute[282193]: 2025-12-06 10:19:28.115 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:19:28 localhost dnsmasq[327393]: exiting on receipt of SIGTERM Dec 6 05:19:28 localhost podman[327411]: 2025-12-06 10:19:28.185011825 +0000 UTC m=+0.049592077 container kill bed0476a1e794dd5f1daab1ec80beaa8ead2b2df45feba8ef98898bb4e2ac0d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:19:28 localhost systemd[1]: libpod-bed0476a1e794dd5f1daab1ec80beaa8ead2b2df45feba8ef98898bb4e2ac0d1.scope: Deactivated successfully. 
Dec 6 05:19:28 localhost podman[327425]: 2025-12-06 10:19:28.240108389 +0000 UTC m=+0.045002507 container died bed0476a1e794dd5f1daab1ec80beaa8ead2b2df45feba8ef98898bb4e2ac0d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true) Dec 6 05:19:28 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:19:28 localhost podman[327425]: 2025-12-06 10:19:28.327909653 +0000 UTC m=+0.132803721 container cleanup bed0476a1e794dd5f1daab1ec80beaa8ead2b2df45feba8ef98898bb4e2ac0d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:19:28 localhost systemd[1]: libpod-conmon-bed0476a1e794dd5f1daab1ec80beaa8ead2b2df45feba8ef98898bb4e2ac0d1.scope: Deactivated successfully. 
Dec 6 05:19:28 localhost podman[327432]: 2025-12-06 10:19:28.348710869 +0000 UTC m=+0.139270829 container remove bed0476a1e794dd5f1daab1ec80beaa8ead2b2df45feba8ef98898bb4e2ac0d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-43883dce-1590-48c4-987c-a21b63b82a1c, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:19:28 localhost nova_compute[282193]: 2025-12-06 10:19:28.361 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:19:28 localhost kernel: device tap71317000-7e left promiscuous mode Dec 6 05:19:28 localhost ovn_controller[154851]: 2025-12-06T10:19:28Z|00367|binding|INFO|Releasing lport 71317000-7e06-4580-adc9-235e7990a2e9 from this chassis (sb_readonly=0) Dec 6 05:19:28 localhost ovn_controller[154851]: 2025-12-06T10:19:28Z|00368|binding|INFO|Setting lport 71317000-7e06-4580-adc9-235e7990a2e9 down in Southbound Dec 6 05:19:28 localhost ovn_metadata_agent[160504]: 2025-12-06 10:19:28.373 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe6c:348c/64 2001:db8::2/64', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:device_owner': 
'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '34a17eee71de4bac8b71972a4b7b506c', 'neutron:revision_number': '10', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005548789.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82d32fcc-fa15-458b-9d3c-0c87258ef71d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=71317000-7e06-4580-adc9-235e7990a2e9) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:19:28 localhost ovn_metadata_agent[160504]: 2025-12-06 10:19:28.375 160509 INFO neutron.agent.ovn.metadata.agent [-] Port 71317000-7e06-4580-adc9-235e7990a2e9 in datapath 43883dce-1590-48c4-987c-a21b63b82a1c unbound from our chassis#033[00m Dec 6 05:19:28 localhost ovn_metadata_agent[160504]: 2025-12-06 10:19:28.377 160509 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 43883dce-1590-48c4-987c-a21b63b82a1c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 6 05:19:28 localhost ovn_metadata_agent[160504]: 2025-12-06 10:19:28.378 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[a675313e-8b0e-4dea-8ba5-e2aeb7a94ed5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:19:28 localhost nova_compute[282193]: 2025-12-06 10:19:28.385 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:19:28 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:19:28.766 263652 INFO neutron.agent.dhcp.agent 
[None req-7309e927-c81a-4a1b-981d-b71b7dc7f8d0 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:19:28 localhost dnsmasq[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 1 addresses Dec 6 05:19:28 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:19:28 localhost podman[327474]: 2025-12-06 10:19:28.773114922 +0000 UTC m=+0.064817592 container kill dd70518caed84fde347015325d6b35481123abc4a39df99f8bc9422c2468322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS) Dec 6 05:19:28 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:19:28 localhost systemd[1]: tmp-crun.u48Zvq.mount: Deactivated successfully. Dec 6 05:19:28 localhost systemd[1]: var-lib-containers-storage-overlay-08ca1b43792c51b5d171ba38429a148b3fedf4393e22cecad6459d4f9db8f88a-merged.mount: Deactivated successfully. Dec 6 05:19:28 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bed0476a1e794dd5f1daab1ec80beaa8ead2b2df45feba8ef98898bb4e2ac0d1-userdata-shm.mount: Deactivated successfully. Dec 6 05:19:28 localhost systemd[1]: run-netns-qdhcp\x2d43883dce\x2d1590\x2d48c4\x2d987c\x2da21b63b82a1c.mount: Deactivated successfully. 
Dec 6 05:19:29 localhost nova_compute[282193]: 2025-12-06 10:19:29.368 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:19:29 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:19:29.372 263652 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:19:29 localhost ovn_controller[154851]: 2025-12-06T10:19:29Z|00369|binding|INFO|Releasing lport 4fb81ffd-e198-4628-9bd0-0c0f0c89c33a from this chassis (sb_readonly=0) Dec 6 05:19:29 localhost nova_compute[282193]: 2025-12-06 10:19:29.607 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:19:31 localhost neutron_sriov_agent[256690]: 2025-12-06 10:19:31.272 2 INFO neutron.agent.securitygroups_rpc [None req-33076df9-23c5-4745-bba5-728ca02b1a7f b9f4d254b45d482eab7cfb178c231d9a 5f8e1c4c589749b99178bbc7c2bea3f0 - - default default] Security group member updated ['cd56abe4-204c-4363-ad64-0a6840260727']#033[00m Dec 6 05:19:32 localhost sshd[327496]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:19:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999. Dec 6 05:19:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941. 
Dec 6 05:19:32 localhost podman[327499]: 2025-12-06 10:19:32.940913673 +0000 UTC m=+0.096498461 container health_status b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 6 05:19:32 localhost podman[327499]: 2025-12-06 10:19:32.978701088 +0000 UTC m=+0.134285846 container exec_died b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': 
['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Dec 6 05:19:32 localhost systemd[1]: b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941.service: Deactivated successfully. Dec 6 05:19:32 localhost podman[327498]: 2025-12-06 10:19:32.993793869 +0000 UTC m=+0.151836162 container health_status 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible) Dec 6 05:19:33 localhost podman[327498]: 2025-12-06 10:19:33.001134164 +0000 UTC m=+0.159176477 container exec_died 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:19:33 localhost systemd[1]: 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.service: Deactivated successfully. Dec 6 05:19:33 localhost nova_compute[282193]: 2025-12-06 10:19:33.118 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:19:33 localhost nova_compute[282193]: 2025-12-06 10:19:33.181 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:19:33 localhost nova_compute[282193]: 2025-12-06 10:19:33.221 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 05:19:33 localhost nova_compute[282193]: 2025-12-06 10:19:33.222 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 05:19:33 localhost nova_compute[282193]: 2025-12-06 10:19:33.222 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m 
Dec 6 05:19:33 localhost nova_compute[282193]: 2025-12-06 10:19:33.222 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Auditing locally available compute resources for np0005548789.localdomain (node: np0005548789.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 6 05:19:33 localhost nova_compute[282193]: 2025-12-06 10:19:33.222 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 6 05:19:33 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:19:33 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 6 05:19:33 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.107:0/1956276203' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 6 05:19:33 localhost nova_compute[282193]: 2025-12-06 10:19:33.684 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 6 05:19:33 localhost nova_compute[282193]: 2025-12-06 10:19:33.765 282197 DEBUG nova.virt.libvirt.driver [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 6 05:19:33 localhost nova_compute[282193]: 2025-12-06 10:19:33.766 282197 DEBUG nova.virt.libvirt.driver [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 6 05:19:33 localhost nova_compute[282193]: 2025-12-06 10:19:33.974 282197 WARNING nova.virt.libvirt.driver [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 6 05:19:33 localhost nova_compute[282193]: 2025-12-06 10:19:33.976 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Hypervisor/Node resource view: name=np0005548789.localdomain free_ram=11239MB free_disk=41.83699035644531GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": 
"1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 6 05:19:33 localhost nova_compute[282193]: 2025-12-06 10:19:33.977 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 05:19:33 localhost nova_compute[282193]: 2025-12-06 10:19:33.977 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 05:19:34 localhost nova_compute[282193]: 2025-12-06 10:19:34.084 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Instance b7ed0a2e-9350-4933-9334-4e5e08d3e6aa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 6 05:19:34 localhost nova_compute[282193]: 2025-12-06 10:19:34.085 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 6 05:19:34 localhost nova_compute[282193]: 2025-12-06 10:19:34.085 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Final resource view: name=np0005548789.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 6 05:19:34 localhost nova_compute[282193]: 2025-12-06 10:19:34.138 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 6 05:19:34 localhost nova_compute[282193]: 2025-12-06 10:19:34.370 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:19:34 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 6 05:19:34 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.107:0/2644755643' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 6 05:19:34 localhost nova_compute[282193]: 2025-12-06 10:19:34.598 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 6 05:19:34 localhost nova_compute[282193]: 2025-12-06 10:19:34.604 282197 DEBUG nova.compute.provider_tree [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Inventory has not changed in ProviderTree for provider: 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 6 05:19:34 localhost nova_compute[282193]: 2025-12-06 10:19:34.655 282197 DEBUG nova.scheduler.client.report [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Inventory has not changed for provider 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 6 05:19:34 localhost nova_compute[282193]: 2025-12-06 10:19:34.658 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Compute_service record updated for np0005548789.localdomain:np0005548789.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 6 05:19:34 localhost nova_compute[282193]: 2025-12-06 10:19:34.659 282197 DEBUG 
oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.682s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 05:19:35 localhost neutron_sriov_agent[256690]: 2025-12-06 10:19:35.112 2 INFO neutron.agent.securitygroups_rpc [None req-dad18757-c8ae-4573-92a7-49e2b9f564ab a31577503edf4745abb112adc3113276 90bd35d6ab7c40c58d9d1d61ff7a12d3 - - default default] Security group member updated ['1fabbc74-497e-44d5-8d22-97b341de2968']#033[00m Dec 6 05:19:35 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:19:35.250 263652 INFO neutron.agent.linux.ip_lib [None req-273b9099-78e2-422b-a659-0c74343e45f5 - - - - - -] Device tap972f93d0-ef cannot be used as it has no MAC address#033[00m Dec 6 05:19:35 localhost nova_compute[282193]: 2025-12-06 10:19:35.274 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:19:35 localhost kernel: device tap972f93d0-ef entered promiscuous mode Dec 6 05:19:35 localhost nova_compute[282193]: 2025-12-06 10:19:35.282 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:19:35 localhost NetworkManager[5973]: [1765016375.2870] manager: (tap972f93d0-ef): new Generic device (/org/freedesktop/NetworkManager/Devices/61) Dec 6 05:19:35 localhost ovn_controller[154851]: 2025-12-06T10:19:35Z|00370|binding|INFO|Claiming lport 972f93d0-ef12-4f24-a9a3-a699348b3358 for this chassis. Dec 6 05:19:35 localhost ovn_controller[154851]: 2025-12-06T10:19:35Z|00371|binding|INFO|972f93d0-ef12-4f24-a9a3-a699348b3358: Claiming unknown Dec 6 05:19:35 localhost systemd-udevd[327593]: Network interface NamePolicy= disabled on kernel command line. 
Dec 6 05:19:35 localhost ovn_metadata_agent[160504]: 2025-12-06 10:19:35.309 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1::1/64', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-5a779660-e992-4a3c-97a9-04be836f7fcf', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5a779660-e992-4a3c-97a9-04be836f7fcf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '29c573bcf157448abe548893ad01e3d2', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dd5b3e84-a0b8-4106-a60c-6b065c1db991, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=972f93d0-ef12-4f24-a9a3-a699348b3358) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:19:35 localhost ovn_metadata_agent[160504]: 2025-12-06 10:19:35.311 160509 INFO neutron.agent.ovn.metadata.agent [-] Port 972f93d0-ef12-4f24-a9a3-a699348b3358 in datapath 5a779660-e992-4a3c-97a9-04be836f7fcf bound to our chassis#033[00m Dec 6 05:19:35 localhost ovn_metadata_agent[160504]: 2025-12-06 10:19:35.314 160509 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 5a779660-e992-4a3c-97a9-04be836f7fcf or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 6 05:19:35 localhost ovn_metadata_agent[160504]: 2025-12-06 10:19:35.316 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[750721b6-fdfd-4310-bf44-bc060cf5bfdc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:19:35 localhost ovn_controller[154851]: 2025-12-06T10:19:35Z|00372|binding|INFO|Setting lport 972f93d0-ef12-4f24-a9a3-a699348b3358 ovn-installed in OVS Dec 6 05:19:35 localhost ovn_controller[154851]: 2025-12-06T10:19:35Z|00373|binding|INFO|Setting lport 972f93d0-ef12-4f24-a9a3-a699348b3358 up in Southbound Dec 6 05:19:35 localhost nova_compute[282193]: 2025-12-06 10:19:35.319 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:19:35 localhost journal[230404]: ethtool ioctl error on tap972f93d0-ef: No such device Dec 6 05:19:35 localhost journal[230404]: ethtool ioctl error on tap972f93d0-ef: No such device Dec 6 05:19:35 localhost journal[230404]: ethtool ioctl error on tap972f93d0-ef: No such device Dec 6 05:19:35 localhost journal[230404]: ethtool ioctl error on tap972f93d0-ef: No such device Dec 6 05:19:35 localhost journal[230404]: ethtool ioctl error on tap972f93d0-ef: No such device Dec 6 05:19:35 localhost journal[230404]: ethtool ioctl error on tap972f93d0-ef: No such device Dec 6 05:19:35 localhost nova_compute[282193]: 2025-12-06 10:19:35.356 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:19:35 localhost journal[230404]: ethtool ioctl error on tap972f93d0-ef: No such device Dec 6 05:19:35 localhost journal[230404]: ethtool ioctl error on tap972f93d0-ef: No such device Dec 6 05:19:35 localhost nova_compute[282193]: 2025-12-06 10:19:35.387 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:19:35 localhost neutron_sriov_agent[256690]: 2025-12-06 10:19:35.418 2 INFO neutron.agent.securitygroups_rpc [None req-dad18757-c8ae-4573-92a7-49e2b9f564ab a31577503edf4745abb112adc3113276 90bd35d6ab7c40c58d9d1d61ff7a12d3 - - default default] Security group member updated ['1fabbc74-497e-44d5-8d22-97b341de2968']#033[00m Dec 6 05:19:35 localhost ovn_controller[154851]: 2025-12-06T10:19:35Z|00374|binding|INFO|Removing iface tap972f93d0-ef ovn-installed in OVS Dec 6 05:19:35 localhost ovn_controller[154851]: 2025-12-06T10:19:35Z|00375|binding|INFO|Removing lport 972f93d0-ef12-4f24-a9a3-a699348b3358 ovn-installed in OVS Dec 6 05:19:35 localhost ovn_metadata_agent[160504]: 2025-12-06 10:19:35.913 160509 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 740f31b0-86f5-42a1-89fc-5a7cfe5f636e with type ""#033[00m Dec 6 05:19:35 localhost nova_compute[282193]: 2025-12-06 10:19:35.915 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:19:35 localhost ovn_metadata_agent[160504]: 2025-12-06 10:19:35.915 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1::1/64', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-5a779660-e992-4a3c-97a9-04be836f7fcf', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5a779660-e992-4a3c-97a9-04be836f7fcf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 
'29c573bcf157448abe548893ad01e3d2', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dd5b3e84-a0b8-4106-a60c-6b065c1db991, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=972f93d0-ef12-4f24-a9a3-a699348b3358) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:19:35 localhost ovn_metadata_agent[160504]: 2025-12-06 10:19:35.918 160509 INFO neutron.agent.ovn.metadata.agent [-] Port 972f93d0-ef12-4f24-a9a3-a699348b3358 in datapath 5a779660-e992-4a3c-97a9-04be836f7fcf unbound from our chassis#033[00m Dec 6 05:19:35 localhost ovn_metadata_agent[160504]: 2025-12-06 10:19:35.920 160509 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 5a779660-e992-4a3c-97a9-04be836f7fcf or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 6 05:19:35 localhost ovn_metadata_agent[160504]: 2025-12-06 10:19:35.921 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[f62d5ccc-e875-4e21-badf-38f0cba563e9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:19:35 localhost nova_compute[282193]: 2025-12-06 10:19:35.923 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:19:36 localhost neutron_sriov_agent[256690]: 2025-12-06 10:19:36.139 2 INFO neutron.agent.securitygroups_rpc [None req-792339ab-c7cd-409a-a342-ae21c75c2ee5 b9f4d254b45d482eab7cfb178c231d9a 5f8e1c4c589749b99178bbc7c2bea3f0 - - default default] Security group member updated ['cd56abe4-204c-4363-ad64-0a6840260727']#033[00m Dec 6 05:19:36 
localhost podman[327664]: Dec 6 05:19:36 localhost podman[327664]: 2025-12-06 10:19:36.306224635 +0000 UTC m=+0.099043149 container create 1541aa106705a20c01b44e91351c5106244b5f8e047bafeec8275898714300ed (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5a779660-e992-4a3c-97a9-04be836f7fcf, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true) Dec 6 05:19:36 localhost systemd[1]: Started libpod-conmon-1541aa106705a20c01b44e91351c5106244b5f8e047bafeec8275898714300ed.scope. Dec 6 05:19:36 localhost podman[327664]: 2025-12-06 10:19:36.258495656 +0000 UTC m=+0.051314210 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 6 05:19:36 localhost systemd[1]: Started libcrun container. 
Dec 6 05:19:36 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ac63c69befceb1700e6f815e6fb9f6bb1f8c8c5843c6564c9613824e209b1c6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 6 05:19:36 localhost podman[327664]: 2025-12-06 10:19:36.378431092 +0000 UTC m=+0.171249596 container init 1541aa106705a20c01b44e91351c5106244b5f8e047bafeec8275898714300ed (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5a779660-e992-4a3c-97a9-04be836f7fcf, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Dec 6 05:19:36 localhost podman[327664]: 2025-12-06 10:19:36.388066156 +0000 UTC m=+0.180884670 container start 1541aa106705a20c01b44e91351c5106244b5f8e047bafeec8275898714300ed (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5a779660-e992-4a3c-97a9-04be836f7fcf, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_managed=true) Dec 6 05:19:36 localhost dnsmasq[327682]: started, version 2.85 cachesize 150 Dec 6 05:19:36 localhost dnsmasq[327682]: DNS service limited to local subnets Dec 6 05:19:36 localhost dnsmasq[327682]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 6 05:19:36 localhost dnsmasq[327682]: warning: no upstream servers configured Dec 
6 05:19:36 localhost dnsmasq-dhcp[327682]: DHCPv6, static leases only on 2001:db8:0:1::, lease time 1d Dec 6 05:19:36 localhost dnsmasq[327682]: read /var/lib/neutron/dhcp/5a779660-e992-4a3c-97a9-04be836f7fcf/addn_hosts - 0 addresses Dec 6 05:19:36 localhost dnsmasq-dhcp[327682]: read /var/lib/neutron/dhcp/5a779660-e992-4a3c-97a9-04be836f7fcf/host Dec 6 05:19:36 localhost dnsmasq-dhcp[327682]: read /var/lib/neutron/dhcp/5a779660-e992-4a3c-97a9-04be836f7fcf/opts Dec 6 05:19:36 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:19:36.456 263652 INFO neutron.agent.dhcp.agent [None req-90b892c8-0506-4dbe-af3a-d4a70d2244e9 - - - - - -] Synchronizing state#033[00m Dec 6 05:19:36 localhost neutron_sriov_agent[256690]: 2025-12-06 10:19:36.470 2 INFO neutron.agent.securitygroups_rpc [None req-21429926-074a-46a0-a4f4-611f2e364131 a31577503edf4745abb112adc3113276 90bd35d6ab7c40c58d9d1d61ff7a12d3 - - default default] Security group member updated ['1fabbc74-497e-44d5-8d22-97b341de2968']#033[00m Dec 6 05:19:36 localhost ovn_controller[154851]: 2025-12-06T10:19:36Z|00376|binding|INFO|Releasing lport 4fb81ffd-e198-4628-9bd0-0c0f0c89c33a from this chassis (sb_readonly=0) Dec 6 05:19:36 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:19:36.564 263652 INFO neutron.agent.dhcp.agent [None req-e1eef1f3-ad01-420a-aeea-e1814b5e1031 - - - - - -] DHCP configuration for ports {'84119ae3-1fc0-42ee-88f4-2202a230930e'} is completed#033[00m Dec 6 05:19:36 localhost nova_compute[282193]: 2025-12-06 10:19:36.580 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:19:36 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:19:36.757 263652 INFO neutron.agent.dhcp.agent [None req-cca5e3b7-9de6-4346-ba1d-d46e49c59ff8 - - - - - -] All active networks have been fetched through RPC.#033[00m Dec 6 05:19:36 localhost dnsmasq[327682]: exiting on receipt of SIGTERM Dec 6 05:19:36 localhost podman[327700]: 
2025-12-06 10:19:36.947511478 +0000 UTC m=+0.063637387 container kill 1541aa106705a20c01b44e91351c5106244b5f8e047bafeec8275898714300ed (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5a779660-e992-4a3c-97a9-04be836f7fcf, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125) Dec 6 05:19:36 localhost systemd[1]: libpod-1541aa106705a20c01b44e91351c5106244b5f8e047bafeec8275898714300ed.scope: Deactivated successfully. Dec 6 05:19:37 localhost podman[327713]: 2025-12-06 10:19:37.025601675 +0000 UTC m=+0.061641175 container died 1541aa106705a20c01b44e91351c5106244b5f8e047bafeec8275898714300ed (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5a779660-e992-4a3c-97a9-04be836f7fcf, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3) Dec 6 05:19:37 localhost podman[327713]: 2025-12-06 10:19:37.073502609 +0000 UTC m=+0.109542079 container cleanup 1541aa106705a20c01b44e91351c5106244b5f8e047bafeec8275898714300ed (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5a779660-e992-4a3c-97a9-04be836f7fcf, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:19:37 localhost systemd[1]: libpod-conmon-1541aa106705a20c01b44e91351c5106244b5f8e047bafeec8275898714300ed.scope: Deactivated successfully. Dec 6 05:19:37 localhost neutron_sriov_agent[256690]: 2025-12-06 10:19:37.107 2 INFO neutron.agent.securitygroups_rpc [None req-bb88ef2d-64f1-4b09-a81f-2bd8c1d4b6c6 a31577503edf4745abb112adc3113276 90bd35d6ab7c40c58d9d1d61ff7a12d3 - - default default] Security group member updated ['1fabbc74-497e-44d5-8d22-97b341de2968']#033[00m Dec 6 05:19:37 localhost podman[327715]: 2025-12-06 10:19:37.150784121 +0000 UTC m=+0.178755396 container remove 1541aa106705a20c01b44e91351c5106244b5f8e047bafeec8275898714300ed (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5a779660-e992-4a3c-97a9-04be836f7fcf, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Dec 6 05:19:37 localhost nova_compute[282193]: 2025-12-06 10:19:37.161 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:19:37 localhost kernel: device tap972f93d0-ef left promiscuous mode Dec 6 05:19:37 localhost nova_compute[282193]: 2025-12-06 10:19:37.175 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:19:37 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:19:37.199 263652 INFO neutron.agent.dhcp.agent [-] Starting network 64b8068a-5126-4521-be60-754a588ea213 dhcp configuration#033[00m Dec 6 05:19:37 localhost 
neutron_dhcp_agent[263648]: 2025-12-06 10:19:37.200 263652 INFO neutron.agent.dhcp.agent [-] Finished network 64b8068a-5126-4521-be60-754a588ea213 dhcp configuration#033[00m Dec 6 05:19:37 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:19:37.200 263652 INFO neutron.agent.dhcp.agent [-] Starting network 86cd7531-ca23-4747-83b1-28bcd175a277 dhcp configuration#033[00m Dec 6 05:19:37 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:19:37.201 263652 INFO neutron.agent.dhcp.agent [-] Finished network 86cd7531-ca23-4747-83b1-28bcd175a277 dhcp configuration#033[00m Dec 6 05:19:37 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:19:37.201 263652 INFO neutron.agent.dhcp.agent [-] Starting network ba9cb6a7-7d80-4f37-aa3f-eaee69fb8585 dhcp configuration#033[00m Dec 6 05:19:37 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:19:37.202 263652 INFO neutron.agent.dhcp.agent [-] Finished network ba9cb6a7-7d80-4f37-aa3f-eaee69fb8585 dhcp configuration#033[00m Dec 6 05:19:37 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:19:37.203 263652 INFO neutron.agent.dhcp.agent [None req-52fc0508-2b36-4e56-82d0-a682e93b4cc9 - - - - - -] Synchronizing state complete#033[00m Dec 6 05:19:37 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:19:37.203 263652 INFO neutron.agent.dhcp.agent [None req-273b9099-78e2-422b-a659-0c74343e45f5 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:19:37 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:19:37.204 263652 INFO neutron.agent.dhcp.agent [None req-273b9099-78e2-422b-a659-0c74343e45f5 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:19:37 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:19:37.204 263652 INFO neutron.agent.dhcp.agent [None req-ea612e7a-694f-480c-a463-f4911cd2ede5 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:19:37 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:19:37.204 263652 
INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:19:37 localhost systemd[1]: var-lib-containers-storage-overlay-2ac63c69befceb1700e6f815e6fb9f6bb1f8c8c5843c6564c9613824e209b1c6-merged.mount: Deactivated successfully. Dec 6 05:19:37 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1541aa106705a20c01b44e91351c5106244b5f8e047bafeec8275898714300ed-userdata-shm.mount: Deactivated successfully. Dec 6 05:19:37 localhost systemd[1]: run-netns-qdhcp\x2d5a779660\x2de992\x2d4a3c\x2d97a9\x2d04be836f7fcf.mount: Deactivated successfully. Dec 6 05:19:37 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:19:37.358 263652 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:19:38 localhost nova_compute[282193]: 2025-12-06 10:19:38.121 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:19:38 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:19:38.294 263652 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:19:38 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:19:38 localhost nova_compute[282193]: 2025-12-06 10:19:38.657 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:19:39 localhost nova_compute[282193]: 2025-12-06 10:19:39.181 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks 
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:19:39 localhost nova_compute[282193]: 2025-12-06 10:19:39.182 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 6 05:19:39 localhost nova_compute[282193]: 2025-12-06 10:19:39.182 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 6 05:19:39 localhost nova_compute[282193]: 2025-12-06 10:19:39.419 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:19:39 localhost nova_compute[282193]: 2025-12-06 10:19:39.543 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 6 05:19:39 localhost nova_compute[282193]: 2025-12-06 10:19:39.543 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquired lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 6 05:19:39 localhost nova_compute[282193]: 2025-12-06 10:19:39.544 282197 DEBUG nova.network.neutron [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 6 05:19:39 localhost nova_compute[282193]: 2025-12-06 10:19:39.544 282197 DEBUG nova.objects.instance [None 
req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b7ed0a2e-9350-4933-9334-4e5e08d3e6aa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 6 05:19:40 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-659509012", "format": "json"} : dispatch Dec 6 05:19:40 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-659509012", "caps": ["mds", "allow rw path=/volumes/_nogroup/7a05360b-59a7-495e-a884-ff87c0880377/50397115-0c2d-4191-896c-db5fe71ed3ba", "osd", "allow rw pool=manila_data namespace=fsvolumens_7a05360b-59a7-495e-a884-ff87c0880377", "mon", "allow r"], "format": "json"} : dispatch Dec 6 05:19:40 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-659509012", "caps": ["mds", "allow rw path=/volumes/_nogroup/7a05360b-59a7-495e-a884-ff87c0880377/50397115-0c2d-4191-896c-db5fe71ed3ba", "osd", "allow rw pool=manila_data namespace=fsvolumens_7a05360b-59a7-495e-a884-ff87c0880377", "mon", "allow r"], "format": "json"} : dispatch Dec 6 05:19:40 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-659509012", "caps": ["mds", "allow rw path=/volumes/_nogroup/7a05360b-59a7-495e-a884-ff87c0880377/50397115-0c2d-4191-896c-db5fe71ed3ba", "osd", "allow rw pool=manila_data namespace=fsvolumens_7a05360b-59a7-495e-a884-ff87c0880377", "mon", "allow r"], "format": "json"}]': finished Dec 6 05:19:40 localhost nova_compute[282193]: 2025-12-06 10:19:40.828 282197 DEBUG nova.network.neutron [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] [instance: 
b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Updating instance_info_cache with network_info: [{"id": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "address": "fa:16:3e:64:77:f3", "network": {"id": "652b6bdc-40ce-45b7-8aa5-3bca79987993", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.162", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "3d603431c0bb4967bafc7a0aa6108bfe", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86fc0b7a-fb", "ovs_interfaceid": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 6 05:19:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e. Dec 6 05:19:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094. 
Dec 6 05:19:40 localhost podman[327742]: 2025-12-06 10:19:40.93216835 +0000 UTC m=+0.089852267 container health_status 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, vendor=Red Hat, Inc., version=9.6, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, build-date=2025-08-20T13:12:41, config_id=edpm, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, managed_by=edpm_ansible, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.expose-services=) Dec 6 05:19:40 localhost podman[327742]: 2025-12-06 10:19:40.948647844 +0000 UTC m=+0.106331731 container exec_died 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, release=1755695350, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_id=edpm, distribution-scope=public, maintainer=Red Hat, Inc., architecture=x86_64) Dec 6 05:19:40 
localhost systemd[1]: 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e.service: Deactivated successfully. Dec 6 05:19:41 localhost podman[327743]: 2025-12-06 10:19:41.030752484 +0000 UTC m=+0.185788940 container health_status bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125) Dec 6 05:19:41 
localhost podman[327743]: 2025-12-06 10:19:41.07218819 +0000 UTC m=+0.227224626 container exec_died bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:19:41 localhost systemd[1]: bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094.service: Deactivated successfully. 
Dec 6 05:19:41 localhost nova_compute[282193]: 2025-12-06 10:19:41.136 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Releasing lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 6 05:19:41 localhost nova_compute[282193]: 2025-12-06 10:19:41.137 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 6 05:19:41 localhost nova_compute[282193]: 2025-12-06 10:19:41.138 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:19:41 localhost nova_compute[282193]: 2025-12-06 10:19:41.180 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:19:41 localhost nova_compute[282193]: 2025-12-06 10:19:41.180 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:19:41 localhost nova_compute[282193]: 2025-12-06 10:19:41.180 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:19:41 localhost 
nova_compute[282193]: 2025-12-06 10:19:41.180 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 6 05:19:41 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-659509012", "format": "json"} : dispatch Dec 6 05:19:41 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-659509012"} : dispatch Dec 6 05:19:41 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-659509012"} : dispatch Dec 6 05:19:41 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-659509012"}]': finished Dec 6 05:19:42 localhost nova_compute[282193]: 2025-12-06 10:19:42.181 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:19:42 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Dec 6 05:19:42 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/139921279' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Dec 6 05:19:42 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Dec 6 05:19:42 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/139921279' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Dec 6 05:19:43 localhost nova_compute[282193]: 2025-12-06 10:19:43.154 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:19:43 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:19:43 localhost neutron_sriov_agent[256690]: 2025-12-06 10:19:43.623 2 INFO neutron.agent.securitygroups_rpc [None req-f21d32c3-41e3-465d-a5ba-39b4b631a0c1 b9f4d254b45d482eab7cfb178c231d9a 5f8e1c4c589749b99178bbc7c2bea3f0 - - default default] Security group member updated ['ae1eaa44-7360-485a-b85b-f1bfb95ce20b']#033[00m Dec 6 05:19:44 localhost nova_compute[282193]: 2025-12-06 10:19:44.454 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:19:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6. 
Dec 6 05:19:44 localhost podman[327777]: 2025-12-06 10:19:44.916679038 +0000 UTC m=+0.078472089 container health_status 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2) Dec 6 05:19:44 localhost podman[327777]: 2025-12-06 10:19:44.955293499 +0000 UTC m=+0.117086560 container exec_died 
44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:19:44 localhost systemd[1]: 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6.service: Deactivated successfully. 
Dec 6 05:19:45 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:19:45.006 263652 INFO neutron.agent.linux.ip_lib [None req-d2a896ef-8f87-4210-8e7e-42e780412f4b - - - - - -] Device tapf66469ad-cc cannot be used as it has no MAC address#033[00m Dec 6 05:19:45 localhost nova_compute[282193]: 2025-12-06 10:19:45.034 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:19:45 localhost kernel: device tapf66469ad-cc entered promiscuous mode Dec 6 05:19:45 localhost ovn_controller[154851]: 2025-12-06T10:19:45Z|00377|binding|INFO|Claiming lport f66469ad-cca4-4e75-8ad1-16dcdb97964a for this chassis. Dec 6 05:19:45 localhost ovn_controller[154851]: 2025-12-06T10:19:45Z|00378|binding|INFO|f66469ad-cca4-4e75-8ad1-16dcdb97964a: Claiming unknown Dec 6 05:19:45 localhost NetworkManager[5973]: [1765016385.0424] manager: (tapf66469ad-cc): new Generic device (/org/freedesktop/NetworkManager/Devices/62) Dec 6 05:19:45 localhost nova_compute[282193]: 2025-12-06 10:19:45.042 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:19:45 localhost systemd-udevd[327807]: Network interface NamePolicy= disabled on kernel command line. 
Dec 6 05:19:45 localhost journal[230404]: ethtool ioctl error on tapf66469ad-cc: No such device Dec 6 05:19:45 localhost journal[230404]: ethtool ioctl error on tapf66469ad-cc: No such device Dec 6 05:19:45 localhost ovn_controller[154851]: 2025-12-06T10:19:45Z|00379|binding|INFO|Setting lport f66469ad-cca4-4e75-8ad1-16dcdb97964a ovn-installed in OVS Dec 6 05:19:45 localhost nova_compute[282193]: 2025-12-06 10:19:45.081 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:19:45 localhost journal[230404]: ethtool ioctl error on tapf66469ad-cc: No such device Dec 6 05:19:45 localhost journal[230404]: ethtool ioctl error on tapf66469ad-cc: No such device Dec 6 05:19:45 localhost journal[230404]: ethtool ioctl error on tapf66469ad-cc: No such device Dec 6 05:19:45 localhost journal[230404]: ethtool ioctl error on tapf66469ad-cc: No such device Dec 6 05:19:45 localhost journal[230404]: ethtool ioctl error on tapf66469ad-cc: No such device Dec 6 05:19:45 localhost journal[230404]: ethtool ioctl error on tapf66469ad-cc: No such device Dec 6 05:19:45 localhost nova_compute[282193]: 2025-12-06 10:19:45.126 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:19:45 localhost nova_compute[282193]: 2025-12-06 10:19:45.154 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:19:45 localhost nova_compute[282193]: 2025-12-06 10:19:45.181 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:19:45 localhost ovn_controller[154851]: 2025-12-06T10:19:45Z|00380|binding|INFO|Setting lport 
f66469ad-cca4-4e75-8ad1-16dcdb97964a up in Southbound Dec 6 05:19:45 localhost ovn_metadata_agent[160504]: 2025-12-06 10:19:45.482 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-3dc43717-9c00-4de5-8dc8-b5288e2abad9', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3dc43717-9c00-4de5-8dc8-b5288e2abad9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90bd35d6ab7c40c58d9d1d61ff7a12d3', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e7698555-bf3e-4a92-a2b0-48becfd360ed, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=f66469ad-cca4-4e75-8ad1-16dcdb97964a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:19:45 localhost ovn_metadata_agent[160504]: 2025-12-06 10:19:45.486 160509 INFO neutron.agent.ovn.metadata.agent [-] Port f66469ad-cca4-4e75-8ad1-16dcdb97964a in datapath 3dc43717-9c00-4de5-8dc8-b5288e2abad9 bound to our chassis#033[00m Dec 6 05:19:45 localhost ovn_metadata_agent[160504]: 2025-12-06 10:19:45.489 160509 DEBUG neutron.agent.ovn.metadata.agent [-] Port aa67f9cf-8b12-49fc-b2c3-d95c1add9eae IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Dec 6 05:19:45 localhost ovn_metadata_agent[160504]: 2025-12-06 10:19:45.489 160509 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3dc43717-9c00-4de5-8dc8-b5288e2abad9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 6 05:19:45 localhost ovn_metadata_agent[160504]: 2025-12-06 10:19:45.493 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[6f43c383-e81b-4700-a6fb-65e143ad9531]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:19:46 localhost podman[327878]: Dec 6 05:19:46 localhost podman[327878]: 2025-12-06 10:19:46.105724716 +0000 UTC m=+0.098176952 container create c31077edc184a24e36da53f7b38bbed4198fa3943c8edd21da2fe4e57d6bbbfc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3dc43717-9c00-4de5-8dc8-b5288e2abad9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2) Dec 6 05:19:46 localhost podman[327878]: 2025-12-06 10:19:46.055588413 +0000 UTC m=+0.048040699 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 6 05:19:46 localhost systemd[1]: Started libpod-conmon-c31077edc184a24e36da53f7b38bbed4198fa3943c8edd21da2fe4e57d6bbbfc.scope. Dec 6 05:19:46 localhost systemd[1]: tmp-crun.Apo2WE.mount: Deactivated successfully. Dec 6 05:19:46 localhost systemd[1]: Started libcrun container. 
Dec 6 05:19:46 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d40a863adfa6ac1229342b29110117480605cd9c7a0dd59af495be4287ab44f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 6 05:19:46 localhost podman[327878]: 2025-12-06 10:19:46.207852417 +0000 UTC m=+0.200304663 container init c31077edc184a24e36da53f7b38bbed4198fa3943c8edd21da2fe4e57d6bbbfc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3dc43717-9c00-4de5-8dc8-b5288e2abad9, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:19:46 localhost podman[327878]: 2025-12-06 10:19:46.219717141 +0000 UTC m=+0.212169377 container start c31077edc184a24e36da53f7b38bbed4198fa3943c8edd21da2fe4e57d6bbbfc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3dc43717-9c00-4de5-8dc8-b5288e2abad9, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3) Dec 6 05:19:46 localhost dnsmasq[327896]: started, version 2.85 cachesize 150 Dec 6 05:19:46 localhost dnsmasq[327896]: DNS service limited to local subnets Dec 6 05:19:46 localhost dnsmasq[327896]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 6 05:19:46 localhost dnsmasq[327896]: warning: no upstream servers configured Dec 
6 05:19:46 localhost dnsmasq-dhcp[327896]: DHCP, static leases only on 10.100.0.0, lease time 1d Dec 6 05:19:46 localhost dnsmasq[327896]: read /var/lib/neutron/dhcp/3dc43717-9c00-4de5-8dc8-b5288e2abad9/addn_hosts - 0 addresses Dec 6 05:19:46 localhost dnsmasq-dhcp[327896]: read /var/lib/neutron/dhcp/3dc43717-9c00-4de5-8dc8-b5288e2abad9/host Dec 6 05:19:46 localhost dnsmasq-dhcp[327896]: read /var/lib/neutron/dhcp/3dc43717-9c00-4de5-8dc8-b5288e2abad9/opts Dec 6 05:19:46 localhost openstack_network_exporter[243110]: ERROR 10:19:46 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 6 05:19:46 localhost openstack_network_exporter[243110]: ERROR 10:19:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:19:46 localhost openstack_network_exporter[243110]: ERROR 10:19:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:19:46 localhost openstack_network_exporter[243110]: ERROR 10:19:46 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 6 05:19:46 localhost openstack_network_exporter[243110]: Dec 6 05:19:46 localhost openstack_network_exporter[243110]: ERROR 10:19:46 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 6 05:19:46 localhost openstack_network_exporter[243110]: Dec 6 05:19:46 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:19:46.789 263652 INFO neutron.agent.dhcp.agent [None req-40d3a584-ba15-47e8-92c6-15a36d08dec5 - - - - - -] DHCP configuration for ports {'62c4ed28-0829-462e-ad0f-ed2a041f9945'} is completed#033[00m Dec 6 05:19:47 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:19:47.244 263652 INFO neutron.agent.linux.ip_lib [None req-d91cf00b-8b62-45e5-aaf8-aca75f50b829 - - - - - -] Device tap1ca7855c-cd cannot be used as it has no MAC address#033[00m Dec 6 05:19:47 localhost nova_compute[282193]: 2025-12-06 
10:19:47.263 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:19:47 localhost kernel: device tap1ca7855c-cd entered promiscuous mode Dec 6 05:19:47 localhost NetworkManager[5973]: [1765016387.2717] manager: (tap1ca7855c-cd): new Generic device (/org/freedesktop/NetworkManager/Devices/63) Dec 6 05:19:47 localhost ovn_controller[154851]: 2025-12-06T10:19:47Z|00381|binding|INFO|Claiming lport 1ca7855c-cd02-499a-a723-f901eb28ad76 for this chassis. Dec 6 05:19:47 localhost ovn_controller[154851]: 2025-12-06T10:19:47Z|00382|binding|INFO|1ca7855c-cd02-499a-a723-f901eb28ad76: Claiming unknown Dec 6 05:19:47 localhost nova_compute[282193]: 2025-12-06 10:19:47.274 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:19:47 localhost journal[230404]: ethtool ioctl error on tap1ca7855c-cd: No such device Dec 6 05:19:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:19:47.308 160509 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 05:19:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:19:47.308 160509 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 05:19:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:19:47.309 160509 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 
6 05:19:47 localhost journal[230404]: ethtool ioctl error on tap1ca7855c-cd: No such device Dec 6 05:19:47 localhost ovn_controller[154851]: 2025-12-06T10:19:47Z|00383|binding|INFO|Setting lport 1ca7855c-cd02-499a-a723-f901eb28ad76 ovn-installed in OVS Dec 6 05:19:47 localhost nova_compute[282193]: 2025-12-06 10:19:47.314 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:19:47 localhost journal[230404]: ethtool ioctl error on tap1ca7855c-cd: No such device Dec 6 05:19:47 localhost journal[230404]: ethtool ioctl error on tap1ca7855c-cd: No such device Dec 6 05:19:47 localhost journal[230404]: ethtool ioctl error on tap1ca7855c-cd: No such device Dec 6 05:19:47 localhost journal[230404]: ethtool ioctl error on tap1ca7855c-cd: No such device Dec 6 05:19:47 localhost journal[230404]: ethtool ioctl error on tap1ca7855c-cd: No such device Dec 6 05:19:47 localhost journal[230404]: ethtool ioctl error on tap1ca7855c-cd: No such device Dec 6 05:19:47 localhost nova_compute[282193]: 2025-12-06 10:19:47.351 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:19:47 localhost ovn_controller[154851]: 2025-12-06T10:19:47Z|00384|binding|INFO|Setting lport 1ca7855c-cd02-499a-a723-f901eb28ad76 up in Southbound Dec 6 05:19:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:19:47.383 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '', 'neutron:device_id': 
'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-1898c940-0651-45db-aebd-630d54fbe329', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1898c940-0651-45db-aebd-630d54fbe329', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f00ab5f7d934f62991ed1e7e798e47e', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8c44abc1-74e7-483f-a478-b580dd3fd31f, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=1ca7855c-cd02-499a-a723-f901eb28ad76) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:19:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:19:47.385 160509 INFO neutron.agent.ovn.metadata.agent [-] Port 1ca7855c-cd02-499a-a723-f901eb28ad76 in datapath 1898c940-0651-45db-aebd-630d54fbe329 bound to our chassis#033[00m Dec 6 05:19:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:19:47.386 160509 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 1898c940-0651-45db-aebd-630d54fbe329 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 6 05:19:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:19:47.387 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[b428d420-dbb6-4ae6-bc9b-0fd85ba32eff]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:19:47 localhost nova_compute[282193]: 2025-12-06 10:19:47.396 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:19:48 localhost 
podman[327971]: 2025-12-06 10:19:48.038890379 +0000 UTC m=+0.074508469 container kill c31077edc184a24e36da53f7b38bbed4198fa3943c8edd21da2fe4e57d6bbbfc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3dc43717-9c00-4de5-8dc8-b5288e2abad9, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:19:48 localhost dnsmasq[327896]: exiting on receipt of SIGTERM Dec 6 05:19:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538. Dec 6 05:19:48 localhost systemd[1]: tmp-crun.su1KFg.mount: Deactivated successfully. Dec 6 05:19:48 localhost systemd[1]: libpod-c31077edc184a24e36da53f7b38bbed4198fa3943c8edd21da2fe4e57d6bbbfc.scope: Deactivated successfully. Dec 6 05:19:48 localhost podman[327983]: 2025-12-06 10:19:48.12040348 +0000 UTC m=+0.064271835 container died c31077edc184a24e36da53f7b38bbed4198fa3943c8edd21da2fe4e57d6bbbfc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3dc43717-9c00-4de5-8dc8-b5288e2abad9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 6 05:19:48 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c31077edc184a24e36da53f7b38bbed4198fa3943c8edd21da2fe4e57d6bbbfc-userdata-shm.mount: Deactivated successfully. 
Dec 6 05:19:48 localhost systemd[1]: var-lib-containers-storage-overlay-4d40a863adfa6ac1229342b29110117480605cd9c7a0dd59af495be4287ab44f-merged.mount: Deactivated successfully. Dec 6 05:19:48 localhost podman[327983]: 2025-12-06 10:19:48.147695565 +0000 UTC m=+0.091563880 container cleanup c31077edc184a24e36da53f7b38bbed4198fa3943c8edd21da2fe4e57d6bbbfc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3dc43717-9c00-4de5-8dc8-b5288e2abad9, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0) Dec 6 05:19:48 localhost systemd[1]: libpod-conmon-c31077edc184a24e36da53f7b38bbed4198fa3943c8edd21da2fe4e57d6bbbfc.scope: Deactivated successfully. Dec 6 05:19:48 localhost nova_compute[282193]: 2025-12-06 10:19:48.198 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:19:48 localhost podman[327985]: 2025-12-06 10:19:48.218709865 +0000 UTC m=+0.153899405 container health_status d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', 
'--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Dec 6 05:19:48 localhost podman[327992]: 2025-12-06 10:19:48.304782267 +0000 UTC m=+0.231698694 container remove c31077edc184a24e36da53f7b38bbed4198fa3943c8edd21da2fe4e57d6bbbfc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3dc43717-9c00-4de5-8dc8-b5288e2abad9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:19:48 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:19:48 localhost nova_compute[282193]: 2025-12-06 10:19:48.317 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:19:48 localhost ovn_controller[154851]: 2025-12-06T10:19:48Z|00385|binding|INFO|Releasing lport f66469ad-cca4-4e75-8ad1-16dcdb97964a from this chassis 
(sb_readonly=0) Dec 6 05:19:48 localhost ovn_controller[154851]: 2025-12-06T10:19:48Z|00386|binding|INFO|Setting lport f66469ad-cca4-4e75-8ad1-16dcdb97964a down in Southbound Dec 6 05:19:48 localhost kernel: device tapf66469ad-cc left promiscuous mode Dec 6 05:19:48 localhost podman[327985]: 2025-12-06 10:19:48.328576844 +0000 UTC m=+0.263766424 container exec_died d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Dec 6 05:19:48 localhost systemd[1]: d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538.service: Deactivated successfully. 
Dec 6 05:19:48 localhost nova_compute[282193]: 2025-12-06 10:19:48.350 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:19:48 localhost ovn_metadata_agent[160504]: 2025-12-06 10:19:48.366 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-3dc43717-9c00-4de5-8dc8-b5288e2abad9', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3dc43717-9c00-4de5-8dc8-b5288e2abad9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90bd35d6ab7c40c58d9d1d61ff7a12d3', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e7698555-bf3e-4a92-a2b0-48becfd360ed, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=f66469ad-cca4-4e75-8ad1-16dcdb97964a) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:19:48 localhost ovn_metadata_agent[160504]: 2025-12-06 10:19:48.368 160509 INFO neutron.agent.ovn.metadata.agent [-] Port f66469ad-cca4-4e75-8ad1-16dcdb97964a in datapath 3dc43717-9c00-4de5-8dc8-b5288e2abad9 unbound from our chassis#033[00m Dec 6 05:19:48 localhost ovn_metadata_agent[160504]: 2025-12-06 10:19:48.371 160509 DEBUG 
neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3dc43717-9c00-4de5-8dc8-b5288e2abad9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 6 05:19:48 localhost ovn_metadata_agent[160504]: 2025-12-06 10:19:48.372 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[bce42a0b-acd6-4e53-9715-b083344cf114]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:19:48 localhost podman[328080]: Dec 6 05:19:48 localhost podman[328080]: 2025-12-06 10:19:48.529236918 +0000 UTC m=+0.081100740 container create c1aa008254c45e53dc61ebe069ba0c87cd28f3a675b669f9e3eef43a4322839a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1898c940-0651-45db-aebd-630d54fbe329, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Dec 6 05:19:48 localhost systemd[1]: Started libpod-conmon-c1aa008254c45e53dc61ebe069ba0c87cd28f3a675b669f9e3eef43a4322839a.scope. Dec 6 05:19:48 localhost podman[328080]: 2025-12-06 10:19:48.485460079 +0000 UTC m=+0.037323931 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 6 05:19:48 localhost systemd[1]: Started libcrun container. 
Dec 6 05:19:48 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3a91e6d853e6221211e9d3c2ca40331c0a06702c1a305cff3b145b6afe2f5cd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 6 05:19:48 localhost ovn_metadata_agent[160504]: 2025-12-06 10:19:48.597 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:94:c3:b8 10.100.0.18 10.100.0.3'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.3/28', 'neutron:device_id': 'ovnmeta-667a7cf2-00f8-4896-8e3d-8222fad7f397', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-667a7cf2-00f8-4896-8e3d-8222fad7f397', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f8e1c4c589749b99178bbc7c2bea3f0', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=528a9f17-509a-4c49-a9ac-4a6363f2178f, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=659e29bd-a84c-4733-b754-dbb7b70b98cc) old=Port_Binding(mac=['fa:16:3e:94:c3:b8 10.100.0.3'], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'ovnmeta-667a7cf2-00f8-4896-8e3d-8222fad7f397', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-667a7cf2-00f8-4896-8e3d-8222fad7f397', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f8e1c4c589749b99178bbc7c2bea3f0', 
'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:19:48 localhost ovn_metadata_agent[160504]: 2025-12-06 10:19:48.600 160509 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 659e29bd-a84c-4733-b754-dbb7b70b98cc in datapath 667a7cf2-00f8-4896-8e3d-8222fad7f397 updated#033[00m Dec 6 05:19:48 localhost podman[328080]: 2025-12-06 10:19:48.602891489 +0000 UTC m=+0.154755321 container init c1aa008254c45e53dc61ebe069ba0c87cd28f3a675b669f9e3eef43a4322839a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1898c940-0651-45db-aebd-630d54fbe329, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Dec 6 05:19:48 localhost ovn_metadata_agent[160504]: 2025-12-06 10:19:48.603 160509 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 667a7cf2-00f8-4896-8e3d-8222fad7f397, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 6 05:19:48 localhost ovn_metadata_agent[160504]: 2025-12-06 10:19:48.604 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[acf880d5-73f1-462b-8d3c-a3d22efb3d7c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:19:48 localhost podman[328080]: 2025-12-06 10:19:48.61372512 +0000 UTC m=+0.165588952 container start c1aa008254c45e53dc61ebe069ba0c87cd28f3a675b669f9e3eef43a4322839a 
(image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1898c940-0651-45db-aebd-630d54fbe329, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3) Dec 6 05:19:48 localhost dnsmasq[328099]: started, version 2.85 cachesize 150 Dec 6 05:19:48 localhost dnsmasq[328099]: DNS service limited to local subnets Dec 6 05:19:48 localhost dnsmasq[328099]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 6 05:19:48 localhost dnsmasq[328099]: warning: no upstream servers configured Dec 6 05:19:48 localhost dnsmasq-dhcp[328099]: DHCPv6, static leases only on 2001:db8::, lease time 1d Dec 6 05:19:48 localhost dnsmasq[328099]: read /var/lib/neutron/dhcp/1898c940-0651-45db-aebd-630d54fbe329/addn_hosts - 0 addresses Dec 6 05:19:48 localhost dnsmasq-dhcp[328099]: read /var/lib/neutron/dhcp/1898c940-0651-45db-aebd-630d54fbe329/host Dec 6 05:19:48 localhost dnsmasq-dhcp[328099]: read /var/lib/neutron/dhcp/1898c940-0651-45db-aebd-630d54fbe329/opts Dec 6 05:19:48 localhost ovn_controller[154851]: 2025-12-06T10:19:48Z|00387|binding|INFO|Removing iface tap1ca7855c-cd ovn-installed in OVS Dec 6 05:19:48 localhost ovn_metadata_agent[160504]: 2025-12-06 10:19:48.646 160509 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 68df6cee-6a4d-4258-87de-8bc5bc40efa1 with type ""#033[00m Dec 6 05:19:48 localhost ovn_controller[154851]: 2025-12-06T10:19:48Z|00388|binding|INFO|Removing lport 1ca7855c-cd02-499a-a723-f901eb28ad76 ovn-installed in OVS Dec 6 05:19:48 localhost ovn_metadata_agent[160504]: 2025-12-06 10:19:48.648 160509 
DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-1898c940-0651-45db-aebd-630d54fbe329', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1898c940-0651-45db-aebd-630d54fbe329', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f00ab5f7d934f62991ed1e7e798e47e', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8c44abc1-74e7-483f-a478-b580dd3fd31f, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=1ca7855c-cd02-499a-a723-f901eb28ad76) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:19:48 localhost nova_compute[282193]: 2025-12-06 10:19:48.648 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:19:48 localhost ovn_metadata_agent[160504]: 2025-12-06 10:19:48.650 160509 INFO neutron.agent.ovn.metadata.agent [-] Port 1ca7855c-cd02-499a-a723-f901eb28ad76 in datapath 1898c940-0651-45db-aebd-630d54fbe329 unbound from our chassis#033[00m Dec 6 05:19:48 localhost ovn_metadata_agent[160504]: 2025-12-06 10:19:48.651 160509 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 1898c940-0651-45db-aebd-630d54fbe329 or it has no MAC or IP 
addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 6 05:19:48 localhost ovn_metadata_agent[160504]: 2025-12-06 10:19:48.652 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[286e55f1-7ceb-40fd-8491-fc78373c1a68]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:19:48 localhost nova_compute[282193]: 2025-12-06 10:19:48.654 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:19:48 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:19:48.980 263652 INFO neutron.agent.dhcp.agent [None req-153de0db-46c2-43a3-bd21-0a0173f33aeb - - - - - -] DHCP configuration for ports {'c00098c6-48f4-4539-8926-9f3b3e3be6e9'} is completed#033[00m Dec 6 05:19:49 localhost systemd[1]: run-netns-qdhcp\x2d3dc43717\x2d9c00\x2d4de5\x2d8dc8\x2db5288e2abad9.mount: Deactivated successfully. 
Dec 6 05:19:49 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:19:49.151 263652 INFO neutron.agent.dhcp.agent [None req-5beb0022-7483-49e2-a0d6-eacfb6043978 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:19:49 localhost ovn_controller[154851]: 2025-12-06T10:19:49Z|00389|binding|INFO|Releasing lport 4fb81ffd-e198-4628-9bd0-0c0f0c89c33a from this chassis (sb_readonly=0) Dec 6 05:19:49 localhost nova_compute[282193]: 2025-12-06 10:19:49.205 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:19:49 localhost dnsmasq[328099]: exiting on receipt of SIGTERM Dec 6 05:19:49 localhost podman[328115]: 2025-12-06 10:19:49.254874609 +0000 UTC m=+0.050910287 container kill c1aa008254c45e53dc61ebe069ba0c87cd28f3a675b669f9e3eef43a4322839a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1898c940-0651-45db-aebd-630d54fbe329, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125) Dec 6 05:19:49 localhost systemd[1]: libpod-c1aa008254c45e53dc61ebe069ba0c87cd28f3a675b669f9e3eef43a4322839a.scope: Deactivated successfully. 
Dec 6 05:19:49 localhost podman[328128]: 2025-12-06 10:19:49.323165596 +0000 UTC m=+0.054011501 container died c1aa008254c45e53dc61ebe069ba0c87cd28f3a675b669f9e3eef43a4322839a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1898c940-0651-45db-aebd-630d54fbe329, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:19:49 localhost podman[328128]: 2025-12-06 10:19:49.357987521 +0000 UTC m=+0.088833386 container cleanup c1aa008254c45e53dc61ebe069ba0c87cd28f3a675b669f9e3eef43a4322839a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1898c940-0651-45db-aebd-630d54fbe329, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:19:49 localhost systemd[1]: libpod-conmon-c1aa008254c45e53dc61ebe069ba0c87cd28f3a675b669f9e3eef43a4322839a.scope: Deactivated successfully. 
Dec 6 05:19:49 localhost podman[328130]: 2025-12-06 10:19:49.409192446 +0000 UTC m=+0.131545862 container remove c1aa008254c45e53dc61ebe069ba0c87cd28f3a675b669f9e3eef43a4322839a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1898c940-0651-45db-aebd-630d54fbe329, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true) Dec 6 05:19:49 localhost nova_compute[282193]: 2025-12-06 10:19:49.462 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:19:49 localhost kernel: device tap1ca7855c-cd left promiscuous mode Dec 6 05:19:49 localhost nova_compute[282193]: 2025-12-06 10:19:49.478 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:19:49 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:19:49.669 263652 INFO neutron.agent.dhcp.agent [None req-d109619d-c45f-4705-b2e4-ba28a4fe13d9 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:19:49 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:19:49.671 263652 INFO neutron.agent.dhcp.agent [None req-d109619d-c45f-4705-b2e4-ba28a4fe13d9 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:19:50 localhost systemd[1]: var-lib-containers-storage-overlay-e3a91e6d853e6221211e9d3c2ca40331c0a06702c1a305cff3b145b6afe2f5cd-merged.mount: Deactivated successfully. 
Dec 6 05:19:50 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c1aa008254c45e53dc61ebe069ba0c87cd28f3a675b669f9e3eef43a4322839a-userdata-shm.mount: Deactivated successfully. Dec 6 05:19:50 localhost systemd[1]: run-netns-qdhcp\x2d1898c940\x2d0651\x2d45db\x2daebd\x2d630d54fbe329.mount: Deactivated successfully. Dec 6 05:19:50 localhost neutron_sriov_agent[256690]: 2025-12-06 10:19:50.922 2 INFO neutron.agent.securitygroups_rpc [None req-8ab54d3f-0dba-4adf-88cd-ebbf59b7b541 b9f4d254b45d482eab7cfb178c231d9a 5f8e1c4c589749b99178bbc7c2bea3f0 - - default default] Security group member updated ['86fafa90-40d2-4e2b-87d7-dc3d530576aa', 'ae1eaa44-7360-485a-b85b-f1bfb95ce20b']#033[00m Dec 6 05:19:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5. Dec 6 05:19:52 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:19:52.389 263652 INFO neutron.agent.linux.ip_lib [None req-14bfe837-9375-4020-9865-030c147dcb1d - - - - - -] Device tap3adb2c37-0f cannot be used as it has no MAC address#033[00m Dec 6 05:19:52 localhost nova_compute[282193]: 2025-12-06 10:19:52.411 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:19:52 localhost systemd[1]: tmp-crun.bYEjzT.mount: Deactivated successfully. Dec 6 05:19:52 localhost kernel: device tap3adb2c37-0f entered promiscuous mode Dec 6 05:19:52 localhost NetworkManager[5973]: [1765016392.4266] manager: (tap3adb2c37-0f): new Generic device (/org/freedesktop/NetworkManager/Devices/64) Dec 6 05:19:52 localhost nova_compute[282193]: 2025-12-06 10:19:52.427 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:19:52 localhost ovn_controller[154851]: 2025-12-06T10:19:52Z|00390|binding|INFO|Claiming lport 3adb2c37-0f70-478d-98be-4e26b3a4f4ff for this chassis. 
Dec 6 05:19:52 localhost ovn_controller[154851]: 2025-12-06T10:19:52Z|00391|binding|INFO|3adb2c37-0f70-478d-98be-4e26b3a4f4ff: Claiming unknown Dec 6 05:19:52 localhost systemd-udevd[328185]: Network interface NamePolicy= disabled on kernel command line. Dec 6 05:19:52 localhost podman[328161]: 2025-12-06 10:19:52.43338996 +0000 UTC m=+0.114351076 container health_status 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Dec 6 05:19:52 localhost journal[230404]: ethtool ioctl error on tap3adb2c37-0f: No such device Dec 6 05:19:52 localhost ovn_controller[154851]: 2025-12-06T10:19:52Z|00392|binding|INFO|Setting lport 3adb2c37-0f70-478d-98be-4e26b3a4f4ff ovn-installed in OVS Dec 6 
05:19:52 localhost nova_compute[282193]: 2025-12-06 10:19:52.470 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:19:52 localhost podman[328161]: 2025-12-06 10:19:52.473256809 +0000 UTC m=+0.154217935 container exec_died 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Dec 6 05:19:52 localhost journal[230404]: ethtool ioctl error on tap3adb2c37-0f: No such device Dec 6 05:19:52 localhost journal[230404]: ethtool ioctl error on tap3adb2c37-0f: No such device Dec 6 05:19:52 localhost journal[230404]: ethtool ioctl error on tap3adb2c37-0f: No such device Dec 6 05:19:52 localhost systemd[1]: 
0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.service: Deactivated successfully. Dec 6 05:19:52 localhost journal[230404]: ethtool ioctl error on tap3adb2c37-0f: No such device Dec 6 05:19:52 localhost ovn_metadata_agent[160504]: 2025-12-06 10:19:52.488 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.255.242/28', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-c68f9a6d-f183-4c32-ae20-3af5e94473b3', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c68f9a6d-f183-4c32-ae20-3af5e94473b3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fa76bcfc789b4e53acf344cd0b1cd7c5', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=caee8882-f3cb-4a2a-a1c8-8579f9a721cf, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=3adb2c37-0f70-478d-98be-4e26b3a4f4ff) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:19:52 localhost ovn_metadata_agent[160504]: 2025-12-06 10:19:52.490 160509 INFO neutron.agent.ovn.metadata.agent [-] Port 3adb2c37-0f70-478d-98be-4e26b3a4f4ff in datapath c68f9a6d-f183-4c32-ae20-3af5e94473b3 bound to our chassis#033[00m Dec 6 05:19:52 localhost ovn_metadata_agent[160504]: 2025-12-06 10:19:52.490 160509 DEBUG neutron.agent.ovn.metadata.agent 
[-] There is no metadata port for network c68f9a6d-f183-4c32-ae20-3af5e94473b3 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 6 05:19:52 localhost ovn_metadata_agent[160504]: 2025-12-06 10:19:52.491 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[126a3f1c-091c-401b-8662-29dcb32f9647]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:19:52 localhost ovn_controller[154851]: 2025-12-06T10:19:52Z|00393|binding|INFO|Releasing lport 4fb81ffd-e198-4628-9bd0-0c0f0c89c33a from this chassis (sb_readonly=0) Dec 6 05:19:52 localhost journal[230404]: ethtool ioctl error on tap3adb2c37-0f: No such device Dec 6 05:19:52 localhost ovn_controller[154851]: 2025-12-06T10:19:52Z|00394|binding|INFO|Setting lport 3adb2c37-0f70-478d-98be-4e26b3a4f4ff up in Southbound Dec 6 05:19:52 localhost journal[230404]: ethtool ioctl error on tap3adb2c37-0f: No such device Dec 6 05:19:52 localhost journal[230404]: ethtool ioctl error on tap3adb2c37-0f: No such device Dec 6 05:19:52 localhost nova_compute[282193]: 2025-12-06 10:19:52.520 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:19:52 localhost nova_compute[282193]: 2025-12-06 10:19:52.563 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:19:53 localhost neutron_sriov_agent[256690]: 2025-12-06 10:19:53.185 2 INFO neutron.agent.securitygroups_rpc [None req-cbc27e7f-4bef-4dfe-ad5b-dd1345427342 b9f4d254b45d482eab7cfb178c231d9a 5f8e1c4c589749b99178bbc7c2bea3f0 - - default default] Security group member updated ['86fafa90-40d2-4e2b-87d7-dc3d530576aa']#033[00m Dec 6 05:19:53 localhost nova_compute[282193]: 2025-12-06 10:19:53.239 282197 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:19:53 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:19:53 localhost podman[328263]: Dec 6 05:19:53 localhost podman[328263]: 2025-12-06 10:19:53.656647542 +0000 UTC m=+0.095499920 container create c2781052a3630cf50580874a9e8559954e4ce423e9575765f5c45967475b3fb4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c68f9a6d-f183-4c32-ae20-3af5e94473b3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125) Dec 6 05:19:53 localhost systemd[1]: Started libpod-conmon-c2781052a3630cf50580874a9e8559954e4ce423e9575765f5c45967475b3fb4.scope. Dec 6 05:19:53 localhost podman[328263]: 2025-12-06 10:19:53.611735819 +0000 UTC m=+0.050588207 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 6 05:19:53 localhost systemd[1]: tmp-crun.4hhJ8a.mount: Deactivated successfully. Dec 6 05:19:53 localhost systemd[1]: Started libcrun container. 
Dec 6 05:19:53 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d060ccaa9e70944fae8106a7cb62cb34c009da5ff146db45b624802025af9fb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 6 05:19:53 localhost podman[328263]: 2025-12-06 10:19:53.750383038 +0000 UTC m=+0.189235436 container init c2781052a3630cf50580874a9e8559954e4ce423e9575765f5c45967475b3fb4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c68f9a6d-f183-4c32-ae20-3af5e94473b3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3) Dec 6 05:19:53 localhost podman[328263]: 2025-12-06 10:19:53.761258231 +0000 UTC m=+0.200110609 container start c2781052a3630cf50580874a9e8559954e4ce423e9575765f5c45967475b3fb4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c68f9a6d-f183-4c32-ae20-3af5e94473b3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0) Dec 6 05:19:53 localhost dnsmasq[328281]: started, version 2.85 cachesize 150 Dec 6 05:19:53 localhost dnsmasq[328281]: DNS service limited to local subnets Dec 6 05:19:53 localhost dnsmasq[328281]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 6 05:19:53 localhost dnsmasq[328281]: warning: no upstream servers configured Dec 
6 05:19:53 localhost dnsmasq-dhcp[328281]: DHCP, static leases only on 10.100.255.240, lease time 1d Dec 6 05:19:53 localhost dnsmasq[328281]: read /var/lib/neutron/dhcp/c68f9a6d-f183-4c32-ae20-3af5e94473b3/addn_hosts - 0 addresses Dec 6 05:19:53 localhost dnsmasq-dhcp[328281]: read /var/lib/neutron/dhcp/c68f9a6d-f183-4c32-ae20-3af5e94473b3/host Dec 6 05:19:53 localhost dnsmasq-dhcp[328281]: read /var/lib/neutron/dhcp/c68f9a6d-f183-4c32-ae20-3af5e94473b3/opts Dec 6 05:19:53 localhost podman[241090]: time="2025-12-06T10:19:53Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 6 05:19:53 localhost podman[241090]: @ - - [06/Dec/2025:10:19:53 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 157932 "" "Go-http-client/1.1" Dec 6 05:19:53 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:19:53.969 263652 INFO neutron.agent.dhcp.agent [None req-7491ba73-f23b-433e-8431-90e61e1a197e - - - - - -] DHCP configuration for ports {'ffdb62da-674d-4d01-8db8-0f5fd1e913bf'} is completed#033[00m Dec 6 05:19:53 localhost podman[241090]: @ - - [06/Dec/2025:10:19:53 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19738 "" "Go-http-client/1.1" Dec 6 05:19:54 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:19:54.122 263652 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:19:54 localhost ceph-mon[298582]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #43. Immutable memtables: 0. 
Dec 6 05:19:54 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:19:54.319446) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Dec 6 05:19:54 localhost ceph-mon[298582]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 43 Dec 6 05:19:54 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016394319514, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 666, "num_deletes": 251, "total_data_size": 537930, "memory_usage": 550184, "flush_reason": "Manual Compaction"} Dec 6 05:19:54 localhost ceph-mon[298582]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #44: started Dec 6 05:19:54 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016394324029, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 44, "file_size": 345511, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 26712, "largest_seqno": 27373, "table_properties": {"data_size": 342325, "index_size": 1041, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8349, "raw_average_key_size": 20, "raw_value_size": 335709, "raw_average_value_size": 818, "num_data_blocks": 46, "num_entries": 410, "num_filter_entries": 410, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; 
zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765016367, "oldest_key_time": 1765016367, "file_creation_time": 1765016394, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1ec2a3a6-c7d0-4f1e-9923-5a3b782725ae", "db_session_id": "JMBO5KX1IJCJ8FWC64EX", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}} Dec 6 05:19:54 localhost ceph-mon[298582]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 4652 microseconds, and 1897 cpu microseconds. Dec 6 05:19:54 localhost ceph-mon[298582]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Dec 6 05:19:54 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:19:54.324098) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #44: 345511 bytes OK Dec 6 05:19:54 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:19:54.324126) [db/memtable_list.cc:519] [default] Level-0 commit table #44 started Dec 6 05:19:54 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:19:54.326188) [db/memtable_list.cc:722] [default] Level-0 commit table #44: memtable #1 done Dec 6 05:19:54 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:19:54.326210) EVENT_LOG_v1 {"time_micros": 1765016394326203, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Dec 6 05:19:54 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:19:54.326233) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Dec 6 05:19:54 localhost ceph-mon[298582]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 534157, prev total WAL file size 534157, number of live 
WAL files 2. Dec 6 05:19:54 localhost ceph-mon[298582]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548789/store.db/000040.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 6 05:19:54 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:19:54.326817) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003132303438' seq:72057594037927935, type:22 .. '7061786F73003132333030' seq:0, type:0; will stop at (end) Dec 6 05:19:54 localhost ceph-mon[298582]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00 Dec 6 05:19:54 localhost ceph-mon[298582]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [44(337KB)], [42(18MB)] Dec 6 05:19:54 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016394326900, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [44], "files_L6": [42], "score": -1, "input_data_size": 20181492, "oldest_snapshot_seqno": -1} Dec 6 05:19:54 localhost ceph-mon[298582]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #45: 12835 keys, 18822323 bytes, temperature: kUnknown Dec 6 05:19:54 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016394427947, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 45, "file_size": 18822323, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 18748158, "index_size": 40976, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 32133, "raw_key_size": 344198, "raw_average_key_size": 26, "raw_value_size": 18528952, "raw_average_value_size": 1443, 
"num_data_blocks": 1551, "num_entries": 12835, "num_filter_entries": 12835, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765015623, "oldest_key_time": 0, "file_creation_time": 1765016394, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1ec2a3a6-c7d0-4f1e-9923-5a3b782725ae", "db_session_id": "JMBO5KX1IJCJ8FWC64EX", "orig_file_number": 45, "seqno_to_time_mapping": "N/A"}} Dec 6 05:19:54 localhost ceph-mon[298582]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Dec 6 05:19:54 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:19:54.428307) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 18822323 bytes Dec 6 05:19:54 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:19:54.430365) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 199.5 rd, 186.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 18.9 +0.0 blob) out(18.0 +0.0 blob), read-write-amplify(112.9) write-amplify(54.5) OK, records in: 13354, records dropped: 519 output_compression: NoCompression Dec 6 05:19:54 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:19:54.430405) EVENT_LOG_v1 {"time_micros": 1765016394430388, "job": 24, "event": "compaction_finished", "compaction_time_micros": 101142, "compaction_time_cpu_micros": 50983, "output_level": 6, "num_output_files": 1, "total_output_size": 18822323, "num_input_records": 13354, "num_output_records": 12835, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Dec 6 05:19:54 localhost ceph-mon[298582]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548789/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 6 05:19:54 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016394430662, "job": 24, "event": "table_file_deletion", "file_number": 44} Dec 6 05:19:54 localhost ceph-mon[298582]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548789/store.db/000042.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 6 05:19:54 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016394433497, "job": 
24, "event": "table_file_deletion", "file_number": 42} Dec 6 05:19:54 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:19:54.326661) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:19:54 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:19:54.433604) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:19:54 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:19:54.433612) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:19:54 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:19:54.433615) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:19:54 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:19:54.433618) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:19:54 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:19:54.433621) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:19:54 localhost nova_compute[282193]: 2025-12-06 10:19:54.512 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:19:58 localhost nova_compute[282193]: 2025-12-06 10:19:58.285 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:19:58 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:19:58 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:19:58.417 263652 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:19:59 localhost nova_compute[282193]: 2025-12-06 
10:19:59.515 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:20:00 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 6 05:20:00 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' Dec 6 05:20:00 localhost ceph-mon[298582]: overall HEALTH_WARN 1 stray daemon(s) not managed by cephadm; 1 stray host(s) with 1 daemon(s) not managed by cephadm Dec 6 05:20:01 localhost ovn_metadata_agent[160504]: 2025-12-06 10:20:01.583 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:94:c3:b8 10.100.0.18 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-667a7cf2-00f8-4896-8e3d-8222fad7f397', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-667a7cf2-00f8-4896-8e3d-8222fad7f397', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f8e1c4c589749b99178bbc7c2bea3f0', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=528a9f17-509a-4c49-a9ac-4a6363f2178f, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=659e29bd-a84c-4733-b754-dbb7b70b98cc) old=Port_Binding(mac=['fa:16:3e:94:c3:b8 10.100.0.18 10.100.0.3'], 
external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.3/28', 'neutron:device_id': 'ovnmeta-667a7cf2-00f8-4896-8e3d-8222fad7f397', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-667a7cf2-00f8-4896-8e3d-8222fad7f397', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f8e1c4c589749b99178bbc7c2bea3f0', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:20:01 localhost ovn_metadata_agent[160504]: 2025-12-06 10:20:01.585 160509 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 659e29bd-a84c-4733-b754-dbb7b70b98cc in datapath 667a7cf2-00f8-4896-8e3d-8222fad7f397 updated#033[00m Dec 6 05:20:01 localhost ovn_metadata_agent[160504]: 2025-12-06 10:20:01.588 160509 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 667a7cf2-00f8-4896-8e3d-8222fad7f397, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 6 05:20:01 localhost ovn_metadata_agent[160504]: 2025-12-06 10:20:01.589 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[5f1fc5cd-9f32-4e61-84a9-cf7d3f2a6672]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:20:02 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' Dec 6 05:20:03 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:20:03 localhost nova_compute[282193]: 2025-12-06 10:20:03.310 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:20:03 
localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999. Dec 6 05:20:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941. Dec 6 05:20:03 localhost podman[328370]: 2025-12-06 10:20:03.931249928 +0000 UTC m=+0.085993080 container health_status b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Dec 6 05:20:03 localhost podman[328370]: 2025-12-06 10:20:03.944161623 +0000 UTC m=+0.098904825 container exec_died b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 
'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 6 05:20:03 localhost systemd[1]: b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941.service: Deactivated successfully. Dec 6 05:20:04 localhost podman[328369]: 2025-12-06 10:20:04.031313757 +0000 UTC m=+0.188769492 container health_status 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:20:04 localhost podman[328369]: 2025-12-06 10:20:04.04127306 +0000 UTC m=+0.198728825 container exec_died 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent) Dec 6 05:20:04 localhost systemd[1]: 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.service: Deactivated successfully. Dec 6 05:20:04 localhost sshd[328410]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:20:04 localhost nova_compute[282193]: 2025-12-06 10:20:04.573 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:20:06 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:06.796 263652 INFO neutron.agent.linux.ip_lib [None req-38231a6d-8e5c-41d2-a990-f1b82c8e9de5 - - - - - -] Device tap8502c635-ed cannot be used as it has no MAC address#033[00m Dec 6 05:20:06 localhost nova_compute[282193]: 2025-12-06 10:20:06.856 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:20:06 localhost kernel: device tap8502c635-ed entered promiscuous mode Dec 6 05:20:06 localhost NetworkManager[5973]: [1765016406.8682] manager: (tap8502c635-ed): new Generic device (/org/freedesktop/NetworkManager/Devices/65) Dec 6 05:20:06 localhost nova_compute[282193]: 2025-12-06 10:20:06.868 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:20:06 localhost ovn_controller[154851]: 2025-12-06T10:20:06Z|00395|binding|INFO|Claiming lport 8502c635-ed1a-4597-9657-4577483e7713 for this chassis. 
Dec 6 05:20:06 localhost ovn_controller[154851]: 2025-12-06T10:20:06Z|00396|binding|INFO|8502c635-ed1a-4597-9657-4577483e7713: Claiming unknown Dec 6 05:20:06 localhost systemd-udevd[328422]: Network interface NamePolicy= disabled on kernel command line. Dec 6 05:20:06 localhost ovn_metadata_agent[160504]: 2025-12-06 10:20:06.883 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-b2c47b1f-f8cf-41da-adf1-6c6404edb8e3', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b2c47b1f-f8cf-41da-adf1-6c6404edb8e3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '24086b701d6b4d4081d2e63578d18d24', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5059c6b1-bf63-4619-b361-4c64f7e8a30d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=8502c635-ed1a-4597-9657-4577483e7713) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:20:06 localhost ovn_metadata_agent[160504]: 2025-12-06 10:20:06.885 160509 INFO neutron.agent.ovn.metadata.agent [-] Port 8502c635-ed1a-4597-9657-4577483e7713 in datapath b2c47b1f-f8cf-41da-adf1-6c6404edb8e3 bound to our chassis#033[00m Dec 6 05:20:06 localhost ovn_metadata_agent[160504]: 2025-12-06 
10:20:06.886 160509 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network b2c47b1f-f8cf-41da-adf1-6c6404edb8e3 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 6 05:20:06 localhost ovn_metadata_agent[160504]: 2025-12-06 10:20:06.887 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[f9cb6d72-175c-4538-8cb0-ce444dc3eabd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:20:06 localhost journal[230404]: ethtool ioctl error on tap8502c635-ed: No such device Dec 6 05:20:06 localhost ovn_controller[154851]: 2025-12-06T10:20:06Z|00397|binding|INFO|Setting lport 8502c635-ed1a-4597-9657-4577483e7713 ovn-installed in OVS Dec 6 05:20:06 localhost ovn_controller[154851]: 2025-12-06T10:20:06Z|00398|binding|INFO|Setting lport 8502c635-ed1a-4597-9657-4577483e7713 up in Southbound Dec 6 05:20:06 localhost nova_compute[282193]: 2025-12-06 10:20:06.905 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:20:06 localhost nova_compute[282193]: 2025-12-06 10:20:06.908 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:20:06 localhost journal[230404]: ethtool ioctl error on tap8502c635-ed: No such device Dec 6 05:20:06 localhost journal[230404]: ethtool ioctl error on tap8502c635-ed: No such device Dec 6 05:20:06 localhost journal[230404]: ethtool ioctl error on tap8502c635-ed: No such device Dec 6 05:20:06 localhost journal[230404]: ethtool ioctl error on tap8502c635-ed: No such device Dec 6 05:20:06 localhost journal[230404]: ethtool ioctl error on tap8502c635-ed: No such device Dec 6 05:20:06 localhost journal[230404]: ethtool ioctl error on tap8502c635-ed: No such device Dec 6 
05:20:06 localhost journal[230404]: ethtool ioctl error on tap8502c635-ed: No such device Dec 6 05:20:06 localhost nova_compute[282193]: 2025-12-06 10:20:06.948 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:20:06 localhost nova_compute[282193]: 2025-12-06 10:20:06.976 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:20:07 localhost neutron_sriov_agent[256690]: 2025-12-06 10:20:07.587 2 INFO neutron.agent.securitygroups_rpc [None req-6d59f8dd-76ee-4672-86ac-2d91b87c0791 260dfc8941214c308c05293af65bdae9 24086b701d6b4d4081d2e63578d18d24 - - default default] Security group member updated ['ea587027-2c02-4165-a90f-98eaf0ce1ddb']#033[00m Dec 6 05:20:07 localhost neutron_sriov_agent[256690]: 2025-12-06 10:20:07.614 2 INFO neutron.agent.securitygroups_rpc [None req-1250ea59-7c13-4a58-b22f-38de2df53542 b9f4d254b45d482eab7cfb178c231d9a 5f8e1c4c589749b99178bbc7c2bea3f0 - - default default] Security group member updated ['c05cd5e8-c5d4-4d05-80ba-b6a4af8b3ba8']#033[00m Dec 6 05:20:07 localhost podman[328493]: Dec 6 05:20:07 localhost podman[328493]: 2025-12-06 10:20:07.824293312 +0000 UTC m=+0.082418552 container create b702010cfde6ddd33cba348d96e0b98a1b493114b99e87a7ebe4d35b59d0803f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b2c47b1f-f8cf-41da-adf1-6c6404edb8e3, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Dec 6 05:20:07 localhost systemd[1]: Started 
libpod-conmon-b702010cfde6ddd33cba348d96e0b98a1b493114b99e87a7ebe4d35b59d0803f.scope. Dec 6 05:20:07 localhost podman[328493]: 2025-12-06 10:20:07.78043439 +0000 UTC m=+0.038559670 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 6 05:20:07 localhost systemd[1]: Started libcrun container. Dec 6 05:20:07 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3566bd4719ade8695fae6a6e305e235adfa409a11feb22336d90853652527834/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 6 05:20:07 localhost podman[328493]: 2025-12-06 10:20:07.906326929 +0000 UTC m=+0.164452169 container init b702010cfde6ddd33cba348d96e0b98a1b493114b99e87a7ebe4d35b59d0803f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b2c47b1f-f8cf-41da-adf1-6c6404edb8e3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3) Dec 6 05:20:07 localhost podman[328493]: 2025-12-06 10:20:07.91620008 +0000 UTC m=+0.174325330 container start b702010cfde6ddd33cba348d96e0b98a1b493114b99e87a7ebe4d35b59d0803f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b2c47b1f-f8cf-41da-adf1-6c6404edb8e3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:20:07 localhost dnsmasq[328512]: started, version 2.85 cachesize 150 Dec 6 05:20:07 localhost 
dnsmasq[328512]: DNS service limited to local subnets Dec 6 05:20:07 localhost dnsmasq[328512]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 6 05:20:07 localhost dnsmasq[328512]: warning: no upstream servers configured Dec 6 05:20:07 localhost dnsmasq-dhcp[328512]: DHCPv6, static leases only on 2001:db8::, lease time 1d Dec 6 05:20:07 localhost dnsmasq[328512]: read /var/lib/neutron/dhcp/b2c47b1f-f8cf-41da-adf1-6c6404edb8e3/addn_hosts - 0 addresses Dec 6 05:20:07 localhost dnsmasq-dhcp[328512]: read /var/lib/neutron/dhcp/b2c47b1f-f8cf-41da-adf1-6c6404edb8e3/host Dec 6 05:20:07 localhost dnsmasq-dhcp[328512]: read /var/lib/neutron/dhcp/b2c47b1f-f8cf-41da-adf1-6c6404edb8e3/opts Dec 6 05:20:07 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:07.983 263652 INFO neutron.agent.dhcp.agent [None req-38231a6d-8e5c-41d2-a990-f1b82c8e9de5 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:20:07Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=f254b4d6-48c8-4533-8087-a9ef7c023950, ip_allocation=immediate, mac_address=fa:16:3e:a8:a3:b9, name=tempest-RoutersIpV6Test-2080841993, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:20:00Z, description=, dns_domain=, id=b2c47b1f-f8cf-41da-adf1-6c6404edb8e3, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-836254303, port_security_enabled=True, project_id=24086b701d6b4d4081d2e63578d18d24, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=57617, qos_policy_id=None, revision_number=2, router:external=False, 
shared=False, standard_attr_id=2265, status=ACTIVE, subnets=['e24a4bb6-1a63-4e1f-97ef-3bc7eb6a4ce8'], tags=[], tenant_id=24086b701d6b4d4081d2e63578d18d24, updated_at=2025-12-06T10:20:05Z, vlan_transparent=None, network_id=b2c47b1f-f8cf-41da-adf1-6c6404edb8e3, port_security_enabled=True, project_id=24086b701d6b4d4081d2e63578d18d24, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['ea587027-2c02-4165-a90f-98eaf0ce1ddb'], standard_attr_id=2310, status=DOWN, tags=[], tenant_id=24086b701d6b4d4081d2e63578d18d24, updated_at=2025-12-06T10:20:07Z on network b2c47b1f-f8cf-41da-adf1-6c6404edb8e3#033[00m Dec 6 05:20:08 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:08.118 263652 INFO neutron.agent.dhcp.agent [None req-491c8e88-3dac-4fc7-8bbd-6ccaae48b5bb - - - - - -] DHCP configuration for ports {'8be8da87-467b-4f9c-8bc8-059ed9eceb24'} is completed#033[00m Dec 6 05:20:08 localhost dnsmasq[328512]: read /var/lib/neutron/dhcp/b2c47b1f-f8cf-41da-adf1-6c6404edb8e3/addn_hosts - 1 addresses Dec 6 05:20:08 localhost dnsmasq-dhcp[328512]: read /var/lib/neutron/dhcp/b2c47b1f-f8cf-41da-adf1-6c6404edb8e3/host Dec 6 05:20:08 localhost podman[328530]: 2025-12-06 10:20:08.181849381 +0000 UTC m=+0.061386698 container kill b702010cfde6ddd33cba348d96e0b98a1b493114b99e87a7ebe4d35b59d0803f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b2c47b1f-f8cf-41da-adf1-6c6404edb8e3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0) Dec 6 05:20:08 localhost dnsmasq-dhcp[328512]: read /var/lib/neutron/dhcp/b2c47b1f-f8cf-41da-adf1-6c6404edb8e3/opts Dec 6 05:20:08 localhost ceph-mon[298582]: 
mon.np0005548789@1(peon).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:20:08 localhost nova_compute[282193]: 2025-12-06 10:20:08.359 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:20:08 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:08.523 263652 INFO neutron.agent.dhcp.agent [None req-b44f8ac8-6f06-4658-b123-14f741668d51 - - - - - -] DHCP configuration for ports {'f254b4d6-48c8-4533-8087-a9ef7c023950'} is completed#033[00m Dec 6 05:20:09 localhost ceph-osd[31726]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Dec 6 05:20:09 localhost ceph-osd[31726]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 8400.1 total, 600.0 interval#012Cumulative writes: 13K writes, 48K keys, 13K commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.00 MB/s#012Cumulative WAL: 13K writes, 4005 syncs, 3.29 writes per sync, written: 0.04 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 7263 writes, 22K keys, 7263 commit groups, 1.0 writes per commit group, ingest: 18.95 MB, 0.03 MB/s#012Interval WAL: 7263 writes, 3188 syncs, 2.28 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Dec 6 05:20:09 localhost nova_compute[282193]: 2025-12-06 10:20:09.602 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:20:09 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:09.670 263652 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:20:07Z, description=, 
device_id=b00fbecf-d8af-4c63-88f1-d68107f5afd3, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=f254b4d6-48c8-4533-8087-a9ef7c023950, ip_allocation=immediate, mac_address=fa:16:3e:a8:a3:b9, name=tempest-RoutersIpV6Test-2080841993, network_id=b2c47b1f-f8cf-41da-adf1-6c6404edb8e3, port_security_enabled=True, project_id=24086b701d6b4d4081d2e63578d18d24, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=2, security_groups=['ea587027-2c02-4165-a90f-98eaf0ce1ddb'], standard_attr_id=2310, status=DOWN, tags=[], tenant_id=24086b701d6b4d4081d2e63578d18d24, updated_at=2025-12-06T10:20:08Z on network b2c47b1f-f8cf-41da-adf1-6c6404edb8e3#033[00m Dec 6 05:20:09 localhost podman[328568]: 2025-12-06 10:20:09.874473271 +0000 UTC m=+0.070508686 container kill b702010cfde6ddd33cba348d96e0b98a1b493114b99e87a7ebe4d35b59d0803f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b2c47b1f-f8cf-41da-adf1-6c6404edb8e3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS) Dec 6 05:20:09 localhost systemd[1]: tmp-crun.LZUfzJ.mount: Deactivated successfully. 
Dec 6 05:20:09 localhost dnsmasq[328512]: read /var/lib/neutron/dhcp/b2c47b1f-f8cf-41da-adf1-6c6404edb8e3/addn_hosts - 1 addresses Dec 6 05:20:09 localhost dnsmasq-dhcp[328512]: read /var/lib/neutron/dhcp/b2c47b1f-f8cf-41da-adf1-6c6404edb8e3/host Dec 6 05:20:09 localhost dnsmasq-dhcp[328512]: read /var/lib/neutron/dhcp/b2c47b1f-f8cf-41da-adf1-6c6404edb8e3/opts Dec 6 05:20:10 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:10.110 263652 INFO neutron.agent.linux.ip_lib [None req-29c63b20-7da5-44f9-a255-d5f09d479629 - - - - - -] Device tap225f6418-78 cannot be used as it has no MAC address#033[00m Dec 6 05:20:10 localhost nova_compute[282193]: 2025-12-06 10:20:10.132 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:20:10 localhost kernel: device tap225f6418-78 entered promiscuous mode Dec 6 05:20:10 localhost NetworkManager[5973]: [1765016410.1407] manager: (tap225f6418-78): new Generic device (/org/freedesktop/NetworkManager/Devices/66) Dec 6 05:20:10 localhost ovn_controller[154851]: 2025-12-06T10:20:10Z|00399|binding|INFO|Claiming lport 225f6418-78e0-4a61-a073-a03b711b3e97 for this chassis. Dec 6 05:20:10 localhost ovn_controller[154851]: 2025-12-06T10:20:10Z|00400|binding|INFO|225f6418-78e0-4a61-a073-a03b711b3e97: Claiming unknown Dec 6 05:20:10 localhost nova_compute[282193]: 2025-12-06 10:20:10.141 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:20:10 localhost systemd-udevd[328599]: Network interface NamePolicy= disabled on kernel command line. 
Dec 6 05:20:10 localhost ovn_metadata_agent[160504]: 2025-12-06 10:20:10.157 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-25c4a3e3-dd82-4090-9ea0-aa2af92e22bf', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-25c4a3e3-dd82-4090-9ea0-aa2af92e22bf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fa76bcfc789b4e53acf344cd0b1cd7c5', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c448e0c7-883f-4055-a342-20d4d6819f0c, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=225f6418-78e0-4a61-a073-a03b711b3e97) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:20:10 localhost ovn_metadata_agent[160504]: 2025-12-06 10:20:10.159 160509 INFO neutron.agent.ovn.metadata.agent [-] Port 225f6418-78e0-4a61-a073-a03b711b3e97 in datapath 25c4a3e3-dd82-4090-9ea0-aa2af92e22bf bound to our chassis#033[00m Dec 6 05:20:10 localhost ovn_metadata_agent[160504]: 2025-12-06 10:20:10.162 160509 DEBUG neutron.agent.ovn.metadata.agent [-] Port dc92a6e7-934f-4ad1-bb78-6f002a01bc3a IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Dec 6 05:20:10 localhost ovn_metadata_agent[160504]: 2025-12-06 10:20:10.162 160509 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 25c4a3e3-dd82-4090-9ea0-aa2af92e22bf, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 6 05:20:10 localhost ovn_metadata_agent[160504]: 2025-12-06 10:20:10.163 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[6a6ab2ce-9f25-4e03-a999-04908075e354]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:20:10 localhost journal[230404]: ethtool ioctl error on tap225f6418-78: No such device Dec 6 05:20:10 localhost ovn_controller[154851]: 2025-12-06T10:20:10Z|00401|binding|INFO|Setting lport 225f6418-78e0-4a61-a073-a03b711b3e97 ovn-installed in OVS Dec 6 05:20:10 localhost ovn_controller[154851]: 2025-12-06T10:20:10Z|00402|binding|INFO|Setting lport 225f6418-78e0-4a61-a073-a03b711b3e97 up in Southbound Dec 6 05:20:10 localhost nova_compute[282193]: 2025-12-06 10:20:10.182 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:20:10 localhost journal[230404]: ethtool ioctl error on tap225f6418-78: No such device Dec 6 05:20:10 localhost journal[230404]: ethtool ioctl error on tap225f6418-78: No such device Dec 6 05:20:10 localhost journal[230404]: ethtool ioctl error on tap225f6418-78: No such device Dec 6 05:20:10 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:10.198 263652 INFO neutron.agent.dhcp.agent [None req-0b84d4ff-4686-46c0-8f31-61d69a1a4b35 - - - - - -] DHCP configuration for ports {'f254b4d6-48c8-4533-8087-a9ef7c023950'} is completed#033[00m Dec 6 05:20:10 localhost journal[230404]: ethtool ioctl error on tap225f6418-78: No such device Dec 6 05:20:10 localhost 
journal[230404]: ethtool ioctl error on tap225f6418-78: No such device Dec 6 05:20:10 localhost journal[230404]: ethtool ioctl error on tap225f6418-78: No such device Dec 6 05:20:10 localhost journal[230404]: ethtool ioctl error on tap225f6418-78: No such device Dec 6 05:20:10 localhost nova_compute[282193]: 2025-12-06 10:20:10.219 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:20:10 localhost nova_compute[282193]: 2025-12-06 10:20:10.246 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:20:10 localhost ovn_controller[154851]: 2025-12-06T10:20:10Z|00403|binding|INFO|Removing iface tap225f6418-78 ovn-installed in OVS Dec 6 05:20:10 localhost ovn_metadata_agent[160504]: 2025-12-06 10:20:10.411 160509 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port dc92a6e7-934f-4ad1-bb78-6f002a01bc3a with type ""#033[00m Dec 6 05:20:10 localhost ovn_controller[154851]: 2025-12-06T10:20:10Z|00404|binding|INFO|Removing lport 225f6418-78e0-4a61-a073-a03b711b3e97 ovn-installed in OVS Dec 6 05:20:10 localhost ovn_metadata_agent[160504]: 2025-12-06 10:20:10.413 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-25c4a3e3-dd82-4090-9ea0-aa2af92e22bf', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-25c4a3e3-dd82-4090-9ea0-aa2af92e22bf', 
'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fa76bcfc789b4e53acf344cd0b1cd7c5', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c448e0c7-883f-4055-a342-20d4d6819f0c, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=225f6418-78e0-4a61-a073-a03b711b3e97) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:20:10 localhost ovn_metadata_agent[160504]: 2025-12-06 10:20:10.415 160509 INFO neutron.agent.ovn.metadata.agent [-] Port 225f6418-78e0-4a61-a073-a03b711b3e97 in datapath 25c4a3e3-dd82-4090-9ea0-aa2af92e22bf unbound from our chassis#033[00m Dec 6 05:20:10 localhost nova_compute[282193]: 2025-12-06 10:20:10.416 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:20:10 localhost ovn_metadata_agent[160504]: 2025-12-06 10:20:10.418 160509 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 25c4a3e3-dd82-4090-9ea0-aa2af92e22bf, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 6 05:20:10 localhost ovn_metadata_agent[160504]: 2025-12-06 10:20:10.419 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[4745e0e8-09f8-4275-84ee-d2d1ad07670d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:20:10 localhost neutron_sriov_agent[256690]: 2025-12-06 10:20:10.428 2 INFO neutron.agent.securitygroups_rpc [None req-e7d31636-8439-4e6f-9785-a953cb0386af 260dfc8941214c308c05293af65bdae9 24086b701d6b4d4081d2e63578d18d24 - - default default] Security group member updated 
['ea587027-2c02-4165-a90f-98eaf0ce1ddb']#033[00m Dec 6 05:20:10 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e164 e164: 6 total, 6 up, 6 in Dec 6 05:20:10 localhost dnsmasq[328512]: read /var/lib/neutron/dhcp/b2c47b1f-f8cf-41da-adf1-6c6404edb8e3/addn_hosts - 0 addresses Dec 6 05:20:10 localhost dnsmasq-dhcp[328512]: read /var/lib/neutron/dhcp/b2c47b1f-f8cf-41da-adf1-6c6404edb8e3/host Dec 6 05:20:10 localhost dnsmasq-dhcp[328512]: read /var/lib/neutron/dhcp/b2c47b1f-f8cf-41da-adf1-6c6404edb8e3/opts Dec 6 05:20:10 localhost podman[328654]: 2025-12-06 10:20:10.647306945 +0000 UTC m=+0.066950258 container kill b702010cfde6ddd33cba348d96e0b98a1b493114b99e87a7ebe4d35b59d0803f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b2c47b1f-f8cf-41da-adf1-6c6404edb8e3, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:20:10 localhost ovn_controller[154851]: 2025-12-06T10:20:10Z|00405|binding|INFO|Releasing lport 8502c635-ed1a-4597-9657-4577483e7713 from this chassis (sb_readonly=0) Dec 6 05:20:10 localhost ovn_controller[154851]: 2025-12-06T10:20:10Z|00406|binding|INFO|Setting lport 8502c635-ed1a-4597-9657-4577483e7713 down in Southbound Dec 6 05:20:10 localhost kernel: device tap8502c635-ed left promiscuous mode Dec 6 05:20:10 localhost nova_compute[282193]: 2025-12-06 10:20:10.859 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:20:10 localhost ovn_metadata_agent[160504]: 2025-12-06 10:20:10.864 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), 
table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-b2c47b1f-f8cf-41da-adf1-6c6404edb8e3', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b2c47b1f-f8cf-41da-adf1-6c6404edb8e3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '24086b701d6b4d4081d2e63578d18d24', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005548789.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5059c6b1-bf63-4619-b361-4c64f7e8a30d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=8502c635-ed1a-4597-9657-4577483e7713) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:20:10 localhost ovn_metadata_agent[160504]: 2025-12-06 10:20:10.866 160509 INFO neutron.agent.ovn.metadata.agent [-] Port 8502c635-ed1a-4597-9657-4577483e7713 in datapath b2c47b1f-f8cf-41da-adf1-6c6404edb8e3 unbound from our chassis#033[00m Dec 6 05:20:10 localhost ovn_metadata_agent[160504]: 2025-12-06 10:20:10.867 160509 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network b2c47b1f-f8cf-41da-adf1-6c6404edb8e3 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 6 05:20:10 localhost ovn_metadata_agent[160504]: 
2025-12-06 10:20:10.868 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[4c93f3d3-6f19-4c02-bfab-a26fb627ab21]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:20:10 localhost nova_compute[282193]: 2025-12-06 10:20:10.878 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:20:11 localhost podman[328706]: Dec 6 05:20:11 localhost podman[328706]: 2025-12-06 10:20:11.156672095 +0000 UTC m=+0.073645021 container create c4d5652f0ca83ac56260b3c6e7d7bc8fce4934ba3122d2234540336c428473c3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-25c4a3e3-dd82-4090-9ea0-aa2af92e22bf, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:20:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e. Dec 6 05:20:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094. Dec 6 05:20:11 localhost systemd[1]: Started libpod-conmon-c4d5652f0ca83ac56260b3c6e7d7bc8fce4934ba3122d2234540336c428473c3.scope. 
Dec 6 05:20:11 localhost ovn_metadata_agent[160504]: 2025-12-06 10:20:11.211 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:94:c3:b8 10.100.0.18 10.100.0.2 10.100.0.34'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28 10.100.0.34/28', 'neutron:device_id': 'ovnmeta-667a7cf2-00f8-4896-8e3d-8222fad7f397', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-667a7cf2-00f8-4896-8e3d-8222fad7f397', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f8e1c4c589749b99178bbc7c2bea3f0', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=528a9f17-509a-4c49-a9ac-4a6363f2178f, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=659e29bd-a84c-4733-b754-dbb7b70b98cc) old=Port_Binding(mac=['fa:16:3e:94:c3:b8 10.100.0.18 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-667a7cf2-00f8-4896-8e3d-8222fad7f397', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-667a7cf2-00f8-4896-8e3d-8222fad7f397', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f8e1c4c589749b99178bbc7c2bea3f0', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:20:11 localhost ovn_metadata_agent[160504]: 2025-12-06 10:20:11.214 160509 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 659e29bd-a84c-4733-b754-dbb7b70b98cc in datapath 667a7cf2-00f8-4896-8e3d-8222fad7f397 updated#033[00m Dec 6 05:20:11 localhost ovn_metadata_agent[160504]: 2025-12-06 10:20:11.217 160509 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 667a7cf2-00f8-4896-8e3d-8222fad7f397, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 6 05:20:11 localhost ovn_metadata_agent[160504]: 2025-12-06 10:20:11.219 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[a89a3d4e-5289-48a9-a1bb-32d10a0df882]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:20:11 localhost podman[328706]: 2025-12-06 10:20:11.122579493 +0000 UTC m=+0.039552409 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 6 05:20:11 localhost systemd[1]: Started libcrun container. 
Dec 6 05:20:11 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dec80bbf34eccee69cc589d9da96e60fb9d53df365ac4bfeaf1ca640c086573/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 6 05:20:11 localhost podman[328706]: 2025-12-06 10:20:11.297082967 +0000 UTC m=+0.214055863 container init c4d5652f0ca83ac56260b3c6e7d7bc8fce4934ba3122d2234540336c428473c3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-25c4a3e3-dd82-4090-9ea0-aa2af92e22bf, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2) Dec 6 05:20:11 localhost podman[328706]: 2025-12-06 10:20:11.303452193 +0000 UTC m=+0.220425099 container start c4d5652f0ca83ac56260b3c6e7d7bc8fce4934ba3122d2234540336c428473c3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-25c4a3e3-dd82-4090-9ea0-aa2af92e22bf, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125) Dec 6 05:20:11 localhost dnsmasq[328758]: started, version 2.85 cachesize 150 Dec 6 05:20:11 localhost dnsmasq[328758]: DNS service limited to local subnets Dec 6 05:20:11 localhost dnsmasq[328758]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 6 05:20:11 localhost dnsmasq[328758]: warning: no upstream servers configured Dec 
6 05:20:11 localhost dnsmasq-dhcp[328758]: DHCP, static leases only on 10.100.0.0, lease time 1d Dec 6 05:20:11 localhost dnsmasq[328758]: read /var/lib/neutron/dhcp/25c4a3e3-dd82-4090-9ea0-aa2af92e22bf/addn_hosts - 0 addresses Dec 6 05:20:11 localhost dnsmasq-dhcp[328758]: read /var/lib/neutron/dhcp/25c4a3e3-dd82-4090-9ea0-aa2af92e22bf/host Dec 6 05:20:11 localhost dnsmasq-dhcp[328758]: read /var/lib/neutron/dhcp/25c4a3e3-dd82-4090-9ea0-aa2af92e22bf/opts Dec 6 05:20:11 localhost podman[328719]: 2025-12-06 10:20:11.310104556 +0000 UTC m=+0.104625169 container health_status 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.6, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, distribution-scope=public, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, vcs-type=git) Dec 6 05:20:11 localhost podman[328719]: 2025-12-06 10:20:11.324141995 +0000 UTC m=+0.118662618 container exec_died 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, version=9.6, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, config_id=edpm, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.) Dec 6 05:20:11 localhost podman[328720]: 2025-12-06 10:20:11.285598207 +0000 UTC m=+0.085260407 container health_status bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm) Dec 6 05:20:11 localhost systemd[1]: 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e.service: Deactivated successfully. Dec 6 05:20:11 localhost podman[328720]: 2025-12-06 10:20:11.365235311 +0000 UTC m=+0.164897531 container exec_died bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, 
org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:20:11 localhost systemd[1]: bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094.service: Deactivated successfully. Dec 6 05:20:11 localhost nova_compute[282193]: 2025-12-06 10:20:11.394 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:20:11 localhost kernel: device tap225f6418-78 left promiscuous mode Dec 6 05:20:11 localhost nova_compute[282193]: 2025-12-06 10:20:11.411 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:20:11 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:11.431 263652 INFO neutron.agent.dhcp.agent [None req-ac73be23-35e0-4a50-8455-0b348b06b099 - - - - - -] DHCP configuration for ports {'1998c7e0-78ce-4456-9959-95d99e0050bc'} is completed#033[00m Dec 6 05:20:11 localhost dnsmasq[328758]: read /var/lib/neutron/dhcp/25c4a3e3-dd82-4090-9ea0-aa2af92e22bf/addn_hosts - 0 addresses Dec 6 05:20:11 localhost dnsmasq-dhcp[328758]: read /var/lib/neutron/dhcp/25c4a3e3-dd82-4090-9ea0-aa2af92e22bf/host Dec 6 05:20:11 localhost dnsmasq-dhcp[328758]: read /var/lib/neutron/dhcp/25c4a3e3-dd82-4090-9ea0-aa2af92e22bf/opts Dec 6 05:20:11 localhost podman[328780]: 2025-12-06 10:20:11.604410532 +0000 UTC m=+0.047827453 container kill c4d5652f0ca83ac56260b3c6e7d7bc8fce4934ba3122d2234540336c428473c3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-25c4a3e3-dd82-4090-9ea0-aa2af92e22bf, tcib_managed=true, 
org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:20:11 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:11.634 263652 ERROR neutron.agent.dhcp.agent [None req-dc16eb19-be2b-4ac1-b923-1d1738c06083 - - - - - -] Unable to reload_allocations dhcp for 25c4a3e3-dd82-4090-9ea0-aa2af92e22bf.: neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tap225f6418-78 not found in namespace qdhcp-25c4a3e3-dd82-4090-9ea0-aa2af92e22bf. Dec 6 05:20:11 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:11.634 263652 ERROR neutron.agent.dhcp.agent Traceback (most recent call last): Dec 6 05:20:11 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:11.634 263652 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/dhcp/agent.py", line 264, in _call_driver Dec 6 05:20:11 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:11.634 263652 ERROR neutron.agent.dhcp.agent rv = getattr(driver, action)(**action_kwargs) Dec 6 05:20:11 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:11.634 263652 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 673, in reload_allocations Dec 6 05:20:11 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:11.634 263652 ERROR neutron.agent.dhcp.agent self.device_manager.update(self.network, self.interface_name) Dec 6 05:20:11 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:11.634 263652 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1899, in update Dec 6 05:20:11 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:11.634 263652 ERROR neutron.agent.dhcp.agent self._set_default_route(network, 
device_name) Dec 6 05:20:11 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:11.634 263652 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1610, in _set_default_route Dec 6 05:20:11 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:11.634 263652 ERROR neutron.agent.dhcp.agent self._set_default_route_ip_version(network, device_name, Dec 6 05:20:11 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:11.634 263652 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1539, in _set_default_route_ip_version Dec 6 05:20:11 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:11.634 263652 ERROR neutron.agent.dhcp.agent gateway = device.route.get_gateway(ip_version=ip_version) Dec 6 05:20:11 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:11.634 263652 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 671, in get_gateway Dec 6 05:20:11 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:11.634 263652 ERROR neutron.agent.dhcp.agent routes = self.list_routes(ip_version, scope=scope, table=table) Dec 6 05:20:11 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:11.634 263652 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 656, in list_routes Dec 6 05:20:11 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:11.634 263652 ERROR neutron.agent.dhcp.agent return list_ip_routes(self._parent.namespace, ip_version, scope=scope, Dec 6 05:20:11 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:11.634 263652 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 1611, in list_ip_routes Dec 6 05:20:11 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:11.634 263652 ERROR neutron.agent.dhcp.agent routes = privileged.list_ip_routes(namespace, ip_version, device=device, Dec 6 
05:20:11 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:11.634 263652 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 333, in wrapped_f Dec 6 05:20:11 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:11.634 263652 ERROR neutron.agent.dhcp.agent return self(f, *args, **kw) Dec 6 05:20:11 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:11.634 263652 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 423, in __call__ Dec 6 05:20:11 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:11.634 263652 ERROR neutron.agent.dhcp.agent do = self.iter(retry_state=retry_state) Dec 6 05:20:11 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:11.634 263652 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 360, in iter Dec 6 05:20:11 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:11.634 263652 ERROR neutron.agent.dhcp.agent return fut.result() Dec 6 05:20:11 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:11.634 263652 ERROR neutron.agent.dhcp.agent File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 439, in result Dec 6 05:20:11 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:11.634 263652 ERROR neutron.agent.dhcp.agent return self.__get_result() Dec 6 05:20:11 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:11.634 263652 ERROR neutron.agent.dhcp.agent File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 391, in __get_result Dec 6 05:20:11 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:11.634 263652 ERROR neutron.agent.dhcp.agent raise self._exception Dec 6 05:20:11 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:11.634 263652 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 426, in __call__ Dec 6 05:20:11 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:11.634 263652 ERROR 
neutron.agent.dhcp.agent result = fn(*args, **kwargs) Dec 6 05:20:11 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:11.634 263652 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/oslo_privsep/priv_context.py", line 271, in _wrap Dec 6 05:20:11 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:11.634 263652 ERROR neutron.agent.dhcp.agent return self.channel.remote_call(name, args, kwargs, Dec 6 05:20:11 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:11.634 263652 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py", line 215, in remote_call Dec 6 05:20:11 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:11.634 263652 ERROR neutron.agent.dhcp.agent raise exc_type(*result[2]) Dec 6 05:20:11 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:11.634 263652 ERROR neutron.agent.dhcp.agent neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tap225f6418-78 not found in namespace qdhcp-25c4a3e3-dd82-4090-9ea0-aa2af92e22bf. 
Dec 6 05:20:11 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:11.634 263652 ERROR neutron.agent.dhcp.agent #033[00m Dec 6 05:20:11 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:11.638 263652 INFO neutron.agent.dhcp.agent [None req-52fc0508-2b36-4e56-82d0-a682e93b4cc9 - - - - - -] Synchronizing state#033[00m Dec 6 05:20:11 localhost ovn_controller[154851]: 2025-12-06T10:20:11Z|00407|binding|INFO|Releasing lport 4fb81ffd-e198-4628-9bd0-0c0f0c89c33a from this chassis (sb_readonly=0) Dec 6 05:20:11 localhost nova_compute[282193]: 2025-12-06 10:20:11.728 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:20:11 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:11.858 263652 INFO neutron.agent.dhcp.agent [None req-10e72291-99ff-473a-8e06-8c56fde98d4a - - - - - -] All active networks have been fetched through RPC.#033[00m Dec 6 05:20:12 localhost dnsmasq[328758]: exiting on receipt of SIGTERM Dec 6 05:20:12 localhost podman[328812]: 2025-12-06 10:20:12.054033296 +0000 UTC m=+0.061400297 container kill c4d5652f0ca83ac56260b3c6e7d7bc8fce4934ba3122d2234540336c428473c3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-25c4a3e3-dd82-4090-9ea0-aa2af92e22bf, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Dec 6 05:20:12 localhost systemd[1]: libpod-c4d5652f0ca83ac56260b3c6e7d7bc8fce4934ba3122d2234540336c428473c3.scope: Deactivated successfully. 
Dec 6 05:20:12 localhost podman[328826]: 2025-12-06 10:20:12.130748111 +0000 UTC m=+0.057619252 container died c4d5652f0ca83ac56260b3c6e7d7bc8fce4934ba3122d2234540336c428473c3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-25c4a3e3-dd82-4090-9ea0-aa2af92e22bf, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:20:12 localhost podman[328826]: 2025-12-06 10:20:12.158589332 +0000 UTC m=+0.085460453 container cleanup c4d5652f0ca83ac56260b3c6e7d7bc8fce4934ba3122d2234540336c428473c3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-25c4a3e3-dd82-4090-9ea0-aa2af92e22bf, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:20:12 localhost systemd[1]: var-lib-containers-storage-overlay-0dec80bbf34eccee69cc589d9da96e60fb9d53df365ac4bfeaf1ca640c086573-merged.mount: Deactivated successfully. Dec 6 05:20:12 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c4d5652f0ca83ac56260b3c6e7d7bc8fce4934ba3122d2234540336c428473c3-userdata-shm.mount: Deactivated successfully. Dec 6 05:20:12 localhost systemd[1]: libpod-conmon-c4d5652f0ca83ac56260b3c6e7d7bc8fce4934ba3122d2234540336c428473c3.scope: Deactivated successfully. 
Dec 6 05:20:12 localhost podman[328827]: 2025-12-06 10:20:12.215596735 +0000 UTC m=+0.133925965 container remove c4d5652f0ca83ac56260b3c6e7d7bc8fce4934ba3122d2234540336c428473c3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-25c4a3e3-dd82-4090-9ea0-aa2af92e22bf, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0) Dec 6 05:20:12 localhost neutron_sriov_agent[256690]: 2025-12-06 10:20:12.220 2 INFO neutron.agent.securitygroups_rpc [None req-6d93ce58-a6ee-4351-b70e-4269edfdd4c8 b9f4d254b45d482eab7cfb178c231d9a 5f8e1c4c589749b99178bbc7c2bea3f0 - - default default] Security group member updated ['81acd248-ff6c-407a-a3e7-57e59597aa28', 'c05cd5e8-c5d4-4d05-80ba-b6a4af8b3ba8', '1d275e53-d6a2-4014-8325-c04642bc5279']#033[00m Dec 6 05:20:12 localhost systemd[1]: run-netns-qdhcp\x2d25c4a3e3\x2ddd82\x2d4090\x2d9ea0\x2daa2af92e22bf.mount: Deactivated successfully. 
Dec 6 05:20:12 localhost ovn_metadata_agent[160504]: 2025-12-06 10:20:12.249 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:6c:02', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:a8:2f:0c:cb:a1'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:20:12 localhost nova_compute[282193]: 2025-12-06 10:20:12.250 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:20:12 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:12.251 263652 INFO neutron.agent.dhcp.agent [None req-5a5595b0-a0f8-44d7-9627-4542518c5211 - - - - - -] Synchronizing state complete#033[00m Dec 6 05:20:12 localhost ovn_metadata_agent[160504]: 2025-12-06 10:20:12.252 160509 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Dec 6 05:20:12 localhost dnsmasq[328512]: exiting on receipt of SIGTERM Dec 6 05:20:12 localhost podman[328871]: 2025-12-06 10:20:12.523543488 +0000 UTC m=+0.066273187 container kill b702010cfde6ddd33cba348d96e0b98a1b493114b99e87a7ebe4d35b59d0803f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b2c47b1f-f8cf-41da-adf1-6c6404edb8e3, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Dec 6 05:20:12 localhost systemd[1]: libpod-b702010cfde6ddd33cba348d96e0b98a1b493114b99e87a7ebe4d35b59d0803f.scope: Deactivated successfully. Dec 6 05:20:12 localhost podman[328884]: 2025-12-06 10:20:12.579018904 +0000 UTC m=+0.046432251 container died b702010cfde6ddd33cba348d96e0b98a1b493114b99e87a7ebe4d35b59d0803f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b2c47b1f-f8cf-41da-adf1-6c6404edb8e3, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Dec 6 05:20:12 localhost podman[328884]: 2025-12-06 10:20:12.605926936 +0000 UTC m=+0.073340263 container cleanup b702010cfde6ddd33cba348d96e0b98a1b493114b99e87a7ebe4d35b59d0803f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b2c47b1f-f8cf-41da-adf1-6c6404edb8e3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Dec 6 05:20:12 localhost systemd[1]: libpod-conmon-b702010cfde6ddd33cba348d96e0b98a1b493114b99e87a7ebe4d35b59d0803f.scope: Deactivated successfully. 
Dec 6 05:20:12 localhost podman[328891]: 2025-12-06 10:20:12.679543887 +0000 UTC m=+0.129643925 container remove b702010cfde6ddd33cba348d96e0b98a1b493114b99e87a7ebe4d35b59d0803f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b2c47b1f-f8cf-41da-adf1-6c6404edb8e3, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125) Dec 6 05:20:12 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:12.730 263652 INFO neutron.agent.dhcp.agent [None req-95c11c3d-bd4a-43a9-8a7d-7e4eb6958961 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:20:12 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:12.731 263652 INFO neutron.agent.dhcp.agent [None req-95c11c3d-bd4a-43a9-8a7d-7e4eb6958961 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:20:12 localhost neutron_sriov_agent[256690]: 2025-12-06 10:20:12.871 2 INFO neutron.agent.securitygroups_rpc [None req-5ab424f2-09a3-4942-a99f-ad10877e0761 b9f4d254b45d482eab7cfb178c231d9a 5f8e1c4c589749b99178bbc7c2bea3f0 - - default default] Security group member updated ['81acd248-ff6c-407a-a3e7-57e59597aa28', '1d275e53-d6a2-4014-8325-c04642bc5279']#033[00m Dec 6 05:20:12 localhost ceph-osd[32665]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Dec 6 05:20:12 localhost ceph-osd[32665]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 8400.2 total, 600.0 interval#012Cumulative writes: 10K writes, 39K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.00 MB/s#012Cumulative WAL: 10K writes, 3013 syncs, 3.41 writes per sync, written: 0.03 GB, 0.00 
MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5151 writes, 16K keys, 5151 commit groups, 1.0 writes per commit group, ingest: 14.16 MB, 0.02 MB/s#012Interval WAL: 5151 writes, 2234 syncs, 2.31 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Dec 6 05:20:13 localhost systemd[1]: var-lib-containers-storage-overlay-3566bd4719ade8695fae6a6e305e235adfa409a11feb22336d90853652527834-merged.mount: Deactivated successfully. Dec 6 05:20:13 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b702010cfde6ddd33cba348d96e0b98a1b493114b99e87a7ebe4d35b59d0803f-userdata-shm.mount: Deactivated successfully. Dec 6 05:20:13 localhost systemd[1]: run-netns-qdhcp\x2db2c47b1f\x2df8cf\x2d41da\x2dadf1\x2d6c6404edb8e3.mount: Deactivated successfully. Dec 6 05:20:13 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e165 e165: 6 total, 6 up, 6 in Dec 6 05:20:13 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:20:13 localhost nova_compute[282193]: 2025-12-06 10:20:13.410 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:20:14 localhost nova_compute[282193]: 2025-12-06 10:20:14.605 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:20:15 localhost ovn_metadata_agent[160504]: 2025-12-06 10:20:15.254 160509 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b142a5ef-fbed-4e92-aa78-e3ad080c6370, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 6 05:20:15 
localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e166 e166: 6 total, 6 up, 6 in Dec 6 05:20:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6. Dec 6 05:20:15 localhost podman[328915]: 2025-12-06 10:20:15.929546904 +0000 UTC m=+0.089639151 container health_status 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, 
managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 6 05:20:15 localhost podman[328915]: 2025-12-06 10:20:15.944239353 +0000 UTC m=+0.104331600 container exec_died 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=multipathd, io.buildah.version=1.41.3) Dec 6 05:20:15 localhost systemd[1]: 
44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6.service: Deactivated successfully. Dec 6 05:20:16 localhost openstack_network_exporter[243110]: ERROR 10:20:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:20:16 localhost openstack_network_exporter[243110]: ERROR 10:20:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:20:16 localhost openstack_network_exporter[243110]: ERROR 10:20:16 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 6 05:20:16 localhost openstack_network_exporter[243110]: ERROR 10:20:16 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 6 05:20:16 localhost openstack_network_exporter[243110]: Dec 6 05:20:16 localhost openstack_network_exporter[243110]: ERROR 10:20:16 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 6 05:20:16 localhost openstack_network_exporter[243110]: Dec 6 05:20:16 localhost neutron_sriov_agent[256690]: 2025-12-06 10:20:16.701 2 INFO neutron.agent.securitygroups_rpc [None req-9395d16c-29ac-47bb-b03d-1c577d966648 b9f4d254b45d482eab7cfb178c231d9a 5f8e1c4c589749b99178bbc7c2bea3f0 - - default default] Security group member updated ['cd56abe4-204c-4363-ad64-0a6840260727']#033[00m Dec 6 05:20:17 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e167 e167: 6 total, 6 up, 6 in Dec 6 05:20:18 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:20:18 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:18.382 263652 INFO neutron.agent.linux.ip_lib [None req-9d5a4069-5851-4fe0-bc47-a6ba9dbe8444 - - - - - -] Device tap4df77c93-33 cannot be used as it has no MAC address#033[00m Dec 6 05:20:18 localhost nova_compute[282193]: 2025-12-06 10:20:18.405 
282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:20:18 localhost kernel: device tap4df77c93-33 entered promiscuous mode Dec 6 05:20:18 localhost NetworkManager[5973]: [1765016418.4130] manager: (tap4df77c93-33): new Generic device (/org/freedesktop/NetworkManager/Devices/67) Dec 6 05:20:18 localhost nova_compute[282193]: 2025-12-06 10:20:18.416 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:20:18 localhost systemd-udevd[328944]: Network interface NamePolicy= disabled on kernel command line. Dec 6 05:20:18 localhost nova_compute[282193]: 2025-12-06 10:20:18.424 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:20:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538. 
Dec 6 05:20:18 localhost nova_compute[282193]: 2025-12-06 10:20:18.449 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:20:18 localhost nova_compute[282193]: 2025-12-06 10:20:18.487 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:20:18 localhost nova_compute[282193]: 2025-12-06 10:20:18.510 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:20:18 localhost podman[328947]: 2025-12-06 10:20:18.536260746 +0000 UTC m=+0.078352836 container health_status d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', 
'/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 6 05:20:18 localhost systemd[1]: tmp-crun.h55rUG.mount: Deactivated successfully. Dec 6 05:20:18 localhost podman[328947]: 2025-12-06 10:20:18.546159719 +0000 UTC m=+0.088251859 container exec_died d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 6 05:20:18 localhost systemd[1]: d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538.service: Deactivated successfully. 
Dec 6 05:20:19 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e168 e168: 6 total, 6 up, 6 in Dec 6 05:20:19 localhost podman[329023]: Dec 6 05:20:19 localhost podman[329023]: 2025-12-06 10:20:19.325257915 +0000 UTC m=+0.081795411 container create 70ddb52186bceeb98bb0bc6042403e0e8607b5fa5ef68f5b2c3a3df973255395 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-080f9d33-1223-44cc-b553-017f3a017f1d, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:20:19 localhost systemd[1]: Started libpod-conmon-70ddb52186bceeb98bb0bc6042403e0e8607b5fa5ef68f5b2c3a3df973255395.scope. Dec 6 05:20:19 localhost systemd[1]: Started libcrun container. Dec 6 05:20:19 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/439f5a96e55010ba06d9a70e7060d6f3f7af4a09fb294473de0406a522f7e8e8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 6 05:20:19 localhost podman[329023]: 2025-12-06 10:20:19.382425352 +0000 UTC m=+0.138962858 container init 70ddb52186bceeb98bb0bc6042403e0e8607b5fa5ef68f5b2c3a3df973255395 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-080f9d33-1223-44cc-b553-017f3a017f1d, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3) Dec 6 05:20:19 localhost podman[329023]: 2025-12-06 10:20:19.289274625 +0000 UTC m=+0.045812121 image 
pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 6 05:20:19 localhost podman[329023]: 2025-12-06 10:20:19.389999334 +0000 UTC m=+0.146536830 container start 70ddb52186bceeb98bb0bc6042403e0e8607b5fa5ef68f5b2c3a3df973255395 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-080f9d33-1223-44cc-b553-017f3a017f1d, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Dec 6 05:20:19 localhost dnsmasq[329041]: started, version 2.85 cachesize 150 Dec 6 05:20:19 localhost dnsmasq[329041]: DNS service limited to local subnets Dec 6 05:20:19 localhost dnsmasq[329041]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 6 05:20:19 localhost dnsmasq[329041]: warning: no upstream servers configured Dec 6 05:20:19 localhost dnsmasq-dhcp[329041]: DHCP, static leases only on 10.100.0.0, lease time 1d Dec 6 05:20:19 localhost dnsmasq[329041]: read /var/lib/neutron/dhcp/080f9d33-1223-44cc-b553-017f3a017f1d/addn_hosts - 0 addresses Dec 6 05:20:19 localhost dnsmasq-dhcp[329041]: read /var/lib/neutron/dhcp/080f9d33-1223-44cc-b553-017f3a017f1d/host Dec 6 05:20:19 localhost dnsmasq-dhcp[329041]: read /var/lib/neutron/dhcp/080f9d33-1223-44cc-b553-017f3a017f1d/opts Dec 6 05:20:19 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:19.531 263652 INFO neutron.agent.dhcp.agent [None req-e1f0a4c4-a881-40d6-a9c3-786641c231ec - - - - - -] DHCP configuration for ports {'ae9783fa-c0a8-4028-9d3b-28cdd45ffbb6'} is completed#033[00m Dec 6 05:20:19 localhost dnsmasq[329041]: exiting on receipt of SIGTERM Dec 6 
05:20:19 localhost podman[329059]: 2025-12-06 10:20:19.592468673 +0000 UTC m=+0.038972972 container kill 70ddb52186bceeb98bb0bc6042403e0e8607b5fa5ef68f5b2c3a3df973255395 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-080f9d33-1223-44cc-b553-017f3a017f1d, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:20:19 localhost systemd[1]: libpod-70ddb52186bceeb98bb0bc6042403e0e8607b5fa5ef68f5b2c3a3df973255395.scope: Deactivated successfully. Dec 6 05:20:19 localhost podman[329072]: 2025-12-06 10:20:19.63721376 +0000 UTC m=+0.034932538 container died 70ddb52186bceeb98bb0bc6042403e0e8607b5fa5ef68f5b2c3a3df973255395 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-080f9d33-1223-44cc-b553-017f3a017f1d, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:20:19 localhost nova_compute[282193]: 2025-12-06 10:20:19.642 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:20:19 localhost podman[329072]: 2025-12-06 10:20:19.667696902 +0000 UTC m=+0.065415650 container cleanup 70ddb52186bceeb98bb0bc6042403e0e8607b5fa5ef68f5b2c3a3df973255395 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-080f9d33-1223-44cc-b553-017f3a017f1d, 
io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Dec 6 05:20:19 localhost systemd[1]: libpod-conmon-70ddb52186bceeb98bb0bc6042403e0e8607b5fa5ef68f5b2c3a3df973255395.scope: Deactivated successfully. Dec 6 05:20:19 localhost podman[329079]: 2025-12-06 10:20:19.687962423 +0000 UTC m=+0.075081687 container remove 70ddb52186bceeb98bb0bc6042403e0e8607b5fa5ef68f5b2c3a3df973255395 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-080f9d33-1223-44cc-b553-017f3a017f1d, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:20:19 localhost nova_compute[282193]: 2025-12-06 10:20:19.697 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:20:19 localhost kernel: device tap4df77c93-33 left promiscuous mode Dec 6 05:20:19 localhost nova_compute[282193]: 2025-12-06 10:20:19.724 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:20:19 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:19.759 263652 INFO neutron.agent.dhcp.agent [None req-d28f9adb-6d87-44f3-8ef6-5c1777b22602 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:20:19 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:19.759 263652 INFO neutron.agent.dhcp.agent 
[None req-d28f9adb-6d87-44f3-8ef6-5c1777b22602 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:20:20 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:20.199 263652 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:20:19Z, description=, device_id=f5c53eaf-931c-42d6-8b97-9823f000abba, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=2042a880-40b6-4791-9487-9eabb2033780, ip_allocation=immediate, mac_address=fa:16:3e:46:b1:83, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T08:43:55Z, description=, dns_domain=, id=8e238f59-5792-4ff4-95af-f993c8e9e14f, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=3d603431c0bb4967bafc7a0aa6108bfe, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['be7ff830-a3e2-49cd-a05f-63bc528d6d1c'], tags=[], tenant_id=3d603431c0bb4967bafc7a0aa6108bfe, updated_at=2025-12-06T08:44:01Z, vlan_transparent=None, network_id=8e238f59-5792-4ff4-95af-f993c8e9e14f, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2383, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-06T10:20:19Z on network 8e238f59-5792-4ff4-95af-f993c8e9e14f#033[00m Dec 6 05:20:20 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e169 e169: 6 total, 6 up, 6 in Dec 6 05:20:20 localhost systemd[1]: 
var-lib-containers-storage-overlay-439f5a96e55010ba06d9a70e7060d6f3f7af4a09fb294473de0406a522f7e8e8-merged.mount: Deactivated successfully. Dec 6 05:20:20 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-70ddb52186bceeb98bb0bc6042403e0e8607b5fa5ef68f5b2c3a3df973255395-userdata-shm.mount: Deactivated successfully. Dec 6 05:20:20 localhost systemd[1]: run-netns-qdhcp\x2d080f9d33\x2d1223\x2d44cc\x2db553\x2d017f3a017f1d.mount: Deactivated successfully. Dec 6 05:20:20 localhost dnsmasq[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 2 addresses Dec 6 05:20:20 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:20:20 localhost podman[329120]: 2025-12-06 10:20:20.456900327 +0000 UTC m=+0.072405474 container kill dd70518caed84fde347015325d6b35481123abc4a39df99f8bc9422c2468322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true) Dec 6 05:20:20 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:20:20 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:20.686 263652 INFO neutron.agent.dhcp.agent [None req-84d092d6-73e7-4993-89ec-d6ca5f2f8089 - - - - - -] DHCP configuration for ports {'2042a880-40b6-4791-9487-9eabb2033780'} is completed#033[00m Dec 6 05:20:21 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e170 e170: 6 total, 6 up, 6 in Dec 6 05:20:22 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e171 e171: 6 total, 6 up, 6 in Dec 6 05:20:22 localhost 
systemd[1]: Started /usr/bin/podman healthcheck run 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5. Dec 6 05:20:22 localhost systemd[1]: tmp-crun.PpAEGo.mount: Deactivated successfully. Dec 6 05:20:22 localhost podman[329142]: 2025-12-06 10:20:22.929829722 +0000 UTC m=+0.085605248 container health_status 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true) Dec 6 05:20:22 localhost podman[329142]: 2025-12-06 10:20:22.968394051 +0000 UTC m=+0.124169617 container exec_died 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, 
org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true) Dec 6 05:20:22 localhost systemd[1]: 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.service: Deactivated successfully. 
Dec 6 05:20:22 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:22.996 263652 INFO neutron.agent.linux.ip_lib [None req-60cf68e3-5e8c-4911-9c07-5a4b824aa225 - - - - - -] Device tap7c8805e4-f0 cannot be used as it has no MAC address#033[00m Dec 6 05:20:23 localhost nova_compute[282193]: 2025-12-06 10:20:23.023 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:20:23 localhost kernel: device tap7c8805e4-f0 entered promiscuous mode Dec 6 05:20:23 localhost NetworkManager[5973]: [1765016423.0310] manager: (tap7c8805e4-f0): new Generic device (/org/freedesktop/NetworkManager/Devices/68) Dec 6 05:20:23 localhost ovn_controller[154851]: 2025-12-06T10:20:23Z|00408|binding|INFO|Claiming lport 7c8805e4-f06e-4359-b7b5-effc8da5aad8 for this chassis. Dec 6 05:20:23 localhost nova_compute[282193]: 2025-12-06 10:20:23.033 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:20:23 localhost ovn_controller[154851]: 2025-12-06T10:20:23Z|00409|binding|INFO|7c8805e4-f06e-4359-b7b5-effc8da5aad8: Claiming unknown Dec 6 05:20:23 localhost systemd-udevd[329176]: Network interface NamePolicy= disabled on kernel command line. 
Dec 6 05:20:23 localhost ovn_metadata_agent[160504]: 2025-12-06 10:20:23.044 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-941344ac-1e9e-4ba5-9592-4a1e73ea58e6', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-941344ac-1e9e-4ba5-9592-4a1e73ea58e6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f00ab5f7d934f62991ed1e7e798e47e', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9f2c5640-07d2-4d8e-95f6-82e2d2dfdf54, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=7c8805e4-f06e-4359-b7b5-effc8da5aad8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:20:23 localhost ovn_metadata_agent[160504]: 2025-12-06 10:20:23.046 160509 INFO neutron.agent.ovn.metadata.agent [-] Port 7c8805e4-f06e-4359-b7b5-effc8da5aad8 in datapath 941344ac-1e9e-4ba5-9592-4a1e73ea58e6 bound to our chassis#033[00m Dec 6 05:20:23 localhost ovn_metadata_agent[160504]: 2025-12-06 10:20:23.047 160509 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 941344ac-1e9e-4ba5-9592-4a1e73ea58e6 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 6 05:20:23 localhost ovn_metadata_agent[160504]: 2025-12-06 10:20:23.047 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[b055b9a8-5240-4166-b99e-829f96c39a2d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:20:23 localhost journal[230404]: ethtool ioctl error on tap7c8805e4-f0: No such device Dec 6 05:20:23 localhost ovn_controller[154851]: 2025-12-06T10:20:23Z|00410|binding|INFO|Setting lport 7c8805e4-f06e-4359-b7b5-effc8da5aad8 ovn-installed in OVS Dec 6 05:20:23 localhost ovn_controller[154851]: 2025-12-06T10:20:23Z|00411|binding|INFO|Setting lport 7c8805e4-f06e-4359-b7b5-effc8da5aad8 up in Southbound Dec 6 05:20:23 localhost nova_compute[282193]: 2025-12-06 10:20:23.075 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:20:23 localhost journal[230404]: ethtool ioctl error on tap7c8805e4-f0: No such device Dec 6 05:20:23 localhost journal[230404]: ethtool ioctl error on tap7c8805e4-f0: No such device Dec 6 05:20:23 localhost journal[230404]: ethtool ioctl error on tap7c8805e4-f0: No such device Dec 6 05:20:23 localhost journal[230404]: ethtool ioctl error on tap7c8805e4-f0: No such device Dec 6 05:20:23 localhost journal[230404]: ethtool ioctl error on tap7c8805e4-f0: No such device Dec 6 05:20:23 localhost journal[230404]: ethtool ioctl error on tap7c8805e4-f0: No such device Dec 6 05:20:23 localhost journal[230404]: ethtool ioctl error on tap7c8805e4-f0: No such device Dec 6 05:20:23 localhost nova_compute[282193]: 2025-12-06 10:20:23.110 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:20:23 localhost nova_compute[282193]: 2025-12-06 10:20:23.132 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:20:23 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:20:23 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e172 e172: 6 total, 6 up, 6 in Dec 6 05:20:23 localhost nova_compute[282193]: 2025-12-06 10:20:23.450 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:20:23 localhost neutron_sriov_agent[256690]: 2025-12-06 10:20:23.709 2 INFO neutron.agent.securitygroups_rpc [None req-048ea5a4-2d0a-4365-a398-a917ab48f027 95268a68b5c84162ba789100555874fb 7787060a7af94f168805e73d06841337 - - default default] Security group rule updated ['eb426258-160f-4f74-a9d2-50e476134e75']#033[00m Dec 6 05:20:23 localhost ovn_controller[154851]: 2025-12-06T10:20:23Z|00412|binding|INFO|Removing iface tap7c8805e4-f0 ovn-installed in OVS Dec 6 05:20:23 localhost ovn_controller[154851]: 2025-12-06T10:20:23Z|00413|binding|INFO|Removing lport 7c8805e4-f06e-4359-b7b5-effc8da5aad8 ovn-installed in OVS Dec 6 05:20:23 localhost ovn_metadata_agent[160504]: 2025-12-06 10:20:23.797 160509 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 4c677c22-7fad-48c9-83cf-9d6b14ff5d3f with type ""#033[00m Dec 6 05:20:23 localhost nova_compute[282193]: 2025-12-06 10:20:23.798 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:20:23 localhost ovn_metadata_agent[160504]: 2025-12-06 10:20:23.798 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], 
options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-941344ac-1e9e-4ba5-9592-4a1e73ea58e6', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-941344ac-1e9e-4ba5-9592-4a1e73ea58e6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f00ab5f7d934f62991ed1e7e798e47e', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005548789.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9f2c5640-07d2-4d8e-95f6-82e2d2dfdf54, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=7c8805e4-f06e-4359-b7b5-effc8da5aad8) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:20:23 localhost nova_compute[282193]: 2025-12-06 10:20:23.801 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:20:23 localhost ovn_metadata_agent[160504]: 2025-12-06 10:20:23.802 160509 INFO neutron.agent.ovn.metadata.agent [-] Port 7c8805e4-f06e-4359-b7b5-effc8da5aad8 in datapath 941344ac-1e9e-4ba5-9592-4a1e73ea58e6 unbound from our chassis#033[00m Dec 6 05:20:23 localhost ovn_metadata_agent[160504]: 2025-12-06 10:20:23.803 160509 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 941344ac-1e9e-4ba5-9592-4a1e73ea58e6 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 6 05:20:23 localhost ovn_metadata_agent[160504]: 2025-12-06 10:20:23.804 
160674 DEBUG oslo.privsep.daemon [-] privsep: reply[90ed9ad9-08d1-4a14-adca-a4302cf751df]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:20:23 localhost podman[329245]: Dec 6 05:20:23 localhost podman[329245]: 2025-12-06 10:20:23.916845603 +0000 UTC m=+0.084213775 container create 5652484e04839d706157bf2db8d0c37ce6784ac6075c24bee3fa0a80f4ff462b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-941344ac-1e9e-4ba5-9592-4a1e73ea58e6, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:20:23 localhost podman[241090]: time="2025-12-06T10:20:23Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 6 05:20:23 localhost podman[329245]: 2025-12-06 10:20:23.886485605 +0000 UTC m=+0.053853807 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 6 05:20:23 localhost neutron_sriov_agent[256690]: 2025-12-06 10:20:23.991 2 INFO neutron.agent.securitygroups_rpc [None req-6df8f9bf-dce2-42c5-9279-2397b4b4c0d3 95268a68b5c84162ba789100555874fb 7787060a7af94f168805e73d06841337 - - default default] Security group rule updated ['eb426258-160f-4f74-a9d2-50e476134e75']#033[00m Dec 6 05:20:23 localhost ovn_controller[154851]: 2025-12-06T10:20:23Z|00414|binding|INFO|Releasing lport 4fb81ffd-e198-4628-9bd0-0c0f0c89c33a from this chassis (sb_readonly=0) Dec 6 05:20:24 localhost nova_compute[282193]: 2025-12-06 10:20:24.014 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:20:24 localhost systemd[1]: Started 
libpod-conmon-5652484e04839d706157bf2db8d0c37ce6784ac6075c24bee3fa0a80f4ff462b.scope. Dec 6 05:20:24 localhost systemd[1]: Started libcrun container. Dec 6 05:20:24 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6420eab620073affa12b038addfba3666aa3c4527d56e42cdcfa58ae57886adb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 6 05:20:24 localhost podman[329245]: 2025-12-06 10:20:24.048956951 +0000 UTC m=+0.216325153 container init 5652484e04839d706157bf2db8d0c37ce6784ac6075c24bee3fa0a80f4ff462b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-941344ac-1e9e-4ba5-9592-4a1e73ea58e6, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3) Dec 6 05:20:24 localhost podman[329245]: 2025-12-06 10:20:24.056031788 +0000 UTC m=+0.223399990 container start 5652484e04839d706157bf2db8d0c37ce6784ac6075c24bee3fa0a80f4ff462b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-941344ac-1e9e-4ba5-9592-4a1e73ea58e6, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125) Dec 6 05:20:24 localhost dnsmasq[329286]: started, version 2.85 cachesize 150 Dec 6 05:20:24 localhost dnsmasq[329286]: DNS service limited to local subnets Dec 6 05:20:24 localhost dnsmasq[329286]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP 
no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 6 05:20:24 localhost dnsmasq[329286]: warning: no upstream servers configured Dec 6 05:20:24 localhost podman[241090]: @ - - [06/Dec/2025:10:20:23 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 159746 "" "Go-http-client/1.1" Dec 6 05:20:24 localhost dnsmasq-dhcp[329286]: DHCPv6, static leases only on 2001:db8::, lease time 1d Dec 6 05:20:24 localhost dnsmasq[329286]: read /var/lib/neutron/dhcp/941344ac-1e9e-4ba5-9592-4a1e73ea58e6/addn_hosts - 0 addresses Dec 6 05:20:24 localhost dnsmasq-dhcp[329286]: read /var/lib/neutron/dhcp/941344ac-1e9e-4ba5-9592-4a1e73ea58e6/host Dec 6 05:20:24 localhost dnsmasq-dhcp[329286]: read /var/lib/neutron/dhcp/941344ac-1e9e-4ba5-9592-4a1e73ea58e6/opts Dec 6 05:20:24 localhost podman[241090]: @ - - [06/Dec/2025:10:20:24 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 20222 "" "Go-http-client/1.1" Dec 6 05:20:24 localhost dnsmasq[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 1 addresses Dec 6 05:20:24 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:20:24 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:20:24 localhost podman[329279]: 2025-12-06 10:20:24.127243464 +0000 UTC m=+0.086553637 container kill dd70518caed84fde347015325d6b35481123abc4a39df99f8bc9422c2468322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, 
maintainer=OpenStack Kubernetes Operator team, tcib_managed=true) Dec 6 05:20:24 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:24.249 263652 INFO neutron.agent.dhcp.agent [None req-33d184c0-582c-48ac-8fa8-48de9fb994df - - - - - -] DHCP configuration for ports {'9608e96c-6e07-47bb-b306-34f8154f24ff'} is completed#033[00m Dec 6 05:20:24 localhost dnsmasq[329286]: exiting on receipt of SIGTERM Dec 6 05:20:24 localhost podman[329313]: 2025-12-06 10:20:24.287471262 +0000 UTC m=+0.053044113 container kill 5652484e04839d706157bf2db8d0c37ce6784ac6075c24bee3fa0a80f4ff462b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-941344ac-1e9e-4ba5-9592-4a1e73ea58e6, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3) Dec 6 05:20:24 localhost systemd[1]: libpod-5652484e04839d706157bf2db8d0c37ce6784ac6075c24bee3fa0a80f4ff462b.scope: Deactivated successfully. 
Dec 6 05:20:24 localhost podman[329330]: 2025-12-06 10:20:24.342924817 +0000 UTC m=+0.044796520 container died 5652484e04839d706157bf2db8d0c37ce6784ac6075c24bee3fa0a80f4ff462b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-941344ac-1e9e-4ba5-9592-4a1e73ea58e6, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:20:24 localhost podman[329330]: 2025-12-06 10:20:24.37932263 +0000 UTC m=+0.081194273 container cleanup 5652484e04839d706157bf2db8d0c37ce6784ac6075c24bee3fa0a80f4ff462b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-941344ac-1e9e-4ba5-9592-4a1e73ea58e6, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125) Dec 6 05:20:24 localhost systemd[1]: libpod-conmon-5652484e04839d706157bf2db8d0c37ce6784ac6075c24bee3fa0a80f4ff462b.scope: Deactivated successfully. 
Dec 6 05:20:24 localhost podman[329332]: 2025-12-06 10:20:24.429840954 +0000 UTC m=+0.123376462 container remove 5652484e04839d706157bf2db8d0c37ce6784ac6075c24bee3fa0a80f4ff462b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-941344ac-1e9e-4ba5-9592-4a1e73ea58e6, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Dec 6 05:20:24 localhost nova_compute[282193]: 2025-12-06 10:20:24.438 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:20:24 localhost kernel: device tap7c8805e4-f0 left promiscuous mode Dec 6 05:20:24 localhost nova_compute[282193]: 2025-12-06 10:20:24.449 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:20:24 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:24.525 263652 INFO neutron.agent.dhcp.agent [None req-f1a2618a-d715-487f-8a7f-5aa8f033dbd7 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:20:24 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:24.526 263652 INFO neutron.agent.dhcp.agent [None req-f1a2618a-d715-487f-8a7f-5aa8f033dbd7 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:20:24 localhost nova_compute[282193]: 2025-12-06 10:20:24.681 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:20:24 localhost systemd[1]: 
var-lib-containers-storage-overlay-6420eab620073affa12b038addfba3666aa3c4527d56e42cdcfa58ae57886adb-merged.mount: Deactivated successfully. Dec 6 05:20:24 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5652484e04839d706157bf2db8d0c37ce6784ac6075c24bee3fa0a80f4ff462b-userdata-shm.mount: Deactivated successfully. Dec 6 05:20:24 localhost systemd[1]: run-netns-qdhcp\x2d941344ac\x2d1e9e\x2d4ba5\x2d9592\x2d4a1e73ea58e6.mount: Deactivated successfully. Dec 6 05:20:25 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e173 e173: 6 total, 6 up, 6 in Dec 6 05:20:26 localhost neutron_sriov_agent[256690]: 2025-12-06 10:20:26.135 2 INFO neutron.agent.securitygroups_rpc [None req-ef8cf78f-f1a9-46f9-a6c4-622f166b1f57 95268a68b5c84162ba789100555874fb 7787060a7af94f168805e73d06841337 - - default default] Security group rule updated ['bd391031-3a56-454d-bf4f-b44b517a0aeb']#033[00m Dec 6 05:20:26 localhost neutron_sriov_agent[256690]: 2025-12-06 10:20:26.557 2 INFO neutron.agent.securitygroups_rpc [None req-6a0c3d03-5496-4b9d-aee6-2794cf73d3e3 95268a68b5c84162ba789100555874fb 7787060a7af94f168805e73d06841337 - - default default] Security group rule updated ['bd391031-3a56-454d-bf4f-b44b517a0aeb']#033[00m Dec 6 05:20:26 localhost neutron_sriov_agent[256690]: 2025-12-06 10:20:26.778 2 INFO neutron.agent.securitygroups_rpc [None req-cb5ad30b-b885-42c3-a286-99bf227690f5 95268a68b5c84162ba789100555874fb 7787060a7af94f168805e73d06841337 - - default default] Security group rule updated ['bd391031-3a56-454d-bf4f-b44b517a0aeb']#033[00m Dec 6 05:20:26 localhost neutron_sriov_agent[256690]: 2025-12-06 10:20:26.972 2 INFO neutron.agent.securitygroups_rpc [None req-415499aa-cb15-4206-8b55-b9a21ed2dc86 95268a68b5c84162ba789100555874fb 7787060a7af94f168805e73d06841337 - - default default] Security group rule updated ['bd391031-3a56-454d-bf4f-b44b517a0aeb']#033[00m Dec 6 05:20:27 localhost neutron_sriov_agent[256690]: 2025-12-06 10:20:27.101 2 INFO 
neutron.agent.securitygroups_rpc [None req-2e89e39c-a441-4c37-8f6e-df561eb77ca2 95268a68b5c84162ba789100555874fb 7787060a7af94f168805e73d06841337 - - default default] Security group rule updated ['bd391031-3a56-454d-bf4f-b44b517a0aeb']#033[00m Dec 6 05:20:27 localhost neutron_sriov_agent[256690]: 2025-12-06 10:20:27.211 2 INFO neutron.agent.securitygroups_rpc [None req-a03daea5-0289-4f98-a7ae-aa6379a3c0f5 95268a68b5c84162ba789100555874fb 7787060a7af94f168805e73d06841337 - - default default] Security group rule updated ['bd391031-3a56-454d-bf4f-b44b517a0aeb']#033[00m Dec 6 05:20:27 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e174 e174: 6 total, 6 up, 6 in Dec 6 05:20:28 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e174 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:20:28 localhost nova_compute[282193]: 2025-12-06 10:20:28.488 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:20:29 localhost neutron_sriov_agent[256690]: 2025-12-06 10:20:29.040 2 INFO neutron.agent.securitygroups_rpc [None req-3c4f32bd-b684-4b66-9a34-68450fbeb73b 95268a68b5c84162ba789100555874fb 7787060a7af94f168805e73d06841337 - - default default] Security group rule updated ['bd391031-3a56-454d-bf4f-b44b517a0aeb']#033[00m Dec 6 05:20:29 localhost neutron_sriov_agent[256690]: 2025-12-06 10:20:29.255 2 INFO neutron.agent.securitygroups_rpc [None req-03f78b6a-c5ab-4048-95e5-dac6933624ce 95268a68b5c84162ba789100555874fb 7787060a7af94f168805e73d06841337 - - default default] Security group rule updated ['bd391031-3a56-454d-bf4f-b44b517a0aeb']#033[00m Dec 6 05:20:29 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e175 e175: 6 total, 6 up, 6 in Dec 6 05:20:29 localhost neutron_sriov_agent[256690]: 2025-12-06 10:20:29.502 2 INFO neutron.agent.securitygroups_rpc [None req-bec77d6f-35e0-4121-9e3c-321d796fa6a3 
95268a68b5c84162ba789100555874fb 7787060a7af94f168805e73d06841337 - - default default] Security group rule updated ['bd391031-3a56-454d-bf4f-b44b517a0aeb']#033[00m Dec 6 05:20:29 localhost nova_compute[282193]: 2025-12-06 10:20:29.683 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:20:29 localhost neutron_sriov_agent[256690]: 2025-12-06 10:20:29.696 2 INFO neutron.agent.securitygroups_rpc [None req-b83ddb52-2122-4d26-8d71-acc6737aed87 95268a68b5c84162ba789100555874fb 7787060a7af94f168805e73d06841337 - - default default] Security group rule updated ['bd391031-3a56-454d-bf4f-b44b517a0aeb']#033[00m Dec 6 05:20:30 localhost neutron_sriov_agent[256690]: 2025-12-06 10:20:30.741 2 INFO neutron.agent.securitygroups_rpc [None req-1a7eb84d-a3b4-4a88-a0ae-062b0b90ebc4 95268a68b5c84162ba789100555874fb 7787060a7af94f168805e73d06841337 - - default default] Security group rule updated ['ea4ca242-5187-4603-82cf-af66665b0039']#033[00m Dec 6 05:20:31 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:31.297 263652 INFO neutron.agent.linux.ip_lib [None req-9bd62092-9152-4a74-9bc1-8ba6ab839186 - - - - - -] Device tap902b329b-7b cannot be used as it has no MAC address#033[00m Dec 6 05:20:31 localhost nova_compute[282193]: 2025-12-06 10:20:31.320 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:20:31 localhost kernel: device tap902b329b-7b entered promiscuous mode Dec 6 05:20:31 localhost ovn_controller[154851]: 2025-12-06T10:20:31Z|00415|binding|INFO|Claiming lport 902b329b-7b8a-46c2-a01e-dfe82eef6b46 for this chassis. 
Dec 6 05:20:31 localhost ovn_controller[154851]: 2025-12-06T10:20:31Z|00416|binding|INFO|902b329b-7b8a-46c2-a01e-dfe82eef6b46: Claiming unknown Dec 6 05:20:31 localhost nova_compute[282193]: 2025-12-06 10:20:31.327 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:20:31 localhost NetworkManager[5973]: [1765016431.3306] manager: (tap902b329b-7b): new Generic device (/org/freedesktop/NetworkManager/Devices/69) Dec 6 05:20:31 localhost systemd-udevd[329370]: Network interface NamePolicy= disabled on kernel command line. Dec 6 05:20:31 localhost ovn_controller[154851]: 2025-12-06T10:20:31Z|00417|binding|INFO|Setting lport 902b329b-7b8a-46c2-a01e-dfe82eef6b46 ovn-installed in OVS Dec 6 05:20:31 localhost nova_compute[282193]: 2025-12-06 10:20:31.366 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:20:31 localhost ovn_controller[154851]: 2025-12-06T10:20:31Z|00418|binding|INFO|Setting lport 902b329b-7b8a-46c2-a01e-dfe82eef6b46 up in Southbound Dec 6 05:20:31 localhost ovn_metadata_agent[160504]: 2025-12-06 10:20:31.401 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.1/28', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-e449a5e0-9225-4a29-ab74-be48f680b8f1', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e449a5e0-9225-4a29-ab74-be48f680b8f1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 
'neutron:project_id': 'fa76bcfc789b4e53acf344cd0b1cd7c5', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9489b31d-85bf-439b-b0c5-aab9e51b25ad, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=902b329b-7b8a-46c2-a01e-dfe82eef6b46) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:20:31 localhost ovn_metadata_agent[160504]: 2025-12-06 10:20:31.403 160509 INFO neutron.agent.ovn.metadata.agent [-] Port 902b329b-7b8a-46c2-a01e-dfe82eef6b46 in datapath e449a5e0-9225-4a29-ab74-be48f680b8f1 bound to our chassis#033[00m Dec 6 05:20:31 localhost ovn_metadata_agent[160504]: 2025-12-06 10:20:31.405 160509 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network e449a5e0-9225-4a29-ab74-be48f680b8f1 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 6 05:20:31 localhost ovn_metadata_agent[160504]: 2025-12-06 10:20:31.406 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[5a06b9c8-0085-444f-9897-df294b7b66db]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:20:31 localhost nova_compute[282193]: 2025-12-06 10:20:31.412 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:20:31 localhost nova_compute[282193]: 2025-12-06 10:20:31.442 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:20:32 localhost podman[329423]: Dec 6 05:20:32 localhost podman[329423]: 2025-12-06 
10:20:32.227858195 +0000 UTC m=+0.078443949 container create 0099ac51aa4947a706ce8ce0a5fb406487f51dd73b2794c1ae5c1e761827e58b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e449a5e0-9225-4a29-ab74-be48f680b8f1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:20:32 localhost systemd[1]: Started libpod-conmon-0099ac51aa4947a706ce8ce0a5fb406487f51dd73b2794c1ae5c1e761827e58b.scope. Dec 6 05:20:32 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e176 e176: 6 total, 6 up, 6 in Dec 6 05:20:32 localhost systemd[1]: Started libcrun container. Dec 6 05:20:32 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cca8e8acafd6a16f5767d7885b992dc7463fe2148062a0fa246de7d9c1c0baa/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 6 05:20:32 localhost podman[329423]: 2025-12-06 10:20:32.290775568 +0000 UTC m=+0.141361342 container init 0099ac51aa4947a706ce8ce0a5fb406487f51dd73b2794c1ae5c1e761827e58b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e449a5e0-9225-4a29-ab74-be48f680b8f1, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Dec 6 05:20:32 localhost podman[329423]: 2025-12-06 10:20:32.298872076 +0000 UTC m=+0.149457820 container start 0099ac51aa4947a706ce8ce0a5fb406487f51dd73b2794c1ae5c1e761827e58b 
(image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e449a5e0-9225-4a29-ab74-be48f680b8f1, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:20:32 localhost podman[329423]: 2025-12-06 10:20:32.200894271 +0000 UTC m=+0.051480025 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 6 05:20:32 localhost dnsmasq[329441]: started, version 2.85 cachesize 150 Dec 6 05:20:32 localhost dnsmasq[329441]: DNS service limited to local subnets Dec 6 05:20:32 localhost dnsmasq[329441]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 6 05:20:32 localhost dnsmasq[329441]: warning: no upstream servers configured Dec 6 05:20:32 localhost dnsmasq-dhcp[329441]: DHCP, static leases only on 10.100.0.0, lease time 1d Dec 6 05:20:32 localhost dnsmasq[329441]: read /var/lib/neutron/dhcp/e449a5e0-9225-4a29-ab74-be48f680b8f1/addn_hosts - 0 addresses Dec 6 05:20:32 localhost dnsmasq-dhcp[329441]: read /var/lib/neutron/dhcp/e449a5e0-9225-4a29-ab74-be48f680b8f1/host Dec 6 05:20:32 localhost dnsmasq-dhcp[329441]: read /var/lib/neutron/dhcp/e449a5e0-9225-4a29-ab74-be48f680b8f1/opts Dec 6 05:20:32 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:32.475 263652 INFO neutron.agent.dhcp.agent [None req-e0e6ba56-53d2-467f-b73d-41a12f165c94 - - - - - -] DHCP configuration for ports {'842ccd8b-2a03-40a8-82af-c1147248d29c'} is completed#033[00m Dec 6 05:20:32 localhost neutron_sriov_agent[256690]: 2025-12-06 10:20:32.640 2 INFO neutron.agent.securitygroups_rpc [None 
req-8df6a51f-2782-49f1-a34d-739b1e2f53d1 95268a68b5c84162ba789100555874fb 7787060a7af94f168805e73d06841337 - - default default] Security group rule updated ['0223fd9f-7d67-4f35-8221-a118caed647f']#033[00m Dec 6 05:20:32 localhost neutron_sriov_agent[256690]: 2025-12-06 10:20:32.950 2 INFO neutron.agent.securitygroups_rpc [None req-c356675a-b0a4-4bc7-b431-054879bdecb2 95268a68b5c84162ba789100555874fb 7787060a7af94f168805e73d06841337 - - default default] Security group rule updated ['0223fd9f-7d67-4f35-8221-a118caed647f']#033[00m Dec 6 05:20:33 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Dec 6 05:20:33 localhost ovn_controller[154851]: 2025-12-06T10:20:33Z|00419|binding|INFO|Removing iface tap902b329b-7b ovn-installed in OVS Dec 6 05:20:33 localhost ovn_controller[154851]: 2025-12-06T10:20:33Z|00420|binding|INFO|Removing lport 902b329b-7b8a-46c2-a01e-dfe82eef6b46 ovn-installed in OVS Dec 6 05:20:33 localhost ovn_metadata_agent[160504]: 2025-12-06 10:20:33.518 160509 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 293179f1-663f-4453-88e7-13f4696f7c9d with type ""#033[00m Dec 6 05:20:33 localhost ovn_metadata_agent[160504]: 2025-12-06 10:20:33.520 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.1/28', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-e449a5e0-9225-4a29-ab74-be48f680b8f1', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 
'neutron-e449a5e0-9225-4a29-ab74-be48f680b8f1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fa76bcfc789b4e53acf344cd0b1cd7c5', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9489b31d-85bf-439b-b0c5-aab9e51b25ad, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=902b329b-7b8a-46c2-a01e-dfe82eef6b46) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:20:33 localhost ovn_metadata_agent[160504]: 2025-12-06 10:20:33.523 160509 INFO neutron.agent.ovn.metadata.agent [-] Port 902b329b-7b8a-46c2-a01e-dfe82eef6b46 in datapath e449a5e0-9225-4a29-ab74-be48f680b8f1 unbound from our chassis#033[00m Dec 6 05:20:33 localhost nova_compute[282193]: 2025-12-06 10:20:33.524 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:20:33 localhost kernel: device tap902b329b-7b left promiscuous mode Dec 6 05:20:33 localhost ovn_metadata_agent[160504]: 2025-12-06 10:20:33.533 160509 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e449a5e0-9225-4a29-ab74-be48f680b8f1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 6 05:20:33 localhost ovn_metadata_agent[160504]: 2025-12-06 10:20:33.535 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[3d767afb-b401-46bb-9909-fb5b28e2e99c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:20:33 localhost nova_compute[282193]: 2025-12-06 10:20:33.541 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:20:34 localhost nova_compute[282193]: 2025-12-06 10:20:34.729 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:20:34 localhost dnsmasq[329441]: read /var/lib/neutron/dhcp/e449a5e0-9225-4a29-ab74-be48f680b8f1/addn_hosts - 0 addresses Dec 6 05:20:34 localhost dnsmasq-dhcp[329441]: read /var/lib/neutron/dhcp/e449a5e0-9225-4a29-ab74-be48f680b8f1/host Dec 6 05:20:34 localhost podman[329462]: 2025-12-06 10:20:34.776842404 +0000 UTC m=+0.091414665 container kill 0099ac51aa4947a706ce8ce0a5fb406487f51dd73b2794c1ae5c1e761827e58b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e449a5e0-9225-4a29-ab74-be48f680b8f1, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true) Dec 6 05:20:34 localhost dnsmasq-dhcp[329441]: read /var/lib/neutron/dhcp/e449a5e0-9225-4a29-ab74-be48f680b8f1/opts Dec 6 05:20:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999. Dec 6 05:20:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941. Dec 6 05:20:34 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:34.803 263652 ERROR neutron.agent.dhcp.agent [None req-94ddf741-81b5-4bd7-901a-2366724e3951 - - - - - -] Unable to reload_allocations dhcp for e449a5e0-9225-4a29-ab74-be48f680b8f1.: neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tap902b329b-7b not found in namespace qdhcp-e449a5e0-9225-4a29-ab74-be48f680b8f1. 
Dec 6 05:20:34 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:34.803 263652 ERROR neutron.agent.dhcp.agent Traceback (most recent call last): Dec 6 05:20:34 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:34.803 263652 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/dhcp/agent.py", line 264, in _call_driver Dec 6 05:20:34 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:34.803 263652 ERROR neutron.agent.dhcp.agent rv = getattr(driver, action)(**action_kwargs) Dec 6 05:20:34 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:34.803 263652 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 673, in reload_allocations Dec 6 05:20:34 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:34.803 263652 ERROR neutron.agent.dhcp.agent self.device_manager.update(self.network, self.interface_name) Dec 6 05:20:34 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:34.803 263652 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1899, in update Dec 6 05:20:34 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:34.803 263652 ERROR neutron.agent.dhcp.agent self._set_default_route(network, device_name) Dec 6 05:20:34 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:34.803 263652 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1610, in _set_default_route Dec 6 05:20:34 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:34.803 263652 ERROR neutron.agent.dhcp.agent self._set_default_route_ip_version(network, device_name, Dec 6 05:20:34 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:34.803 263652 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1539, in _set_default_route_ip_version Dec 6 05:20:34 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:34.803 263652 ERROR 
neutron.agent.dhcp.agent gateway = device.route.get_gateway(ip_version=ip_version) Dec 6 05:20:34 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:34.803 263652 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 671, in get_gateway Dec 6 05:20:34 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:34.803 263652 ERROR neutron.agent.dhcp.agent routes = self.list_routes(ip_version, scope=scope, table=table) Dec 6 05:20:34 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:34.803 263652 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 656, in list_routes Dec 6 05:20:34 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:34.803 263652 ERROR neutron.agent.dhcp.agent return list_ip_routes(self._parent.namespace, ip_version, scope=scope, Dec 6 05:20:34 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:34.803 263652 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 1611, in list_ip_routes Dec 6 05:20:34 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:34.803 263652 ERROR neutron.agent.dhcp.agent routes = privileged.list_ip_routes(namespace, ip_version, device=device, Dec 6 05:20:34 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:34.803 263652 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 333, in wrapped_f Dec 6 05:20:34 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:34.803 263652 ERROR neutron.agent.dhcp.agent return self(f, *args, **kw) Dec 6 05:20:34 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:34.803 263652 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 423, in __call__ Dec 6 05:20:34 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:34.803 263652 ERROR neutron.agent.dhcp.agent do = self.iter(retry_state=retry_state) Dec 6 05:20:34 localhost 
neutron_dhcp_agent[263648]: 2025-12-06 10:20:34.803 263652 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 360, in iter Dec 6 05:20:34 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:34.803 263652 ERROR neutron.agent.dhcp.agent return fut.result() Dec 6 05:20:34 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:34.803 263652 ERROR neutron.agent.dhcp.agent File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 439, in result Dec 6 05:20:34 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:34.803 263652 ERROR neutron.agent.dhcp.agent return self.__get_result() Dec 6 05:20:34 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:34.803 263652 ERROR neutron.agent.dhcp.agent File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 391, in __get_result Dec 6 05:20:34 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:34.803 263652 ERROR neutron.agent.dhcp.agent raise self._exception Dec 6 05:20:34 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:34.803 263652 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 426, in __call__ Dec 6 05:20:34 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:34.803 263652 ERROR neutron.agent.dhcp.agent result = fn(*args, **kwargs) Dec 6 05:20:34 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:34.803 263652 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/oslo_privsep/priv_context.py", line 271, in _wrap Dec 6 05:20:34 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:34.803 263652 ERROR neutron.agent.dhcp.agent return self.channel.remote_call(name, args, kwargs, Dec 6 05:20:34 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:34.803 263652 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py", line 215, in remote_call Dec 6 05:20:34 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:34.803 263652 ERROR 
neutron.agent.dhcp.agent raise exc_type(*result[2]) Dec 6 05:20:34 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:34.803 263652 ERROR neutron.agent.dhcp.agent neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tap902b329b-7b not found in namespace qdhcp-e449a5e0-9225-4a29-ab74-be48f680b8f1. Dec 6 05:20:34 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:34.803 263652 ERROR neutron.agent.dhcp.agent #033[00m Dec 6 05:20:34 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:34.809 263652 INFO neutron.agent.dhcp.agent [None req-5a5595b0-a0f8-44d7-9627-4542518c5211 - - - - - -] Synchronizing state#033[00m Dec 6 05:20:34 localhost systemd[1]: tmp-crun.Nx0391.mount: Deactivated successfully. Dec 6 05:20:34 localhost podman[329476]: 2025-12-06 10:20:34.905132306 +0000 UTC m=+0.093246562 container health_status 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:20:34 localhost podman[329476]: 2025-12-06 10:20:34.909971723 +0000 UTC m=+0.098086009 container exec_died 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:20:34 localhost systemd[1]: 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.service: Deactivated successfully. Dec 6 05:20:34 localhost podman[329477]: 2025-12-06 10:20:34.988898176 +0000 UTC m=+0.179554540 container health_status b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 6 05:20:34 localhost podman[329477]: 2025-12-06 10:20:34.996746786 +0000 UTC m=+0.187403160 container exec_died 
b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 6 05:20:35 localhost systemd[1]: b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941.service: Deactivated successfully. 
Dec 6 05:20:35 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:35.118 263652 INFO neutron.agent.dhcp.agent [None req-83cda96d-a56f-4426-a417-563d5a2ec11d - - - - - -] All active networks have been fetched through RPC.#033[00m Dec 6 05:20:35 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:35.119 263652 INFO neutron.agent.dhcp.agent [-] Starting network e449a5e0-9225-4a29-ab74-be48f680b8f1 dhcp configuration#033[00m Dec 6 05:20:35 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:35.120 263652 INFO neutron.agent.dhcp.agent [-] Finished network e449a5e0-9225-4a29-ab74-be48f680b8f1 dhcp configuration#033[00m Dec 6 05:20:35 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:35.121 263652 INFO neutron.agent.dhcp.agent [None req-83cda96d-a56f-4426-a417-563d5a2ec11d - - - - - -] Synchronizing state complete#033[00m Dec 6 05:20:35 localhost nova_compute[282193]: 2025-12-06 10:20:35.181 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:20:35 localhost nova_compute[282193]: 2025-12-06 10:20:35.412 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 05:20:35 localhost nova_compute[282193]: 2025-12-06 10:20:35.413 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 05:20:35 localhost nova_compute[282193]: 2025-12-06 10:20:35.413 282197 DEBUG 
oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 05:20:35 localhost nova_compute[282193]: 2025-12-06 10:20:35.413 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Auditing locally available compute resources for np0005548789.localdomain (node: np0005548789.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 6 05:20:35 localhost nova_compute[282193]: 2025-12-06 10:20:35.413 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 6 05:20:35 localhost ovn_controller[154851]: 2025-12-06T10:20:35Z|00421|binding|INFO|Releasing lport 4fb81ffd-e198-4628-9bd0-0c0f0c89c33a from this chassis (sb_readonly=0) Dec 6 05:20:35 localhost nova_compute[282193]: 2025-12-06 10:20:35.464 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:20:35 localhost neutron_sriov_agent[256690]: 2025-12-06 10:20:35.498 2 INFO neutron.agent.securitygroups_rpc [None req-70f019b0-4c49-406a-b078-506915b4f443 95268a68b5c84162ba789100555874fb 7787060a7af94f168805e73d06841337 - - default default] Security group rule updated ['2bea4444-1a2f-4249-8686-d0a5b03f529f']#033[00m Dec 6 05:20:35 localhost dnsmasq[329441]: exiting on receipt of SIGTERM Dec 6 05:20:35 localhost systemd[1]: libpod-0099ac51aa4947a706ce8ce0a5fb406487f51dd73b2794c1ae5c1e761827e58b.scope: Deactivated successfully. 
Dec 6 05:20:35 localhost podman[329536]: 2025-12-06 10:20:35.632874642 +0000 UTC m=+0.069540927 container kill 0099ac51aa4947a706ce8ce0a5fb406487f51dd73b2794c1ae5c1e761827e58b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e449a5e0-9225-4a29-ab74-be48f680b8f1, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true) Dec 6 05:20:35 localhost podman[329569]: 2025-12-06 10:20:35.68447908 +0000 UTC m=+0.040283293 container died 0099ac51aa4947a706ce8ce0a5fb406487f51dd73b2794c1ae5c1e761827e58b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e449a5e0-9225-4a29-ab74-be48f680b8f1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:20:35 localhost podman[329569]: 2025-12-06 10:20:35.716731775 +0000 UTC m=+0.072535918 container cleanup 0099ac51aa4947a706ce8ce0a5fb406487f51dd73b2794c1ae5c1e761827e58b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e449a5e0-9225-4a29-ab74-be48f680b8f1, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Dec 6 05:20:35 
localhost systemd[1]: libpod-conmon-0099ac51aa4947a706ce8ce0a5fb406487f51dd73b2794c1ae5c1e761827e58b.scope: Deactivated successfully. Dec 6 05:20:35 localhost podman[329571]: 2025-12-06 10:20:35.754074687 +0000 UTC m=+0.104367832 container remove 0099ac51aa4947a706ce8ce0a5fb406487f51dd73b2794c1ae5c1e761827e58b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e449a5e0-9225-4a29-ab74-be48f680b8f1, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Dec 6 05:20:35 localhost systemd[1]: var-lib-containers-storage-overlay-1cca8e8acafd6a16f5767d7885b992dc7463fe2148062a0fa246de7d9c1c0baa-merged.mount: Deactivated successfully. Dec 6 05:20:35 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0099ac51aa4947a706ce8ce0a5fb406487f51dd73b2794c1ae5c1e761827e58b-userdata-shm.mount: Deactivated successfully. Dec 6 05:20:35 localhost systemd[1]: run-netns-qdhcp\x2de449a5e0\x2d9225\x2d4a29\x2dab74\x2dbe48f680b8f1.mount: Deactivated successfully. Dec 6 05:20:35 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 6 05:20:35 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.107:0/3962963758' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 6 05:20:35 localhost nova_compute[282193]: 2025-12-06 10:20:35.890 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 6 05:20:35 localhost neutron_sriov_agent[256690]: 2025-12-06 10:20:35.894 2 INFO neutron.agent.securitygroups_rpc [None req-76169eb0-4558-4a54-88c6-853dfb7935a8 95268a68b5c84162ba789100555874fb 7787060a7af94f168805e73d06841337 - - default default] Security group rule updated ['2bea4444-1a2f-4249-8686-d0a5b03f529f']#033[00m Dec 6 05:20:35 localhost nova_compute[282193]: 2025-12-06 10:20:35.966 282197 DEBUG nova.virt.libvirt.driver [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 6 05:20:35 localhost nova_compute[282193]: 2025-12-06 10:20:35.967 282197 DEBUG nova.virt.libvirt.driver [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 6 05:20:36 localhost nova_compute[282193]: 2025-12-06 10:20:36.162 282197 WARNING nova.virt.libvirt.driver [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 6 05:20:36 localhost nova_compute[282193]: 2025-12-06 10:20:36.163 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Hypervisor/Node resource view: name=np0005548789.localdomain free_ram=11221MB free_disk=41.83699035644531GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": 
"1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 6 05:20:36 localhost nova_compute[282193]: 2025-12-06 10:20:36.163 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 05:20:36 localhost nova_compute[282193]: 2025-12-06 10:20:36.164 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 05:20:36 localhost nova_compute[282193]: 2025-12-06 10:20:36.234 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Instance b7ed0a2e-9350-4933-9334-4e5e08d3e6aa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 6 05:20:36 localhost nova_compute[282193]: 2025-12-06 10:20:36.234 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 6 05:20:36 localhost nova_compute[282193]: 2025-12-06 10:20:36.234 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Final resource view: name=np0005548789.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 6 05:20:36 localhost nova_compute[282193]: 2025-12-06 10:20:36.278 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 6 05:20:36 localhost sshd[329618]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:20:36 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 6 05:20:36 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.107:0/1176083142' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 6 05:20:36 localhost nova_compute[282193]: 2025-12-06 10:20:36.765 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 6 05:20:36 localhost nova_compute[282193]: 2025-12-06 10:20:36.771 282197 DEBUG nova.compute.provider_tree [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Inventory has not changed in ProviderTree for provider: 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 6 05:20:36 localhost nova_compute[282193]: 2025-12-06 10:20:36.788 282197 DEBUG nova.scheduler.client.report [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Inventory has not changed for provider 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 6 05:20:36 localhost nova_compute[282193]: 2025-12-06 10:20:36.790 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Compute_service record updated for np0005548789.localdomain:np0005548789.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 6 05:20:36 localhost nova_compute[282193]: 2025-12-06 10:20:36.791 282197 DEBUG 
oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.627s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 05:20:37 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e177 e177: 6 total, 6 up, 6 in Dec 6 05:20:38 localhost neutron_sriov_agent[256690]: 2025-12-06 10:20:38.295 2 INFO neutron.agent.securitygroups_rpc [None req-10166391-fcb0-4201-a7d2-7443ab5c9b01 95268a68b5c84162ba789100555874fb 7787060a7af94f168805e73d06841337 - - default default] Security group rule updated ['344ee16e-9c56-4a03-94c4-4549205a4025']#033[00m Dec 6 05:20:38 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:20:38 localhost nova_compute[282193]: 2025-12-06 10:20:38.528 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:20:38 localhost neutron_sriov_agent[256690]: 2025-12-06 10:20:38.765 2 INFO neutron.agent.securitygroups_rpc [None req-867b4687-c36d-47aa-8d2e-c76597d4a6cb 95268a68b5c84162ba789100555874fb 7787060a7af94f168805e73d06841337 - - default default] Security group rule updated ['344ee16e-9c56-4a03-94c4-4549205a4025']#033[00m Dec 6 05:20:38 localhost neutron_sriov_agent[256690]: 2025-12-06 10:20:38.969 2 INFO neutron.agent.securitygroups_rpc [None req-80b8119e-1e57-4c4d-b95c-97abe74340b6 95268a68b5c84162ba789100555874fb 7787060a7af94f168805e73d06841337 - - default default] Security group rule updated ['344ee16e-9c56-4a03-94c4-4549205a4025']#033[00m Dec 6 05:20:39 localhost neutron_sriov_agent[256690]: 2025-12-06 10:20:39.298 2 INFO neutron.agent.securitygroups_rpc [None req-335b324d-f81b-4cc4-b913-ab18408e9420 95268a68b5c84162ba789100555874fb 
7787060a7af94f168805e73d06841337 - - default default] Security group rule updated ['344ee16e-9c56-4a03-94c4-4549205a4025']#033[00m Dec 6 05:20:39 localhost sshd[329622]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:20:39 localhost neutron_sriov_agent[256690]: 2025-12-06 10:20:39.648 2 INFO neutron.agent.securitygroups_rpc [None req-c2f184e7-70de-480a-95d7-35dc53af97f7 95268a68b5c84162ba789100555874fb 7787060a7af94f168805e73d06841337 - - default default] Security group rule updated ['344ee16e-9c56-4a03-94c4-4549205a4025']#033[00m Dec 6 05:20:39 localhost nova_compute[282193]: 2025-12-06 10:20:39.772 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:20:39 localhost nova_compute[282193]: 2025-12-06 10:20:39.787 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:20:39 localhost nova_compute[282193]: 2025-12-06 10:20:39.787 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:20:39 localhost nova_compute[282193]: 2025-12-06 10:20:39.788 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 6 05:20:39 localhost nova_compute[282193]: 2025-12-06 10:20:39.788 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache 
/usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 6 05:20:39 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:39.845 263652 INFO neutron.agent.linux.ip_lib [None req-1168e812-3032-4571-a4b2-f8e3c0ed8a19 - - - - - -] Device tap0769d4f9-3c cannot be used as it has no MAC address#033[00m Dec 6 05:20:39 localhost nova_compute[282193]: 2025-12-06 10:20:39.874 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 6 05:20:39 localhost nova_compute[282193]: 2025-12-06 10:20:39.878 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquired lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 6 05:20:39 localhost nova_compute[282193]: 2025-12-06 10:20:39.878 282197 DEBUG nova.network.neutron [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 6 05:20:39 localhost kernel: device tap0769d4f9-3c entered promiscuous mode Dec 6 05:20:39 localhost nova_compute[282193]: 2025-12-06 10:20:39.879 282197 DEBUG nova.objects.instance [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b7ed0a2e-9350-4933-9334-4e5e08d3e6aa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 6 05:20:39 localhost ovn_controller[154851]: 2025-12-06T10:20:39Z|00422|binding|INFO|Claiming lport 0769d4f9-3cf8-430d-87d0-faa554cf4d51 for this chassis. 
Dec 6 05:20:39 localhost ovn_controller[154851]: 2025-12-06T10:20:39Z|00423|binding|INFO|0769d4f9-3cf8-430d-87d0-faa554cf4d51: Claiming unknown Dec 6 05:20:39 localhost nova_compute[282193]: 2025-12-06 10:20:39.884 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:20:39 localhost NetworkManager[5973]: [1765016439.8846] manager: (tap0769d4f9-3c): new Generic device (/org/freedesktop/NetworkManager/Devices/70) Dec 6 05:20:39 localhost systemd-udevd[329634]: Network interface NamePolicy= disabled on kernel command line. Dec 6 05:20:39 localhost ovn_metadata_agent[160504]: 2025-12-06 10:20:39.894 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-610fcf3f-6e70-4d5d-9d9e-df794ff4196d', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-610fcf3f-6e70-4d5d-9d9e-df794ff4196d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '24086b701d6b4d4081d2e63578d18d24', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=22180776-fd75-4bd6-be28-febd70acf464, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=0769d4f9-3cf8-430d-87d0-faa554cf4d51) old=Port_Binding(chassis=[]) matches 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:20:39 localhost ovn_metadata_agent[160504]: 2025-12-06 10:20:39.896 160509 INFO neutron.agent.ovn.metadata.agent [-] Port 0769d4f9-3cf8-430d-87d0-faa554cf4d51 in datapath 610fcf3f-6e70-4d5d-9d9e-df794ff4196d bound to our chassis#033[00m Dec 6 05:20:39 localhost ovn_metadata_agent[160504]: 2025-12-06 10:20:39.898 160509 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 610fcf3f-6e70-4d5d-9d9e-df794ff4196d or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 6 05:20:39 localhost ovn_metadata_agent[160504]: 2025-12-06 10:20:39.898 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[26c5ff61-9ee7-4614-854a-696546a7904d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:20:39 localhost journal[230404]: ethtool ioctl error on tap0769d4f9-3c: No such device Dec 6 05:20:39 localhost nova_compute[282193]: 2025-12-06 10:20:39.918 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:20:39 localhost ovn_controller[154851]: 2025-12-06T10:20:39Z|00424|binding|INFO|Setting lport 0769d4f9-3cf8-430d-87d0-faa554cf4d51 ovn-installed in OVS Dec 6 05:20:39 localhost ovn_controller[154851]: 2025-12-06T10:20:39Z|00425|binding|INFO|Setting lport 0769d4f9-3cf8-430d-87d0-faa554cf4d51 up in Southbound Dec 6 05:20:39 localhost journal[230404]: ethtool ioctl error on tap0769d4f9-3c: No such device Dec 6 05:20:39 localhost nova_compute[282193]: 2025-12-06 10:20:39.925 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:20:39 localhost journal[230404]: ethtool ioctl error on tap0769d4f9-3c: No such device Dec 
6 05:20:39 localhost journal[230404]: ethtool ioctl error on tap0769d4f9-3c: No such device Dec 6 05:20:39 localhost journal[230404]: ethtool ioctl error on tap0769d4f9-3c: No such device Dec 6 05:20:39 localhost journal[230404]: ethtool ioctl error on tap0769d4f9-3c: No such device Dec 6 05:20:39 localhost journal[230404]: ethtool ioctl error on tap0769d4f9-3c: No such device Dec 6 05:20:39 localhost nova_compute[282193]: 2025-12-06 10:20:39.957 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:20:39 localhost journal[230404]: ethtool ioctl error on tap0769d4f9-3c: No such device Dec 6 05:20:39 localhost nova_compute[282193]: 2025-12-06 10:20:39.987 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:20:40 localhost neutron_sriov_agent[256690]: 2025-12-06 10:20:40.453 2 INFO neutron.agent.securitygroups_rpc [None req-4752e8b4-a74a-419a-afa7-aac12ad63453 95268a68b5c84162ba789100555874fb 7787060a7af94f168805e73d06841337 - - default default] Security group rule updated ['344ee16e-9c56-4a03-94c4-4549205a4025']#033[00m Dec 6 05:20:40 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e178 e178: 6 total, 6 up, 6 in Dec 6 05:20:40 localhost podman[329704]: Dec 6 05:20:40 localhost podman[329704]: 2025-12-06 10:20:40.841812381 +0000 UTC m=+0.094896762 container create 195bb5c97a1927fb0566a11eeb513a05a2f050ad98f5e7dc5af8ef23feb91058 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-610fcf3f-6e70-4d5d-9d9e-df794ff4196d, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, 
maintainer=OpenStack Kubernetes Operator team) Dec 6 05:20:40 localhost systemd[1]: Started libpod-conmon-195bb5c97a1927fb0566a11eeb513a05a2f050ad98f5e7dc5af8ef23feb91058.scope. Dec 6 05:20:40 localhost podman[329704]: 2025-12-06 10:20:40.799672782 +0000 UTC m=+0.052757203 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 6 05:20:40 localhost systemd[1]: tmp-crun.AIQ7iv.mount: Deactivated successfully. Dec 6 05:20:40 localhost nova_compute[282193]: 2025-12-06 10:20:40.925 282197 DEBUG nova.network.neutron [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Updating instance_info_cache with network_info: [{"id": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "address": "fa:16:3e:64:77:f3", "network": {"id": "652b6bdc-40ce-45b7-8aa5-3bca79987993", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.162", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "3d603431c0bb4967bafc7a0aa6108bfe", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86fc0b7a-fb", "ovs_interfaceid": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 6 05:20:40 localhost systemd[1]: Started libcrun container. 
Dec 6 05:20:40 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac8af9b2a52417f488cf9670e26c867e6ab859687c3a4f72e20580bf57f1cadb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 6 05:20:40 localhost podman[329704]: 2025-12-06 10:20:40.943507689 +0000 UTC m=+0.196592060 container init 195bb5c97a1927fb0566a11eeb513a05a2f050ad98f5e7dc5af8ef23feb91058 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-610fcf3f-6e70-4d5d-9d9e-df794ff4196d, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2) Dec 6 05:20:40 localhost nova_compute[282193]: 2025-12-06 10:20:40.952 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Releasing lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 6 05:20:40 localhost nova_compute[282193]: 2025-12-06 10:20:40.953 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 6 05:20:40 localhost podman[329704]: 2025-12-06 10:20:40.955810645 +0000 UTC m=+0.208895016 container start 195bb5c97a1927fb0566a11eeb513a05a2f050ad98f5e7dc5af8ef23feb91058 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-610fcf3f-6e70-4d5d-9d9e-df794ff4196d, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2) Dec 6 05:20:40 localhost dnsmasq[329722]: started, version 2.85 cachesize 150 Dec 6 05:20:40 localhost dnsmasq[329722]: DNS service limited to local subnets Dec 6 05:20:40 localhost dnsmasq[329722]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 6 05:20:40 localhost dnsmasq[329722]: warning: no upstream servers configured Dec 6 05:20:40 localhost dnsmasq-dhcp[329722]: DHCPv6, static leases only on 2001:db8::, lease time 1d Dec 6 05:20:40 localhost dnsmasq[329722]: read /var/lib/neutron/dhcp/610fcf3f-6e70-4d5d-9d9e-df794ff4196d/addn_hosts - 0 addresses Dec 6 05:20:40 localhost dnsmasq-dhcp[329722]: read /var/lib/neutron/dhcp/610fcf3f-6e70-4d5d-9d9e-df794ff4196d/host Dec 6 05:20:40 localhost dnsmasq-dhcp[329722]: read /var/lib/neutron/dhcp/610fcf3f-6e70-4d5d-9d9e-df794ff4196d/opts Dec 6 05:20:41 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:41.081 263652 INFO neutron.agent.dhcp.agent [None req-1168e812-3032-4571-a4b2-f8e3c0ed8a19 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:20:39Z, description=, device_id=b4262675-0888-4a13-bb89-bb38cc732c6a, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=36e3a010-4f1c-470f-b642-2ad82f1c412c, ip_allocation=immediate, mac_address=fa:16:3e:cf:00:44, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:20:36Z, description=, dns_domain=, 
id=610fcf3f-6e70-4d5d-9d9e-df794ff4196d, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-1607502893, port_security_enabled=True, project_id=24086b701d6b4d4081d2e63578d18d24, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=60833, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2475, status=ACTIVE, subnets=['a0f9db6e-5853-44b4-b7cb-c956203cda7f'], tags=[], tenant_id=24086b701d6b4d4081d2e63578d18d24, updated_at=2025-12-06T10:20:38Z, vlan_transparent=None, network_id=610fcf3f-6e70-4d5d-9d9e-df794ff4196d, port_security_enabled=False, project_id=24086b701d6b4d4081d2e63578d18d24, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2493, status=DOWN, tags=[], tenant_id=24086b701d6b4d4081d2e63578d18d24, updated_at=2025-12-06T10:20:39Z on network 610fcf3f-6e70-4d5d-9d9e-df794ff4196d#033[00m Dec 6 05:20:41 localhost nova_compute[282193]: 2025-12-06 10:20:41.181 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:20:41 localhost nova_compute[282193]: 2025-12-06 10:20:41.182 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:20:41 localhost nova_compute[282193]: 2025-12-06 10:20:41.182 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:20:41 
localhost nova_compute[282193]: 2025-12-06 10:20:41.182 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 6 05:20:41 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:41.190 263652 INFO neutron.agent.dhcp.agent [None req-62ad0b69-664f-49cd-a5e8-f16aafd2554b - - - - - -] DHCP configuration for ports {'e4db429b-394a-4ad3-95be-68082bed1436'} is completed#033[00m Dec 6 05:20:41 localhost dnsmasq[329722]: read /var/lib/neutron/dhcp/610fcf3f-6e70-4d5d-9d9e-df794ff4196d/addn_hosts - 1 addresses Dec 6 05:20:41 localhost dnsmasq-dhcp[329722]: read /var/lib/neutron/dhcp/610fcf3f-6e70-4d5d-9d9e-df794ff4196d/host Dec 6 05:20:41 localhost podman[329741]: 2025-12-06 10:20:41.284470772 +0000 UTC m=+0.060276604 container kill 195bb5c97a1927fb0566a11eeb513a05a2f050ad98f5e7dc5af8ef23feb91058 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-610fcf3f-6e70-4d5d-9d9e-df794ff4196d, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Dec 6 05:20:41 localhost dnsmasq-dhcp[329722]: read /var/lib/neutron/dhcp/610fcf3f-6e70-4d5d-9d9e-df794ff4196d/opts Dec 6 05:20:41 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:41.442 263652 INFO neutron.agent.dhcp.agent [None req-1168e812-3032-4571-a4b2-f8e3c0ed8a19 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:20:39Z, description=, 
device_id=b4262675-0888-4a13-bb89-bb38cc732c6a, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=36e3a010-4f1c-470f-b642-2ad82f1c412c, ip_allocation=immediate, mac_address=fa:16:3e:cf:00:44, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:20:36Z, description=, dns_domain=, id=610fcf3f-6e70-4d5d-9d9e-df794ff4196d, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-1607502893, port_security_enabled=True, project_id=24086b701d6b4d4081d2e63578d18d24, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=60833, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2475, status=ACTIVE, subnets=['a0f9db6e-5853-44b4-b7cb-c956203cda7f'], tags=[], tenant_id=24086b701d6b4d4081d2e63578d18d24, updated_at=2025-12-06T10:20:38Z, vlan_transparent=None, network_id=610fcf3f-6e70-4d5d-9d9e-df794ff4196d, port_security_enabled=False, project_id=24086b701d6b4d4081d2e63578d18d24, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2493, status=DOWN, tags=[], tenant_id=24086b701d6b4d4081d2e63578d18d24, updated_at=2025-12-06T10:20:39Z on network 610fcf3f-6e70-4d5d-9d9e-df794ff4196d#033[00m Dec 6 05:20:41 localhost nova_compute[282193]: 2025-12-06 10:20:41.531 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:20:41 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:41.534 263652 INFO neutron.agent.dhcp.agent [None req-172e93d5-4bdf-47d5-9be3-3ec1864cf020 - - - - - -] DHCP configuration for ports {'36e3a010-4f1c-470f-b642-2ad82f1c412c'} is completed#033[00m Dec 6 05:20:41 localhost dnsmasq[329722]: read 
/var/lib/neutron/dhcp/610fcf3f-6e70-4d5d-9d9e-df794ff4196d/addn_hosts - 1 addresses Dec 6 05:20:41 localhost dnsmasq-dhcp[329722]: read /var/lib/neutron/dhcp/610fcf3f-6e70-4d5d-9d9e-df794ff4196d/host Dec 6 05:20:41 localhost dnsmasq-dhcp[329722]: read /var/lib/neutron/dhcp/610fcf3f-6e70-4d5d-9d9e-df794ff4196d/opts Dec 6 05:20:41 localhost podman[329781]: 2025-12-06 10:20:41.631068456 +0000 UTC m=+0.061753799 container kill 195bb5c97a1927fb0566a11eeb513a05a2f050ad98f5e7dc5af8ef23feb91058 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-610fcf3f-6e70-4d5d-9d9e-df794ff4196d, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:20:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e. Dec 6 05:20:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094. 
Dec 6 05:20:41 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:41.891 263652 INFO neutron.agent.dhcp.agent [None req-70a6017e-0197-46bf-b006-34c5c8ec7b9b - - - - - -] DHCP configuration for ports {'36e3a010-4f1c-470f-b642-2ad82f1c412c'} is completed#033[00m Dec 6 05:20:41 localhost podman[329802]: 2025-12-06 10:20:41.932866132 +0000 UTC m=+0.086539477 container health_status bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, 
org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute) Dec 6 05:20:41 localhost podman[329802]: 2025-12-06 10:20:41.945174438 +0000 UTC m=+0.098847793 container exec_died bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2) Dec 6 05:20:41 localhost 
systemd[1]: bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094.service: Deactivated successfully. Dec 6 05:20:42 localhost podman[329801]: 2025-12-06 10:20:42.035608783 +0000 UTC m=+0.193193137 container health_status 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, architecture=x86_64, build-date=2025-08-20T13:12:41, release=1755695350, vendor=Red Hat, Inc., io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': 
['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, version=9.6, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.) Dec 6 05:20:42 localhost podman[329801]: 2025-12-06 10:20:42.054402447 +0000 UTC m=+0.211986801 container exec_died 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, io.openshift.expose-services=, managed_by=edpm_ansible, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., container_name=openstack_network_exporter, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.) Dec 6 05:20:42 localhost systemd[1]: 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e.service: Deactivated successfully. 
Dec 6 05:20:42 localhost neutron_sriov_agent[256690]: 2025-12-06 10:20:42.095 2 INFO neutron.agent.securitygroups_rpc [None req-739af509-ab08-45e2-ba83-57dd6efc5660 95268a68b5c84162ba789100555874fb 7787060a7af94f168805e73d06841337 - - default default] Security group rule updated ['6ae4fdb3-8bab-4aac-9ae7-1f521287092b']#033[00m Dec 6 05:20:42 localhost nova_compute[282193]: 2025-12-06 10:20:42.183 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:20:43 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:20:43 localhost nova_compute[282193]: 2025-12-06 10:20:43.559 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:20:43 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:43.596 263652 INFO neutron.agent.linux.ip_lib [None req-8d1fb88d-36ee-4e82-9ff5-b0c494330ddd - - - - - -] Device tapb26e75de-36 cannot be used as it has no MAC address#033[00m Dec 6 05:20:43 localhost nova_compute[282193]: 2025-12-06 10:20:43.612 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:20:43 localhost kernel: device tapb26e75de-36 entered promiscuous mode Dec 6 05:20:43 localhost NetworkManager[5973]: [1765016443.6189] manager: (tapb26e75de-36): new Generic device (/org/freedesktop/NetworkManager/Devices/71) Dec 6 05:20:43 localhost nova_compute[282193]: 2025-12-06 10:20:43.619 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:20:43 localhost systemd-udevd[329848]: 
Network interface NamePolicy= disabled on kernel command line. Dec 6 05:20:43 localhost ovn_controller[154851]: 2025-12-06T10:20:43Z|00426|binding|INFO|Claiming lport b26e75de-365d-482e-b28d-740529191fa4 for this chassis. Dec 6 05:20:43 localhost ovn_controller[154851]: 2025-12-06T10:20:43Z|00427|binding|INFO|b26e75de-365d-482e-b28d-740529191fa4: Claiming unknown Dec 6 05:20:43 localhost ovn_metadata_agent[160504]: 2025-12-06 10:20:43.634 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:1::2/64', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-f06eaa72-d4d5-4c14-80ef-691411a95b29', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f06eaa72-d4d5-4c14-80ef-691411a95b29', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '24086b701d6b4d4081d2e63578d18d24', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1ae8e12d-32c2-4a99-98bb-39bcacc37749, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=b26e75de-365d-482e-b28d-740529191fa4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:20:43 localhost ovn_metadata_agent[160504]: 2025-12-06 10:20:43.635 160509 INFO neutron.agent.ovn.metadata.agent [-] Port b26e75de-365d-482e-b28d-740529191fa4 in datapath 
f06eaa72-d4d5-4c14-80ef-691411a95b29 bound to our chassis#033[00m Dec 6 05:20:43 localhost ovn_metadata_agent[160504]: 2025-12-06 10:20:43.636 160509 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network f06eaa72-d4d5-4c14-80ef-691411a95b29 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 6 05:20:43 localhost ovn_metadata_agent[160504]: 2025-12-06 10:20:43.637 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[91b4cb0f-28f4-4e6f-bfd7-b8042a8d84c6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:20:43 localhost journal[230404]: ethtool ioctl error on tapb26e75de-36: No such device Dec 6 05:20:43 localhost journal[230404]: ethtool ioctl error on tapb26e75de-36: No such device Dec 6 05:20:43 localhost ovn_controller[154851]: 2025-12-06T10:20:43Z|00428|binding|INFO|Setting lport b26e75de-365d-482e-b28d-740529191fa4 ovn-installed in OVS Dec 6 05:20:43 localhost ovn_controller[154851]: 2025-12-06T10:20:43Z|00429|binding|INFO|Setting lport b26e75de-365d-482e-b28d-740529191fa4 up in Southbound Dec 6 05:20:43 localhost nova_compute[282193]: 2025-12-06 10:20:43.651 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:20:43 localhost journal[230404]: ethtool ioctl error on tapb26e75de-36: No such device Dec 6 05:20:43 localhost journal[230404]: ethtool ioctl error on tapb26e75de-36: No such device Dec 6 05:20:43 localhost journal[230404]: ethtool ioctl error on tapb26e75de-36: No such device Dec 6 05:20:43 localhost journal[230404]: ethtool ioctl error on tapb26e75de-36: No such device Dec 6 05:20:43 localhost journal[230404]: ethtool ioctl error on tapb26e75de-36: No such device Dec 6 05:20:43 localhost journal[230404]: ethtool ioctl error on tapb26e75de-36: No such 
device Dec 6 05:20:43 localhost nova_compute[282193]: 2025-12-06 10:20:43.683 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:20:43 localhost nova_compute[282193]: 2025-12-06 10:20:43.705 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:20:44 localhost nova_compute[282193]: 2025-12-06 10:20:44.182 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:20:44 localhost podman[329919]: Dec 6 05:20:44 localhost podman[329919]: 2025-12-06 10:20:44.540420099 +0000 UTC m=+0.093985184 container create d97fd00f3e78882d783a796607d4bd6632cdb7dd83587cae5671ab11021d365f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f06eaa72-d4d5-4c14-80ef-691411a95b29, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true) Dec 6 05:20:44 localhost systemd[1]: Started libpod-conmon-d97fd00f3e78882d783a796607d4bd6632cdb7dd83587cae5671ab11021d365f.scope. Dec 6 05:20:44 localhost podman[329919]: 2025-12-06 10:20:44.493303389 +0000 UTC m=+0.046868504 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 6 05:20:44 localhost systemd[1]: Started libcrun container. 
Dec 6 05:20:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5a61fe99463007b75615de9ea3a382eb9be1c48dd22fe90e27b8f895cfb3e13/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 6 05:20:44 localhost podman[329919]: 2025-12-06 10:20:44.624296983 +0000 UTC m=+0.177862058 container init d97fd00f3e78882d783a796607d4bd6632cdb7dd83587cae5671ab11021d365f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f06eaa72-d4d5-4c14-80ef-691411a95b29, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 6 05:20:44 localhost podman[329919]: 2025-12-06 10:20:44.636585438 +0000 UTC m=+0.190150523 container start d97fd00f3e78882d783a796607d4bd6632cdb7dd83587cae5671ab11021d365f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f06eaa72-d4d5-4c14-80ef-691411a95b29, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 6 05:20:44 localhost dnsmasq[329937]: started, version 2.85 cachesize 150
Dec 6 05:20:44 localhost dnsmasq[329937]: DNS service limited to local subnets
Dec 6 05:20:44 localhost dnsmasq[329937]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 6 05:20:44 localhost dnsmasq[329937]: warning: no upstream servers configured
Dec 6 05:20:44 localhost dnsmasq-dhcp[329937]: DHCPv6, static leases only on 2001:db8:1::, lease time 1d
Dec 6 05:20:44 localhost dnsmasq[329937]: read /var/lib/neutron/dhcp/f06eaa72-d4d5-4c14-80ef-691411a95b29/addn_hosts - 0 addresses
Dec 6 05:20:44 localhost dnsmasq-dhcp[329937]: read /var/lib/neutron/dhcp/f06eaa72-d4d5-4c14-80ef-691411a95b29/host
Dec 6 05:20:44 localhost dnsmasq-dhcp[329937]: read /var/lib/neutron/dhcp/f06eaa72-d4d5-4c14-80ef-691411a95b29/opts
Dec 6 05:20:44 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:44.698 263652 INFO neutron.agent.dhcp.agent [None req-8d1fb88d-36ee-4e82-9ff5-b0c494330ddd - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:20:43Z, description=, device_id=b4262675-0888-4a13-bb89-bb38cc732c6a, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=2567678a-da62-49ba-a844-4eedfb76ec89, ip_allocation=immediate, mac_address=fa:16:3e:bd:71:69, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:20:41Z, description=, dns_domain=, id=f06eaa72-d4d5-4c14-80ef-691411a95b29, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-147281999, port_security_enabled=True, project_id=24086b701d6b4d4081d2e63578d18d24, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=2451, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2505, status=ACTIVE, subnets=['57e1c6ad-8a82-4f37-80d2-afc0882db070'], tags=[], tenant_id=24086b701d6b4d4081d2e63578d18d24, updated_at=2025-12-06T10:20:42Z, vlan_transparent=None, network_id=f06eaa72-d4d5-4c14-80ef-691411a95b29, port_security_enabled=False, project_id=24086b701d6b4d4081d2e63578d18d24, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2516, status=DOWN, tags=[], tenant_id=24086b701d6b4d4081d2e63578d18d24, updated_at=2025-12-06T10:20:43Z on network f06eaa72-d4d5-4c14-80ef-691411a95b29#033[00m
Dec 6 05:20:44 localhost nova_compute[282193]: 2025-12-06 10:20:44.814 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 6 05:20:44 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:44.837 263652 INFO neutron.agent.dhcp.agent [None req-8bbf9abd-4ac2-4e39-89bb-9ec51351c174 - - - - - -] DHCP configuration for ports {'60798a37-6f91-427b-9ed4-71f6e72da734'} is completed#033[00m
Dec 6 05:20:44 localhost dnsmasq[329937]: read /var/lib/neutron/dhcp/f06eaa72-d4d5-4c14-80ef-691411a95b29/addn_hosts - 1 addresses
Dec 6 05:20:44 localhost dnsmasq-dhcp[329937]: read /var/lib/neutron/dhcp/f06eaa72-d4d5-4c14-80ef-691411a95b29/host
Dec 6 05:20:44 localhost dnsmasq-dhcp[329937]: read /var/lib/neutron/dhcp/f06eaa72-d4d5-4c14-80ef-691411a95b29/opts
Dec 6 05:20:44 localhost podman[329955]: 2025-12-06 10:20:44.927907844 +0000 UTC m=+0.066435962 container kill d97fd00f3e78882d783a796607d4bd6632cdb7dd83587cae5671ab11021d365f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f06eaa72-d4d5-4c14-80ef-691411a95b29, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 6 05:20:45 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:45.071 263652 INFO neutron.agent.dhcp.agent [None req-8d1fb88d-36ee-4e82-9ff5-b0c494330ddd - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:20:43Z, description=, device_id=b4262675-0888-4a13-bb89-bb38cc732c6a, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=2567678a-da62-49ba-a844-4eedfb76ec89, ip_allocation=immediate, mac_address=fa:16:3e:bd:71:69, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:20:41Z, description=, dns_domain=, id=f06eaa72-d4d5-4c14-80ef-691411a95b29, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-147281999, port_security_enabled=True, project_id=24086b701d6b4d4081d2e63578d18d24, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=2451, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2505, status=ACTIVE, subnets=['57e1c6ad-8a82-4f37-80d2-afc0882db070'], tags=[], tenant_id=24086b701d6b4d4081d2e63578d18d24, updated_at=2025-12-06T10:20:42Z, vlan_transparent=None, network_id=f06eaa72-d4d5-4c14-80ef-691411a95b29, port_security_enabled=False, project_id=24086b701d6b4d4081d2e63578d18d24, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2516, status=DOWN, tags=[], tenant_id=24086b701d6b4d4081d2e63578d18d24, updated_at=2025-12-06T10:20:43Z on network f06eaa72-d4d5-4c14-80ef-691411a95b29#033[00m
Dec 6 05:20:45 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:45.191 263652 INFO neutron.agent.dhcp.agent [None req-3617f720-88e4-4f19-9a4a-13fdec68d3e2 - - - - - -] DHCP configuration for ports {'2567678a-da62-49ba-a844-4eedfb76ec89'} is completed#033[00m
Dec 6 05:20:45 localhost dnsmasq[329937]: read /var/lib/neutron/dhcp/f06eaa72-d4d5-4c14-80ef-691411a95b29/addn_hosts - 1 addresses
Dec 6 05:20:45 localhost dnsmasq-dhcp[329937]: read /var/lib/neutron/dhcp/f06eaa72-d4d5-4c14-80ef-691411a95b29/host
Dec 6 05:20:45 localhost podman[329995]: 2025-12-06 10:20:45.266277717 +0000 UTC m=+0.056910270 container kill d97fd00f3e78882d783a796607d4bd6632cdb7dd83587cae5671ab11021d365f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f06eaa72-d4d5-4c14-80ef-691411a95b29, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 6 05:20:45 localhost dnsmasq-dhcp[329937]: read /var/lib/neutron/dhcp/f06eaa72-d4d5-4c14-80ef-691411a95b29/opts
Dec 6 05:20:45 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:45.534 263652 INFO neutron.agent.dhcp.agent [None req-649b9c8f-2faf-49e9-bee9-80e451b0f7af - - - - - -] DHCP configuration for ports {'2567678a-da62-49ba-a844-4eedfb76ec89'} is completed#033[00m
Dec 6 05:20:45 localhost systemd[1]: tmp-crun.weHV3K.mount: Deactivated successfully.
Dec 6 05:20:46 localhost nova_compute[282193]: 2025-12-06 10:20:46.182 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 6 05:20:46 localhost openstack_network_exporter[243110]: ERROR 10:20:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 6 05:20:46 localhost openstack_network_exporter[243110]: ERROR 10:20:46 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 6 05:20:46 localhost openstack_network_exporter[243110]: ERROR 10:20:46 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 6 05:20:46 localhost openstack_network_exporter[243110]:
Dec 6 05:20:46 localhost openstack_network_exporter[243110]: ERROR 10:20:46 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 6 05:20:46 localhost openstack_network_exporter[243110]:
Dec 6 05:20:46 localhost openstack_network_exporter[243110]: ERROR 10:20:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 6 05:20:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6.
Dec 6 05:20:46 localhost systemd[1]: tmp-crun.bGvbUQ.mount: Deactivated successfully.
Dec 6 05:20:46 localhost podman[330016]: 2025-12-06 10:20:46.943033081 +0000 UTC m=+0.099807032 container health_status 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=multipathd, config_id=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3)
Dec 6 05:20:46 localhost podman[330016]: 2025-12-06 10:20:46.959406091 +0000 UTC m=+0.116180032 container exec_died 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec 6 05:20:46 localhost systemd[1]: 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6.service: Deactivated successfully.
Dec 6 05:20:47 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:47.251 263652 INFO neutron.agent.linux.ip_lib [None req-a7aa87c8-1410-485c-8c21-9683e299cf3f - - - - - -] Device tap4064fcde-48 cannot be used as it has no MAC address#033[00m
Dec 6 05:20:47 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:47.264 263652 INFO neutron.agent.linux.ip_lib [None req-2a790702-dd02-43f1-82b0-2834ceb4572d - - - - - -] Device tap879bec11-b0 cannot be used as it has no MAC address#033[00m
Dec 6 05:20:47 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e179 e179: 6 total, 6 up, 6 in
Dec 6 05:20:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:20:47.338 160509 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 6 05:20:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:20:47.339 160509 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 6 05:20:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:20:47.339 160509 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 6 05:20:47 localhost nova_compute[282193]: 2025-12-06 10:20:47.340 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 6 05:20:47 localhost kernel: device tap4064fcde-48 entered promiscuous mode
Dec 6 05:20:47 localhost NetworkManager[5973]: [1765016447.3481] manager: (tap4064fcde-48): new Generic device (/org/freedesktop/NetworkManager/Devices/72)
Dec 6 05:20:47 localhost nova_compute[282193]: 2025-12-06 10:20:47.349 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 6 05:20:47 localhost systemd-udevd[330053]: Network interface NamePolicy= disabled on kernel command line.
Dec 6 05:20:47 localhost ovn_controller[154851]: 2025-12-06T10:20:47Z|00430|binding|INFO|Claiming lport 4064fcde-485d-4c6f-a000-947ac03218a2 for this chassis.
Dec 6 05:20:47 localhost ovn_controller[154851]: 2025-12-06T10:20:47Z|00431|binding|INFO|4064fcde-485d-4c6f-a000-947ac03218a2: Claiming unknown
Dec 6 05:20:47 localhost nova_compute[282193]: 2025-12-06 10:20:47.353 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 6 05:20:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:20:47.362 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:2::2/64', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-f30bcf50-145e-4db3-b0dc-90655a633fb3', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f30bcf50-145e-4db3-b0dc-90655a633fb3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '24086b701d6b4d4081d2e63578d18d24', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=db91d3ff-70d4-4a4c-b9c4-2f176cf1a088, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=4064fcde-485d-4c6f-a000-947ac03218a2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 6 05:20:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:20:47.366 160509 INFO neutron.agent.ovn.metadata.agent [-] Port 4064fcde-485d-4c6f-a000-947ac03218a2 in datapath f30bcf50-145e-4db3-b0dc-90655a633fb3 bound to our chassis#033[00m
Dec 6 05:20:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:20:47.368 160509 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network f30bcf50-145e-4db3-b0dc-90655a633fb3 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Dec 6 05:20:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:20:47.369 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[81ff2373-92e0-4671-8151-1725cb8ca154]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 6 05:20:47 localhost journal[230404]: ethtool ioctl error on tap4064fcde-48: No such device
Dec 6 05:20:47 localhost kernel: device tap879bec11-b0 entered promiscuous mode
Dec 6 05:20:47 localhost journal[230404]: ethtool ioctl error on tap4064fcde-48: No such device
Dec 6 05:20:47 localhost NetworkManager[5973]: [1765016447.3887] manager: (tap879bec11-b0): new Generic device (/org/freedesktop/NetworkManager/Devices/73)
Dec 6 05:20:47 localhost ovn_controller[154851]: 2025-12-06T10:20:47Z|00432|binding|INFO|Setting lport 4064fcde-485d-4c6f-a000-947ac03218a2 ovn-installed in OVS
Dec 6 05:20:47 localhost ovn_controller[154851]: 2025-12-06T10:20:47Z|00433|binding|INFO|Setting lport 4064fcde-485d-4c6f-a000-947ac03218a2 up in Southbound
Dec 6 05:20:47 localhost ovn_controller[154851]: 2025-12-06T10:20:47Z|00434|if_status|INFO|Not updating pb chassis for 879bec11-b088-469d-aa6e-3244ad4a6eaa now as sb is readonly
Dec 6 05:20:47 localhost nova_compute[282193]: 2025-12-06 10:20:47.390 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 6 05:20:47 localhost journal[230404]: ethtool ioctl error on tap4064fcde-48: No such device
Dec 6 05:20:47 localhost nova_compute[282193]: 2025-12-06 10:20:47.395 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 6 05:20:47 localhost ovn_controller[154851]: 2025-12-06T10:20:47Z|00435|binding|INFO|Claiming lport 879bec11-b088-469d-aa6e-3244ad4a6eaa for this chassis.
Dec 6 05:20:47 localhost ovn_controller[154851]: 2025-12-06T10:20:47Z|00436|binding|INFO|879bec11-b088-469d-aa6e-3244ad4a6eaa: Claiming unknown
Dec 6 05:20:47 localhost ovn_controller[154851]: 2025-12-06T10:20:47Z|00437|binding|INFO|Releasing lport 4fb81ffd-e198-4628-9bd0-0c0f0c89c33a from this chassis (sb_readonly=0)
Dec 6 05:20:47 localhost journal[230404]: ethtool ioctl error on tap4064fcde-48: No such device
Dec 6 05:20:47 localhost journal[230404]: ethtool ioctl error on tap4064fcde-48: No such device
Dec 6 05:20:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:20:47.412 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-f7147ce7-da0c-41ba-a4e4-4f73649998e1', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f7147ce7-da0c-41ba-a4e4-4f73649998e1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fa76bcfc789b4e53acf344cd0b1cd7c5', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a7af776d-5c3a-4693-aedf-d81ded3ff511, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=879bec11-b088-469d-aa6e-3244ad4a6eaa) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 6 05:20:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:20:47.414 160509 INFO neutron.agent.ovn.metadata.agent [-] Port 879bec11-b088-469d-aa6e-3244ad4a6eaa in datapath f7147ce7-da0c-41ba-a4e4-4f73649998e1 bound to our chassis#033[00m
Dec 6 05:20:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:20:47.416 160509 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network f7147ce7-da0c-41ba-a4e4-4f73649998e1 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Dec 6 05:20:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:20:47.416 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[9914e9b1-4ca1-4877-966b-c05554c03825]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 6 05:20:47 localhost ovn_controller[154851]: 2025-12-06T10:20:47Z|00438|binding|INFO|Setting lport 879bec11-b088-469d-aa6e-3244ad4a6eaa ovn-installed in OVS
Dec 6 05:20:47 localhost ovn_controller[154851]: 2025-12-06T10:20:47Z|00439|binding|INFO|Setting lport 879bec11-b088-469d-aa6e-3244ad4a6eaa up in Southbound
Dec 6 05:20:47 localhost nova_compute[282193]: 2025-12-06 10:20:47.422 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 6 05:20:47 localhost journal[230404]: ethtool ioctl error on tap4064fcde-48: No such device
Dec 6 05:20:47 localhost journal[230404]: ethtool ioctl error on tap4064fcde-48: No such device
Dec 6 05:20:47 localhost nova_compute[282193]: 2025-12-06 10:20:47.432 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 6 05:20:47 localhost journal[230404]: ethtool ioctl error on tap4064fcde-48: No such device
Dec 6 05:20:47 localhost nova_compute[282193]: 2025-12-06 10:20:47.436 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 6 05:20:47 localhost nova_compute[282193]: 2025-12-06 10:20:47.469 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 6 05:20:47 localhost nova_compute[282193]: 2025-12-06 10:20:47.476 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 6 05:20:47 localhost nova_compute[282193]: 2025-12-06 10:20:47.509 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 6 05:20:48 localhost nova_compute[282193]: 2025-12-06 10:20:48.177 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 6 05:20:48 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 6 05:20:48 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:48.339 263652 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Dec 6 05:20:48 localhost podman[330161]:
Dec 6 05:20:48 localhost podman[330161]: 2025-12-06 10:20:48.513952471 +0000 UTC m=+0.164904071 container create ef8cce1d55af1c1fc54e83e6459f29a59a45ba66244ff2947d8423eb55d75c4a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f30bcf50-145e-4db3-b0dc-90655a633fb3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 6 05:20:48 localhost podman[330161]: 2025-12-06 10:20:48.437049889 +0000 UTC m=+0.088001469 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 6 05:20:48 localhost podman[330191]:
Dec 6 05:20:48 localhost podman[330191]: 2025-12-06 10:20:48.55871784 +0000 UTC m=+0.066976449 container create 8789c720cf321fea4f8f7abd3ba6824d0b89d586574bc84c29a33fbaac0101b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f7147ce7-da0c-41ba-a4e4-4f73649998e1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Dec 6 05:20:48 localhost systemd[1]: Started libpod-conmon-ef8cce1d55af1c1fc54e83e6459f29a59a45ba66244ff2947d8423eb55d75c4a.scope.
Dec 6 05:20:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538.
Dec 6 05:20:48 localhost nova_compute[282193]: 2025-12-06 10:20:48.596 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 6 05:20:48 localhost systemd[1]: Started libpod-conmon-8789c720cf321fea4f8f7abd3ba6824d0b89d586574bc84c29a33fbaac0101b3.scope.
Dec 6 05:20:48 localhost systemd[1]: Started libcrun container.
Dec 6 05:20:48 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57032ec42d0979663c7485c872a98c14b639222c748bbc12b89800ff18f850d9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 6 05:20:48 localhost systemd[1]: Started libcrun container.
Dec 6 05:20:48 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f9d7a870fbdefd40cdca250115d626184dfc997615aca80df8a4bcabb41e6a0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 6 05:20:48 localhost podman[330191]: 2025-12-06 10:20:48.52732988 +0000 UTC m=+0.035588499 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 6 05:20:48 localhost podman[330161]: 2025-12-06 10:20:48.62937525 +0000 UTC m=+0.280326870 container init ef8cce1d55af1c1fc54e83e6459f29a59a45ba66244ff2947d8423eb55d75c4a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f30bcf50-145e-4db3-b0dc-90655a633fb3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec 6 05:20:48 localhost podman[330161]: 2025-12-06 10:20:48.636326912 +0000 UTC m=+0.287278532 container start ef8cce1d55af1c1fc54e83e6459f29a59a45ba66244ff2947d8423eb55d75c4a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f30bcf50-145e-4db3-b0dc-90655a633fb3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 6 05:20:48 localhost dnsmasq[330223]: started, version 2.85 cachesize 150
Dec 6 05:20:48 localhost dnsmasq[330223]: DNS service limited to local subnets
Dec 6 05:20:48 localhost dnsmasq[330223]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 6 05:20:48 localhost dnsmasq[330223]: warning: no upstream servers configured
Dec 6 05:20:48 localhost dnsmasq-dhcp[330223]: DHCPv6, static leases only on 2001:db8:2::, lease time 1d
Dec 6 05:20:48 localhost dnsmasq[330223]: read /var/lib/neutron/dhcp/f30bcf50-145e-4db3-b0dc-90655a633fb3/addn_hosts - 0 addresses
Dec 6 05:20:48 localhost dnsmasq-dhcp[330223]: read /var/lib/neutron/dhcp/f30bcf50-145e-4db3-b0dc-90655a633fb3/host
Dec 6 05:20:48 localhost dnsmasq-dhcp[330223]: read /var/lib/neutron/dhcp/f30bcf50-145e-4db3-b0dc-90655a633fb3/opts
Dec 6 05:20:48 localhost podman[330191]: 2025-12-06 10:20:48.680944166 +0000 UTC m=+0.189202785 container init 8789c720cf321fea4f8f7abd3ba6824d0b89d586574bc84c29a33fbaac0101b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f7147ce7-da0c-41ba-a4e4-4f73649998e1, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 6 05:20:48 localhost podman[330208]: 2025-12-06 10:20:48.685190476 +0000 UTC m=+0.080264014 container health_status d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible)
Dec 6 05:20:48 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:48.690 263652 INFO neutron.agent.dhcp.agent [None req-a7aa87c8-1410-485c-8c21-9683e299cf3f - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:20:46Z, description=, device_id=b4262675-0888-4a13-bb89-bb38cc732c6a, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=a886cd7f-2a4b-48cf-a81c-6f1b5f31d906, ip_allocation=immediate, mac_address=fa:16:3e:2b:09:0d, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:20:45Z, description=, dns_domain=, id=f30bcf50-145e-4db3-b0dc-90655a633fb3, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-2123937341, port_security_enabled=True, project_id=24086b701d6b4d4081d2e63578d18d24, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=34481, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2528, status=ACTIVE, subnets=['62e86bb9-9b29-497a-b55d-2943cb7f321e'], tags=[], tenant_id=24086b701d6b4d4081d2e63578d18d24, updated_at=2025-12-06T10:20:46Z, vlan_transparent=None, network_id=f30bcf50-145e-4db3-b0dc-90655a633fb3, port_security_enabled=False, project_id=24086b701d6b4d4081d2e63578d18d24, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2536, status=DOWN, tags=[], tenant_id=24086b701d6b4d4081d2e63578d18d24, updated_at=2025-12-06T10:20:46Z on network f30bcf50-145e-4db3-b0dc-90655a633fb3#033[00m
Dec 6 05:20:48 localhost podman[330191]: 2025-12-06 10:20:48.692405176 +0000 UTC m=+0.200663785 container start 8789c720cf321fea4f8f7abd3ba6824d0b89d586574bc84c29a33fbaac0101b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f7147ce7-da0c-41ba-a4e4-4f73649998e1, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, io.buildah.version=1.41.3)
Dec 6 05:20:48 localhost dnsmasq[330236]: started, version 2.85 cachesize 150
Dec 6 05:20:48 localhost dnsmasq[330236]: DNS service limited to local subnets
Dec 6 05:20:48 localhost dnsmasq[330236]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 6 05:20:48 localhost dnsmasq[330236]: warning: no upstream servers configured
Dec 6 05:20:48 localhost dnsmasq-dhcp[330236]: DHCP, static leases only on 10.100.0.0, lease time 1d
Dec 6 05:20:48 localhost dnsmasq[330236]: read /var/lib/neutron/dhcp/f7147ce7-da0c-41ba-a4e4-4f73649998e1/addn_hosts - 0 addresses
Dec 6 05:20:48 localhost dnsmasq-dhcp[330236]: read /var/lib/neutron/dhcp/f7147ce7-da0c-41ba-a4e4-4f73649998e1/host
Dec 6 05:20:48 localhost dnsmasq-dhcp[330236]: read /var/lib/neutron/dhcp/f7147ce7-da0c-41ba-a4e4-4f73649998e1/opts
Dec 6 05:20:48 localhost podman[330208]: 2025-12-06 10:20:48.701127933 +0000 UTC m=+0.096201461 container exec_died d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone',
'--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Dec 6 05:20:48 localhost systemd[1]: d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538.service: Deactivated successfully. Dec 6 05:20:48 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:48.826 263652 INFO neutron.agent.dhcp.agent [None req-cbfa8911-e9bf-4b29-b644-58576e753796 - - - - - -] DHCP configuration for ports {'c1c8c312-4919-48a9-9e6a-6fe48d28b732', '7e8acf71-084a-4d21-b509-2ae5cdca884c'} is completed#033[00m Dec 6 05:20:48 localhost dnsmasq[330223]: read /var/lib/neutron/dhcp/f30bcf50-145e-4db3-b0dc-90655a633fb3/addn_hosts - 1 addresses Dec 6 05:20:48 localhost dnsmasq-dhcp[330223]: read /var/lib/neutron/dhcp/f30bcf50-145e-4db3-b0dc-90655a633fb3/host Dec 6 05:20:48 localhost dnsmasq-dhcp[330223]: read /var/lib/neutron/dhcp/f30bcf50-145e-4db3-b0dc-90655a633fb3/opts Dec 6 05:20:48 localhost podman[330256]: 2025-12-06 10:20:48.841405231 +0000 UTC m=+0.037435285 container kill ef8cce1d55af1c1fc54e83e6459f29a59a45ba66244ff2947d8423eb55d75c4a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f30bcf50-145e-4db3-b0dc-90655a633fb3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, 
io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true) Dec 6 05:20:48 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:48.962 263652 INFO neutron.agent.dhcp.agent [None req-a7aa87c8-1410-485c-8c21-9683e299cf3f - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:20:46Z, description=, device_id=b4262675-0888-4a13-bb89-bb38cc732c6a, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=a886cd7f-2a4b-48cf-a81c-6f1b5f31d906, ip_allocation=immediate, mac_address=fa:16:3e:2b:09:0d, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:20:45Z, description=, dns_domain=, id=f30bcf50-145e-4db3-b0dc-90655a633fb3, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-2123937341, port_security_enabled=True, project_id=24086b701d6b4d4081d2e63578d18d24, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=34481, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2528, status=ACTIVE, subnets=['62e86bb9-9b29-497a-b55d-2943cb7f321e'], tags=[], tenant_id=24086b701d6b4d4081d2e63578d18d24, updated_at=2025-12-06T10:20:46Z, vlan_transparent=None, network_id=f30bcf50-145e-4db3-b0dc-90655a633fb3, port_security_enabled=False, project_id=24086b701d6b4d4081d2e63578d18d24, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2536, status=DOWN, tags=[], tenant_id=24086b701d6b4d4081d2e63578d18d24, updated_at=2025-12-06T10:20:46Z on network f30bcf50-145e-4db3-b0dc-90655a633fb3#033[00m Dec 6 05:20:49 localhost neutron_dhcp_agent[263648]: 
2025-12-06 10:20:49.086 263652 INFO neutron.agent.dhcp.agent [None req-f523fe2a-b847-425f-b8e6-d5718fd58313 - - - - - -] DHCP configuration for ports {'a886cd7f-2a4b-48cf-a81c-6f1b5f31d906'} is completed#033[00m Dec 6 05:20:49 localhost dnsmasq[330223]: read /var/lib/neutron/dhcp/f30bcf50-145e-4db3-b0dc-90655a633fb3/addn_hosts - 1 addresses Dec 6 05:20:49 localhost dnsmasq-dhcp[330223]: read /var/lib/neutron/dhcp/f30bcf50-145e-4db3-b0dc-90655a633fb3/host Dec 6 05:20:49 localhost dnsmasq-dhcp[330223]: read /var/lib/neutron/dhcp/f30bcf50-145e-4db3-b0dc-90655a633fb3/opts Dec 6 05:20:49 localhost podman[330297]: 2025-12-06 10:20:49.088334779 +0000 UTC m=+0.040424016 container kill ef8cce1d55af1c1fc54e83e6459f29a59a45ba66244ff2947d8423eb55d75c4a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f30bcf50-145e-4db3-b0dc-90655a633fb3, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:20:49 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:49.333 263652 INFO neutron.agent.dhcp.agent [None req-44a13889-df8f-4b3b-938a-00ed0b900ce8 - - - - - -] DHCP configuration for ports {'a886cd7f-2a4b-48cf-a81c-6f1b5f31d906'} is completed#033[00m Dec 6 05:20:49 localhost kernel: device tap879bec11-b0 left promiscuous mode Dec 6 05:20:49 localhost nova_compute[282193]: 2025-12-06 10:20:49.353 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:20:49 localhost ovn_controller[154851]: 2025-12-06T10:20:49Z|00440|binding|INFO|Releasing lport 879bec11-b088-469d-aa6e-3244ad4a6eaa from this chassis (sb_readonly=0) Dec 6 05:20:49 localhost 
ovn_controller[154851]: 2025-12-06T10:20:49Z|00441|binding|INFO|Setting lport 879bec11-b088-469d-aa6e-3244ad4a6eaa down in Southbound Dec 6 05:20:49 localhost ovn_metadata_agent[160504]: 2025-12-06 10:20:49.362 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-f7147ce7-da0c-41ba-a4e4-4f73649998e1', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f7147ce7-da0c-41ba-a4e4-4f73649998e1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fa76bcfc789b4e53acf344cd0b1cd7c5', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a7af776d-5c3a-4693-aedf-d81ded3ff511, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=879bec11-b088-469d-aa6e-3244ad4a6eaa) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:20:49 localhost ovn_metadata_agent[160504]: 2025-12-06 10:20:49.364 160509 INFO neutron.agent.ovn.metadata.agent [-] Port 879bec11-b088-469d-aa6e-3244ad4a6eaa in datapath f7147ce7-da0c-41ba-a4e4-4f73649998e1 unbound from our chassis#033[00m Dec 6 05:20:49 localhost ovn_metadata_agent[160504]: 2025-12-06 10:20:49.368 160509 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 
f7147ce7-da0c-41ba-a4e4-4f73649998e1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 6 05:20:49 localhost ovn_metadata_agent[160504]: 2025-12-06 10:20:49.369 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[5e10df99-f3f0-4b59-95b6-86e8dda150b6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:20:49 localhost nova_compute[282193]: 2025-12-06 10:20:49.372 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:20:49 localhost nova_compute[282193]: 2025-12-06 10:20:49.852 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:20:50 localhost dnsmasq[330236]: read /var/lib/neutron/dhcp/f7147ce7-da0c-41ba-a4e4-4f73649998e1/addn_hosts - 0 addresses Dec 6 05:20:50 localhost podman[330339]: 2025-12-06 10:20:50.621850926 +0000 UTC m=+0.060345826 container kill 8789c720cf321fea4f8f7abd3ba6824d0b89d586574bc84c29a33fbaac0101b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f7147ce7-da0c-41ba-a4e4-4f73649998e1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125) Dec 6 05:20:50 localhost dnsmasq-dhcp[330236]: read /var/lib/neutron/dhcp/f7147ce7-da0c-41ba-a4e4-4f73649998e1/host Dec 6 05:20:50 localhost dnsmasq-dhcp[330236]: read /var/lib/neutron/dhcp/f7147ce7-da0c-41ba-a4e4-4f73649998e1/opts Dec 6 05:20:50 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:50.646 263652 ERROR neutron.agent.dhcp.agent [None 
req-eaf9eb16-a44c-4397-80fe-f0265a5d2f08 - - - - - -] Unable to reload_allocations dhcp for f7147ce7-da0c-41ba-a4e4-4f73649998e1.: neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tap879bec11-b0 not found in namespace qdhcp-f7147ce7-da0c-41ba-a4e4-4f73649998e1. Dec 6 05:20:50 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:50.646 263652 ERROR neutron.agent.dhcp.agent Traceback (most recent call last): Dec 6 05:20:50 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:50.646 263652 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/dhcp/agent.py", line 264, in _call_driver Dec 6 05:20:50 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:50.646 263652 ERROR neutron.agent.dhcp.agent rv = getattr(driver, action)(**action_kwargs) Dec 6 05:20:50 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:50.646 263652 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 673, in reload_allocations Dec 6 05:20:50 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:50.646 263652 ERROR neutron.agent.dhcp.agent self.device_manager.update(self.network, self.interface_name) Dec 6 05:20:50 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:50.646 263652 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1899, in update Dec 6 05:20:50 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:50.646 263652 ERROR neutron.agent.dhcp.agent self._set_default_route(network, device_name) Dec 6 05:20:50 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:50.646 263652 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1610, in _set_default_route Dec 6 05:20:50 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:50.646 263652 ERROR neutron.agent.dhcp.agent self._set_default_route_ip_version(network, device_name, Dec 6 05:20:50 localhost 
neutron_dhcp_agent[263648]: 2025-12-06 10:20:50.646 263652 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1539, in _set_default_route_ip_version Dec 6 05:20:50 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:50.646 263652 ERROR neutron.agent.dhcp.agent gateway = device.route.get_gateway(ip_version=ip_version) Dec 6 05:20:50 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:50.646 263652 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 671, in get_gateway Dec 6 05:20:50 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:50.646 263652 ERROR neutron.agent.dhcp.agent routes = self.list_routes(ip_version, scope=scope, table=table) Dec 6 05:20:50 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:50.646 263652 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 656, in list_routes Dec 6 05:20:50 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:50.646 263652 ERROR neutron.agent.dhcp.agent return list_ip_routes(self._parent.namespace, ip_version, scope=scope, Dec 6 05:20:50 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:50.646 263652 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 1611, in list_ip_routes Dec 6 05:20:50 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:50.646 263652 ERROR neutron.agent.dhcp.agent routes = privileged.list_ip_routes(namespace, ip_version, device=device, Dec 6 05:20:50 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:50.646 263652 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 333, in wrapped_f Dec 6 05:20:50 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:50.646 263652 ERROR neutron.agent.dhcp.agent return self(f, *args, **kw) Dec 6 05:20:50 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:50.646 263652 ERROR 
neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 423, in __call__ Dec 6 05:20:50 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:50.646 263652 ERROR neutron.agent.dhcp.agent do = self.iter(retry_state=retry_state) Dec 6 05:20:50 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:50.646 263652 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 360, in iter Dec 6 05:20:50 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:50.646 263652 ERROR neutron.agent.dhcp.agent return fut.result() Dec 6 05:20:50 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:50.646 263652 ERROR neutron.agent.dhcp.agent File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 439, in result Dec 6 05:20:50 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:50.646 263652 ERROR neutron.agent.dhcp.agent return self.__get_result() Dec 6 05:20:50 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:50.646 263652 ERROR neutron.agent.dhcp.agent File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 391, in __get_result Dec 6 05:20:50 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:50.646 263652 ERROR neutron.agent.dhcp.agent raise self._exception Dec 6 05:20:50 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:50.646 263652 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 426, in __call__ Dec 6 05:20:50 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:50.646 263652 ERROR neutron.agent.dhcp.agent result = fn(*args, **kwargs) Dec 6 05:20:50 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:50.646 263652 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/oslo_privsep/priv_context.py", line 271, in _wrap Dec 6 05:20:50 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:50.646 263652 ERROR neutron.agent.dhcp.agent return self.channel.remote_call(name, args, kwargs, Dec 6 05:20:50 
localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:50.646 263652 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py", line 215, in remote_call Dec 6 05:20:50 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:50.646 263652 ERROR neutron.agent.dhcp.agent raise exc_type(*result[2]) Dec 6 05:20:50 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:50.646 263652 ERROR neutron.agent.dhcp.agent neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tap879bec11-b0 not found in namespace qdhcp-f7147ce7-da0c-41ba-a4e4-4f73649998e1. Dec 6 05:20:50 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:50.646 263652 ERROR neutron.agent.dhcp.agent #033[00m Dec 6 05:20:50 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:50.650 263652 INFO neutron.agent.dhcp.agent [None req-83cda96d-a56f-4426-a417-563d5a2ec11d - - - - - -] Synchronizing state#033[00m Dec 6 05:20:50 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:50.795 263652 INFO neutron.agent.dhcp.agent [None req-9c335b9a-62ba-4c9c-9bab-e73d42c01cf5 - - - - - -] All active networks have been fetched through RPC.#033[00m Dec 6 05:20:50 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:50.796 263652 INFO neutron.agent.dhcp.agent [-] Starting network f7147ce7-da0c-41ba-a4e4-4f73649998e1 dhcp configuration#033[00m Dec 6 05:20:50 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:50.797 263652 INFO neutron.agent.dhcp.agent [-] Finished network f7147ce7-da0c-41ba-a4e4-4f73649998e1 dhcp configuration#033[00m Dec 6 05:20:50 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:50.797 263652 INFO neutron.agent.dhcp.agent [None req-9c335b9a-62ba-4c9c-9bab-e73d42c01cf5 - - - - - -] Synchronizing state complete#033[00m Dec 6 05:20:50 localhost ovn_controller[154851]: 2025-12-06T10:20:50Z|00442|binding|INFO|Releasing lport 4fb81ffd-e198-4628-9bd0-0c0f0c89c33a from this chassis (sb_readonly=0) Dec 6 05:20:51 localhost systemd[1]: 
tmp-crun.UfWM2f.mount: Deactivated successfully. Dec 6 05:20:51 localhost nova_compute[282193]: 2025-12-06 10:20:51.061 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:20:51 localhost dnsmasq[330236]: exiting on receipt of SIGTERM Dec 6 05:20:51 localhost podman[330369]: 2025-12-06 10:20:51.063632291 +0000 UTC m=+0.109617162 container kill 8789c720cf321fea4f8f7abd3ba6824d0b89d586574bc84c29a33fbaac0101b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f7147ce7-da0c-41ba-a4e4-4f73649998e1, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:20:51 localhost systemd[1]: libpod-8789c720cf321fea4f8f7abd3ba6824d0b89d586574bc84c29a33fbaac0101b3.scope: Deactivated successfully. 
Dec 6 05:20:51 localhost podman[330381]: 2025-12-06 10:20:51.138645064 +0000 UTC m=+0.060669465 container died 8789c720cf321fea4f8f7abd3ba6824d0b89d586574bc84c29a33fbaac0101b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f7147ce7-da0c-41ba-a4e4-4f73649998e1, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Dec 6 05:20:51 localhost podman[330381]: 2025-12-06 10:20:51.167822326 +0000 UTC m=+0.089846697 container cleanup 8789c720cf321fea4f8f7abd3ba6824d0b89d586574bc84c29a33fbaac0101b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f7147ce7-da0c-41ba-a4e4-4f73649998e1, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:20:51 localhost systemd[1]: libpod-conmon-8789c720cf321fea4f8f7abd3ba6824d0b89d586574bc84c29a33fbaac0101b3.scope: Deactivated successfully. 
Dec 6 05:20:51 localhost podman[330383]: 2025-12-06 10:20:51.222795556 +0000 UTC m=+0.135959947 container remove 8789c720cf321fea4f8f7abd3ba6824d0b89d586574bc84c29a33fbaac0101b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f7147ce7-da0c-41ba-a4e4-4f73649998e1, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125) Dec 6 05:20:51 localhost systemd[1]: var-lib-containers-storage-overlay-2f9d7a870fbdefd40cdca250115d626184dfc997615aca80df8a4bcabb41e6a0-merged.mount: Deactivated successfully. Dec 6 05:20:51 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8789c720cf321fea4f8f7abd3ba6824d0b89d586574bc84c29a33fbaac0101b3-userdata-shm.mount: Deactivated successfully. Dec 6 05:20:51 localhost systemd[1]: run-netns-qdhcp\x2df7147ce7\x2dda0c\x2d41ba\x2da4e4\x2d4f73649998e1.mount: Deactivated successfully. Dec 6 05:20:53 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:20:53 localhost nova_compute[282193]: 2025-12-06 10:20:53.501 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:20:53 localhost nova_compute[282193]: 2025-12-06 10:20:53.601 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:20:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5. 
Dec 6 05:20:53 localhost podman[241090]: time="2025-12-06T10:20:53Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 6 05:20:53 localhost systemd[1]: tmp-crun.4NSusJ.mount: Deactivated successfully. Dec 6 05:20:53 localhost podman[330411]: 2025-12-06 10:20:53.929583737 +0000 UTC m=+0.091435316 container health_status 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller) Dec 6 05:20:53 localhost podman[241090]: @ - - [06/Dec/2025:10:20:53 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 163386 "" "Go-http-client/1.1" Dec 6 05:20:54 localhost podman[330411]: 2025-12-06 
10:20:54.063790119 +0000 UTC m=+0.225641708 container exec_died 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:20:54 localhost systemd[1]: 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.service: Deactivated successfully. 
Dec 6 05:20:54 localhost podman[241090]: @ - - [06/Dec/2025:10:20:53 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 21172 "" "Go-http-client/1.1" Dec 6 05:20:54 localhost nova_compute[282193]: 2025-12-06 10:20:54.885 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:20:58 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:20:58 localhost nova_compute[282193]: 2025-12-06 10:20:58.646 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:20:58 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Dec 6 05:20:58 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/4153025700' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Dec 6 05:20:58 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Dec 6 05:20:58 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/4153025700' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Dec 6 05:20:58 localhost ovn_controller[154851]: 2025-12-06T10:20:58Z|00443|binding|INFO|Releasing lport 4fb81ffd-e198-4628-9bd0-0c0f0c89c33a from this chassis (sb_readonly=0) Dec 6 05:20:59 localhost ovn_metadata_agent[160504]: 2025-12-06 10:20:58.999 160509 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 9c79be57-161f-4ebe-adf7-dbbe338f4139 with type ""#033[00m Dec 6 05:20:59 localhost ovn_metadata_agent[160504]: 2025-12-06 10:20:59.001 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.255.242/28', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-c68f9a6d-f183-4c32-ae20-3af5e94473b3', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c68f9a6d-f183-4c32-ae20-3af5e94473b3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fa76bcfc789b4e53acf344cd0b1cd7c5', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005548789.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=caee8882-f3cb-4a2a-a1c8-8579f9a721cf, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=3adb2c37-0f70-478d-98be-4e26b3a4f4ff) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 
05:20:59 localhost ovn_metadata_agent[160504]: 2025-12-06 10:20:59.003 160509 INFO neutron.agent.ovn.metadata.agent [-] Port 3adb2c37-0f70-478d-98be-4e26b3a4f4ff in datapath c68f9a6d-f183-4c32-ae20-3af5e94473b3 unbound from our chassis#033[00m Dec 6 05:20:59 localhost podman[330452]: 2025-12-06 10:20:59.00422524 +0000 UTC m=+0.068159224 container kill c2781052a3630cf50580874a9e8559954e4ce423e9575765f5c45967475b3fb4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c68f9a6d-f183-4c32-ae20-3af5e94473b3, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:20:59 localhost ovn_metadata_agent[160504]: 2025-12-06 10:20:59.005 160509 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c68f9a6d-f183-4c32-ae20-3af5e94473b3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 6 05:20:59 localhost dnsmasq[328281]: exiting on receipt of SIGTERM Dec 6 05:20:59 localhost ovn_metadata_agent[160504]: 2025-12-06 10:20:59.006 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[83a4ac06-624c-40f1-981e-2599c247e2fc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:20:59 localhost systemd[1]: libpod-c2781052a3630cf50580874a9e8559954e4ce423e9575765f5c45967475b3fb4.scope: Deactivated successfully. 
Dec 6 05:20:59 localhost ovn_controller[154851]: 2025-12-06T10:20:59Z|00444|binding|INFO|Removing iface tap3adb2c37-0f ovn-installed in OVS Dec 6 05:20:59 localhost ovn_controller[154851]: 2025-12-06T10:20:59Z|00445|binding|INFO|Removing lport 3adb2c37-0f70-478d-98be-4e26b3a4f4ff ovn-installed in OVS Dec 6 05:20:59 localhost nova_compute[282193]: 2025-12-06 10:20:59.010 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:20:59 localhost nova_compute[282193]: 2025-12-06 10:20:59.013 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:20:59 localhost nova_compute[282193]: 2025-12-06 10:20:59.020 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:20:59 localhost podman[330465]: 2025-12-06 10:20:59.079097819 +0000 UTC m=+0.056964143 container died c2781052a3630cf50580874a9e8559954e4ce423e9575765f5c45967475b3fb4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c68f9a6d-f183-4c32-ae20-3af5e94473b3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Dec 6 05:20:59 localhost podman[330465]: 2025-12-06 10:20:59.114894924 +0000 UTC m=+0.092761238 container cleanup c2781052a3630cf50580874a9e8559954e4ce423e9575765f5c45967475b3fb4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c68f9a6d-f183-4c32-ae20-3af5e94473b3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, 
io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Dec 6 05:20:59 localhost systemd[1]: libpod-conmon-c2781052a3630cf50580874a9e8559954e4ce423e9575765f5c45967475b3fb4.scope: Deactivated successfully. Dec 6 05:20:59 localhost podman[330467]: 2025-12-06 10:20:59.158975521 +0000 UTC m=+0.129918512 container remove c2781052a3630cf50580874a9e8559954e4ce423e9575765f5c45967475b3fb4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c68f9a6d-f183-4c32-ae20-3af5e94473b3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Dec 6 05:20:59 localhost nova_compute[282193]: 2025-12-06 10:20:59.169 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:20:59 localhost kernel: device tap3adb2c37-0f left promiscuous mode Dec 6 05:20:59 localhost nova_compute[282193]: 2025-12-06 10:20:59.184 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:20:59 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:59.204 263652 INFO neutron.agent.dhcp.agent [None req-db069261-bad8-4c83-9f4e-4725ac28e4b3 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:20:59 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:20:59.205 263652 INFO neutron.agent.dhcp.agent [None req-db069261-bad8-4c83-9f4e-4725ac28e4b3 - - - - - -] Network 
not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:20:59 localhost ovn_controller[154851]: 2025-12-06T10:20:59Z|00446|binding|INFO|Releasing lport 4fb81ffd-e198-4628-9bd0-0c0f0c89c33a from this chassis (sb_readonly=0) Dec 6 05:20:59 localhost nova_compute[282193]: 2025-12-06 10:20:59.460 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:20:59 localhost dnsmasq[330223]: read /var/lib/neutron/dhcp/f30bcf50-145e-4db3-b0dc-90655a633fb3/addn_hosts - 0 addresses Dec 6 05:20:59 localhost dnsmasq-dhcp[330223]: read /var/lib/neutron/dhcp/f30bcf50-145e-4db3-b0dc-90655a633fb3/host Dec 6 05:20:59 localhost dnsmasq-dhcp[330223]: read /var/lib/neutron/dhcp/f30bcf50-145e-4db3-b0dc-90655a633fb3/opts Dec 6 05:20:59 localhost podman[330512]: 2025-12-06 10:20:59.653802906 +0000 UTC m=+0.046605926 container kill ef8cce1d55af1c1fc54e83e6459f29a59a45ba66244ff2947d8423eb55d75c4a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f30bcf50-145e-4db3-b0dc-90655a633fb3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Dec 6 05:20:59 localhost nova_compute[282193]: 2025-12-06 10:20:59.884 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:20:59 localhost ovn_controller[154851]: 2025-12-06T10:20:59Z|00447|binding|INFO|Releasing lport 4064fcde-485d-4c6f-a000-947ac03218a2 from this chassis (sb_readonly=0) Dec 6 05:20:59 localhost ovn_controller[154851]: 2025-12-06T10:20:59Z|00448|binding|INFO|Setting lport 4064fcde-485d-4c6f-a000-947ac03218a2 
down in Southbound Dec 6 05:20:59 localhost kernel: device tap4064fcde-48 left promiscuous mode Dec 6 05:20:59 localhost nova_compute[282193]: 2025-12-06 10:20:59.887 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:20:59 localhost ovn_metadata_agent[160504]: 2025-12-06 10:20:59.897 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:2::2/64', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-f30bcf50-145e-4db3-b0dc-90655a633fb3', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f30bcf50-145e-4db3-b0dc-90655a633fb3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '24086b701d6b4d4081d2e63578d18d24', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005548789.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=db91d3ff-70d4-4a4c-b9c4-2f176cf1a088, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=4064fcde-485d-4c6f-a000-947ac03218a2) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:20:59 localhost ovn_metadata_agent[160504]: 2025-12-06 10:20:59.899 160509 INFO neutron.agent.ovn.metadata.agent [-] Port 4064fcde-485d-4c6f-a000-947ac03218a2 in datapath 
f30bcf50-145e-4db3-b0dc-90655a633fb3 unbound from our chassis#033[00m Dec 6 05:20:59 localhost ovn_metadata_agent[160504]: 2025-12-06 10:20:59.900 160509 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network f30bcf50-145e-4db3-b0dc-90655a633fb3 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 6 05:20:59 localhost ovn_metadata_agent[160504]: 2025-12-06 10:20:59.902 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[fa06f38f-5ebd-477a-8519-36e63904ee44]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:20:59 localhost nova_compute[282193]: 2025-12-06 10:20:59.912 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:21:00 localhost systemd[1]: var-lib-containers-storage-overlay-6d060ccaa9e70944fae8106a7cb62cb34c009da5ff146db45b624802025af9fb-merged.mount: Deactivated successfully. Dec 6 05:21:00 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c2781052a3630cf50580874a9e8559954e4ce423e9575765f5c45967475b3fb4-userdata-shm.mount: Deactivated successfully. Dec 6 05:21:00 localhost systemd[1]: run-netns-qdhcp\x2dc68f9a6d\x2df183\x2d4c32\x2dae20\x2d3af5e94473b3.mount: Deactivated successfully. Dec 6 05:21:00 localhost dnsmasq[330223]: exiting on receipt of SIGTERM Dec 6 05:21:00 localhost systemd[1]: libpod-ef8cce1d55af1c1fc54e83e6459f29a59a45ba66244ff2947d8423eb55d75c4a.scope: Deactivated successfully. 
Dec 6 05:21:00 localhost podman[330586]: 2025-12-06 10:21:00.32652679 +0000 UTC m=+0.047817352 container kill ef8cce1d55af1c1fc54e83e6459f29a59a45ba66244ff2947d8423eb55d75c4a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f30bcf50-145e-4db3-b0dc-90655a633fb3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true) Dec 6 05:21:00 localhost podman[330607]: 2025-12-06 10:21:00.395135028 +0000 UTC m=+0.041805519 container died ef8cce1d55af1c1fc54e83e6459f29a59a45ba66244ff2947d8423eb55d75c4a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f30bcf50-145e-4db3-b0dc-90655a633fb3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 6 05:21:00 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ef8cce1d55af1c1fc54e83e6459f29a59a45ba66244ff2947d8423eb55d75c4a-userdata-shm.mount: Deactivated successfully. Dec 6 05:21:00 localhost systemd[1]: var-lib-containers-storage-overlay-57032ec42d0979663c7485c872a98c14b639222c748bbc12b89800ff18f850d9-merged.mount: Deactivated successfully. 
Dec 6 05:21:00 localhost podman[330607]: 2025-12-06 10:21:00.442401912 +0000 UTC m=+0.089072353 container remove ef8cce1d55af1c1fc54e83e6459f29a59a45ba66244ff2947d8423eb55d75c4a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f30bcf50-145e-4db3-b0dc-90655a633fb3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125) Dec 6 05:21:00 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:21:00.473 263652 INFO neutron.agent.dhcp.agent [None req-0d6bffb3-fd31-4fe0-9497-5449b3d5b3ab - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:21:00 localhost systemd[1]: libpod-conmon-ef8cce1d55af1c1fc54e83e6459f29a59a45ba66244ff2947d8423eb55d75c4a.scope: Deactivated successfully. Dec 6 05:21:00 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:21:00.521 263652 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:21:00 localhost ovn_controller[154851]: 2025-12-06T10:21:00Z|00449|binding|INFO|Releasing lport 4fb81ffd-e198-4628-9bd0-0c0f0c89c33a from this chassis (sb_readonly=0) Dec 6 05:21:00 localhost nova_compute[282193]: 2025-12-06 10:21:00.719 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:21:01 localhost systemd[1]: run-netns-qdhcp\x2df30bcf50\x2d145e\x2d4db3\x2db0dc\x2d90655a633fb3.mount: Deactivated successfully. 
Dec 6 05:21:01 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 6 05:21:01 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' Dec 6 05:21:01 localhost dnsmasq[329937]: read /var/lib/neutron/dhcp/f06eaa72-d4d5-4c14-80ef-691411a95b29/addn_hosts - 0 addresses Dec 6 05:21:01 localhost podman[330694]: 2025-12-06 10:21:01.78683607 +0000 UTC m=+0.069538286 container kill d97fd00f3e78882d783a796607d4bd6632cdb7dd83587cae5671ab11021d365f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f06eaa72-d4d5-4c14-80ef-691411a95b29, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2) Dec 6 05:21:01 localhost dnsmasq-dhcp[329937]: read /var/lib/neutron/dhcp/f06eaa72-d4d5-4c14-80ef-691411a95b29/host Dec 6 05:21:01 localhost dnsmasq-dhcp[329937]: read /var/lib/neutron/dhcp/f06eaa72-d4d5-4c14-80ef-691411a95b29/opts Dec 6 05:21:01 localhost ovn_controller[154851]: 2025-12-06T10:21:01Z|00450|binding|INFO|Releasing lport b26e75de-365d-482e-b28d-740529191fa4 from this chassis (sb_readonly=0) Dec 6 05:21:01 localhost kernel: device tapb26e75de-36 left promiscuous mode Dec 6 05:21:01 localhost ovn_controller[154851]: 2025-12-06T10:21:01Z|00451|binding|INFO|Setting lport b26e75de-365d-482e-b28d-740529191fa4 down in Southbound Dec 6 05:21:01 localhost nova_compute[282193]: 2025-12-06 10:21:01.955 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:21:01 localhost ovn_metadata_agent[160504]: 2025-12-06 10:21:01.970 
160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:1::2/64', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-f06eaa72-d4d5-4c14-80ef-691411a95b29', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f06eaa72-d4d5-4c14-80ef-691411a95b29', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '24086b701d6b4d4081d2e63578d18d24', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005548789.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1ae8e12d-32c2-4a99-98bb-39bcacc37749, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=b26e75de-365d-482e-b28d-740529191fa4) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:21:01 localhost ovn_metadata_agent[160504]: 2025-12-06 10:21:01.972 160509 INFO neutron.agent.ovn.metadata.agent [-] Port b26e75de-365d-482e-b28d-740529191fa4 in datapath f06eaa72-d4d5-4c14-80ef-691411a95b29 unbound from our chassis#033[00m Dec 6 05:21:01 localhost ovn_metadata_agent[160504]: 2025-12-06 10:21:01.974 160509 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network f06eaa72-d4d5-4c14-80ef-691411a95b29 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 6 05:21:01 localhost nova_compute[282193]: 2025-12-06 10:21:01.974 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:21:01 localhost ovn_metadata_agent[160504]: 2025-12-06 10:21:01.976 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[2aa683a4-7a4e-4ad5-aff3-8c30525169e0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:21:02 localhost sshd[330727]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:21:02 localhost dnsmasq[329937]: exiting on receipt of SIGTERM Dec 6 05:21:02 localhost systemd[1]: tmp-crun.vnXDLf.mount: Deactivated successfully. Dec 6 05:21:02 localhost podman[330737]: 2025-12-06 10:21:02.499896427 +0000 UTC m=+0.053908478 container kill d97fd00f3e78882d783a796607d4bd6632cdb7dd83587cae5671ab11021d365f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f06eaa72-d4d5-4c14-80ef-691411a95b29, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true) Dec 6 05:21:02 localhost systemd[1]: libpod-d97fd00f3e78882d783a796607d4bd6632cdb7dd83587cae5671ab11021d365f.scope: Deactivated successfully. 
Dec 6 05:21:02 localhost podman[330751]: 2025-12-06 10:21:02.546170221 +0000 UTC m=+0.038113655 container died d97fd00f3e78882d783a796607d4bd6632cdb7dd83587cae5671ab11021d365f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f06eaa72-d4d5-4c14-80ef-691411a95b29, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:21:02 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' Dec 6 05:21:02 localhost podman[330751]: 2025-12-06 10:21:02.624504776 +0000 UTC m=+0.116448180 container cleanup d97fd00f3e78882d783a796607d4bd6632cdb7dd83587cae5671ab11021d365f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f06eaa72-d4d5-4c14-80ef-691411a95b29, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125) Dec 6 05:21:02 localhost systemd[1]: libpod-conmon-d97fd00f3e78882d783a796607d4bd6632cdb7dd83587cae5671ab11021d365f.scope: Deactivated successfully. 
Dec 6 05:21:02 localhost podman[330756]: 2025-12-06 10:21:02.646869699 +0000 UTC m=+0.130940642 container remove d97fd00f3e78882d783a796607d4bd6632cdb7dd83587cae5671ab11021d365f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f06eaa72-d4d5-4c14-80ef-691411a95b29, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:21:02 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:21:02.675 263652 INFO neutron.agent.dhcp.agent [None req-f0da0179-f0c2-4644-8abf-3e5f20050ff3 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:21:02 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:21:02.675 263652 INFO neutron.agent.dhcp.agent [None req-f0da0179-f0c2-4644-8abf-3e5f20050ff3 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:21:02 localhost ovn_controller[154851]: 2025-12-06T10:21:02Z|00452|binding|INFO|Releasing lport 4fb81ffd-e198-4628-9bd0-0c0f0c89c33a from this chassis (sb_readonly=0) Dec 6 05:21:02 localhost nova_compute[282193]: 2025-12-06 10:21:02.806 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:21:03 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:21:03.217 263652 INFO neutron.agent.linux.ip_lib [None req-b27466b5-66c1-43f6-bb93-5c3ac987bdd4 - - - - - -] Device tap48aef5f2-0c cannot be used as it has no MAC address#033[00m Dec 6 05:21:03 localhost nova_compute[282193]: 2025-12-06 10:21:03.243 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 
05:21:03 localhost kernel: device tap48aef5f2-0c entered promiscuous mode Dec 6 05:21:03 localhost nova_compute[282193]: 2025-12-06 10:21:03.250 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:21:03 localhost ovn_controller[154851]: 2025-12-06T10:21:03Z|00453|binding|INFO|Claiming lport 48aef5f2-0c04-4b34-bd2d-f71862404e37 for this chassis. Dec 6 05:21:03 localhost ovn_controller[154851]: 2025-12-06T10:21:03Z|00454|binding|INFO|48aef5f2-0c04-4b34-bd2d-f71862404e37: Claiming unknown Dec 6 05:21:03 localhost NetworkManager[5973]: [1765016463.2531] manager: (tap48aef5f2-0c): new Generic device (/org/freedesktop/NetworkManager/Devices/74) Dec 6 05:21:03 localhost systemd-udevd[330788]: Network interface NamePolicy= disabled on kernel command line. Dec 6 05:21:03 localhost ovn_metadata_agent[160504]: 2025-12-06 10:21:03.266 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-5986df1f-13f3-42c1-bcc4-79dcf74a49a9', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5986df1f-13f3-42c1-bcc4-79dcf74a49a9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e82deaff368b4feea9fec0f06459a6ca', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], 
mirror_rules=[], datapath=d5413063-0727-4de9-8605-e62b7d56e9f4, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=48aef5f2-0c04-4b34-bd2d-f71862404e37) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:21:03 localhost ovn_metadata_agent[160504]: 2025-12-06 10:21:03.268 160509 INFO neutron.agent.ovn.metadata.agent [-] Port 48aef5f2-0c04-4b34-bd2d-f71862404e37 in datapath 5986df1f-13f3-42c1-bcc4-79dcf74a49a9 bound to our chassis#033[00m Dec 6 05:21:03 localhost ovn_metadata_agent[160504]: 2025-12-06 10:21:03.271 160509 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 5986df1f-13f3-42c1-bcc4-79dcf74a49a9 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 6 05:21:03 localhost ovn_metadata_agent[160504]: 2025-12-06 10:21:03.272 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[16fe7515-fb92-4450-9b52-cfd23738e8ec]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:21:03 localhost journal[230404]: ethtool ioctl error on tap48aef5f2-0c: No such device Dec 6 05:21:03 localhost ovn_controller[154851]: 2025-12-06T10:21:03Z|00455|binding|INFO|Setting lport 48aef5f2-0c04-4b34-bd2d-f71862404e37 ovn-installed in OVS Dec 6 05:21:03 localhost ovn_controller[154851]: 2025-12-06T10:21:03Z|00456|binding|INFO|Setting lport 48aef5f2-0c04-4b34-bd2d-f71862404e37 up in Southbound Dec 6 05:21:03 localhost nova_compute[282193]: 2025-12-06 10:21:03.290 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:21:03 localhost journal[230404]: ethtool ioctl error on tap48aef5f2-0c: No such device Dec 6 05:21:03 localhost journal[230404]: ethtool ioctl error on tap48aef5f2-0c: No such 
device Dec 6 05:21:03 localhost journal[230404]: ethtool ioctl error on tap48aef5f2-0c: No such device Dec 6 05:21:03 localhost journal[230404]: ethtool ioctl error on tap48aef5f2-0c: No such device Dec 6 05:21:03 localhost journal[230404]: ethtool ioctl error on tap48aef5f2-0c: No such device Dec 6 05:21:03 localhost journal[230404]: ethtool ioctl error on tap48aef5f2-0c: No such device Dec 6 05:21:03 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:21:03 localhost journal[230404]: ethtool ioctl error on tap48aef5f2-0c: No such device Dec 6 05:21:03 localhost nova_compute[282193]: 2025-12-06 10:21:03.334 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:21:03 localhost nova_compute[282193]: 2025-12-06 10:21:03.361 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:21:03 localhost systemd[1]: var-lib-containers-storage-overlay-e5a61fe99463007b75615de9ea3a382eb9be1c48dd22fe90e27b8f895cfb3e13-merged.mount: Deactivated successfully. Dec 6 05:21:03 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d97fd00f3e78882d783a796607d4bd6632cdb7dd83587cae5671ab11021d365f-userdata-shm.mount: Deactivated successfully. Dec 6 05:21:03 localhost systemd[1]: run-netns-qdhcp\x2df06eaa72\x2dd4d5\x2d4c14\x2d80ef\x2d691411a95b29.mount: Deactivated successfully. 
Dec 6 05:21:03 localhost nova_compute[282193]: 2025-12-06 10:21:03.700 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:21:03 localhost podman[330852]: 2025-12-06 10:21:03.814703288 +0000 UTC m=+0.036753205 container kill 195bb5c97a1927fb0566a11eeb513a05a2f050ad98f5e7dc5af8ef23feb91058 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-610fcf3f-6e70-4d5d-9d9e-df794ff4196d, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:21:03 localhost dnsmasq[329722]: read /var/lib/neutron/dhcp/610fcf3f-6e70-4d5d-9d9e-df794ff4196d/addn_hosts - 0 addresses Dec 6 05:21:03 localhost dnsmasq-dhcp[329722]: read /var/lib/neutron/dhcp/610fcf3f-6e70-4d5d-9d9e-df794ff4196d/host Dec 6 05:21:03 localhost dnsmasq-dhcp[329722]: read /var/lib/neutron/dhcp/610fcf3f-6e70-4d5d-9d9e-df794ff4196d/opts Dec 6 05:21:04 localhost nova_compute[282193]: 2025-12-06 10:21:04.117 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:21:04 localhost ovn_controller[154851]: 2025-12-06T10:21:04Z|00457|binding|INFO|Releasing lport 0769d4f9-3cf8-430d-87d0-faa554cf4d51 from this chassis (sb_readonly=0) Dec 6 05:21:04 localhost ovn_controller[154851]: 2025-12-06T10:21:04Z|00458|binding|INFO|Setting lport 0769d4f9-3cf8-430d-87d0-faa554cf4d51 down in Southbound Dec 6 05:21:04 localhost kernel: device tap0769d4f9-3c left promiscuous mode Dec 6 05:21:04 localhost ovn_metadata_agent[160504]: 2025-12-06 10:21:04.127 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: 
PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-610fcf3f-6e70-4d5d-9d9e-df794ff4196d', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-610fcf3f-6e70-4d5d-9d9e-df794ff4196d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '24086b701d6b4d4081d2e63578d18d24', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005548789.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=22180776-fd75-4bd6-be28-febd70acf464, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=0769d4f9-3cf8-430d-87d0-faa554cf4d51) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:21:04 localhost ovn_metadata_agent[160504]: 2025-12-06 10:21:04.129 160509 INFO neutron.agent.ovn.metadata.agent [-] Port 0769d4f9-3cf8-430d-87d0-faa554cf4d51 in datapath 610fcf3f-6e70-4d5d-9d9e-df794ff4196d unbound from our chassis#033[00m Dec 6 05:21:04 localhost ovn_metadata_agent[160504]: 2025-12-06 10:21:04.130 160509 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 610fcf3f-6e70-4d5d-9d9e-df794ff4196d or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 6 
05:21:04 localhost ovn_metadata_agent[160504]: 2025-12-06 10:21:04.133 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[65fddb1c-4123-4f9a-a5e5-2c19aae4ea83]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:21:04 localhost nova_compute[282193]: 2025-12-06 10:21:04.140 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:21:04 localhost podman[330898]: Dec 6 05:21:04 localhost podman[330898]: 2025-12-06 10:21:04.174324081 +0000 UTC m=+0.080308906 container create 4f9a605bde30cb63d1258bafc5b6357b3e8a6178585b25833a2d7aa12944aa53 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5986df1f-13f3-42c1-bcc4-79dcf74a49a9, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3) Dec 6 05:21:04 localhost systemd[1]: Started libpod-conmon-4f9a605bde30cb63d1258bafc5b6357b3e8a6178585b25833a2d7aa12944aa53.scope. Dec 6 05:21:04 localhost podman[330898]: 2025-12-06 10:21:04.127429968 +0000 UTC m=+0.033414843 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 6 05:21:04 localhost systemd[1]: Started libcrun container. 
Dec 6 05:21:04 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ed23ebe6a60fad4889f2ca706b63873c667d977225c0a5045f8dfc5999e212b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 6 05:21:04 localhost podman[330898]: 2025-12-06 10:21:04.266466137 +0000 UTC m=+0.172450932 container init 4f9a605bde30cb63d1258bafc5b6357b3e8a6178585b25833a2d7aa12944aa53 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5986df1f-13f3-42c1-bcc4-79dcf74a49a9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0) Dec 6 05:21:04 localhost podman[330898]: 2025-12-06 10:21:04.276673519 +0000 UTC m=+0.182658314 container start 4f9a605bde30cb63d1258bafc5b6357b3e8a6178585b25833a2d7aa12944aa53 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5986df1f-13f3-42c1-bcc4-79dcf74a49a9, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:21:04 localhost dnsmasq[330917]: started, version 2.85 cachesize 150 Dec 6 05:21:04 localhost dnsmasq[330917]: DNS service limited to local subnets Dec 6 05:21:04 localhost dnsmasq[330917]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 6 05:21:04 localhost dnsmasq[330917]: warning: no upstream servers configured Dec 
6 05:21:04 localhost dnsmasq-dhcp[330917]: DHCP, static leases only on 10.100.0.0, lease time 1d Dec 6 05:21:04 localhost dnsmasq[330917]: read /var/lib/neutron/dhcp/5986df1f-13f3-42c1-bcc4-79dcf74a49a9/addn_hosts - 0 addresses Dec 6 05:21:04 localhost dnsmasq-dhcp[330917]: read /var/lib/neutron/dhcp/5986df1f-13f3-42c1-bcc4-79dcf74a49a9/host Dec 6 05:21:04 localhost dnsmasq-dhcp[330917]: read /var/lib/neutron/dhcp/5986df1f-13f3-42c1-bcc4-79dcf74a49a9/opts Dec 6 05:21:04 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:21:04.430 263652 INFO neutron.agent.dhcp.agent [None req-ef5cbd84-a521-453c-b2ba-f80430a0d2b1 - - - - - -] DHCP configuration for ports {'a0b3edf2-b200-4d5e-9f11-a2af9c2d7b08'} is completed#033[00m Dec 6 05:21:04 localhost systemd[1]: tmp-crun.rOlUzt.mount: Deactivated successfully. Dec 6 05:21:04 localhost sshd[330918]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:21:04 localhost neutron_sriov_agent[256690]: 2025-12-06 10:21:04.617 2 INFO neutron.agent.securitygroups_rpc [None req-a8006eb6-87bf-4e51-be14-a8e4ed75c69e a2f6f80b9d5a42ccb727340d59efb967 a18f82f0d09644c7b6d23e2fece8be4f - - default default] Security group member updated ['f89459b7-5955-49a9-980d-ccf671c641e2']#033[00m Dec 6 05:21:04 localhost podman[330937]: 2025-12-06 10:21:04.882202819 +0000 UTC m=+0.055210208 container kill 195bb5c97a1927fb0566a11eeb513a05a2f050ad98f5e7dc5af8ef23feb91058 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-610fcf3f-6e70-4d5d-9d9e-df794ff4196d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125) Dec 6 05:21:04 localhost systemd[1]: tmp-crun.c9bQGp.mount: Deactivated successfully. 
Dec 6 05:21:04 localhost dnsmasq[329722]: exiting on receipt of SIGTERM Dec 6 05:21:04 localhost systemd[1]: libpod-195bb5c97a1927fb0566a11eeb513a05a2f050ad98f5e7dc5af8ef23feb91058.scope: Deactivated successfully. Dec 6 05:21:04 localhost nova_compute[282193]: 2025-12-06 10:21:04.917 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:21:04 localhost podman[330951]: 2025-12-06 10:21:04.979240156 +0000 UTC m=+0.054394804 container died 195bb5c97a1927fb0566a11eeb513a05a2f050ad98f5e7dc5af8ef23feb91058 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-610fcf3f-6e70-4d5d-9d9e-df794ff4196d, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125) Dec 6 05:21:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999. Dec 6 05:21:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941. 
Dec 6 05:21:05 localhost podman[330951]: 2025-12-06 10:21:05.093190379 +0000 UTC m=+0.168344977 container cleanup 195bb5c97a1927fb0566a11eeb513a05a2f050ad98f5e7dc5af8ef23feb91058 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-610fcf3f-6e70-4d5d-9d9e-df794ff4196d, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3) Dec 6 05:21:05 localhost systemd[1]: libpod-conmon-195bb5c97a1927fb0566a11eeb513a05a2f050ad98f5e7dc5af8ef23feb91058.scope: Deactivated successfully. Dec 6 05:21:05 localhost podman[330952]: 2025-12-06 10:21:05.123088242 +0000 UTC m=+0.195111334 container remove 195bb5c97a1927fb0566a11eeb513a05a2f050ad98f5e7dc5af8ef23feb91058 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-610fcf3f-6e70-4d5d-9d9e-df794ff4196d, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true) Dec 6 05:21:05 localhost podman[330978]: 2025-12-06 10:21:05.173629657 +0000 UTC m=+0.137916916 container health_status 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, 
org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Dec 6 05:21:05 localhost podman[330978]: 2025-12-06 10:21:05.183210711 +0000 UTC m=+0.147497990 container exec_died 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, 
config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Dec 6 05:21:05 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:21:05.196 263652 INFO neutron.agent.dhcp.agent [None req-5b4974ab-1455-46f4-ace8-29cbc2e54a8a - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:21:05 localhost systemd[1]: 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.service: Deactivated successfully. 
Dec 6 05:21:05 localhost podman[330979]: 2025-12-06 10:21:05.238347816 +0000 UTC m=+0.195685162 container health_status b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Dec 6 05:21:05 localhost podman[330979]: 2025-12-06 10:21:05.26954698 +0000 UTC m=+0.226884296 container exec_died b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': 
['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Dec 6 05:21:05 localhost systemd[1]: b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941.service: Deactivated successfully. Dec 6 05:21:05 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:21:05.441 263652 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:21:05 localhost systemd[1]: tmp-crun.w2zIry.mount: Deactivated successfully. Dec 6 05:21:05 localhost systemd[1]: var-lib-containers-storage-overlay-ac8af9b2a52417f488cf9670e26c867e6ab859687c3a4f72e20580bf57f1cadb-merged.mount: Deactivated successfully. Dec 6 05:21:05 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-195bb5c97a1927fb0566a11eeb513a05a2f050ad98f5e7dc5af8ef23feb91058-userdata-shm.mount: Deactivated successfully. Dec 6 05:21:05 localhost systemd[1]: run-netns-qdhcp\x2d610fcf3f\x2d6e70\x2d4d5d\x2d9d9e\x2ddf794ff4196d.mount: Deactivated successfully. 
Dec 6 05:21:05 localhost ovn_controller[154851]: 2025-12-06T10:21:05Z|00459|binding|INFO|Releasing lport 4fb81ffd-e198-4628-9bd0-0c0f0c89c33a from this chassis (sb_readonly=0) Dec 6 05:21:05 localhost nova_compute[282193]: 2025-12-06 10:21:05.729 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:21:06 localhost neutron_sriov_agent[256690]: 2025-12-06 10:21:06.329 2 INFO neutron.agent.securitygroups_rpc [None req-58c814b2-70d6-487c-b572-f1a393accc35 a2f6f80b9d5a42ccb727340d59efb967 a18f82f0d09644c7b6d23e2fece8be4f - - default default] Security group member updated ['f89459b7-5955-49a9-980d-ccf671c641e2']#033[00m Dec 6 05:21:06 localhost neutron_sriov_agent[256690]: 2025-12-06 10:21:06.563 2 INFO neutron.agent.securitygroups_rpc [None req-58c814b2-70d6-487c-b572-f1a393accc35 a2f6f80b9d5a42ccb727340d59efb967 a18f82f0d09644c7b6d23e2fece8be4f - - default default] Security group member updated ['f89459b7-5955-49a9-980d-ccf671c641e2']#033[00m Dec 6 05:21:07 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:21:07.387 263652 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:21:06Z, description=, device_id=7ecd23ba-4ca3-4eae-9829-cff158a165a0, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=3f7091ee-adf1-4a41-bf51-535f147c89c5, ip_allocation=immediate, mac_address=fa:16:3e:67:09:c2, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:21:00Z, description=, dns_domain=, id=5986df1f-13f3-42c1-bcc4-79dcf74a49a9, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-1321854165, port_security_enabled=True, 
project_id=e82deaff368b4feea9fec0f06459a6ca, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=10820, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2609, status=ACTIVE, subnets=['b8481c35-3dcb-4ca8-9bae-441805cdac62'], tags=[], tenant_id=e82deaff368b4feea9fec0f06459a6ca, updated_at=2025-12-06T10:21:02Z, vlan_transparent=None, network_id=5986df1f-13f3-42c1-bcc4-79dcf74a49a9, port_security_enabled=False, project_id=e82deaff368b4feea9fec0f06459a6ca, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2656, status=DOWN, tags=[], tenant_id=e82deaff368b4feea9fec0f06459a6ca, updated_at=2025-12-06T10:21:07Z on network 5986df1f-13f3-42c1-bcc4-79dcf74a49a9#033[00m Dec 6 05:21:07 localhost neutron_sriov_agent[256690]: 2025-12-06 10:21:07.529 2 INFO neutron.agent.securitygroups_rpc [None req-beefe55f-e6d8-4aae-b3a4-9c077707e8ab a2f6f80b9d5a42ccb727340d59efb967 a18f82f0d09644c7b6d23e2fece8be4f - - default default] Security group member updated ['f89459b7-5955-49a9-980d-ccf671c641e2']#033[00m Dec 6 05:21:07 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:21:07.549 263652 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:21:07 localhost dnsmasq[330917]: read /var/lib/neutron/dhcp/5986df1f-13f3-42c1-bcc4-79dcf74a49a9/addn_hosts - 1 addresses Dec 6 05:21:07 localhost dnsmasq-dhcp[330917]: read /var/lib/neutron/dhcp/5986df1f-13f3-42c1-bcc4-79dcf74a49a9/host Dec 6 05:21:07 localhost dnsmasq-dhcp[330917]: read /var/lib/neutron/dhcp/5986df1f-13f3-42c1-bcc4-79dcf74a49a9/opts Dec 6 05:21:07 localhost podman[331040]: 2025-12-06 10:21:07.604930078 +0000 UTC m=+0.048839103 container kill 4f9a605bde30cb63d1258bafc5b6357b3e8a6178585b25833a2d7aa12944aa53 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, 
name=neutron-dnsmasq-qdhcp-5986df1f-13f3-42c1-bcc4-79dcf74a49a9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:21:07 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Dec 6 05:21:07 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3418725974' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Dec 6 05:21:07 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Dec 6 05:21:07 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3418725974' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.915 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'name': 'test', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'np0005548789.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'hostId': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228 
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.916 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.920 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.923 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '7ff64df5-1d46-42f0-9389-4835c5ed86a9', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:21:07.916486', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 
'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '476fb4f6-d28d-11f0-aaf2-fa163e118844', 'monotonic_time': 12886.165819636, 'message_signature': 'a8116c88aaac6ed31d27cdbfcf86bf1a073c0810c45244d9aa4ca4b4edfbacb6'}]}, 'timestamp': '2025-12-06 10:21:07.921374', '_unique_id': '3063b1d99ec4418a8e6e128a1714f575'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.923 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.923 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.923 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.923 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.923 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.923 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.923 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.923 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 
10:21:07.923 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.923 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.923 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.923 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.923 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.923 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.923 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.923 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.923 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.923 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.923 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:21:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.923 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.923 12 ERROR oslo_messaging.notify.messaging Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.923 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.923 12 ERROR oslo_messaging.notify.messaging Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.923 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.923 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.923 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.923 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.923 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.923 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.923 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:21:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.923 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.923 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.923 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.923 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.923 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.923 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.923 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.923 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.923 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.923 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.923 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.923 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.923 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.923 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.923 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.923 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.923 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.923 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.923 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.923 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.923 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.923 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.923 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.923 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.924 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.924 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.925 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'd700e681-36d9-4445-8925-1da916dd1656', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:21:07.924832', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '47705028-d28d-11f0-aaf2-fa163e118844', 'monotonic_time': 12886.165819636, 'message_signature': '281a09345acc4ceb66e259964dfd5d0588406e3fabac0e4b003e776c5a64db8c'}]}, 'timestamp': '2025-12-06 10:21:07.925184', '_unique_id': '7f4dbdfab1d7479eafe392419801374d'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.925 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.925 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.925 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.925 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.925 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.925 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.925 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.925 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.925 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.925 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.925 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.925 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.925 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.925 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.925 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.925 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.925 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.925 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.925 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.925 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.925 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.925 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.925 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.925 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.925 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.925 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.925 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.925 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.925 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.925 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.925 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.926 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.937 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.938 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.940 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'dce0d41e-c09e-4687-a251-66c959482fa5', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:21:07.926788', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '47724978-d28d-11f0-aaf2-fa163e118844', 'monotonic_time': 12886.176148892, 'message_signature': '69142bf732726df01f02411a8436af9e2ce0abcfc283d1d0560b3b7bc316ae7f'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:21:07.926788', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '47726458-d28d-11f0-aaf2-fa163e118844', 'monotonic_time': 12886.176148892, 'message_signature': 'd72c1b4a26daf25f85018b4fe9ce8bbc98faef546d1945f2a94ad38447bb984f'}]}, 'timestamp': '2025-12-06 10:21:07.938893', '_unique_id': 'b4d3dc24355a4fd5bbc921b3381f292f'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.940 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.940 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.940 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.940 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.940 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.940 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.940 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.940 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.940 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.940 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.940 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.940 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.940 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.940 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.940 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.940 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.940 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.940 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.940 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.940 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.940 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.940 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.940 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.940 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.940 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.940 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.940 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.940 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.940 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.940 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.940 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.940 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.942 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.942 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.incoming.bytes volume: 6809 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.944 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '7b46c9a5-33a1-4075-9f19-4f7b793a820a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 6809, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:21:07.942162', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '4772fa1c-d28d-11f0-aaf2-fa163e118844', 'monotonic_time': 12886.165819636, 'message_signature': '2e9594678f7c711123d7d4cd0a466be63a8bd01798738a54f935c2c7b1138695'}]}, 'timestamp': '2025-12-06 10:21:07.942712', '_unique_id': 'a7e6ab283d3a42cf8919d2797ba15019'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.944 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.944 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.944 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.944 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.944 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.944 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.944 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.944 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.944 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.944 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.944 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.944 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.944 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.944 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.944 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.944 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.944 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.944 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.944 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.944 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.944 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.944 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.944 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.944 12 ERROR oslo_messaging.notify.messaging
File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.944 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.944 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.944 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.944 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.944 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:21:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.944 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.944 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.944 12 ERROR oslo_messaging.notify.messaging Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.946 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.946 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.incoming.packets volume: 60 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:21:07 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:21:07.945 263652 INFO neutron.agent.dhcp.agent [None req-7a18cd85-4f16-4020-a5c4-3b18d05ef74b - - - - - -] DHCP configuration for ports {'3f7091ee-adf1-4a41-bf51-535f147c89c5'} is completed#033[00m Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.948 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'a46e9bf0-dd55-4ef8-8715-9848a2dee6f6', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 60, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:21:07.946529', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '4773aa52-d28d-11f0-aaf2-fa163e118844', 'monotonic_time': 12886.165819636, 'message_signature': '38c9c1121bcf7f2e2e1d876c3dca37ce491df8239e40bb23a6444f80b17d042a'}]}, 'timestamp': '2025-12-06 10:21:07.947460', '_unique_id': '2261033e4f1d44ebb0f1a3df0385ac5e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.948 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:21:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.948 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.948 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.948 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.948 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.948 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.948 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.948 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.948 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.948 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.948 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.948 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.948 12 ERROR oslo_messaging.notify.messaging Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.948 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.948 12 ERROR oslo_messaging.notify.messaging Dec 6 05:21:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.948 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.948 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.948 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.948 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.948 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 
05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.948 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.948 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.948 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.948 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.948 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:21:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.948 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.948 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.948 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.948 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.948 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.948 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.948 12 ERROR oslo_messaging.notify.messaging Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:21:07.950 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.950 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.950 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.952 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'df33f971-93bf-48b8-a2a8-f9bd5919e83f', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:21:07.950585', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 
'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '47744638-d28d-11f0-aaf2-fa163e118844', 'monotonic_time': 12886.165819636, 'message_signature': 'ece13a0ee25152f7285c339508311f11f9f8414d03d0e21af989ad7bf3245815'}]}, 'timestamp': '2025-12-06 10:21:07.951312', '_unique_id': '245f9ffae1c74f1998e5e07ac8b69f64'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.952 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.952 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.952 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.952 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:21:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.952 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.952 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.952 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.952 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.952 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.952 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.952 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.952 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.952 12 ERROR oslo_messaging.notify.messaging Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.952 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.952 12 ERROR oslo_messaging.notify.messaging Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.952 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.952 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.952 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.952 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.952 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.952 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.952 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.952 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.952 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.952 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.952 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.952 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.952 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.952 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.952 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 
05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.952 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.952 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.952 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.952 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.952 12 ERROR oslo_messaging.notify.messaging Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.954 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.986 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.latency volume: 1525105336 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.987 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.latency volume: 106716064 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.989 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '3aa9fc0e-d9cd-49e8-a9e0-fa229d486e4b', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1525105336, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:21:07.954692', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '4779be6a-d28d-11f0-aaf2-fa163e118844', 'monotonic_time': 12886.20423423, 'message_signature': '4b003e5dd74879618f3a156e75c82a1456ccab535b0d079102e99d6e6ed8400b'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 106716064, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:21:07.954692', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '4779d292-d28d-11f0-aaf2-fa163e118844', 'monotonic_time': 12886.20423423, 'message_signature': 'f6579e9ba956858112bb9ad6f06253984960bd862fdae01e202f577b3e34eba5'}]}, 'timestamp': '2025-12-06 10:21:07.987546', '_unique_id': '382482c07233401caab11686e0f21326'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.989 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.989 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.989 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.989 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.989 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.989 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.989 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.989 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.989 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.989 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.989 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.989 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.989 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.989 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.989 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.989 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:21:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.989 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.989 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.989 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.989 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.989 12 ERROR oslo_messaging.notify.messaging Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.989 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.989 12 ERROR oslo_messaging.notify.messaging Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.989 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.989 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.989 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.989 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.989 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.989 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.989 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.989 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.989 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.989 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.989 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.989 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.989 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.989 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:21:07.989 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.989 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.989 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.989 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.989 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.989 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.989 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.989 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.989 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.989 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.989 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.989 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.989 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.989 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.989 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.989 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.989 12 ERROR oslo_messaging.notify.messaging Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.990 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.991 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.outgoing.bytes volume: 9770 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.992 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '56815010-2f2a-4e6f-ac1c-32eff6764621', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 9770, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:21:07.991026', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '477a6d92-d28d-11f0-aaf2-fa163e118844', 'monotonic_time': 12886.165819636, 'message_signature': 'bd60571b238dd35b35ed66301f654b3721ed77a922fd572f1740eb2783105e84'}]}, 'timestamp': '2025-12-06 10:21:07.991672', '_unique_id': 'ec3fb65239e24d1dba6cef9ce56a49a4'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.992 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:21:07.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.992 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.992 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.992 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.992 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.992 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.992 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.992 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.992 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.992 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.992 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.992 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.992 12 ERROR oslo_messaging.notify.messaging Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.992 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.992 12 ERROR oslo_messaging.notify.messaging Dec 6 05:21:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.992 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.992 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.992 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.992 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.992 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 
05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.992 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.992 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.992 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.992 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.992 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:21:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.992 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.992 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.992 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.992 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.992 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.992 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:07.992 12 ERROR oslo_messaging.notify.messaging Dec 6 05:21:07 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:21:07.994 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.010 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/memory.usage volume: 51.80859375 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.012 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '7fd8bc16-cf2f-477c-9f23-d13d70c15939', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 51.80859375, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'timestamp': '2025-12-06T10:21:07.994711', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': '477d5eee-d28d-11f0-aaf2-fa163e118844', 'monotonic_time': 12886.259447318, 'message_signature': 
'360bcb06b3d64126591c5a04364fdd6a319534899409c25d8ffd4ba37849f3db'}]}, 'timestamp': '2025-12-06 10:21:08.010914', '_unique_id': '3da0899143144767acc586d66cd3cd08'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.012 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.012 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.012 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.012 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.012 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.012 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.012 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.012 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.012 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.012 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.012 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.012 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.012 12 ERROR 
oslo_messaging.notify.messaging Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.012 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.012 12 ERROR oslo_messaging.notify.messaging Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.012 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.012 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.012 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.012 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:21:08.012 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.012 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.012 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.012 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.012 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:21:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.012 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.012 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.012 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.012 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.012 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.012 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) 
from exc Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.012 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.012 12 ERROR oslo_messaging.notify.messaging Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.013 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.013 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.014 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.016 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '51e6ca14-cee8-4c8b-b783-c1ed65a82bd2', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:21:08.013875', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '477ded8c-d28d-11f0-aaf2-fa163e118844', 'monotonic_time': 12886.176148892, 'message_signature': '04ef3857d19f81e388a3208e1d4df35f643e8a07dae003e7c9ea606c8b250d8b'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:21:08.013875', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 
'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '477e00ce-d28d-11f0-aaf2-fa163e118844', 'monotonic_time': 12886.176148892, 'message_signature': '70562f9024703b0109965949997ec90feeb7f538958375b691bb5cf417fe5592'}]}, 'timestamp': '2025-12-06 10:21:08.014999', '_unique_id': '2b7313cb68fc45008a1418ff6ff82b0a'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.016 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.016 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.016 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.016 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.016 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.016 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.016 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.016 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.016 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.016 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.016 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.016 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.016 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.016 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.016 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.016 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:21:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.016 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.016 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.016 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.016 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.016 12 ERROR oslo_messaging.notify.messaging Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.016 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.016 12 ERROR oslo_messaging.notify.messaging Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.016 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.016 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.016 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.016 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.016 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.016 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.016 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.016 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.016 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.016 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.016 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.016 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.016 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.016 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:21:08.016 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.016 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.016 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.016 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.016 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.016 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.016 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.016 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.016 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.016 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.016 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.016 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.016 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.016 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.016 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.016 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.016 12 ERROR oslo_messaging.notify.messaging Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.017 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.017 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.requests volume: 47 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.018 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.019 12 ERROR oslo_messaging.notify.messaging [-] Could not 
send notification to notifications. Payload={'message_id': '5902eb1a-7ea2-494c-a02d-aa93517a49a4', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 47, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:21:08.017529', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '477e7a22-d28d-11f0-aaf2-fa163e118844', 'monotonic_time': 12886.20423423, 'message_signature': '9b874b6dbb53fa3e40f8cc3ef7ae038922a007136ffd4f99b10d91380a822d79'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:21:08.017529', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 
'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '477e8be8-d28d-11f0-aaf2-fa163e118844', 'monotonic_time': 12886.20423423, 'message_signature': 'd8f50070d704eca3a189e0fd7ddd730fe92cf65abc3c4282d63cc54f9522abde'}]}, 'timestamp': '2025-12-06 10:21:08.018497', '_unique_id': 'b9896e64673d4e689ec7b0771bfedf2e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.019 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.019 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.019 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.019 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.019 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.019 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.019 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.019 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.019 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:21:08 
localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.019 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.019 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.019 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.019 12 ERROR oslo_messaging.notify.messaging Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.019 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.019 12 ERROR oslo_messaging.notify.messaging Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.019 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.019 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 
10:21:08.019 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.019 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.019 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.019 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.019 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:21:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.019 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.019 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.019 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.019 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.019 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.019 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.019 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.019 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.019 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.019 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.019 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.020 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.020 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.requests volume: 1272 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.021 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.022 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'c2f4c53a-a6e2-4c2a-be11-e78e00e19131', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1272, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:21:08.020916', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '477efd1c-d28d-11f0-aaf2-fa163e118844', 'monotonic_time': 12886.20423423, 'message_signature': '4095b4d897fed516baa3fd440bc44352f39208d74edef3113033e164e0ce74f0'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 124, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:21:08.020916', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '477f10a4-d28d-11f0-aaf2-fa163e118844', 'monotonic_time': 12886.20423423, 'message_signature': '501c685a1258f21f51ef2125eba117f9df9016356dd74df7689793290d16bda7'}]}, 'timestamp': '2025-12-06 10:21:08.021929', '_unique_id': '3c35314656ef444aaeed4bb727cfcb9b'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.022 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.022 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.022 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.022 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.022 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.022 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.022 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.022 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.022 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.022 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.022 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.022 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.022 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.022 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.022 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.022 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.022 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.022 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.022 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.022 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.022 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.022 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.022 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.022 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.022 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.022 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.022 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.022 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.022 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.022 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.022 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.024 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.024 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.024 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/cpu volume: 18120000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.026 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '16b7913b-666c-4f87-b11c-33d0021df07e', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 18120000000, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'timestamp': '2025-12-06T10:21:08.024923', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': '477f992a-d28d-11f0-aaf2-fa163e118844', 'monotonic_time': 12886.259447318, 'message_signature': '6a42028a5ad482f765fe66ecb8c0cdce02cf7b7a98eea37d741bab456f011849'}]}, 'timestamp': '2025-12-06 10:21:08.025444', '_unique_id': '74b2031bea154dbdba05e6b33da6e47f'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.026 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.026 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.026 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.026 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.026 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.026 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.026 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.026 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.026 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.026 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.026 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.026 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.026 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.026 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.026 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.026 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.026 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.026 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.026 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.026 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.026 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.026 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.026 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.026 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.026 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.026 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.026 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.026 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.026 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.026 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.026 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.027 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.027 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.outgoing.packets volume: 114 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.029 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '307ce6bf-eb8f-4fce-baa2-b3bccec308ff', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 114, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:21:08.027799', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '47800a72-d28d-11f0-aaf2-fa163e118844', 'monotonic_time': 12886.165819636, 'message_signature': 'fb053d1eda83b3b6dcd127eb2d6a6b8b83ab28f2010acfff5d1f1140ad507077'}]}, 'timestamp': '2025-12-06 10:21:08.028324', '_unique_id': '1670a4c12b704344a147b5c99c218c7e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.029 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.029 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.029 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.029 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.029 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.029 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.029 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.029 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.029 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.029 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.029 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.029 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.029 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.029 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.029 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.029 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.029 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.029 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.029 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.029 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.029 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.029 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.029 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.029 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.029 12 ERROR oslo_messaging.notify.messaging
File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.029 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.029 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.029 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.029 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.029 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.029 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:21:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.029 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.029 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.029 12 ERROR oslo_messaging.notify.messaging Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.030 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.030 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.bytes volume: 35560448 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.031 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.bytes volume: 2154496 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.032 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'f5cafd48-f1f5-497d-a435-0f7df2a32ad4', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 35560448, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:21:08.030856', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '4780816e-d28d-11f0-aaf2-fa163e118844', 'monotonic_time': 12886.20423423, 'message_signature': '2b7f5ddd0cb5d117140ebadf3865b008a42a0c247b58c554ca2e6bcccba74429'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 2154496, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:21:08.030856', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '47809302-d28d-11f0-aaf2-fa163e118844', 'monotonic_time': 12886.20423423, 'message_signature': '1584d3a6d74c625965a15a239bc26b816b6339d638b9f1703288bcfc6194d9c6'}]}, 'timestamp': '2025-12-06 10:21:08.031852', '_unique_id': '07cd2ad0d33045a085e05bf9bf0b1377'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.032 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.032 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.032 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.032 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.032 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.032 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.032 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.032 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.032 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:21:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.032 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.032 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.032 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.032 12 ERROR oslo_messaging.notify.messaging Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.032 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.032 12 ERROR oslo_messaging.notify.messaging Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.032 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.032 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.032 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.032 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.032 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.032 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.032 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:21:08.032 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.032 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.032 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.032 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.032 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.032 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.032 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.032 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.032 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.032 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.032 12 ERROR oslo_messaging.notify.messaging Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.034 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.034 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.034 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.036 12 ERROR oslo_messaging.notify.messaging [-] Could not send 
notification to notifications. Payload={'message_id': '98dfa394-d37e-4e33-a2a8-87dc52b6ecd8', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:21:08.034706', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '47811f7a-d28d-11f0-aaf2-fa163e118844', 'monotonic_time': 12886.165819636, 'message_signature': '10fd1d11dcc4c2fd735823b1ecb6d105e11bbec351158368a510c707bab3e1c6'}]}, 'timestamp': '2025-12-06 10:21:08.035520', '_unique_id': 'ba817ad0a29e4cf4b8468a5b5f912124'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.036 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:21:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.036 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.036 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.036 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.036 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.036 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.036 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.036 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.036 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.036 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.036 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.036 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.036 12 ERROR oslo_messaging.notify.messaging Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.036 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.036 12 ERROR oslo_messaging.notify.messaging Dec 6 05:21:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.036 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.036 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.036 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.036 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.036 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 
05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.036 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.036 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.036 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.036 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.036 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:21:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.036 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.036 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.036 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.036 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.036 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.036 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.036 12 ERROR oslo_messaging.notify.messaging Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:21:08.038 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.038 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.039 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.040 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '9db31de4-ea3c-42ff-856f-ee16995f158f', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:21:08.038564', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '4781b2e6-d28d-11f0-aaf2-fa163e118844', 'monotonic_time': 12886.176148892, 'message_signature': 'c6df7914702b5af2cfbe9602a20e2d97f72d1afcb17fb332a0529746979dc690'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:21:08.038564', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '4781c7d6-d28d-11f0-aaf2-fa163e118844', 'monotonic_time': 12886.176148892, 'message_signature': '1fae707ab140ce786f6dcfcc4dc35d612399b2f4d6062d0609766e0e35c31a97'}]}, 'timestamp': '2025-12-06 10:21:08.039713', '_unique_id': '06567edceb3f41e8941dbcd0397d8b40'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.040 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.040 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.040 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.040 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.040 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.040 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.040 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.040 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.040 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.040 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.040 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.040 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.040 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.040 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.040 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.040 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.040 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.040 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.040 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.040 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.040 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.040 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.040 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.040 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.040 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.040 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.040 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.040 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.040 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.040 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.040 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.042 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.042 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.042 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.044 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'e7fb265a-85bd-4c90-a44e-f203189c8abe', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:21:08.042626', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '478252b4-d28d-11f0-aaf2-fa163e118844', 'monotonic_time': 12886.165819636, 'message_signature': '0e0bf24c1723e38bb4c8593d5f77531e4888f27fe5fd6dc41efbbde6d3075ea3'}]}, 'timestamp': '2025-12-06 10:21:08.043284', '_unique_id': '1a9046cb48ea41b19f10e8dcee0bcca4'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.044 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.044 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.044 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.044 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.044 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.044 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.044 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.044 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.044 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.044 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.044 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.044 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.044 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.044 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.044 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.044 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.044 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.044 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.044 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.044 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.044 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.044 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.044 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.044 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.044 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.044 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.044 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.044 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.044 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.044 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.044 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.045 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.045 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.bytes volume: 389120 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.046 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.047 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'f8c16fde-761d-4a2d-86c0-306a8d868df7', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 389120, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:21:08.045794', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '4782cc1c-d28d-11f0-aaf2-fa163e118844', 'monotonic_time': 12886.20423423, 'message_signature': '5a8bbb6b739940245c72cd4acadcd26708eaa757f180b3434fa7aa66ae9de6c1'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:21:08.045794', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '4782dc34-d28d-11f0-aaf2-fa163e118844', 'monotonic_time': 12886.20423423, 'message_signature': '935c7054c800ceef23c2d5329ff24ceffab127871b61ca4a134cf3bd8004c7bc'}]}, 'timestamp': '2025-12-06 10:21:08.046685', '_unique_id': 'e6225dd6b33a423ca1eccc088eeb1482'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.047 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.047 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.047 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.047 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.047 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.047 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.047 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.047 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.047 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.047 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.047 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.047 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.047 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.047 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.047 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.047 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.047 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.047 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.047 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.047 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.047 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.047 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.047 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.047 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.047 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.047 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.047 12 ERROR oslo_messaging.notify.messaging
return retry_over_time( Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.047 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.047 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.047 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.047 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.047 12 ERROR oslo_messaging.notify.messaging Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.048 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.048 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.latency volume: 1252245154 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.048 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.latency volume: 27668224 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.049 12 ERROR oslo_messaging.notify.messaging 
[-] Could not send notification to notifications. Payload={'message_id': '8bae18df-68f0-4020-aca4-a8771fa54657', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1252245154, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:21:08.048165', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '47832360-d28d-11f0-aaf2-fa163e118844', 'monotonic_time': 12886.20423423, 'message_signature': '83781222efa8d8575414aa46d658de500d6edb8954947b0709b5245972e7d677'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 27668224, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:21:08.048165', 'resource_metadata': {'display_name': 'test', 'name': 
'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '4783312a-d28d-11f0-aaf2-fa163e118844', 'monotonic_time': 12886.20423423, 'message_signature': '86ef0f6ba7ead2905ebf685d53365a1f27d8702c6aa6deffce6e9162ad9ab83b'}]}, 'timestamp': '2025-12-06 10:21:08.048923', '_unique_id': '0e93c74b021844569753cae5e9e693bd'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.049 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.049 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.049 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.049 12 
ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.049 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.049 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.049 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.049 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.049 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 
05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.049 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.049 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.049 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.049 12 ERROR oslo_messaging.notify.messaging Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.049 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.049 12 ERROR oslo_messaging.notify.messaging Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.049 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.049 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:21:08.049 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.049 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.049 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.049 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.049 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:21:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.049 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.049 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.049 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.049 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.049 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.049 12 
ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.049 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.049 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.049 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.049 12 ERROR oslo_messaging.notify.messaging Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.050 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.050 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.051 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '97514aaa-cd39-4be8-908d-e7bd2d893c02', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:21:08.050908', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '47838eae-d28d-11f0-aaf2-fa163e118844', 'monotonic_time': 12886.165819636, 'message_signature': '5723cfa44abd4559504f1f79117ef6ce238b9fdc3424adf0e5e1d9f12e634140'}]}, 'timestamp': '2025-12-06 10:21:08.051290', '_unique_id': 'fff598fd3096457185c4eb98a91e1348'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.051 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:21:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.051 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.051 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.051 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.051 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.051 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.051 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.051 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.051 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.051 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.051 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.051 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.051 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.051 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.051 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.051 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.051 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.051 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.051 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.051 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.051 12 ERROR oslo_messaging.notify.messaging Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.051 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.051 12 ERROR oslo_messaging.notify.messaging Dec 6 05:21:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.051 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.051 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.051 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.051 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.051 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.051 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.051 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.051 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.051 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.051 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 
05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.051 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.051 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.051 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.051 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.051 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.051 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.051 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.051 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.051 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.051 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:21:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.051 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.051 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.051 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.051 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.051 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.051 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.051 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.051 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.051 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.051 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:21:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:21:08.051 12 ERROR oslo_messaging.notify.messaging Dec 6 05:21:08 localhost ceph-mon[298582]: 
mon.np0005548789@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:21:08 localhost neutron_sriov_agent[256690]: 2025-12-06 10:21:08.502 2 INFO neutron.agent.securitygroups_rpc [None req-4469b72b-a2a8-46c0-b170-fb645c70fec6 a2f6f80b9d5a42ccb727340d59efb967 a18f82f0d09644c7b6d23e2fece8be4f - - default default] Security group member updated ['f89459b7-5955-49a9-980d-ccf671c641e2']#033[00m Dec 6 05:21:08 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Dec 6 05:21:08 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/126387762' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Dec 6 05:21:08 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Dec 6 05:21:08 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/126387762' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Dec 6 05:21:08 localhost nova_compute[282193]: 2025-12-06 10:21:08.735 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:21:08 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Dec 6 05:21:08 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3307122424' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Dec 6 05:21:08 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Dec 6 05:21:08 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/3307122424' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Dec 6 05:21:09 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:21:09.277 263652 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:21:06Z, description=, device_id=7ecd23ba-4ca3-4eae-9829-cff158a165a0, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=3f7091ee-adf1-4a41-bf51-535f147c89c5, ip_allocation=immediate, mac_address=fa:16:3e:67:09:c2, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:21:00Z, description=, dns_domain=, id=5986df1f-13f3-42c1-bcc4-79dcf74a49a9, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-1321854165, port_security_enabled=True, project_id=e82deaff368b4feea9fec0f06459a6ca, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=10820, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2609, status=ACTIVE, subnets=['b8481c35-3dcb-4ca8-9bae-441805cdac62'], tags=[], tenant_id=e82deaff368b4feea9fec0f06459a6ca, updated_at=2025-12-06T10:21:02Z, vlan_transparent=None, network_id=5986df1f-13f3-42c1-bcc4-79dcf74a49a9, port_security_enabled=False, project_id=e82deaff368b4feea9fec0f06459a6ca, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2656, status=DOWN, tags=[], tenant_id=e82deaff368b4feea9fec0f06459a6ca, updated_at=2025-12-06T10:21:07Z on network 5986df1f-13f3-42c1-bcc4-79dcf74a49a9#033[00m Dec 6 05:21:09 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:21:09.314 263652 INFO 
neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:21:09 localhost dnsmasq[330917]: read /var/lib/neutron/dhcp/5986df1f-13f3-42c1-bcc4-79dcf74a49a9/addn_hosts - 1 addresses Dec 6 05:21:09 localhost dnsmasq-dhcp[330917]: read /var/lib/neutron/dhcp/5986df1f-13f3-42c1-bcc4-79dcf74a49a9/host Dec 6 05:21:09 localhost dnsmasq-dhcp[330917]: read /var/lib/neutron/dhcp/5986df1f-13f3-42c1-bcc4-79dcf74a49a9/opts Dec 6 05:21:09 localhost podman[331079]: 2025-12-06 10:21:09.520891877 +0000 UTC m=+0.070032722 container kill 4f9a605bde30cb63d1258bafc5b6357b3e8a6178585b25833a2d7aa12944aa53 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5986df1f-13f3-42c1-bcc4-79dcf74a49a9, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125) Dec 6 05:21:09 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:21:09.824 263652 INFO neutron.agent.dhcp.agent [None req-83c88103-66c6-4f19-88b2-1aa7358b69a2 - - - - - -] DHCP configuration for ports {'3f7091ee-adf1-4a41-bf51-535f147c89c5'} is completed#033[00m Dec 6 05:21:09 localhost nova_compute[282193]: 2025-12-06 10:21:09.967 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:21:12 localhost neutron_sriov_agent[256690]: 2025-12-06 10:21:12.055 2 INFO neutron.agent.securitygroups_rpc [None req-a8044dd6-257e-4d6a-a5a6-1617984725a4 a2f6f80b9d5a42ccb727340d59efb967 a18f82f0d09644c7b6d23e2fece8be4f - - default default] Security group member updated ['f89459b7-5955-49a9-980d-ccf671c641e2']#033[00m Dec 6 05:21:12 localhost neutron_sriov_agent[256690]: 2025-12-06 
10:21:12.762 2 INFO neutron.agent.securitygroups_rpc [None req-ce6b28f9-e93f-45d2-8a9b-cc88bd3abac1 a2f6f80b9d5a42ccb727340d59efb967 a18f82f0d09644c7b6d23e2fece8be4f - - default default] Security group member updated ['f89459b7-5955-49a9-980d-ccf671c641e2']#033[00m Dec 6 05:21:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e. Dec 6 05:21:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094. Dec 6 05:21:12 localhost podman[331101]: 2025-12-06 10:21:12.930693768 +0000 UTC m=+0.087217337 container health_status bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm) Dec 6 05:21:12 localhost systemd[1]: tmp-crun.FXH0zB.mount: Deactivated successfully. Dec 6 05:21:12 localhost podman[331101]: 2025-12-06 10:21:12.972332171 +0000 UTC m=+0.128855770 container exec_died bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:21:12 localhost podman[331100]: 2025-12-06 10:21:12.979608053 +0000 UTC m=+0.139057252 container health_status 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, build-date=2025-08-20T13:12:41, vcs-type=git, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, distribution-scope=public, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, maintainer=Red Hat, 
Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, release=1755695350, architecture=x86_64, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, io.openshift.expose-services=) Dec 6 05:21:12 localhost systemd[1]: bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094.service: Deactivated successfully. Dec 6 05:21:12 localhost podman[331100]: 2025-12-06 10:21:12.996128459 +0000 UTC m=+0.155577668 container exec_died 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, architecture=x86_64, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, name=ubi9-minimal, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.6, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, config_id=edpm, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, vcs-type=git) Dec 6 05:21:13 localhost systemd[1]: 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e.service: Deactivated successfully. 
Dec 6 05:21:13 localhost ovn_metadata_agent[160504]: 2025-12-06 10:21:13.079 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:6c:02', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:a8:2f:0c:cb:a1'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:21:13 localhost ovn_metadata_agent[160504]: 2025-12-06 10:21:13.080 160509 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Dec 6 05:21:13 localhost nova_compute[282193]: 2025-12-06 10:21:13.081 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:21:13 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:21:13 localhost nova_compute[282193]: 2025-12-06 10:21:13.773 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:21:14 localhost neutron_sriov_agent[256690]: 2025-12-06 10:21:14.547 2 INFO neutron.agent.securitygroups_rpc [None req-ff4891e2-327c-4b51-b5fc-5a809ca2f304 14db54f428d84c88aba32e6937011f75 969bd3cedbbc4b03a4546a8b852f13f2 - - default default] Security group rule updated ['903bc67e-3e9d-4b17-b669-c51b1cfd9fe6']#033[00m Dec 6 05:21:14 localhost dnsmasq[330917]: read 
/var/lib/neutron/dhcp/5986df1f-13f3-42c1-bcc4-79dcf74a49a9/addn_hosts - 0 addresses Dec 6 05:21:14 localhost dnsmasq-dhcp[330917]: read /var/lib/neutron/dhcp/5986df1f-13f3-42c1-bcc4-79dcf74a49a9/host Dec 6 05:21:14 localhost podman[331154]: 2025-12-06 10:21:14.601838573 +0000 UTC m=+0.062925315 container kill 4f9a605bde30cb63d1258bafc5b6357b3e8a6178585b25833a2d7aa12944aa53 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5986df1f-13f3-42c1-bcc4-79dcf74a49a9, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:21:14 localhost dnsmasq-dhcp[330917]: read /var/lib/neutron/dhcp/5986df1f-13f3-42c1-bcc4-79dcf74a49a9/opts Dec 6 05:21:14 localhost nova_compute[282193]: 2025-12-06 10:21:14.799 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:21:14 localhost kernel: device tap48aef5f2-0c left promiscuous mode Dec 6 05:21:14 localhost ovn_controller[154851]: 2025-12-06T10:21:14Z|00460|binding|INFO|Releasing lport 48aef5f2-0c04-4b34-bd2d-f71862404e37 from this chassis (sb_readonly=0) Dec 6 05:21:14 localhost ovn_controller[154851]: 2025-12-06T10:21:14Z|00461|binding|INFO|Setting lport 48aef5f2-0c04-4b34-bd2d-f71862404e37 down in Southbound Dec 6 05:21:14 localhost ovn_metadata_agent[160504]: 2025-12-06 10:21:14.814 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 
'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-5986df1f-13f3-42c1-bcc4-79dcf74a49a9', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5986df1f-13f3-42c1-bcc4-79dcf74a49a9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e82deaff368b4feea9fec0f06459a6ca', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005548789.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d5413063-0727-4de9-8605-e62b7d56e9f4, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=48aef5f2-0c04-4b34-bd2d-f71862404e37) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:21:14 localhost ovn_metadata_agent[160504]: 2025-12-06 10:21:14.817 160509 INFO neutron.agent.ovn.metadata.agent [-] Port 48aef5f2-0c04-4b34-bd2d-f71862404e37 in datapath 5986df1f-13f3-42c1-bcc4-79dcf74a49a9 unbound from our chassis#033[00m Dec 6 05:21:14 localhost ovn_metadata_agent[160504]: 2025-12-06 10:21:14.819 160509 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5986df1f-13f3-42c1-bcc4-79dcf74a49a9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 6 05:21:14 localhost ovn_metadata_agent[160504]: 2025-12-06 10:21:14.820 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[42af6f00-5eee-4c55-91dd-5fc8f7ae0dea]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:21:14 localhost nova_compute[282193]: 
2025-12-06 10:21:14.821 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:21:14 localhost neutron_sriov_agent[256690]: 2025-12-06 10:21:14.888 2 INFO neutron.agent.securitygroups_rpc [None req-4caeb4e0-6839-4739-a438-dff319ba5ebb 14db54f428d84c88aba32e6937011f75 969bd3cedbbc4b03a4546a8b852f13f2 - - default default] Security group rule updated ['903bc67e-3e9d-4b17-b669-c51b1cfd9fe6']#033[00m Dec 6 05:21:14 localhost nova_compute[282193]: 2025-12-06 10:21:14.969 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:21:15 localhost neutron_sriov_agent[256690]: 2025-12-06 10:21:15.813 2 INFO neutron.agent.securitygroups_rpc [None req-90100f1d-f80a-4a04-b568-2870d38561f7 14db54f428d84c88aba32e6937011f75 969bd3cedbbc4b03a4546a8b852f13f2 - - default default] Security group rule updated ['0559edeb-45f4-4489-a671-f1d11bd0b93c']#033[00m Dec 6 05:21:16 localhost dnsmasq[330917]: exiting on receipt of SIGTERM Dec 6 05:21:16 localhost podman[331194]: 2025-12-06 10:21:16.086816606 +0000 UTC m=+0.058151869 container kill 4f9a605bde30cb63d1258bafc5b6357b3e8a6178585b25833a2d7aa12944aa53 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5986df1f-13f3-42c1-bcc4-79dcf74a49a9, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:21:16 localhost systemd[1]: libpod-4f9a605bde30cb63d1258bafc5b6357b3e8a6178585b25833a2d7aa12944aa53.scope: Deactivated successfully. 
Dec 6 05:21:16 localhost podman[331206]: 2025-12-06 10:21:16.165330006 +0000 UTC m=+0.064722690 container died 4f9a605bde30cb63d1258bafc5b6357b3e8a6178585b25833a2d7aa12944aa53 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5986df1f-13f3-42c1-bcc4-79dcf74a49a9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Dec 6 05:21:16 localhost neutron_sriov_agent[256690]: 2025-12-06 10:21:16.206 2 INFO neutron.agent.securitygroups_rpc [None req-1962285c-ec71-4abe-922e-2812550a1f59 14db54f428d84c88aba32e6937011f75 969bd3cedbbc4b03a4546a8b852f13f2 - - default default] Security group rule updated ['0559edeb-45f4-4489-a671-f1d11bd0b93c']#033[00m Dec 6 05:21:16 localhost podman[331206]: 2025-12-06 10:21:16.235980725 +0000 UTC m=+0.135373379 container cleanup 4f9a605bde30cb63d1258bafc5b6357b3e8a6178585b25833a2d7aa12944aa53 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5986df1f-13f3-42c1-bcc4-79dcf74a49a9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2) Dec 6 05:21:16 localhost systemd[1]: libpod-conmon-4f9a605bde30cb63d1258bafc5b6357b3e8a6178585b25833a2d7aa12944aa53.scope: Deactivated successfully. 
Dec 6 05:21:16 localhost podman[331208]: 2025-12-06 10:21:16.26034701 +0000 UTC m=+0.153367699 container remove 4f9a605bde30cb63d1258bafc5b6357b3e8a6178585b25833a2d7aa12944aa53 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5986df1f-13f3-42c1-bcc4-79dcf74a49a9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 6 05:21:16 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:21:16.444 263652 INFO neutron.agent.dhcp.agent [None req-108f80ac-61fb-4ef4-8ace-74d386acc748 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:21:16 localhost openstack_network_exporter[243110]: ERROR 10:21:16 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 6 05:21:16 localhost openstack_network_exporter[243110]: ERROR 10:21:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:21:16 localhost openstack_network_exporter[243110]: ERROR 10:21:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:21:16 localhost openstack_network_exporter[243110]: ERROR 10:21:16 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 6 05:21:16 localhost openstack_network_exporter[243110]: Dec 6 05:21:16 localhost openstack_network_exporter[243110]: ERROR 10:21:16 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 6 05:21:16 localhost openstack_network_exporter[243110]: Dec 6 05:21:16 localhost neutron_sriov_agent[256690]: 2025-12-06 10:21:16.656 2 INFO neutron.agent.securitygroups_rpc [None 
req-62279be6-b47d-4f82-9bae-6dcf35358e74 14db54f428d84c88aba32e6937011f75 969bd3cedbbc4b03a4546a8b852f13f2 - - default default] Security group rule updated ['0559edeb-45f4-4489-a671-f1d11bd0b93c']#033[00m Dec 6 05:21:16 localhost neutron_sriov_agent[256690]: 2025-12-06 10:21:16.875 2 INFO neutron.agent.securitygroups_rpc [None req-c061e6e9-3e5b-41ac-9ef0-5285d101333c 14db54f428d84c88aba32e6937011f75 969bd3cedbbc4b03a4546a8b852f13f2 - - default default] Security group rule updated ['0559edeb-45f4-4489-a671-f1d11bd0b93c']#033[00m Dec 6 05:21:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6. Dec 6 05:21:17 localhost systemd[1]: var-lib-containers-storage-overlay-4ed23ebe6a60fad4889f2ca706b63873c667d977225c0a5045f8dfc5999e212b-merged.mount: Deactivated successfully. Dec 6 05:21:17 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4f9a605bde30cb63d1258bafc5b6357b3e8a6178585b25833a2d7aa12944aa53-userdata-shm.mount: Deactivated successfully. Dec 6 05:21:17 localhost systemd[1]: run-netns-qdhcp\x2d5986df1f\x2d13f3\x2d42c1\x2dbcc4\x2d79dcf74a49a9.mount: Deactivated successfully. 
Dec 6 05:21:17 localhost neutron_sriov_agent[256690]: 2025-12-06 10:21:17.119 2 INFO neutron.agent.securitygroups_rpc [None req-2a7b6fd7-f3e3-4d7a-9d68-bfb25de0babb 14db54f428d84c88aba32e6937011f75 969bd3cedbbc4b03a4546a8b852f13f2 - - default default] Security group rule updated ['0559edeb-45f4-4489-a671-f1d11bd0b93c']#033[00m Dec 6 05:21:17 localhost podman[331237]: 2025-12-06 10:21:17.179334142 +0000 UTC m=+0.083251306 container health_status 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', 
'/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3) Dec 6 05:21:17 localhost podman[331237]: 2025-12-06 10:21:17.191101822 +0000 UTC m=+0.095018966 container exec_died 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image) 
Dec 6 05:21:17 localhost systemd[1]: 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6.service: Deactivated successfully. Dec 6 05:21:17 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:21:17.221 263652 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:21:17 localhost neutron_sriov_agent[256690]: 2025-12-06 10:21:17.299 2 INFO neutron.agent.securitygroups_rpc [None req-a7542a85-dea5-4041-a20f-6eded131077b 14db54f428d84c88aba32e6937011f75 969bd3cedbbc4b03a4546a8b852f13f2 - - default default] Security group rule updated ['0559edeb-45f4-4489-a671-f1d11bd0b93c']#033[00m Dec 6 05:21:17 localhost ovn_controller[154851]: 2025-12-06T10:21:17Z|00462|binding|INFO|Releasing lport 4fb81ffd-e198-4628-9bd0-0c0f0c89c33a from this chassis (sb_readonly=0) Dec 6 05:21:17 localhost nova_compute[282193]: 2025-12-06 10:21:17.450 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:21:17 localhost neutron_sriov_agent[256690]: 2025-12-06 10:21:17.519 2 INFO neutron.agent.securitygroups_rpc [None req-2962579a-8cbd-4a57-b13b-f9182cbc39c2 14db54f428d84c88aba32e6937011f75 969bd3cedbbc4b03a4546a8b852f13f2 - - default default] Security group rule updated ['0559edeb-45f4-4489-a671-f1d11bd0b93c']#033[00m Dec 6 05:21:18 localhost neutron_sriov_agent[256690]: 2025-12-06 10:21:18.070 2 INFO neutron.agent.securitygroups_rpc [None req-c4974b4a-fd74-4b95-bbd1-546aef178ffe 14db54f428d84c88aba32e6937011f75 969bd3cedbbc4b03a4546a8b852f13f2 - - default default] Security group rule updated ['0559edeb-45f4-4489-a671-f1d11bd0b93c']#033[00m Dec 6 05:21:18 localhost ovn_metadata_agent[160504]: 2025-12-06 10:21:18.081 160509 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b142a5ef-fbed-4e92-aa78-e3ad080c6370, col_values=(('external_ids', 
{'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 6 05:21:18 localhost neutron_sriov_agent[256690]: 2025-12-06 10:21:18.195 2 INFO neutron.agent.securitygroups_rpc [None req-3a1797f5-55dd-437d-aa3c-dfbf79d9d8b2 14db54f428d84c88aba32e6937011f75 969bd3cedbbc4b03a4546a8b852f13f2 - - default default] Security group rule updated ['0559edeb-45f4-4489-a671-f1d11bd0b93c']#033[00m Dec 6 05:21:18 localhost neutron_sriov_agent[256690]: 2025-12-06 10:21:18.212 2 INFO neutron.agent.securitygroups_rpc [None req-44e28314-226f-4847-9798-44b59d6b4b35 a2f6f80b9d5a42ccb727340d59efb967 a18f82f0d09644c7b6d23e2fece8be4f - - default default] Security group member updated ['f89459b7-5955-49a9-980d-ccf671c641e2']#033[00m Dec 6 05:21:18 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:21:18 localhost neutron_sriov_agent[256690]: 2025-12-06 10:21:18.334 2 INFO neutron.agent.securitygroups_rpc [None req-b55ae0b1-9a25-4f2d-ae6a-9ce8bc1e6fe6 14db54f428d84c88aba32e6937011f75 969bd3cedbbc4b03a4546a8b852f13f2 - - default default] Security group rule updated ['0559edeb-45f4-4489-a671-f1d11bd0b93c']#033[00m Dec 6 05:21:18 localhost neutron_sriov_agent[256690]: 2025-12-06 10:21:18.576 2 INFO neutron.agent.securitygroups_rpc [None req-b097e5fe-7591-4c88-9845-0ee25de4ff5d a2f6f80b9d5a42ccb727340d59efb967 a18f82f0d09644c7b6d23e2fece8be4f - - default default] Security group member updated ['f89459b7-5955-49a9-980d-ccf671c641e2']#033[00m Dec 6 05:21:18 localhost nova_compute[282193]: 2025-12-06 10:21:18.803 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:21:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538. Dec 6 05:21:18 localhost systemd[1]: tmp-crun.jTBrqu.mount: Deactivated successfully. Dec 6 05:21:18 localhost podman[331257]: 2025-12-06 10:21:18.922491247 +0000 UTC m=+0.084651079 container health_status d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Dec 6 05:21:18 localhost podman[331257]: 2025-12-06 10:21:18.934319568 +0000 UTC m=+0.096479380 container exec_died d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, 
container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Dec 6 05:21:18 localhost systemd[1]: d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538.service: Deactivated successfully. 
Dec 6 05:21:19 localhost neutron_sriov_agent[256690]: 2025-12-06 10:21:19.021 2 INFO neutron.agent.securitygroups_rpc [None req-634a6650-1d18-4bf0-bb6f-8096dc9b484b a2f6f80b9d5a42ccb727340d59efb967 a18f82f0d09644c7b6d23e2fece8be4f - - default default] Security group member updated ['f89459b7-5955-49a9-980d-ccf671c641e2']#033[00m Dec 6 05:21:19 localhost neutron_sriov_agent[256690]: 2025-12-06 10:21:19.197 2 INFO neutron.agent.securitygroups_rpc [None req-8740ee6b-b668-48f2-97f8-ee50cd2a4f10 14db54f428d84c88aba32e6937011f75 969bd3cedbbc4b03a4546a8b852f13f2 - - default default] Security group rule updated ['3e4cda00-96df-465b-a218-fdd9aa158162']#033[00m Dec 6 05:21:19 localhost neutron_sriov_agent[256690]: 2025-12-06 10:21:19.439 2 INFO neutron.agent.securitygroups_rpc [None req-44297d43-b394-4987-b7fa-1e1c5c65d7e5 a2f6f80b9d5a42ccb727340d59efb967 a18f82f0d09644c7b6d23e2fece8be4f - - default default] Security group member updated ['f89459b7-5955-49a9-980d-ccf671c641e2']#033[00m Dec 6 05:21:19 localhost neutron_sriov_agent[256690]: 2025-12-06 10:21:19.802 2 INFO neutron.agent.securitygroups_rpc [None req-ec21728d-7a8b-4886-a802-a9f2fd832d5f a2f6f80b9d5a42ccb727340d59efb967 a18f82f0d09644c7b6d23e2fece8be4f - - default default] Security group member updated ['f89459b7-5955-49a9-980d-ccf671c641e2']#033[00m Dec 6 05:21:20 localhost nova_compute[282193]: 2025-12-06 10:21:20.005 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:21:20 localhost neutron_sriov_agent[256690]: 2025-12-06 10:21:20.071 2 INFO neutron.agent.securitygroups_rpc [None req-ad373519-aa70-4a24-9c68-f8b3d34f07d1 05cea3733946411abb747782f855ad13 e82deaff368b4feea9fec0f06459a6ca - - default default] Security group member updated ['49ffd6de-2ba3-48fb-87b6-b485622383ee']#033[00m Dec 6 05:21:20 localhost neutron_sriov_agent[256690]: 2025-12-06 10:21:20.105 2 INFO neutron.agent.securitygroups_rpc [None 
req-387cce7d-30a2-4fff-b469-9e054d53578e 14db54f428d84c88aba32e6937011f75 969bd3cedbbc4b03a4546a8b852f13f2 - - default default] Security group rule updated ['2dafdbdc-1eca-4442-97f4-c504a138db8a']#033[00m Dec 6 05:21:20 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e180 e180: 6 total, 6 up, 6 in Dec 6 05:21:20 localhost neutron_sriov_agent[256690]: 2025-12-06 10:21:20.585 2 INFO neutron.agent.securitygroups_rpc [None req-d0a643e4-c9dc-4942-bc89-0b7141ebc4ce 14db54f428d84c88aba32e6937011f75 969bd3cedbbc4b03a4546a8b852f13f2 - - default default] Security group rule updated ['2dafdbdc-1eca-4442-97f4-c504a138db8a']#033[00m Dec 6 05:21:20 localhost neutron_sriov_agent[256690]: 2025-12-06 10:21:20.877 2 INFO neutron.agent.securitygroups_rpc [None req-46e2ac08-cde4-4c43-95df-6267eb9b2508 a2f6f80b9d5a42ccb727340d59efb967 a18f82f0d09644c7b6d23e2fece8be4f - - default default] Security group member updated ['f89459b7-5955-49a9-980d-ccf671c641e2']#033[00m Dec 6 05:21:21 localhost neutron_sriov_agent[256690]: 2025-12-06 10:21:21.309 2 INFO neutron.agent.securitygroups_rpc [None req-7161a7af-e2d1-4aef-85f9-991736510e62 14db54f428d84c88aba32e6937011f75 969bd3cedbbc4b03a4546a8b852f13f2 - - default default] Security group rule updated ['9ec4be56-d6a3-492b-8ed5-b0e035114ef3']#033[00m Dec 6 05:21:21 localhost neutron_sriov_agent[256690]: 2025-12-06 10:21:21.955 2 INFO neutron.agent.securitygroups_rpc [None req-5446130a-9f42-445f-9ada-e84f5ef55ebf 14db54f428d84c88aba32e6937011f75 969bd3cedbbc4b03a4546a8b852f13f2 - - default default] Security group rule updated ['9ec4be56-d6a3-492b-8ed5-b0e035114ef3']#033[00m Dec 6 05:21:22 localhost sshd[331278]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:21:23 localhost neutron_sriov_agent[256690]: 2025-12-06 10:21:23.029 2 INFO neutron.agent.securitygroups_rpc [None req-027b5fbe-7eaf-4a47-82ae-b4c37a4f7304 14db54f428d84c88aba32e6937011f75 969bd3cedbbc4b03a4546a8b852f13f2 - - default default] Security group rule updated 
['e92480e8-6d06-4c62-9df4-cfcbb83c433c']#033[00m Dec 6 05:21:23 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Dec 6 05:21:23 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1202839971' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Dec 6 05:21:23 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Dec 6 05:21:23 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1202839971' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Dec 6 05:21:23 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:21:23 localhost neutron_sriov_agent[256690]: 2025-12-06 10:21:23.384 2 INFO neutron.agent.securitygroups_rpc [None req-045a3cff-be52-4b3f-bacc-9f67cad8a71e 14db54f428d84c88aba32e6937011f75 969bd3cedbbc4b03a4546a8b852f13f2 - - default default] Security group rule updated ['e92480e8-6d06-4c62-9df4-cfcbb83c433c']#033[00m Dec 6 05:21:23 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:21:23.735 263652 INFO neutron.agent.linux.ip_lib [None req-c5e83136-da2e-4565-b830-7a6712a783bf - - - - - -] Device tap07894cb1-b1 cannot be used as it has no MAC address#033[00m Dec 6 05:21:23 localhost neutron_sriov_agent[256690]: 2025-12-06 10:21:23.765 2 INFO neutron.agent.securitygroups_rpc [None req-7256d2df-f723-495d-9fc4-3babfe884545 14db54f428d84c88aba32e6937011f75 969bd3cedbbc4b03a4546a8b852f13f2 - - default default] Security group rule updated ['e92480e8-6d06-4c62-9df4-cfcbb83c433c']#033[00m Dec 6 05:21:23 localhost nova_compute[282193]: 2025-12-06 10:21:23.795 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 
__log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:21:23 localhost kernel: device tap07894cb1-b1 entered promiscuous mode Dec 6 05:21:23 localhost ovn_controller[154851]: 2025-12-06T10:21:23Z|00463|binding|INFO|Claiming lport 07894cb1-b1eb-4745-bfa0-45277bc8102d for this chassis. Dec 6 05:21:23 localhost ovn_controller[154851]: 2025-12-06T10:21:23Z|00464|binding|INFO|07894cb1-b1eb-4745-bfa0-45277bc8102d: Claiming unknown Dec 6 05:21:23 localhost NetworkManager[5973]: [1765016483.8085] manager: (tap07894cb1-b1): new Generic device (/org/freedesktop/NetworkManager/Devices/75) Dec 6 05:21:23 localhost nova_compute[282193]: 2025-12-06 10:21:23.806 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:21:23 localhost systemd-udevd[331289]: Network interface NamePolicy= disabled on kernel command line. Dec 6 05:21:23 localhost nova_compute[282193]: 2025-12-06 10:21:23.811 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:21:23 localhost ovn_metadata_agent[160504]: 2025-12-06 10:21:23.818 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-6d1e1b6f-f3ba-4f93-818d-9da1cd142ed4', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6d1e1b6f-f3ba-4f93-818d-9da1cd142ed4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 
'neutron:project_id': 'a18f82f0d09644c7b6d23e2fece8be4f', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1bca2059-94f6-4a4c-a1db-49e48579cd24, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=07894cb1-b1eb-4745-bfa0-45277bc8102d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:21:23 localhost ovn_metadata_agent[160504]: 2025-12-06 10:21:23.820 160509 INFO neutron.agent.ovn.metadata.agent [-] Port 07894cb1-b1eb-4745-bfa0-45277bc8102d in datapath 6d1e1b6f-f3ba-4f93-818d-9da1cd142ed4 bound to our chassis#033[00m Dec 6 05:21:23 localhost ovn_metadata_agent[160504]: 2025-12-06 10:21:23.822 160509 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 6d1e1b6f-f3ba-4f93-818d-9da1cd142ed4 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 6 05:21:23 localhost ovn_metadata_agent[160504]: 2025-12-06 10:21:23.822 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[913eb280-36a1-4c26-8b79-41ecf66743a9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:21:23 localhost journal[230404]: ethtool ioctl error on tap07894cb1-b1: No such device Dec 6 05:21:23 localhost journal[230404]: ethtool ioctl error on tap07894cb1-b1: No such device Dec 6 05:21:23 localhost ovn_controller[154851]: 2025-12-06T10:21:23Z|00465|binding|INFO|Setting lport 07894cb1-b1eb-4745-bfa0-45277bc8102d ovn-installed in OVS Dec 6 05:21:23 localhost ovn_controller[154851]: 2025-12-06T10:21:23Z|00466|binding|INFO|Setting lport 07894cb1-b1eb-4745-bfa0-45277bc8102d up in Southbound Dec 
6 05:21:23 localhost journal[230404]: ethtool ioctl error on tap07894cb1-b1: No such device Dec 6 05:21:23 localhost nova_compute[282193]: 2025-12-06 10:21:23.851 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:21:23 localhost journal[230404]: ethtool ioctl error on tap07894cb1-b1: No such device Dec 6 05:21:23 localhost journal[230404]: ethtool ioctl error on tap07894cb1-b1: No such device Dec 6 05:21:23 localhost journal[230404]: ethtool ioctl error on tap07894cb1-b1: No such device Dec 6 05:21:23 localhost journal[230404]: ethtool ioctl error on tap07894cb1-b1: No such device Dec 6 05:21:23 localhost journal[230404]: ethtool ioctl error on tap07894cb1-b1: No such device Dec 6 05:21:23 localhost nova_compute[282193]: 2025-12-06 10:21:23.888 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:21:23 localhost podman[241090]: time="2025-12-06T10:21:23Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 6 05:21:23 localhost nova_compute[282193]: 2025-12-06 10:21:23.920 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:21:23 localhost podman[241090]: @ - - [06/Dec/2025:10:21:23 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156104 "" "Go-http-client/1.1" Dec 6 05:21:23 localhost podman[241090]: @ - - [06/Dec/2025:10:21:23 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19272 "" "Go-http-client/1.1" Dec 6 05:21:24 localhost neutron_sriov_agent[256690]: 2025-12-06 10:21:24.017 2 INFO neutron.agent.securitygroups_rpc [None req-41802285-db31-4507-ae87-65b8788f6146 05cea3733946411abb747782f855ad13 
e82deaff368b4feea9fec0f06459a6ca - - default default] Security group member updated ['49ffd6de-2ba3-48fb-87b6-b485622383ee']#033[00m Dec 6 05:21:24 localhost neutron_sriov_agent[256690]: 2025-12-06 10:21:24.194 2 INFO neutron.agent.securitygroups_rpc [None req-2554a227-dcf4-4ba9-820f-f0670d58a0fd 14db54f428d84c88aba32e6937011f75 969bd3cedbbc4b03a4546a8b852f13f2 - - default default] Security group rule updated ['e92480e8-6d06-4c62-9df4-cfcbb83c433c']#033[00m Dec 6 05:21:24 localhost neutron_sriov_agent[256690]: 2025-12-06 10:21:24.513 2 INFO neutron.agent.securitygroups_rpc [None req-59f267c4-4d57-4206-9795-2fa0d0bba04b a2f6f80b9d5a42ccb727340d59efb967 a18f82f0d09644c7b6d23e2fece8be4f - - default default] Security group member updated ['f89459b7-5955-49a9-980d-ccf671c641e2']#033[00m Dec 6 05:21:24 localhost podman[331360]: Dec 6 05:21:24 localhost podman[331360]: 2025-12-06 10:21:24.754834911 +0000 UTC m=+0.094317125 container create 871e5234add3e791e06f36666b5266db5c5bf235c01c63595bbc267f8da5356b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6d1e1b6f-f3ba-4f93-818d-9da1cd142ed4, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3) Dec 6 05:21:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5. Dec 6 05:21:24 localhost systemd[1]: Started libpod-conmon-871e5234add3e791e06f36666b5266db5c5bf235c01c63595bbc267f8da5356b.scope. 
Dec 6 05:21:24 localhost podman[331360]: 2025-12-06 10:21:24.711482316 +0000 UTC m=+0.050964540 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 6 05:21:24 localhost systemd[1]: tmp-crun.HUxFGA.mount: Deactivated successfully. Dec 6 05:21:24 localhost systemd[1]: Started libcrun container. Dec 6 05:21:24 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c32e1a0810f6a82ee408c06255873f9870f9500df5b5b512ff2b2f1018e06472/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 6 05:21:24 localhost podman[331360]: 2025-12-06 10:21:24.852161976 +0000 UTC m=+0.191644190 container init 871e5234add3e791e06f36666b5266db5c5bf235c01c63595bbc267f8da5356b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6d1e1b6f-f3ba-4f93-818d-9da1cd142ed4, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:21:24 localhost podman[331360]: 2025-12-06 10:21:24.86374636 +0000 UTC m=+0.203228574 container start 871e5234add3e791e06f36666b5266db5c5bf235c01c63595bbc267f8da5356b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6d1e1b6f-f3ba-4f93-818d-9da1cd142ed4, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2) Dec 6 05:21:24 localhost dnsmasq[331390]: started, version 2.85 cachesize 150 Dec 6 05:21:24 localhost 
dnsmasq[331390]: DNS service limited to local subnets Dec 6 05:21:24 localhost dnsmasq[331390]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 6 05:21:24 localhost dnsmasq[331390]: warning: no upstream servers configured Dec 6 05:21:24 localhost dnsmasq-dhcp[331390]: DHCPv6, static leases only on 2001:db8::, lease time 1d Dec 6 05:21:24 localhost dnsmasq[331390]: read /var/lib/neutron/dhcp/6d1e1b6f-f3ba-4f93-818d-9da1cd142ed4/addn_hosts - 0 addresses Dec 6 05:21:24 localhost dnsmasq-dhcp[331390]: read /var/lib/neutron/dhcp/6d1e1b6f-f3ba-4f93-818d-9da1cd142ed4/host Dec 6 05:21:24 localhost dnsmasq-dhcp[331390]: read /var/lib/neutron/dhcp/6d1e1b6f-f3ba-4f93-818d-9da1cd142ed4/opts Dec 6 05:21:24 localhost podman[331375]: 2025-12-06 10:21:24.901645539 +0000 UTC m=+0.097119100 container health_status 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller) Dec 6 05:21:24 localhost podman[331375]: 2025-12-06 10:21:24.973130784 +0000 UTC m=+0.168604355 container exec_died 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Dec 6 05:21:24 localhost systemd[1]: 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.service: Deactivated successfully. 
Dec 6 05:21:25 localhost nova_compute[282193]: 2025-12-06 10:21:25.007 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:21:25 localhost neutron_sriov_agent[256690]: 2025-12-06 10:21:25.009 2 INFO neutron.agent.securitygroups_rpc [None req-e2ff1412-56d0-4e73-b6d0-7b3abe3cbdda 14db54f428d84c88aba32e6937011f75 969bd3cedbbc4b03a4546a8b852f13f2 - - default default] Security group rule updated ['e92480e8-6d06-4c62-9df4-cfcbb83c433c']#033[00m Dec 6 05:21:25 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:21:25.111 263652 INFO neutron.agent.dhcp.agent [None req-8564d077-3b52-421d-b7d5-566575246df5 - - - - - -] DHCP configuration for ports {'b209dc58-7d4e-4e85-a9ad-fc7f3eb9fd41'} is completed#033[00m Dec 6 05:21:25 localhost dnsmasq[331390]: exiting on receipt of SIGTERM Dec 6 05:21:25 localhost podman[331420]: 2025-12-06 10:21:25.253630338 +0000 UTC m=+0.067151974 container kill 871e5234add3e791e06f36666b5266db5c5bf235c01c63595bbc267f8da5356b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6d1e1b6f-f3ba-4f93-818d-9da1cd142ed4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:21:25 localhost systemd[1]: libpod-871e5234add3e791e06f36666b5266db5c5bf235c01c63595bbc267f8da5356b.scope: Deactivated successfully. 
Dec 6 05:21:25 localhost neutron_sriov_agent[256690]: 2025-12-06 10:21:25.307 2 INFO neutron.agent.securitygroups_rpc [None req-502d7035-1806-4b5f-89df-e87b8d86a05e 14db54f428d84c88aba32e6937011f75 969bd3cedbbc4b03a4546a8b852f13f2 - - default default] Security group rule updated ['e92480e8-6d06-4c62-9df4-cfcbb83c433c']#033[00m Dec 6 05:21:25 localhost podman[331434]: 2025-12-06 10:21:25.330477798 +0000 UTC m=+0.061681967 container died 871e5234add3e791e06f36666b5266db5c5bf235c01c63595bbc267f8da5356b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6d1e1b6f-f3ba-4f93-818d-9da1cd142ed4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:21:25 localhost podman[331434]: 2025-12-06 10:21:25.357993768 +0000 UTC m=+0.089197887 container cleanup 871e5234add3e791e06f36666b5266db5c5bf235c01c63595bbc267f8da5356b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6d1e1b6f-f3ba-4f93-818d-9da1cd142ed4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0) Dec 6 05:21:25 localhost systemd[1]: libpod-conmon-871e5234add3e791e06f36666b5266db5c5bf235c01c63595bbc267f8da5356b.scope: Deactivated successfully. 
Dec 6 05:21:25 localhost podman[331435]: 2025-12-06 10:21:25.400310882 +0000 UTC m=+0.126709945 container remove 871e5234add3e791e06f36666b5266db5c5bf235c01c63595bbc267f8da5356b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6d1e1b6f-f3ba-4f93-818d-9da1cd142ed4, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Dec 6 05:21:25 localhost neutron_sriov_agent[256690]: 2025-12-06 10:21:25.523 2 INFO neutron.agent.securitygroups_rpc [None req-4635dd6b-d978-4577-967f-9e6194773640 a2f6f80b9d5a42ccb727340d59efb967 a18f82f0d09644c7b6d23e2fece8be4f - - default default] Security group member updated ['f89459b7-5955-49a9-980d-ccf671c641e2']#033[00m Dec 6 05:21:25 localhost systemd[1]: var-lib-containers-storage-overlay-c32e1a0810f6a82ee408c06255873f9870f9500df5b5b512ff2b2f1018e06472-merged.mount: Deactivated successfully. Dec 6 05:21:25 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-871e5234add3e791e06f36666b5266db5c5bf235c01c63595bbc267f8da5356b-userdata-shm.mount: Deactivated successfully. Dec 6 05:21:25 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Dec 6 05:21:25 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/921077461' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Dec 6 05:21:25 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Dec 6 05:21:25 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/921077461' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Dec 6 05:21:26 localhost neutron_sriov_agent[256690]: 2025-12-06 10:21:26.028 2 INFO neutron.agent.securitygroups_rpc [None req-47fed9fe-9d6c-4fec-9c42-e4b99f91a654 a2f6f80b9d5a42ccb727340d59efb967 a18f82f0d09644c7b6d23e2fece8be4f - - default default] Security group member updated ['f89459b7-5955-49a9-980d-ccf671c641e2']#033[00m Dec 6 05:21:26 localhost neutron_sriov_agent[256690]: 2025-12-06 10:21:26.084 2 INFO neutron.agent.securitygroups_rpc [None req-23962b7d-779d-4700-97cd-a1c3604fb216 14db54f428d84c88aba32e6937011f75 969bd3cedbbc4b03a4546a8b852f13f2 - - default default] Security group rule updated ['4227525c-3196-4e1b-83f0-62a3222dd04d']#033[00m Dec 6 05:21:26 localhost neutron_sriov_agent[256690]: 2025-12-06 10:21:26.540 2 INFO neutron.agent.securitygroups_rpc [None req-79d1c16c-d523-499a-9021-4bdb2b87f9e2 a2f6f80b9d5a42ccb727340d59efb967 a18f82f0d09644c7b6d23e2fece8be4f - - default default] Security group member updated ['f89459b7-5955-49a9-980d-ccf671c641e2']#033[00m Dec 6 05:21:26 localhost podman[331511]: Dec 6 05:21:26 localhost podman[331511]: 2025-12-06 10:21:26.806828897 +0000 UTC m=+0.083605847 container create 50f88f562d9e110cd7c73bab40c97e60533228fd1ce1f9aa890b9537a1d43ea4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6d1e1b6f-f3ba-4f93-818d-9da1cd142ed4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125) Dec 6 05:21:26 localhost systemd[1]: Started libpod-conmon-50f88f562d9e110cd7c73bab40c97e60533228fd1ce1f9aa890b9537a1d43ea4.scope. 
Dec 6 05:21:26 localhost systemd[1]: Started libcrun container. Dec 6 05:21:26 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/751c4114672752fe297d8f8b90618e47d59106a9537bfb331c2c081878c120d1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 6 05:21:26 localhost podman[331511]: 2025-12-06 10:21:26.764920846 +0000 UTC m=+0.041697846 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 6 05:21:26 localhost podman[331511]: 2025-12-06 10:21:26.871303718 +0000 UTC m=+0.148080678 container init 50f88f562d9e110cd7c73bab40c97e60533228fd1ce1f9aa890b9537a1d43ea4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6d1e1b6f-f3ba-4f93-818d-9da1cd142ed4, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Dec 6 05:21:26 localhost podman[331511]: 2025-12-06 10:21:26.880084206 +0000 UTC m=+0.156861156 container start 50f88f562d9e110cd7c73bab40c97e60533228fd1ce1f9aa890b9537a1d43ea4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6d1e1b6f-f3ba-4f93-818d-9da1cd142ed4, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:21:26 localhost dnsmasq[331529]: started, version 2.85 cachesize 150 Dec 6 05:21:26 localhost dnsmasq[331529]: DNS service limited to local subnets Dec 6 05:21:26 localhost 
dnsmasq[331529]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 6 05:21:26 localhost dnsmasq[331529]: warning: no upstream servers configured Dec 6 05:21:26 localhost dnsmasq-dhcp[331529]: DHCPv6, static leases only on 2001:db8::, lease time 1d Dec 6 05:21:26 localhost dnsmasq-dhcp[331529]: DHCPv6, static leases only on 2001:db8:0:1::, lease time 1d Dec 6 05:21:26 localhost dnsmasq[331529]: read /var/lib/neutron/dhcp/6d1e1b6f-f3ba-4f93-818d-9da1cd142ed4/addn_hosts - 2 addresses Dec 6 05:21:26 localhost dnsmasq-dhcp[331529]: read /var/lib/neutron/dhcp/6d1e1b6f-f3ba-4f93-818d-9da1cd142ed4/host Dec 6 05:21:26 localhost dnsmasq-dhcp[331529]: read /var/lib/neutron/dhcp/6d1e1b6f-f3ba-4f93-818d-9da1cd142ed4/opts Dec 6 05:21:26 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:21:26.938 263652 INFO neutron.agent.dhcp.agent [None req-57d6ff30-5daa-4f69-9fc3-53cf5f818934 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:21:24Z, description=, device_id=, device_owner=, dns_assignment=[, ], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[, ], id=a00dffc7-6f3c-4c71-a5e8-356fd7271314, ip_allocation=immediate, mac_address=fa:16:3e:31:78:60, name=tempest-PortsIpV6TestJSON-1981047382, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:21:21Z, description=, dns_domain=, id=6d1e1b6f-f3ba-4f93-818d-9da1cd142ed4, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PortsIpV6TestJSON-2107693199, port_security_enabled=True, project_id=a18f82f0d09644c7b6d23e2fece8be4f, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=45674, qos_policy_id=None, 
revision_number=3, router:external=False, shared=False, standard_attr_id=2771, status=ACTIVE, subnets=['156375fe-1af8-48ec-b1e8-48e2e596ca8a', '15947303-a9dd-45df-aac2-e0a5c586be61'], tags=[], tenant_id=a18f82f0d09644c7b6d23e2fece8be4f, updated_at=2025-12-06T10:21:23Z, vlan_transparent=None, network_id=6d1e1b6f-f3ba-4f93-818d-9da1cd142ed4, port_security_enabled=True, project_id=a18f82f0d09644c7b6d23e2fece8be4f, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['f89459b7-5955-49a9-980d-ccf671c641e2'], standard_attr_id=2802, status=DOWN, tags=[], tenant_id=a18f82f0d09644c7b6d23e2fece8be4f, updated_at=2025-12-06T10:21:24Z on network 6d1e1b6f-f3ba-4f93-818d-9da1cd142ed4#033[00m Dec 6 05:21:27 localhost dnsmasq[331529]: read /var/lib/neutron/dhcp/6d1e1b6f-f3ba-4f93-818d-9da1cd142ed4/addn_hosts - 2 addresses Dec 6 05:21:27 localhost dnsmasq-dhcp[331529]: read /var/lib/neutron/dhcp/6d1e1b6f-f3ba-4f93-818d-9da1cd142ed4/host Dec 6 05:21:27 localhost dnsmasq-dhcp[331529]: read /var/lib/neutron/dhcp/6d1e1b6f-f3ba-4f93-818d-9da1cd142ed4/opts Dec 6 05:21:27 localhost podman[331547]: 2025-12-06 10:21:27.139216017 +0000 UTC m=+0.066481963 container kill 50f88f562d9e110cd7c73bab40c97e60533228fd1ce1f9aa890b9537a1d43ea4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6d1e1b6f-f3ba-4f93-818d-9da1cd142ed4, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS) Dec 6 05:21:27 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:21:27.197 263652 INFO neutron.agent.dhcp.agent [None req-68974d9d-792f-4d9e-9829-90267bdb6192 - - - - - -] DHCP configuration for ports 
{'07894cb1-b1eb-4745-bfa0-45277bc8102d', 'a00dffc7-6f3c-4c71-a5e8-356fd7271314', 'b209dc58-7d4e-4e85-a9ad-fc7f3eb9fd41'} is completed#033[00m Dec 6 05:21:27 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:21:27.320 263652 INFO neutron.agent.dhcp.agent [None req-2d5f6596-b50a-47f0-887f-a8c8ac8e535a - - - - - -] DHCP configuration for ports {'a00dffc7-6f3c-4c71-a5e8-356fd7271314'} is completed#033[00m Dec 6 05:21:27 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e181 e181: 6 total, 6 up, 6 in Dec 6 05:21:27 localhost dnsmasq[331529]: exiting on receipt of SIGTERM Dec 6 05:21:27 localhost podman[331586]: 2025-12-06 10:21:27.550837399 +0000 UTC m=+0.069776414 container kill 50f88f562d9e110cd7c73bab40c97e60533228fd1ce1f9aa890b9537a1d43ea4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6d1e1b6f-f3ba-4f93-818d-9da1cd142ed4, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:21:27 localhost systemd[1]: libpod-50f88f562d9e110cd7c73bab40c97e60533228fd1ce1f9aa890b9537a1d43ea4.scope: Deactivated successfully. 
Dec 6 05:21:27 localhost podman[331599]: 2025-12-06 10:21:27.623227162 +0000 UTC m=+0.059443358 container died 50f88f562d9e110cd7c73bab40c97e60533228fd1ce1f9aa890b9537a1d43ea4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6d1e1b6f-f3ba-4f93-818d-9da1cd142ed4, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:21:27 localhost podman[331599]: 2025-12-06 10:21:27.657629114 +0000 UTC m=+0.093845270 container cleanup 50f88f562d9e110cd7c73bab40c97e60533228fd1ce1f9aa890b9537a1d43ea4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6d1e1b6f-f3ba-4f93-818d-9da1cd142ed4, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:21:27 localhost systemd[1]: libpod-conmon-50f88f562d9e110cd7c73bab40c97e60533228fd1ce1f9aa890b9537a1d43ea4.scope: Deactivated successfully. 
Dec 6 05:21:27 localhost podman[331601]: 2025-12-06 10:21:27.695092569 +0000 UTC m=+0.125365813 container remove 50f88f562d9e110cd7c73bab40c97e60533228fd1ce1f9aa890b9537a1d43ea4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6d1e1b6f-f3ba-4f93-818d-9da1cd142ed4, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Dec 6 05:21:27 localhost nova_compute[282193]: 2025-12-06 10:21:27.708 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:21:27 localhost kernel: device tap07894cb1-b1 left promiscuous mode Dec 6 05:21:27 localhost ovn_controller[154851]: 2025-12-06T10:21:27Z|00467|binding|INFO|Releasing lport 07894cb1-b1eb-4745-bfa0-45277bc8102d from this chassis (sb_readonly=0) Dec 6 05:21:27 localhost ovn_controller[154851]: 2025-12-06T10:21:27Z|00468|binding|INFO|Setting lport 07894cb1-b1eb-4745-bfa0-45277bc8102d down in Southbound Dec 6 05:21:27 localhost ovn_metadata_agent[160504]: 2025-12-06 10:21:27.724 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1::2/64 2001:db8::2/64', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-6d1e1b6f-f3ba-4f93-818d-9da1cd142ed4', 'neutron:device_owner': 'network:dhcp', 
'neutron:mtu': '', 'neutron:network_name': 'neutron-6d1e1b6f-f3ba-4f93-818d-9da1cd142ed4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a18f82f0d09644c7b6d23e2fece8be4f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005548789.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1bca2059-94f6-4a4c-a1db-49e48579cd24, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=07894cb1-b1eb-4745-bfa0-45277bc8102d) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:21:27 localhost ovn_metadata_agent[160504]: 2025-12-06 10:21:27.726 160509 INFO neutron.agent.ovn.metadata.agent [-] Port 07894cb1-b1eb-4745-bfa0-45277bc8102d in datapath 6d1e1b6f-f3ba-4f93-818d-9da1cd142ed4 unbound from our chassis#033[00m Dec 6 05:21:27 localhost ovn_metadata_agent[160504]: 2025-12-06 10:21:27.727 160509 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 6d1e1b6f-f3ba-4f93-818d-9da1cd142ed4 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 6 05:21:27 localhost ovn_metadata_agent[160504]: 2025-12-06 10:21:27.728 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[19e30417-38c3-4fe4-99cf-78d2872cb7a6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:21:27 localhost nova_compute[282193]: 2025-12-06 10:21:27.738 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:21:27 localhost systemd[1]: 
var-lib-containers-storage-overlay-751c4114672752fe297d8f8b90618e47d59106a9537bfb331c2c081878c120d1-merged.mount: Deactivated successfully. Dec 6 05:21:27 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-50f88f562d9e110cd7c73bab40c97e60533228fd1ce1f9aa890b9537a1d43ea4-userdata-shm.mount: Deactivated successfully. Dec 6 05:21:28 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:21:28.074 263652 INFO neutron.agent.dhcp.agent [None req-5806e92b-5bf2-4aa5-a473-94a90813d877 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:21:28 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:21:28.075 263652 INFO neutron.agent.dhcp.agent [None req-5806e92b-5bf2-4aa5-a473-94a90813d877 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:21:28 localhost systemd[1]: run-netns-qdhcp\x2d6d1e1b6f\x2df3ba\x2d4f93\x2d818d\x2d9da1cd142ed4.mount: Deactivated successfully. Dec 6 05:21:28 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:21:28.076 263652 INFO neutron.agent.dhcp.agent [None req-5806e92b-5bf2-4aa5-a473-94a90813d877 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:21:28 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:21:28.300 263652 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:21:28 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:21:28 localhost ovn_controller[154851]: 2025-12-06T10:21:28Z|00469|binding|INFO|Releasing lport 4fb81ffd-e198-4628-9bd0-0c0f0c89c33a from this chassis (sb_readonly=0) Dec 6 05:21:28 localhost nova_compute[282193]: 2025-12-06 10:21:28.518 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:21:28 localhost 
nova_compute[282193]: 2025-12-06 10:21:28.850 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:21:30 localhost nova_compute[282193]: 2025-12-06 10:21:30.052 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:21:30 localhost neutron_sriov_agent[256690]: 2025-12-06 10:21:30.643 2 INFO neutron.agent.securitygroups_rpc [None req-650d6f16-dff9-4e9f-a532-00c1d7cd9ac8 a2f6f80b9d5a42ccb727340d59efb967 a18f82f0d09644c7b6d23e2fece8be4f - - default default] Security group member updated ['f89459b7-5955-49a9-980d-ccf671c641e2']#033[00m Dec 6 05:21:31 localhost neutron_sriov_agent[256690]: 2025-12-06 10:21:31.378 2 INFO neutron.agent.securitygroups_rpc [None req-1b25accf-a6fc-48da-91f7-6e8e14cb52bd a2f6f80b9d5a42ccb727340d59efb967 a18f82f0d09644c7b6d23e2fece8be4f - - default default] Security group member updated ['f89459b7-5955-49a9-980d-ccf671c641e2']#033[00m Dec 6 05:21:31 localhost neutron_sriov_agent[256690]: 2025-12-06 10:21:31.438 2 INFO neutron.agent.securitygroups_rpc [None req-253b8277-210a-4573-82a4-3ce2f38be71e cc9a0aebc5df40baa5d30408481c8824 5ea98fc77f0c4728a4c2d7a5429d8129 - - default default] Security group rule updated ['113d3ef2-1b05-41a6-846b-b981d95adda0']#033[00m Dec 6 05:21:31 localhost neutron_sriov_agent[256690]: 2025-12-06 10:21:31.887 2 INFO neutron.agent.securitygroups_rpc [None req-0a2b9713-3499-411d-9f50-733d3731fca5 a2f6f80b9d5a42ccb727340d59efb967 a18f82f0d09644c7b6d23e2fece8be4f - - default default] Security group member updated ['f89459b7-5955-49a9-980d-ccf671c641e2']#033[00m Dec 6 05:21:32 localhost sshd[331629]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:21:32 localhost neutron_sriov_agent[256690]: 2025-12-06 10:21:32.510 2 INFO neutron.agent.securitygroups_rpc [None req-9523440b-4459-4eba-b7c1-0671f42e7aa4 a2f6f80b9d5a42ccb727340d59efb967 
a18f82f0d09644c7b6d23e2fece8be4f - - default default] Security group member updated ['f89459b7-5955-49a9-980d-ccf671c641e2']#033[00m Dec 6 05:21:33 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:21:33 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Dec 6 05:21:33 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/620957375' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Dec 6 05:21:33 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Dec 6 05:21:33 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/620957375' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Dec 6 05:21:33 localhost nova_compute[282193]: 2025-12-06 10:21:33.888 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:21:34 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:21:34.363 263652 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:21:34Z, description=, device_id=d3712577-f8e8-4c55-825d-3cfca431b537, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=ef99384a-2dd1-472a-bed4-f4d953c92a45, ip_allocation=immediate, mac_address=fa:16:3e:2f:bd:38, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T08:43:55Z, description=, 
dns_domain=, id=8e238f59-5792-4ff4-95af-f993c8e9e14f, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=3d603431c0bb4967bafc7a0aa6108bfe, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['be7ff830-a3e2-49cd-a05f-63bc528d6d1c'], tags=[], tenant_id=3d603431c0bb4967bafc7a0aa6108bfe, updated_at=2025-12-06T08:44:01Z, vlan_transparent=None, network_id=8e238f59-5792-4ff4-95af-f993c8e9e14f, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2873, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-06T10:21:34Z on network 8e238f59-5792-4ff4-95af-f993c8e9e14f#033[00m Dec 6 05:21:34 localhost dnsmasq[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 2 addresses Dec 6 05:21:34 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:21:34 localhost podman[331647]: 2025-12-06 10:21:34.579282255 +0000 UTC m=+0.054149486 container kill dd70518caed84fde347015325d6b35481123abc4a39df99f8bc9422c2468322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:21:34 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:21:34 localhost 
neutron_dhcp_agent[263648]: 2025-12-06 10:21:34.822 263652 INFO neutron.agent.dhcp.agent [None req-23df8721-1a62-4d30-9cf3-cc03f9ef211d - - - - - -] DHCP configuration for ports {'ef99384a-2dd1-472a-bed4-f4d953c92a45'} is completed#033[00m Dec 6 05:21:35 localhost nova_compute[282193]: 2025-12-06 10:21:35.097 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:21:35 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e182 e182: 6 total, 6 up, 6 in Dec 6 05:21:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999. Dec 6 05:21:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941. Dec 6 05:21:35 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:21:35.580 263652 INFO neutron.agent.linux.ip_lib [None req-2028a43a-dd13-41c7-b6a9-90eb06345c1d - - - - - -] Device tap7a29805b-bc cannot be used as it has no MAC address#033[00m Dec 6 05:21:35 localhost kernel: device tap7a29805b-bc entered promiscuous mode Dec 6 05:21:35 localhost ovn_controller[154851]: 2025-12-06T10:21:35Z|00470|binding|INFO|Claiming lport 7a29805b-bc7f-4a1f-9655-ca6a5d1f8ed8 for this chassis. 
Dec 6 05:21:35 localhost ovn_controller[154851]: 2025-12-06T10:21:35Z|00471|binding|INFO|7a29805b-bc7f-4a1f-9655-ca6a5d1f8ed8: Claiming unknown Dec 6 05:21:35 localhost NetworkManager[5973]: [1765016495.6171] manager: (tap7a29805b-bc): new Generic device (/org/freedesktop/NetworkManager/Devices/76) Dec 6 05:21:35 localhost nova_compute[282193]: 2025-12-06 10:21:35.606 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:21:35 localhost nova_compute[282193]: 2025-12-06 10:21:35.617 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:21:35 localhost systemd-udevd[331705]: Network interface NamePolicy= disabled on kernel command line. Dec 6 05:21:35 localhost ovn_metadata_agent[160504]: 2025-12-06 10:21:35.625 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-1ceb092b-0721-47ff-8a32-871f82b9c9c0', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1ceb092b-0721-47ff-8a32-871f82b9c9c0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a18f82f0d09644c7b6d23e2fece8be4f', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], 
datapath=cc3e50f0-1643-4b6d-9347-eecb3387f433, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=7a29805b-bc7f-4a1f-9655-ca6a5d1f8ed8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:21:35 localhost ovn_metadata_agent[160504]: 2025-12-06 10:21:35.628 160509 INFO neutron.agent.ovn.metadata.agent [-] Port 7a29805b-bc7f-4a1f-9655-ca6a5d1f8ed8 in datapath 1ceb092b-0721-47ff-8a32-871f82b9c9c0 bound to our chassis#033[00m Dec 6 05:21:35 localhost ovn_metadata_agent[160504]: 2025-12-06 10:21:35.629 160509 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 1ceb092b-0721-47ff-8a32-871f82b9c9c0 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 6 05:21:35 localhost ovn_metadata_agent[160504]: 2025-12-06 10:21:35.630 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[14f42e30-baed-47dd-9f8d-1c01595d48c1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:21:35 localhost ovn_controller[154851]: 2025-12-06T10:21:35Z|00472|binding|INFO|Setting lport 7a29805b-bc7f-4a1f-9655-ca6a5d1f8ed8 ovn-installed in OVS Dec 6 05:21:35 localhost ovn_controller[154851]: 2025-12-06T10:21:35Z|00473|binding|INFO|Setting lport 7a29805b-bc7f-4a1f-9655-ca6a5d1f8ed8 up in Southbound Dec 6 05:21:35 localhost podman[331673]: 2025-12-06 10:21:35.675364251 +0000 UTC m=+0.147118608 container health_status b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 
'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Dec 6 05:21:35 localhost nova_compute[282193]: 2025-12-06 10:21:35.679 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:21:35 localhost podman[331671]: 2025-12-06 10:21:35.633874963 +0000 UTC m=+0.112080277 container health_status 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Dec 6 05:21:35 localhost nova_compute[282193]: 2025-12-06 10:21:35.703 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:21:35 localhost neutron_sriov_agent[256690]: 2025-12-06 10:21:35.706 2 INFO neutron.agent.securitygroups_rpc [None req-b9509f86-2a19-49fe-82cf-8c23b9e8fca9 a2f6f80b9d5a42ccb727340d59efb967 a18f82f0d09644c7b6d23e2fece8be4f - - default default] Security group member updated ['f89459b7-5955-49a9-980d-ccf671c641e2']#033[00m Dec 6 05:21:35 localhost podman[331673]: 2025-12-06 10:21:35.707916946 +0000 UTC m=+0.179671283 container exec_died b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': 
'/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 6 05:21:35 localhost podman[331671]: 2025-12-06 10:21:35.714004102 +0000 UTC m=+0.192209406 container exec_died 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3) Dec 6 05:21:35 localhost systemd[1]: b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941.service: Deactivated successfully. Dec 6 05:21:35 localhost nova_compute[282193]: 2025-12-06 10:21:35.733 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:21:35 localhost systemd[1]: 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.service: Deactivated successfully. Dec 6 05:21:36 localhost nova_compute[282193]: 2025-12-06 10:21:36.181 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:21:36 localhost nova_compute[282193]: 2025-12-06 10:21:36.225 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 05:21:36 localhost nova_compute[282193]: 2025-12-06 10:21:36.225 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 05:21:36 localhost nova_compute[282193]: 2025-12-06 10:21:36.226 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" "released" by 
"nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 05:21:36 localhost nova_compute[282193]: 2025-12-06 10:21:36.226 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Auditing locally available compute resources for np0005548789.localdomain (node: np0005548789.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 6 05:21:36 localhost nova_compute[282193]: 2025-12-06 10:21:36.227 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 6 05:21:36 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e183 e183: 6 total, 6 up, 6 in Dec 6 05:21:36 localhost podman[331792]: Dec 6 05:21:36 localhost podman[331792]: 2025-12-06 10:21:36.610886048 +0000 UTC m=+0.106768725 container create 20c8dd0cc628914b02b73de177e93e44e95860183716d594861a309709f95dd2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1ceb092b-0721-47ff-8a32-871f82b9c9c0, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 6 05:21:36 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 6 05:21:36 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.107:0/2846497516' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 6 05:21:36 localhost nova_compute[282193]: 2025-12-06 10:21:36.644 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 6 05:21:36 localhost podman[331792]: 2025-12-06 10:21:36.552751511 +0000 UTC m=+0.048634218 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 6 05:21:36 localhost systemd[1]: Started libpod-conmon-20c8dd0cc628914b02b73de177e93e44e95860183716d594861a309709f95dd2.scope. Dec 6 05:21:36 localhost systemd[1]: Started libcrun container. Dec 6 05:21:36 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f16e1d58a341217a815476c881a76cf024f00fba0caf0e395a3447aa5ff6a5ef/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 6 05:21:36 localhost podman[331792]: 2025-12-06 10:21:36.701031843 +0000 UTC m=+0.196914510 container init 20c8dd0cc628914b02b73de177e93e44e95860183716d594861a309709f95dd2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1ceb092b-0721-47ff-8a32-871f82b9c9c0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true) Dec 6 05:21:36 localhost podman[331792]: 2025-12-06 10:21:36.70745836 +0000 UTC m=+0.203341007 container start 20c8dd0cc628914b02b73de177e93e44e95860183716d594861a309709f95dd2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, 
name=neutron-dnsmasq-qdhcp-1ceb092b-0721-47ff-8a32-871f82b9c9c0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:21:36 localhost dnsmasq[331812]: started, version 2.85 cachesize 150 Dec 6 05:21:36 localhost dnsmasq[331812]: DNS service limited to local subnets Dec 6 05:21:36 localhost dnsmasq[331812]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 6 05:21:36 localhost dnsmasq[331812]: warning: no upstream servers configured Dec 6 05:21:36 localhost dnsmasq-dhcp[331812]: DHCPv6, static leases only on 2001:db8::, lease time 1d Dec 6 05:21:36 localhost dnsmasq[331812]: read /var/lib/neutron/dhcp/1ceb092b-0721-47ff-8a32-871f82b9c9c0/addn_hosts - 0 addresses Dec 6 05:21:36 localhost dnsmasq-dhcp[331812]: read /var/lib/neutron/dhcp/1ceb092b-0721-47ff-8a32-871f82b9c9c0/host Dec 6 05:21:36 localhost dnsmasq-dhcp[331812]: read /var/lib/neutron/dhcp/1ceb092b-0721-47ff-8a32-871f82b9c9c0/opts Dec 6 05:21:36 localhost nova_compute[282193]: 2025-12-06 10:21:36.733 282197 DEBUG nova.virt.libvirt.driver [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 6 05:21:36 localhost nova_compute[282193]: 2025-12-06 10:21:36.734 282197 DEBUG nova.virt.libvirt.driver [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 6 
05:21:36 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:21:36.760 263652 INFO neutron.agent.dhcp.agent [None req-2028a43a-dd13-41c7-b6a9-90eb06345c1d - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:21:35Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=90f7a28e-a8f2-4712-a6e0-d2bc46b5745f, ip_allocation=immediate, mac_address=fa:16:3e:09:74:62, name=tempest-PortsIpV6TestJSON-1589482234, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:21:33Z, description=, dns_domain=, id=1ceb092b-0721-47ff-8a32-871f82b9c9c0, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PortsIpV6TestJSON-2073272237, port_security_enabled=True, project_id=a18f82f0d09644c7b6d23e2fece8be4f, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=35875, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2870, status=ACTIVE, subnets=['07d85612-b0dd-4672-9a31-4432908cf38c'], tags=[], tenant_id=a18f82f0d09644c7b6d23e2fece8be4f, updated_at=2025-12-06T10:21:34Z, vlan_transparent=None, network_id=1ceb092b-0721-47ff-8a32-871f82b9c9c0, port_security_enabled=True, project_id=a18f82f0d09644c7b6d23e2fece8be4f, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['f89459b7-5955-49a9-980d-ccf671c641e2'], standard_attr_id=2885, status=DOWN, tags=[], tenant_id=a18f82f0d09644c7b6d23e2fece8be4f, updated_at=2025-12-06T10:21:35Z on network 1ceb092b-0721-47ff-8a32-871f82b9c9c0#033[00m Dec 6 05:21:36 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:21:36.813 263652 INFO neutron.agent.dhcp.agent [None 
req-6ffbc636-7bf6-4525-bc7c-10147ebbc29b - - - - - -] DHCP configuration for ports {'9cdca5e6-0cc0-4c47-8039-0c19344c47c0'} is completed#033[00m Dec 6 05:21:36 localhost nova_compute[282193]: 2025-12-06 10:21:36.887 282197 WARNING nova.virt.libvirt.driver [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 6 05:21:36 localhost nova_compute[282193]: 2025-12-06 10:21:36.888 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Hypervisor/Node resource view: name=np0005548789.localdomain free_ram=11163MB free_disk=41.83699035644531GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": 
"0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 6 05:21:36 localhost nova_compute[282193]: 2025-12-06 10:21:36.889 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 05:21:36 localhost nova_compute[282193]: 2025-12-06 10:21:36.889 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 05:21:36 localhost dnsmasq[331812]: read /var/lib/neutron/dhcp/1ceb092b-0721-47ff-8a32-871f82b9c9c0/addn_hosts - 1 addresses Dec 6 05:21:36 localhost dnsmasq-dhcp[331812]: read /var/lib/neutron/dhcp/1ceb092b-0721-47ff-8a32-871f82b9c9c0/host Dec 6 05:21:36 localhost dnsmasq-dhcp[331812]: read /var/lib/neutron/dhcp/1ceb092b-0721-47ff-8a32-871f82b9c9c0/opts Dec 6 05:21:36 localhost podman[331831]: 2025-12-06 10:21:36.930363223 +0000 UTC m=+0.061436249 container kill 
20c8dd0cc628914b02b73de177e93e44e95860183716d594861a309709f95dd2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1ceb092b-0721-47ff-8a32-871f82b9c9c0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:21:36 localhost nova_compute[282193]: 2025-12-06 10:21:36.957 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Instance b7ed0a2e-9350-4933-9334-4e5e08d3e6aa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 6 05:21:36 localhost nova_compute[282193]: 2025-12-06 10:21:36.958 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 6 05:21:36 localhost nova_compute[282193]: 2025-12-06 10:21:36.958 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Final resource view: name=np0005548789.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 6 05:21:36 localhost nova_compute[282193]: 2025-12-06 10:21:36.986 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running cmd (subprocess): ceph df 
--format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 6 05:21:37 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:21:37.154 263652 INFO neutron.agent.dhcp.agent [None req-030c8905-92f4-4789-b960-81f70de49bdf - - - - - -] DHCP configuration for ports {'90f7a28e-a8f2-4712-a6e0-d2bc46b5745f'} is completed#033[00m Dec 6 05:21:37 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 6 05:21:37 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/1607464631' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 6 05:21:37 localhost nova_compute[282193]: 2025-12-06 10:21:37.445 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 6 05:21:37 localhost nova_compute[282193]: 2025-12-06 10:21:37.452 282197 DEBUG nova.compute.provider_tree [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Inventory has not changed in ProviderTree for provider: 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 6 05:21:37 localhost nova_compute[282193]: 2025-12-06 10:21:37.467 282197 DEBUG nova.scheduler.client.report [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Inventory has not changed for provider 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 
'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 6 05:21:37 localhost nova_compute[282193]: 2025-12-06 10:21:37.470 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Compute_service record updated for np0005548789.localdomain:np0005548789.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 6 05:21:37 localhost nova_compute[282193]: 2025-12-06 10:21:37.470 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.581s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 05:21:37 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e184 e184: 6 total, 6 up, 6 in Dec 6 05:21:38 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e184 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:21:38 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e185 e185: 6 total, 6 up, 6 in Dec 6 05:21:38 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:21:38.822 263652 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:21:35Z, description=, device_id=07d96743-59c2-40bb-9e6f-79db48bad162, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=90f7a28e-a8f2-4712-a6e0-d2bc46b5745f, ip_allocation=immediate, mac_address=fa:16:3e:09:74:62, name=tempest-PortsIpV6TestJSON-1589482234, 
network_id=1ceb092b-0721-47ff-8a32-871f82b9c9c0, port_security_enabled=True, project_id=a18f82f0d09644c7b6d23e2fece8be4f, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=3, security_groups=['f89459b7-5955-49a9-980d-ccf671c641e2'], standard_attr_id=2885, status=ACTIVE, tags=[], tenant_id=a18f82f0d09644c7b6d23e2fece8be4f, updated_at=2025-12-06T10:21:37Z on network 1ceb092b-0721-47ff-8a32-871f82b9c9c0#033[00m Dec 6 05:21:38 localhost dnsmasq[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 1 addresses Dec 6 05:21:38 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:21:38 localhost podman[331891]: 2025-12-06 10:21:38.933018442 +0000 UTC m=+0.081951846 container kill dd70518caed84fde347015325d6b35481123abc4a39df99f8bc9422c2468322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS) Dec 6 05:21:38 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:21:38 localhost nova_compute[282193]: 2025-12-06 10:21:38.936 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:21:39 localhost dnsmasq[331812]: read /var/lib/neutron/dhcp/1ceb092b-0721-47ff-8a32-871f82b9c9c0/addn_hosts - 1 addresses Dec 6 05:21:39 localhost dnsmasq-dhcp[331812]: read /var/lib/neutron/dhcp/1ceb092b-0721-47ff-8a32-871f82b9c9c0/host Dec 6 05:21:39 localhost dnsmasq-dhcp[331812]: read 
/var/lib/neutron/dhcp/1ceb092b-0721-47ff-8a32-871f82b9c9c0/opts Dec 6 05:21:39 localhost podman[331920]: 2025-12-06 10:21:39.02191105 +0000 UTC m=+0.049136293 container kill 20c8dd0cc628914b02b73de177e93e44e95860183716d594861a309709f95dd2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1ceb092b-0721-47ff-8a32-871f82b9c9c0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3) Dec 6 05:21:39 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:21:39.330 263652 INFO neutron.agent.dhcp.agent [None req-56fe9e9a-a0a1-4cf0-ae91-907ac3f34e5e - - - - - -] DHCP configuration for ports {'90f7a28e-a8f2-4712-a6e0-d2bc46b5745f'} is completed#033[00m Dec 6 05:21:39 localhost nova_compute[282193]: 2025-12-06 10:21:39.466 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:21:39 localhost sshd[331947]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:21:39 localhost neutron_sriov_agent[256690]: 2025-12-06 10:21:39.902 2 INFO neutron.agent.securitygroups_rpc [None req-07bddad5-b128-4210-8a04-c1adeb45eb18 a2f6f80b9d5a42ccb727340d59efb967 a18f82f0d09644c7b6d23e2fece8be4f - - default default] Security group member updated ['f89459b7-5955-49a9-980d-ccf671c641e2']#033[00m Dec 6 05:21:40 localhost nova_compute[282193]: 2025-12-06 10:21:40.144 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:21:40 localhost dnsmasq[331812]: read 
/var/lib/neutron/dhcp/1ceb092b-0721-47ff-8a32-871f82b9c9c0/addn_hosts - 0 addresses Dec 6 05:21:40 localhost dnsmasq-dhcp[331812]: read /var/lib/neutron/dhcp/1ceb092b-0721-47ff-8a32-871f82b9c9c0/host Dec 6 05:21:40 localhost dnsmasq-dhcp[331812]: read /var/lib/neutron/dhcp/1ceb092b-0721-47ff-8a32-871f82b9c9c0/opts Dec 6 05:21:40 localhost podman[331966]: 2025-12-06 10:21:40.149342702 +0000 UTC m=+0.102335189 container kill 20c8dd0cc628914b02b73de177e93e44e95860183716d594861a309709f95dd2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1ceb092b-0721-47ff-8a32-871f82b9c9c0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true) Dec 6 05:21:40 localhost nova_compute[282193]: 2025-12-06 10:21:40.327 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:21:40 localhost ovn_controller[154851]: 2025-12-06T10:21:40Z|00474|binding|INFO|Releasing lport 7a29805b-bc7f-4a1f-9655-ca6a5d1f8ed8 from this chassis (sb_readonly=0) Dec 6 05:21:40 localhost kernel: device tap7a29805b-bc left promiscuous mode Dec 6 05:21:40 localhost ovn_controller[154851]: 2025-12-06T10:21:40Z|00475|binding|INFO|Setting lport 7a29805b-bc7f-4a1f-9655-ca6a5d1f8ed8 down in Southbound Dec 6 05:21:40 localhost ovn_metadata_agent[160504]: 2025-12-06 10:21:40.355 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 
'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-1ceb092b-0721-47ff-8a32-871f82b9c9c0', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1ceb092b-0721-47ff-8a32-871f82b9c9c0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a18f82f0d09644c7b6d23e2fece8be4f', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cc3e50f0-1643-4b6d-9347-eecb3387f433, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=7a29805b-bc7f-4a1f-9655-ca6a5d1f8ed8) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:21:40 localhost nova_compute[282193]: 2025-12-06 10:21:40.356 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:21:40 localhost ovn_metadata_agent[160504]: 2025-12-06 10:21:40.358 160509 INFO neutron.agent.ovn.metadata.agent [-] Port 7a29805b-bc7f-4a1f-9655-ca6a5d1f8ed8 in datapath 1ceb092b-0721-47ff-8a32-871f82b9c9c0 unbound from our chassis#033[00m Dec 6 05:21:40 localhost ovn_metadata_agent[160504]: 2025-12-06 10:21:40.360 160509 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 1ceb092b-0721-47ff-8a32-871f82b9c9c0 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 6 05:21:40 localhost ovn_metadata_agent[160504]: 2025-12-06 10:21:40.362 160674 DEBUG 
oslo.privsep.daemon [-] privsep: reply[372e837b-2a2b-4ff4-9cc9-061ae7283fd4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:21:40 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e186 e186: 6 total, 6 up, 6 in Dec 6 05:21:41 localhost nova_compute[282193]: 2025-12-06 10:21:41.181 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:21:41 localhost nova_compute[282193]: 2025-12-06 10:21:41.181 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 6 05:21:41 localhost nova_compute[282193]: 2025-12-06 10:21:41.182 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 6 05:21:41 localhost nova_compute[282193]: 2025-12-06 10:21:41.292 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 6 05:21:41 localhost nova_compute[282193]: 2025-12-06 10:21:41.293 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquired lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 6 05:21:41 localhost nova_compute[282193]: 2025-12-06 10:21:41.293 282197 DEBUG nova.network.neutron [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] 
[instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 6 05:21:41 localhost nova_compute[282193]: 2025-12-06 10:21:41.294 282197 DEBUG nova.objects.instance [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b7ed0a2e-9350-4933-9334-4e5e08d3e6aa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 6 05:21:41 localhost dnsmasq[331812]: exiting on receipt of SIGTERM Dec 6 05:21:41 localhost podman[332004]: 2025-12-06 10:21:41.557799817 +0000 UTC m=+0.061451320 container kill 20c8dd0cc628914b02b73de177e93e44e95860183716d594861a309709f95dd2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1ceb092b-0721-47ff-8a32-871f82b9c9c0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:21:41 localhost systemd[1]: libpod-20c8dd0cc628914b02b73de177e93e44e95860183716d594861a309709f95dd2.scope: Deactivated successfully. 
Dec 6 05:21:41 localhost podman[332018]: 2025-12-06 10:21:41.61251502 +0000 UTC m=+0.045338337 container died 20c8dd0cc628914b02b73de177e93e44e95860183716d594861a309709f95dd2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1ceb092b-0721-47ff-8a32-871f82b9c9c0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125) Dec 6 05:21:41 localhost podman[332018]: 2025-12-06 10:21:41.663823138 +0000 UTC m=+0.096646385 container cleanup 20c8dd0cc628914b02b73de177e93e44e95860183716d594861a309709f95dd2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1ceb092b-0721-47ff-8a32-871f82b9c9c0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Dec 6 05:21:41 localhost systemd[1]: libpod-conmon-20c8dd0cc628914b02b73de177e93e44e95860183716d594861a309709f95dd2.scope: Deactivated successfully. 
Dec 6 05:21:41 localhost sshd[332045]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:21:41 localhost podman[332020]: 2025-12-06 10:21:41.692209466 +0000 UTC m=+0.114653996 container remove 20c8dd0cc628914b02b73de177e93e44e95860183716d594861a309709f95dd2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1ceb092b-0721-47ff-8a32-871f82b9c9c0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:21:41 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:21:41.718 263652 INFO neutron.agent.dhcp.agent [None req-89bfc89d-0180-4220-9f43-2f27efa1794d - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:21:41 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:21:41.721 263652 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:21:42 localhost ovn_controller[154851]: 2025-12-06T10:21:42Z|00476|binding|INFO|Releasing lport 4fb81ffd-e198-4628-9bd0-0c0f0c89c33a from this chassis (sb_readonly=0) Dec 6 05:21:42 localhost nova_compute[282193]: 2025-12-06 10:21:42.060 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:21:42 localhost nova_compute[282193]: 2025-12-06 10:21:42.142 282197 DEBUG nova.network.neutron [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Updating instance_info_cache with network_info: [{"id": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "address": "fa:16:3e:64:77:f3", "network": {"id": "652b6bdc-40ce-45b7-8aa5-3bca79987993", "bridge": "br-int", "label": 
"private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.162", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "3d603431c0bb4967bafc7a0aa6108bfe", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86fc0b7a-fb", "ovs_interfaceid": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 6 05:21:42 localhost nova_compute[282193]: 2025-12-06 10:21:42.174 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Releasing lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 6 05:21:42 localhost nova_compute[282193]: 2025-12-06 10:21:42.174 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 6 05:21:42 localhost nova_compute[282193]: 2025-12-06 10:21:42.181 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks 
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:21:42 localhost nova_compute[282193]: 2025-12-06 10:21:42.181 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:21:42 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e187 e187: 6 total, 6 up, 6 in Dec 6 05:21:42 localhost systemd[1]: tmp-crun.mniz3H.mount: Deactivated successfully. Dec 6 05:21:42 localhost systemd[1]: var-lib-containers-storage-overlay-f16e1d58a341217a815476c881a76cf024f00fba0caf0e395a3447aa5ff6a5ef-merged.mount: Deactivated successfully. Dec 6 05:21:42 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-20c8dd0cc628914b02b73de177e93e44e95860183716d594861a309709f95dd2-userdata-shm.mount: Deactivated successfully. Dec 6 05:21:42 localhost systemd[1]: run-netns-qdhcp\x2d1ceb092b\x2d0721\x2d47ff\x2d8a32\x2d871f82b9c9c0.mount: Deactivated successfully. Dec 6 05:21:43 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Dec 6 05:21:43 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/518841331' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Dec 6 05:21:43 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Dec 6 05:21:43 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/518841331' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Dec 6 05:21:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e. 
Dec 6 05:21:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094. Dec 6 05:21:43 localhost podman[332049]: 2025-12-06 10:21:43.143154088 +0000 UTC m=+0.088841466 container health_status 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.buildah.version=1.33.7, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, vcs-type=git, io.openshift.expose-services=, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, vendor=Red Hat, Inc., config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible) Dec 6 05:21:43 localhost podman[332049]: 2025-12-06 10:21:43.15662107 +0000 UTC m=+0.102308478 container exec_died 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vcs-type=git, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, config_id=edpm, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, vendor=Red Hat, Inc., release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, version=9.6) Dec 6 05:21:43 localhost nova_compute[282193]: 2025-12-06 10:21:43.181 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:21:43 localhost nova_compute[282193]: 2025-12-06 10:21:43.181 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:21:43 localhost nova_compute[282193]: 2025-12-06 10:21:43.182 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 6 05:21:43 localhost systemd[1]: tmp-crun.8diATm.mount: Deactivated successfully. 
Dec 6 05:21:43 localhost podman[332050]: 2025-12-06 10:21:43.205513594 +0000 UTC m=+0.148072716 container health_status bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, tcib_managed=true) Dec 6 05:21:43 localhost podman[332050]: 2025-12-06 10:21:43.214614093 +0000 UTC m=+0.157173245 container exec_died 
bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.schema-version=1.0) Dec 6 05:21:43 localhost systemd[1]: 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e.service: Deactivated successfully. Dec 6 05:21:43 localhost systemd[1]: bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094.service: Deactivated successfully. 
Dec 6 05:21:43 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e187 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:21:43 localhost nova_compute[282193]: 2025-12-06 10:21:43.940 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:21:44 localhost neutron_sriov_agent[256690]: 2025-12-06 10:21:44.320 2 INFO neutron.agent.securitygroups_rpc [None req-e0373cd9-ce28-4174-b7a6-2d0d511546d5 a2f6f80b9d5a42ccb727340d59efb967 a18f82f0d09644c7b6d23e2fece8be4f - - default default] Security group member updated ['ecf618e7-df48-4fb6-89e3-d9952de70569']#033[00m Dec 6 05:21:44 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:21:44.870 263652 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:21:44Z, description=, device_id=ce829857-8272-489e-9d8d-e074f2d58a5d, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=cb29ec91-f044-43bb-99c4-9e015acf18ae, ip_allocation=immediate, mac_address=fa:16:3e:31:4a:d6, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T08:43:55Z, description=, dns_domain=, id=8e238f59-5792-4ff4-95af-f993c8e9e14f, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=3d603431c0bb4967bafc7a0aa6108bfe, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['be7ff830-a3e2-49cd-a05f-63bc528d6d1c'], tags=[], 
tenant_id=3d603431c0bb4967bafc7a0aa6108bfe, updated_at=2025-12-06T08:44:01Z, vlan_transparent=None, network_id=8e238f59-5792-4ff4-95af-f993c8e9e14f, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2962, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-06T10:21:44Z on network 8e238f59-5792-4ff4-95af-f993c8e9e14f#033[00m Dec 6 05:21:45 localhost podman[332104]: 2025-12-06 10:21:45.079409106 +0000 UTC m=+0.056620331 container kill dd70518caed84fde347015325d6b35481123abc4a39df99f8bc9422c2468322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true) Dec 6 05:21:45 localhost dnsmasq[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 2 addresses Dec 6 05:21:45 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:21:45 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:21:45 localhost nova_compute[282193]: 2025-12-06 10:21:45.179 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:21:45 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:21:45.315 263652 INFO neutron.agent.dhcp.agent [None req-2fa99148-40fa-4aeb-809c-22e93103a036 - - - - - -] DHCP configuration for ports {'cb29ec91-f044-43bb-99c4-9e015acf18ae'} is completed#033[00m Dec 6 05:21:46 localhost nova_compute[282193]: 
2025-12-06 10:21:46.182 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:21:46 localhost nova_compute[282193]: 2025-12-06 10:21:46.183 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:21:46 localhost nova_compute[282193]: 2025-12-06 10:21:46.450 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:21:46 localhost neutron_sriov_agent[256690]: 2025-12-06 10:21:46.597 2 INFO neutron.agent.securitygroups_rpc [None req-f9e919cd-0c1a-4346-a33d-5b1944f06d7b a2f6f80b9d5a42ccb727340d59efb967 a18f82f0d09644c7b6d23e2fece8be4f - - default default] Security group member updated ['d5c7cc25-d8ea-40c9-b1b2-04cf074e64bb', 'ecf618e7-df48-4fb6-89e3-d9952de70569']#033[00m Dec 6 05:21:46 localhost openstack_network_exporter[243110]: ERROR 10:21:46 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 6 05:21:46 localhost openstack_network_exporter[243110]: ERROR 10:21:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:21:46 localhost openstack_network_exporter[243110]: ERROR 10:21:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:21:46 localhost openstack_network_exporter[243110]: ERROR 10:21:46 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 6 05:21:46 localhost openstack_network_exporter[243110]: Dec 6 05:21:46 localhost openstack_network_exporter[243110]: ERROR 
10:21:46 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 6 05:21:46 localhost openstack_network_exporter[243110]: Dec 6 05:21:47 localhost neutron_sriov_agent[256690]: 2025-12-06 10:21:47.250 2 INFO neutron.agent.securitygroups_rpc [None req-cf0a2394-56b4-45a3-97ca-aedf09b47c4d a2f6f80b9d5a42ccb727340d59efb967 a18f82f0d09644c7b6d23e2fece8be4f - - default default] Security group member updated ['d5c7cc25-d8ea-40c9-b1b2-04cf074e64bb']#033[00m Dec 6 05:21:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:21:47.339 160509 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 05:21:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:21:47.340 160509 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 05:21:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:21:47.341 160509 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 05:21:47 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e188 e188: 6 total, 6 up, 6 in Dec 6 05:21:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6. 
Dec 6 05:21:47 localhost podman[332124]: 2025-12-06 10:21:47.758063257 +0000 UTC m=+0.082178483 container health_status 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:21:47 localhost podman[332124]: 2025-12-06 10:21:47.792072488 +0000 UTC m=+0.116187694 container exec_died 
44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.build-date=20251125, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Dec 6 05:21:47 localhost systemd[1]: 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6.service: Deactivated successfully. 
Dec 6 05:21:48 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:21:48 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e189 e189: 6 total, 6 up, 6 in Dec 6 05:21:49 localhost nova_compute[282193]: 2025-12-06 10:21:49.057 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:21:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538. Dec 6 05:21:49 localhost podman[332143]: 2025-12-06 10:21:49.925456182 +0000 UTC m=+0.086807814 container health_status d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', 
'/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 6 05:21:49 localhost podman[332143]: 2025-12-06 10:21:49.934165218 +0000 UTC m=+0.095516800 container exec_died d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Dec 6 05:21:49 localhost systemd[1]: d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538.service: Deactivated successfully. 
Dec 6 05:21:49 localhost nova_compute[282193]: 2025-12-06 10:21:49.968 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:21:50 localhost nova_compute[282193]: 2025-12-06 10:21:50.210 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:21:51 localhost neutron_sriov_agent[256690]: 2025-12-06 10:21:51.079 2 INFO neutron.agent.securitygroups_rpc [None req-9aa7129f-1e5c-4b11-befb-2e4631e91223 a2f6f80b9d5a42ccb727340d59efb967 a18f82f0d09644c7b6d23e2fece8be4f - - default default] Security group member updated ['2129cf94-a39f-4e4e-ab36-0d488acfdae6']#033[00m Dec 6 05:21:52 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e190 e190: 6 total, 6 up, 6 in Dec 6 05:21:52 localhost neutron_sriov_agent[256690]: 2025-12-06 10:21:52.768 2 INFO neutron.agent.securitygroups_rpc [None req-1911ca26-173a-428d-a937-0c8a9ec17a18 a2f6f80b9d5a42ccb727340d59efb967 a18f82f0d09644c7b6d23e2fece8be4f - - default default] Security group member updated ['2129cf94-a39f-4e4e-ab36-0d488acfdae6', '398ea86c-4672-439c-a6e6-0b07306b07fb', 'fb8e4f1d-f459-47af-ae16-0205f1ef2540']#033[00m Dec 6 05:21:53 localhost neutron_sriov_agent[256690]: 2025-12-06 10:21:53.251 2 INFO neutron.agent.securitygroups_rpc [None req-e01a41a9-2c25-4070-a80a-0e758300cea8 a2f6f80b9d5a42ccb727340d59efb967 a18f82f0d09644c7b6d23e2fece8be4f - - default default] Security group member updated ['398ea86c-4672-439c-a6e6-0b07306b07fb', 'fb8e4f1d-f459-47af-ae16-0205f1ef2540']#033[00m Dec 6 05:21:53 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e190 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:21:53 localhost podman[241090]: time="2025-12-06T10:21:53Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 6 05:21:53 localhost 
podman[241090]: @ - - [06/Dec/2025:10:21:53 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156104 "" "Go-http-client/1.1" Dec 6 05:21:53 localhost podman[241090]: @ - - [06/Dec/2025:10:21:53 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19275 "" "Go-http-client/1.1" Dec 6 05:21:54 localhost nova_compute[282193]: 2025-12-06 10:21:54.119 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:21:54 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Dec 6 05:21:54 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1393965120' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Dec 6 05:21:54 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Dec 6 05:21:54 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/1393965120' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Dec 6 05:21:54 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:21:54.554 263652 INFO neutron.agent.linux.ip_lib [None req-eb07b3ed-4dbd-48c8-9b8c-33ed0e899d4c - - - - - -] Device tap72ee7f1b-13 cannot be used as it has no MAC address#033[00m Dec 6 05:21:54 localhost nova_compute[282193]: 2025-12-06 10:21:54.575 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:21:54 localhost kernel: device tap72ee7f1b-13 entered promiscuous mode Dec 6 05:21:54 localhost NetworkManager[5973]: [1765016514.5842] manager: (tap72ee7f1b-13): new Generic device (/org/freedesktop/NetworkManager/Devices/77) Dec 6 05:21:54 localhost nova_compute[282193]: 2025-12-06 10:21:54.583 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:21:54 localhost ovn_controller[154851]: 2025-12-06T10:21:54Z|00477|binding|INFO|Claiming lport 72ee7f1b-132c-4087-bcf6-0dc2886ccb24 for this chassis. Dec 6 05:21:54 localhost ovn_controller[154851]: 2025-12-06T10:21:54Z|00478|binding|INFO|72ee7f1b-132c-4087-bcf6-0dc2886ccb24: Claiming unknown Dec 6 05:21:54 localhost systemd-udevd[332175]: Network interface NamePolicy= disabled on kernel command line. 
Dec 6 05:21:54 localhost ovn_metadata_agent[160504]: 2025-12-06 10:21:54.602 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-23332176-d495-43f0-b960-60f576e19db9', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-23332176-d495-43f0-b960-60f576e19db9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e82deaff368b4feea9fec0f06459a6ca', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b8461d03-d92a-4a33-8802-d634072db402, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=72ee7f1b-132c-4087-bcf6-0dc2886ccb24) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:21:54 localhost ovn_metadata_agent[160504]: 2025-12-06 10:21:54.605 160509 INFO neutron.agent.ovn.metadata.agent [-] Port 72ee7f1b-132c-4087-bcf6-0dc2886ccb24 in datapath 23332176-d495-43f0-b960-60f576e19db9 bound to our chassis#033[00m Dec 6 05:21:54 localhost ovn_metadata_agent[160504]: 2025-12-06 10:21:54.607 160509 DEBUG neutron.agent.ovn.metadata.agent [-] Port 6dcc78be-11ab-40d5-a0de-d91ea8357630 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Dec 6 05:21:54 localhost ovn_metadata_agent[160504]: 2025-12-06 10:21:54.607 160509 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 23332176-d495-43f0-b960-60f576e19db9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 6 05:21:54 localhost ovn_metadata_agent[160504]: 2025-12-06 10:21:54.609 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[58cb6de8-6f04-4fa7-a6b9-6035080d04fc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:21:54 localhost journal[230404]: ethtool ioctl error on tap72ee7f1b-13: No such device Dec 6 05:21:54 localhost nova_compute[282193]: 2025-12-06 10:21:54.616 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:21:54 localhost ovn_controller[154851]: 2025-12-06T10:21:54Z|00479|binding|INFO|Setting lport 72ee7f1b-132c-4087-bcf6-0dc2886ccb24 ovn-installed in OVS Dec 6 05:21:54 localhost ovn_controller[154851]: 2025-12-06T10:21:54Z|00480|binding|INFO|Setting lport 72ee7f1b-132c-4087-bcf6-0dc2886ccb24 up in Southbound Dec 6 05:21:54 localhost nova_compute[282193]: 2025-12-06 10:21:54.622 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:21:54 localhost journal[230404]: ethtool ioctl error on tap72ee7f1b-13: No such device Dec 6 05:21:54 localhost nova_compute[282193]: 2025-12-06 10:21:54.625 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:21:54 localhost journal[230404]: ethtool ioctl error on tap72ee7f1b-13: No such device Dec 6 05:21:54 localhost journal[230404]: ethtool ioctl error on 
tap72ee7f1b-13: No such device Dec 6 05:21:54 localhost journal[230404]: ethtool ioctl error on tap72ee7f1b-13: No such device Dec 6 05:21:54 localhost journal[230404]: ethtool ioctl error on tap72ee7f1b-13: No such device Dec 6 05:21:54 localhost journal[230404]: ethtool ioctl error on tap72ee7f1b-13: No such device Dec 6 05:21:54 localhost nova_compute[282193]: 2025-12-06 10:21:54.649 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:21:54 localhost journal[230404]: ethtool ioctl error on tap72ee7f1b-13: No such device Dec 6 05:21:54 localhost nova_compute[282193]: 2025-12-06 10:21:54.678 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:21:55 localhost nova_compute[282193]: 2025-12-06 10:21:55.244 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:21:55 localhost neutron_sriov_agent[256690]: 2025-12-06 10:21:55.376 2 INFO neutron.agent.securitygroups_rpc [None req-41969300-114c-4f41-96fa-3fc73bdf2a0b a2f6f80b9d5a42ccb727340d59efb967 a18f82f0d09644c7b6d23e2fece8be4f - - default default] Security group member updated ['f89459b7-5955-49a9-980d-ccf671c641e2']#033[00m Dec 6 05:21:55 localhost podman[332245]: Dec 6 05:21:55 localhost podman[332245]: 2025-12-06 10:21:55.587445409 +0000 UTC m=+0.064994577 container create 071dbc5c9b86072f2d5bda447e10edc83b5e25677868e8ab3267454575f41827 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-23332176-d495-43f0-b960-60f576e19db9, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, 
tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0) Dec 6 05:21:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5. Dec 6 05:21:55 localhost systemd[1]: Started libpod-conmon-071dbc5c9b86072f2d5bda447e10edc83b5e25677868e8ab3267454575f41827.scope. Dec 6 05:21:55 localhost systemd[1]: Started libcrun container. Dec 6 05:21:55 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/988891ef40577becefdf38dadb1a51a6fd48e58dae4cb75c05349582a2ed8956/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 6 05:21:55 localhost podman[332245]: 2025-12-06 10:21:55.558211636 +0000 UTC m=+0.035760824 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 6 05:21:55 localhost podman[332245]: 2025-12-06 10:21:55.664841306 +0000 UTC m=+0.142390494 container init 071dbc5c9b86072f2d5bda447e10edc83b5e25677868e8ab3267454575f41827 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-23332176-d495-43f0-b960-60f576e19db9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:21:55 localhost dnsmasq[332273]: started, version 2.85 cachesize 150 Dec 6 05:21:55 localhost dnsmasq[332273]: DNS service limited to local subnets Dec 6 05:21:55 localhost dnsmasq[332273]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 6 05:21:55 localhost dnsmasq[332273]: warning: no upstream servers configured Dec 6 05:21:55 localhost dnsmasq-dhcp[332273]: DHCP, 
static leases only on 10.100.0.0, lease time 1d Dec 6 05:21:55 localhost dnsmasq[332273]: read /var/lib/neutron/dhcp/23332176-d495-43f0-b960-60f576e19db9/addn_hosts - 0 addresses Dec 6 05:21:55 localhost dnsmasq-dhcp[332273]: read /var/lib/neutron/dhcp/23332176-d495-43f0-b960-60f576e19db9/host Dec 6 05:21:55 localhost dnsmasq-dhcp[332273]: read /var/lib/neutron/dhcp/23332176-d495-43f0-b960-60f576e19db9/opts Dec 6 05:21:55 localhost nova_compute[282193]: 2025-12-06 10:21:55.701 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:21:55 localhost podman[332258]: 2025-12-06 10:21:55.713444971 +0000 UTC m=+0.084276767 container health_status 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller) Dec 6 05:21:55 localhost podman[332245]: 2025-12-06 10:21:55.738179797 +0000 UTC m=+0.215728995 container start 071dbc5c9b86072f2d5bda447e10edc83b5e25677868e8ab3267454575f41827 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-23332176-d495-43f0-b960-60f576e19db9, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:21:55 localhost podman[332258]: 2025-12-06 10:21:55.761207741 +0000 UTC m=+0.132039577 container exec_died 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:21:55 localhost systemd[1]: 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.service: Deactivated successfully. Dec 6 05:21:55 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:21:55.794 263652 INFO neutron.agent.dhcp.agent [None req-b3262e7d-0aad-4820-bcd0-9f351ab7bcd0 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:21:54Z, description=, device_id=2b50982b-9c62-4665-9d93-c7bc12ad6e52, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=9b798f40-f638-4713-ad94-f68eb006cdb3, ip_allocation=immediate, mac_address=fa:16:3e:b6:af:bd, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:21:51Z, description=, dns_domain=, id=23332176-d495-43f0-b960-60f576e19db9, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-1157192730, port_security_enabled=True, project_id=e82deaff368b4feea9fec0f06459a6ca, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=31598, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=3002, status=ACTIVE, subnets=['02f360c2-c764-4a10-aa33-ee333a17b366'], tags=[], tenant_id=e82deaff368b4feea9fec0f06459a6ca, updated_at=2025-12-06T10:21:52Z, vlan_transparent=None, network_id=23332176-d495-43f0-b960-60f576e19db9, port_security_enabled=False, project_id=e82deaff368b4feea9fec0f06459a6ca, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, 
revision_number=1, security_groups=[], standard_attr_id=3014, status=DOWN, tags=[], tenant_id=e82deaff368b4feea9fec0f06459a6ca, updated_at=2025-12-06T10:21:54Z on network 23332176-d495-43f0-b960-60f576e19db9#033[00m Dec 6 05:21:55 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Dec 6 05:21:55 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/880620488' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Dec 6 05:21:55 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Dec 6 05:21:55 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/880620488' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Dec 6 05:21:55 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:21:55.882 263652 INFO neutron.agent.dhcp.agent [None req-09210cce-daf8-4170-849d-ab83d4d5b6c9 - - - - - -] DHCP configuration for ports {'94818738-f83b-4c5a-b7b7-2e0ae5a8c2de'} is completed#033[00m Dec 6 05:21:56 localhost dnsmasq[332273]: read /var/lib/neutron/dhcp/23332176-d495-43f0-b960-60f576e19db9/addn_hosts - 1 addresses Dec 6 05:21:56 localhost dnsmasq-dhcp[332273]: read /var/lib/neutron/dhcp/23332176-d495-43f0-b960-60f576e19db9/host Dec 6 05:21:56 localhost dnsmasq-dhcp[332273]: read /var/lib/neutron/dhcp/23332176-d495-43f0-b960-60f576e19db9/opts Dec 6 05:21:56 localhost podman[332306]: 2025-12-06 10:21:56.050637149 +0000 UTC m=+0.059066466 container kill 071dbc5c9b86072f2d5bda447e10edc83b5e25677868e8ab3267454575f41827 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-23332176-d495-43f0-b960-60f576e19db9, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 
9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true) Dec 6 05:21:56 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:21:56.229 263652 INFO neutron.agent.dhcp.agent [None req-244435d4-e5a4-45ab-b7ed-171608c3b4e2 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:21:54Z, description=, device_id=2b50982b-9c62-4665-9d93-c7bc12ad6e52, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=9b798f40-f638-4713-ad94-f68eb006cdb3, ip_allocation=immediate, mac_address=fa:16:3e:b6:af:bd, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:21:51Z, description=, dns_domain=, id=23332176-d495-43f0-b960-60f576e19db9, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-1157192730, port_security_enabled=True, project_id=e82deaff368b4feea9fec0f06459a6ca, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=31598, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=3002, status=ACTIVE, subnets=['02f360c2-c764-4a10-aa33-ee333a17b366'], tags=[], tenant_id=e82deaff368b4feea9fec0f06459a6ca, updated_at=2025-12-06T10:21:52Z, vlan_transparent=None, network_id=23332176-d495-43f0-b960-60f576e19db9, port_security_enabled=False, project_id=e82deaff368b4feea9fec0f06459a6ca, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3014, status=DOWN, tags=[], tenant_id=e82deaff368b4feea9fec0f06459a6ca, updated_at=2025-12-06T10:21:54Z 
on network 23332176-d495-43f0-b960-60f576e19db9#033[00m Dec 6 05:21:56 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:21:56.393 263652 INFO neutron.agent.dhcp.agent [None req-fcb1b711-ad49-4536-bf4a-87c0b06c49fb - - - - - -] DHCP configuration for ports {'9b798f40-f638-4713-ad94-f68eb006cdb3'} is completed#033[00m Dec 6 05:21:56 localhost dnsmasq[332273]: read /var/lib/neutron/dhcp/23332176-d495-43f0-b960-60f576e19db9/addn_hosts - 1 addresses Dec 6 05:21:56 localhost dnsmasq-dhcp[332273]: read /var/lib/neutron/dhcp/23332176-d495-43f0-b960-60f576e19db9/host Dec 6 05:21:56 localhost podman[332344]: 2025-12-06 10:21:56.582123205 +0000 UTC m=+0.054643210 container kill 071dbc5c9b86072f2d5bda447e10edc83b5e25677868e8ab3267454575f41827 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-23332176-d495-43f0-b960-60f576e19db9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:21:56 localhost dnsmasq-dhcp[332273]: read /var/lib/neutron/dhcp/23332176-d495-43f0-b960-60f576e19db9/opts Dec 6 05:21:56 localhost systemd[1]: tmp-crun.x5AqMl.mount: Deactivated successfully. 
Dec 6 05:21:56 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:21:56.892 263652 INFO neutron.agent.dhcp.agent [None req-52fd6933-6149-4c44-aba7-ea2f5b1b0450 - - - - - -] DHCP configuration for ports {'9b798f40-f638-4713-ad94-f68eb006cdb3'} is completed#033[00m Dec 6 05:21:57 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e191 e191: 6 total, 6 up, 6 in Dec 6 05:21:58 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e191 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:21:59 localhost nova_compute[282193]: 2025-12-06 10:21:59.125 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:21:59 localhost nova_compute[282193]: 2025-12-06 10:21:59.697 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:22:00 localhost nova_compute[282193]: 2025-12-06 10:22:00.246 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:22:00 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e192 e192: 6 total, 6 up, 6 in Dec 6 05:22:01 localhost ceph-osd[31726]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1. 
Dec 6 05:22:01 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:22:01.945 263652 INFO neutron.agent.linux.ip_lib [None req-258ecf09-4f1d-4e9a-94a9-a7b71bd1d829 - - - - - -] Device tap27deb0c2-77 cannot be used as it has no MAC address#033[00m Dec 6 05:22:01 localhost nova_compute[282193]: 2025-12-06 10:22:01.972 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:22:01 localhost kernel: device tap27deb0c2-77 entered promiscuous mode Dec 6 05:22:01 localhost NetworkManager[5973]: [1765016521.9815] manager: (tap27deb0c2-77): new Generic device (/org/freedesktop/NetworkManager/Devices/78) Dec 6 05:22:01 localhost ovn_controller[154851]: 2025-12-06T10:22:01Z|00481|binding|INFO|Claiming lport 27deb0c2-77bb-47c5-bae8-ebe8fdf091d3 for this chassis. Dec 6 05:22:01 localhost ovn_controller[154851]: 2025-12-06T10:22:01Z|00482|binding|INFO|27deb0c2-77bb-47c5-bae8-ebe8fdf091d3: Claiming unknown Dec 6 05:22:01 localhost nova_compute[282193]: 2025-12-06 10:22:01.986 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:22:01 localhost systemd-udevd[332439]: Network interface NamePolicy= disabled on kernel command line. 
Dec 6 05:22:02 localhost journal[230404]: ethtool ioctl error on tap27deb0c2-77: No such device Dec 6 05:22:02 localhost nova_compute[282193]: 2025-12-06 10:22:02.018 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:22:02 localhost ovn_controller[154851]: 2025-12-06T10:22:02Z|00483|binding|INFO|Setting lport 27deb0c2-77bb-47c5-bae8-ebe8fdf091d3 ovn-installed in OVS Dec 6 05:22:02 localhost journal[230404]: ethtool ioctl error on tap27deb0c2-77: No such device Dec 6 05:22:02 localhost nova_compute[282193]: 2025-12-06 10:22:02.021 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:22:02 localhost nova_compute[282193]: 2025-12-06 10:22:02.023 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:22:02 localhost journal[230404]: ethtool ioctl error on tap27deb0c2-77: No such device Dec 6 05:22:02 localhost journal[230404]: ethtool ioctl error on tap27deb0c2-77: No such device Dec 6 05:22:02 localhost ovn_controller[154851]: 2025-12-06T10:22:02Z|00484|binding|INFO|Setting lport 27deb0c2-77bb-47c5-bae8-ebe8fdf091d3 up in Southbound Dec 6 05:22:02 localhost ovn_metadata_agent[160504]: 2025-12-06 10:22:02.034 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-b212e2be-4a1a-42c0-a3cd-d5a930c0c30a', 
'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b212e2be-4a1a-42c0-a3cd-d5a930c0c30a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b4daafaf0e264da6a728bdd60c5d6377', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=788a81a8-1f71-42fc-a0bb-d48b0228b6cf, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=27deb0c2-77bb-47c5-bae8-ebe8fdf091d3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:22:02 localhost ovn_metadata_agent[160504]: 2025-12-06 10:22:02.036 160509 INFO neutron.agent.ovn.metadata.agent [-] Port 27deb0c2-77bb-47c5-bae8-ebe8fdf091d3 in datapath b212e2be-4a1a-42c0-a3cd-d5a930c0c30a bound to our chassis#033[00m Dec 6 05:22:02 localhost ovn_metadata_agent[160504]: 2025-12-06 10:22:02.038 160509 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network b212e2be-4a1a-42c0-a3cd-d5a930c0c30a or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 6 05:22:02 localhost ovn_metadata_agent[160504]: 2025-12-06 10:22:02.039 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[409e442c-c054-40eb-bd60-1f4f65121ede]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:22:02 localhost journal[230404]: ethtool ioctl error on tap27deb0c2-77: No such device Dec 6 05:22:02 localhost journal[230404]: ethtool ioctl error on tap27deb0c2-77: No such device Dec 6 05:22:02 localhost journal[230404]: ethtool ioctl error on tap27deb0c2-77: No such device Dec 6 05:22:02 localhost 
journal[230404]: ethtool ioctl error on tap27deb0c2-77: No such device Dec 6 05:22:02 localhost nova_compute[282193]: 2025-12-06 10:22:02.065 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:22:02 localhost nova_compute[282193]: 2025-12-06 10:22:02.098 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:22:02 localhost ceph-mon[298582]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #46. Immutable memtables: 0. Dec 6 05:22:02 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:22:02.379044) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Dec 6 05:22:02 localhost ceph-mon[298582]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 46 Dec 6 05:22:02 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016522379103, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 2406, "num_deletes": 261, "total_data_size": 4407873, "memory_usage": 4468232, "flush_reason": "Manual Compaction"} Dec 6 05:22:02 localhost ceph-mon[298582]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #47: started Dec 6 05:22:02 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016522414267, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 47, "file_size": 2373234, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 27379, "largest_seqno": 29779, "table_properties": {"data_size": 2365392, "index_size": 4347, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, 
"index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 21482, "raw_average_key_size": 22, "raw_value_size": 2347822, "raw_average_value_size": 2443, "num_data_blocks": 187, "num_entries": 961, "num_filter_entries": 961, "num_deletions": 261, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765016395, "oldest_key_time": 1765016395, "file_creation_time": 1765016522, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1ec2a3a6-c7d0-4f1e-9923-5a3b782725ae", "db_session_id": "JMBO5KX1IJCJ8FWC64EX", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}} Dec 6 05:22:02 localhost ceph-mon[298582]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 35276 microseconds, and 4843 cpu microseconds. Dec 6 05:22:02 localhost ceph-mon[298582]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Dec 6 05:22:02 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:22:02.414319) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #47: 2373234 bytes OK Dec 6 05:22:02 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:22:02.414348) [db/memtable_list.cc:519] [default] Level-0 commit table #47 started Dec 6 05:22:02 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:22:02.417874) [db/memtable_list.cc:722] [default] Level-0 commit table #47: memtable #1 done Dec 6 05:22:02 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:22:02.417910) EVENT_LOG_v1 {"time_micros": 1765016522417900, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Dec 6 05:22:02 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:22:02.417935) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Dec 6 05:22:02 localhost ceph-mon[298582]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 4396686, prev total WAL file size 4397435, number of live WAL files 2. Dec 6 05:22:02 localhost ceph-mon[298582]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548789/store.db/000043.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 6 05:22:02 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:22:02.418964) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740034303034' seq:72057594037927935, type:22 .. 
'6D6772737461740034323536' seq:0, type:0; will stop at (end) Dec 6 05:22:02 localhost ceph-mon[298582]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00 Dec 6 05:22:02 localhost ceph-mon[298582]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [47(2317KB)], [45(17MB)] Dec 6 05:22:02 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016522418998, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [47], "files_L6": [45], "score": -1, "input_data_size": 21195557, "oldest_snapshot_seqno": -1} Dec 6 05:22:02 localhost systemd[1]: tmp-crun.ymtRvn.mount: Deactivated successfully. Dec 6 05:22:02 localhost podman[332521]: 2025-12-06 10:22:02.456983669 +0000 UTC m=+0.126142757 container exec ecefb99ff092b207f760d60e8fd97152c624df0e1016549024435ae641ee2e06 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548789, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, vendor=Red Hat, Inc., ceph=True, version=7, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, GIT_BRANCH=main, io.buildah.version=1.41.4, release=1763362218, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume 
Abrioux , architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.openshift.tags=rhceph ceph, distribution-scope=public) Dec 6 05:22:02 localhost ceph-mon[298582]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #48: 13318 keys, 19558016 bytes, temperature: kUnknown Dec 6 05:22:02 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016522498531, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 48, "file_size": 19558016, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 19482747, "index_size": 40864, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 33349, "raw_key_size": 355670, "raw_average_key_size": 26, "raw_value_size": 19257210, "raw_average_value_size": 1445, "num_data_blocks": 1544, "num_entries": 13318, "num_filter_entries": 13318, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765015623, "oldest_key_time": 0, "file_creation_time": 1765016522, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1ec2a3a6-c7d0-4f1e-9923-5a3b782725ae", "db_session_id": "JMBO5KX1IJCJ8FWC64EX", "orig_file_number": 48, "seqno_to_time_mapping": "N/A"}} Dec 6 05:22:02 localhost ceph-mon[298582]: rocksdb: [db/version_set.cc:4390] More existing levels in DB 
than needed. max_bytes_for_level_multiplier may not be guaranteed. Dec 6 05:22:02 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:22:02.498842) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 19558016 bytes Dec 6 05:22:02 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:22:02.501139) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 266.2 rd, 245.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 18.0 +0.0 blob) out(18.7 +0.0 blob), read-write-amplify(17.2) write-amplify(8.2) OK, records in: 13796, records dropped: 478 output_compression: NoCompression Dec 6 05:22:02 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:22:02.501164) EVENT_LOG_v1 {"time_micros": 1765016522501153, "job": 26, "event": "compaction_finished", "compaction_time_micros": 79627, "compaction_time_cpu_micros": 31108, "output_level": 6, "num_output_files": 1, "total_output_size": 19558016, "num_input_records": 13796, "num_output_records": 13318, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Dec 6 05:22:02 localhost ceph-mon[298582]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548789/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 6 05:22:02 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016522501488, "job": 26, "event": "table_file_deletion", "file_number": 47} Dec 6 05:22:02 localhost ceph-mon[298582]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548789/store.db/000045.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 6 05:22:02 localhost ceph-mon[298582]: 
rocksdb: EVENT_LOG_v1 {"time_micros": 1765016522503205, "job": 26, "event": "table_file_deletion", "file_number": 45} Dec 6 05:22:02 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:22:02.418852) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:22:02 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:22:02.503281) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:22:02 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:22:02.503292) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:22:02 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:22:02.503297) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:22:02 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:22:02.503301) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:22:02 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:22:02.503306) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:22:02 localhost podman[332521]: 2025-12-06 10:22:02.605458137 +0000 UTC m=+0.274617195 container exec_died ecefb99ff092b207f760d60e8fd97152c624df0e1016549024435ae641ee2e06 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548789, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, 
url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, RELEASE=main, release=1763362218, distribution-scope=public, build-date=2025-11-26T19:44:28Z) Dec 6 05:22:02 localhost podman[332621]: Dec 6 05:22:02 localhost podman[332621]: 2025-12-06 10:22:02.928532664 +0000 UTC m=+0.087192726 container create 98149fe39736f2b06b84af45fc052baa53e53a5b040ce189b48bcd27e51b9b5b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b212e2be-4a1a-42c0-a3cd-d5a930c0c30a, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Dec 6 05:22:02 localhost systemd[1]: Started libpod-conmon-98149fe39736f2b06b84af45fc052baa53e53a5b040ce189b48bcd27e51b9b5b.scope. Dec 6 05:22:02 localhost podman[332621]: 2025-12-06 10:22:02.890208142 +0000 UTC m=+0.048868184 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 6 05:22:02 localhost systemd[1]: Started libcrun container. 
Dec 6 05:22:03 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf868e69c05e71759c1e3d850cc649a6050ae655aa7438d5cc3256a2e4a2d10a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 6 05:22:03 localhost podman[332621]: 2025-12-06 10:22:03.015899854 +0000 UTC m=+0.174559936 container init 98149fe39736f2b06b84af45fc052baa53e53a5b040ce189b48bcd27e51b9b5b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b212e2be-4a1a-42c0-a3cd-d5a930c0c30a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Dec 6 05:22:03 localhost dnsmasq[332659]: started, version 2.85 cachesize 150 Dec 6 05:22:03 localhost dnsmasq[332659]: DNS service limited to local subnets Dec 6 05:22:03 localhost dnsmasq[332659]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 6 05:22:03 localhost dnsmasq[332659]: warning: no upstream servers configured Dec 6 05:22:03 localhost dnsmasq-dhcp[332659]: DHCPv6, static leases only on 2001:db8::, lease time 1d Dec 6 05:22:03 localhost podman[332621]: 2025-12-06 10:22:03.044583341 +0000 UTC m=+0.203243403 container start 98149fe39736f2b06b84af45fc052baa53e53a5b040ce189b48bcd27e51b9b5b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b212e2be-4a1a-42c0-a3cd-d5a930c0c30a, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Dec 6 05:22:03 localhost dnsmasq[332659]: read /var/lib/neutron/dhcp/b212e2be-4a1a-42c0-a3cd-d5a930c0c30a/addn_hosts - 0 addresses Dec 6 05:22:03 localhost dnsmasq-dhcp[332659]: read /var/lib/neutron/dhcp/b212e2be-4a1a-42c0-a3cd-d5a930c0c30a/host Dec 6 05:22:03 localhost dnsmasq-dhcp[332659]: read /var/lib/neutron/dhcp/b212e2be-4a1a-42c0-a3cd-d5a930c0c30a/opts Dec 6 05:22:03 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:22:03 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:22:03.463 263652 INFO neutron.agent.dhcp.agent [None req-4949f628-6e62-4804-8a88-e673699185ca - - - - - -] DHCP configuration for ports {'9cdb81a3-442c-4dfd-a5c8-275c2f6a56c6'} is completed#033[00m Dec 6 05:22:04 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' Dec 6 05:22:04 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' Dec 6 05:22:04 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' Dec 6 05:22:04 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' Dec 6 05:22:04 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' Dec 6 05:22:04 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' Dec 6 05:22:04 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Dec 6 05:22:04 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Dec 6 05:22:04 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config 
rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Dec 6 05:22:04 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Dec 6 05:22:04 localhost nova_compute[282193]: 2025-12-06 10:22:04.157 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:22:04 localhost nova_compute[282193]: 2025-12-06 10:22:04.312 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:22:05 localhost ceph-mon[298582]: Adjusting osd_memory_target on np0005548790.localdomain to 836.6M Dec 6 05:22:05 localhost ceph-mon[298582]: Unable to set osd_memory_target on np0005548790.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Dec 6 05:22:05 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Dec 6 05:22:05 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Dec 6 05:22:05 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Dec 6 05:22:05 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Dec 6 05:22:05 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Dec 6 05:22:05 localhost ceph-mon[298582]: from='mgr.27020 
172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Dec 6 05:22:05 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Dec 6 05:22:05 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Dec 6 05:22:05 localhost ceph-mon[298582]: Adjusting osd_memory_target on np0005548789.localdomain to 836.6M Dec 6 05:22:05 localhost ceph-mon[298582]: Adjusting osd_memory_target on np0005548788.localdomain to 836.6M Dec 6 05:22:05 localhost ceph-mon[298582]: Unable to set osd_memory_target on np0005548788.localdomain to 877243801: error parsing value: Value '877243801' is below minimum 939524096 Dec 6 05:22:05 localhost ceph-mon[298582]: Unable to set osd_memory_target on np0005548789.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Dec 6 05:22:05 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 6 05:22:05 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' Dec 6 05:22:05 localhost nova_compute[282193]: 2025-12-06 10:22:05.270 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:22:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999. Dec 6 05:22:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941. 
Dec 6 05:22:05 localhost podman[332779]: 2025-12-06 10:22:05.936154151 +0000 UTC m=+0.080906594 container health_status b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Dec 6 05:22:05 localhost podman[332779]: 2025-12-06 10:22:05.942210227 +0000 UTC m=+0.086962670 container exec_died b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Dec 6 05:22:05 localhost systemd[1]: b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941.service: Deactivated successfully. Dec 6 05:22:05 localhost systemd[1]: tmp-crun.NmVjtt.mount: Deactivated successfully. Dec 6 05:22:05 localhost podman[332778]: 2025-12-06 10:22:05.997988151 +0000 UTC m=+0.143061474 container health_status 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS) Dec 6 05:22:06 localhost podman[332778]: 2025-12-06 10:22:06.006013546 +0000 UTC m=+0.151086889 container exec_died 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, 
io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0) Dec 6 05:22:06 localhost systemd[1]: 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.service: Deactivated successfully. Dec 6 05:22:06 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:22:06.346 263652 INFO neutron.agent.linux.ip_lib [None req-5a0f97f5-1431-492f-9642-7c750c8f561f - - - - - -] Device tap4430879c-2d cannot be used as it has no MAC address#033[00m Dec 6 05:22:06 localhost nova_compute[282193]: 2025-12-06 10:22:06.404 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:22:06 localhost kernel: device tap4430879c-2d entered promiscuous mode Dec 6 05:22:06 localhost NetworkManager[5973]: [1765016526.4135] manager: (tap4430879c-2d): new Generic device (/org/freedesktop/NetworkManager/Devices/79) Dec 6 05:22:06 localhost nova_compute[282193]: 2025-12-06 10:22:06.416 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:22:06 localhost ovn_controller[154851]: 2025-12-06T10:22:06Z|00485|binding|INFO|Claiming lport 4430879c-2d53-4fe9-afc4-ecc4737ece3d for this chassis. Dec 6 05:22:06 localhost ovn_controller[154851]: 2025-12-06T10:22:06Z|00486|binding|INFO|4430879c-2d53-4fe9-afc4-ecc4737ece3d: Claiming unknown Dec 6 05:22:06 localhost systemd-udevd[332828]: Network interface NamePolicy= disabled on kernel command line. 
Dec 6 05:22:06 localhost journal[230404]: ethtool ioctl error on tap4430879c-2d: No such device Dec 6 05:22:06 localhost ovn_controller[154851]: 2025-12-06T10:22:06Z|00487|binding|INFO|Setting lport 4430879c-2d53-4fe9-afc4-ecc4737ece3d ovn-installed in OVS Dec 6 05:22:06 localhost journal[230404]: ethtool ioctl error on tap4430879c-2d: No such device Dec 6 05:22:06 localhost nova_compute[282193]: 2025-12-06 10:22:06.456 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:22:06 localhost journal[230404]: ethtool ioctl error on tap4430879c-2d: No such device Dec 6 05:22:06 localhost journal[230404]: ethtool ioctl error on tap4430879c-2d: No such device Dec 6 05:22:06 localhost journal[230404]: ethtool ioctl error on tap4430879c-2d: No such device Dec 6 05:22:06 localhost journal[230404]: ethtool ioctl error on tap4430879c-2d: No such device Dec 6 05:22:06 localhost journal[230404]: ethtool ioctl error on tap4430879c-2d: No such device Dec 6 05:22:06 localhost journal[230404]: ethtool ioctl error on tap4430879c-2d: No such device Dec 6 05:22:06 localhost ovn_metadata_agent[160504]: 2025-12-06 10:22:06.492 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.103.0.2/28', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-0f63818b-46da-4610-917f-48a4c73bfa86', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0f63818b-46da-4610-917f-48a4c73bfa86', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 
'e82deaff368b4feea9fec0f06459a6ca', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c4628e25-25c8-4fdc-a3c4-cda346229624, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=4430879c-2d53-4fe9-afc4-ecc4737ece3d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:22:06 localhost ovn_controller[154851]: 2025-12-06T10:22:06Z|00488|binding|INFO|Setting lport 4430879c-2d53-4fe9-afc4-ecc4737ece3d up in Southbound Dec 6 05:22:06 localhost ovn_metadata_agent[160504]: 2025-12-06 10:22:06.493 160509 INFO neutron.agent.ovn.metadata.agent [-] Port 4430879c-2d53-4fe9-afc4-ecc4737ece3d in datapath 0f63818b-46da-4610-917f-48a4c73bfa86 bound to our chassis#033[00m Dec 6 05:22:06 localhost ovn_metadata_agent[160504]: 2025-12-06 10:22:06.495 160509 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 0f63818b-46da-4610-917f-48a4c73bfa86 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 6 05:22:06 localhost nova_compute[282193]: 2025-12-06 10:22:06.496 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:22:06 localhost ovn_metadata_agent[160504]: 2025-12-06 10:22:06.496 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[c75aec0d-369b-4e52-b649-e5a87afb6df1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:22:06 localhost nova_compute[282193]: 2025-12-06 10:22:06.523 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:22:07 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' Dec 6 05:22:07 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e193 e193: 6 total, 6 up, 6 in Dec 6 05:22:07 localhost podman[332899]: Dec 6 05:22:07 localhost ceph-mon[298582]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #49. Immutable memtables: 0. Dec 6 05:22:07 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:22:07.406521) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Dec 6 05:22:07 localhost ceph-mon[298582]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 49 Dec 6 05:22:07 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016527406604, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 447, "num_deletes": 253, "total_data_size": 409840, "memory_usage": 419176, "flush_reason": "Manual Compaction"} Dec 6 05:22:07 localhost ceph-mon[298582]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #50: started Dec 6 05:22:07 localhost podman[332899]: 2025-12-06 10:22:07.407530308 +0000 UTC m=+0.054399314 container create 294ed2ade3cdaf5dc40b3998609fd207e11e78054502a44b21052a2751a92747 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0f63818b-46da-4610-917f-48a4c73bfa86, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:22:07 
localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016527411400, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 50, "file_size": 270596, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 29784, "largest_seqno": 30226, "table_properties": {"data_size": 267814, "index_size": 765, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 6684, "raw_average_key_size": 18, "raw_value_size": 261976, "raw_average_value_size": 727, "num_data_blocks": 29, "num_entries": 360, "num_filter_entries": 360, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765016522, "oldest_key_time": 1765016522, "file_creation_time": 1765016527, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1ec2a3a6-c7d0-4f1e-9923-5a3b782725ae", "db_session_id": "JMBO5KX1IJCJ8FWC64EX", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}} Dec 6 05:22:07 localhost ceph-mon[298582]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 4927 microseconds, and 2009 cpu microseconds. Dec 6 05:22:07 localhost ceph-mon[298582]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Dec 6 05:22:07 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:22:07.411453) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #50: 270596 bytes OK Dec 6 05:22:07 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:22:07.411483) [db/memtable_list.cc:519] [default] Level-0 commit table #50 started Dec 6 05:22:07 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:22:07.416572) [db/memtable_list.cc:722] [default] Level-0 commit table #50: memtable #1 done Dec 6 05:22:07 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:22:07.416598) EVENT_LOG_v1 {"time_micros": 1765016527416590, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Dec 6 05:22:07 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:22:07.416623) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Dec 6 05:22:07 localhost ceph-mon[298582]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 406914, prev total WAL file size 406914, number of live WAL files 2. Dec 6 05:22:07 localhost ceph-mon[298582]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548789/store.db/000046.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 6 05:22:07 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:22:07.417498) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760031353336' seq:72057594037927935, type:22 .. 
'6B760031373930' seq:0, type:0; will stop at (end) Dec 6 05:22:07 localhost ceph-mon[298582]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00 Dec 6 05:22:07 localhost ceph-mon[298582]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [50(264KB)], [48(18MB)] Dec 6 05:22:07 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016527417555, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [50], "files_L6": [48], "score": -1, "input_data_size": 19828612, "oldest_snapshot_seqno": -1} Dec 6 05:22:07 localhost systemd[1]: Started libpod-conmon-294ed2ade3cdaf5dc40b3998609fd207e11e78054502a44b21052a2751a92747.scope. Dec 6 05:22:07 localhost systemd[1]: Started libcrun container. Dec 6 05:22:07 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53d7169593b190df1c1e48dc9de962f6b6ad6b884e0167968ce6145110e64e36/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 6 05:22:07 localhost podman[332899]: 2025-12-06 10:22:07.379478861 +0000 UTC m=+0.026347867 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 6 05:22:07 localhost podman[332899]: 2025-12-06 10:22:07.508887957 +0000 UTC m=+0.155756973 container init 294ed2ade3cdaf5dc40b3998609fd207e11e78054502a44b21052a2751a92747 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0f63818b-46da-4610-917f-48a4c73bfa86, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_managed=true) Dec 6 05:22:07 localhost 
ceph-mon[298582]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #51: 13146 keys, 18733015 bytes, temperature: kUnknown Dec 6 05:22:07 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016527510078, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 51, "file_size": 18733015, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 18659953, "index_size": 39105, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 32901, "raw_key_size": 353794, "raw_average_key_size": 26, "raw_value_size": 18438154, "raw_average_value_size": 1402, "num_data_blocks": 1451, "num_entries": 13146, "num_filter_entries": 13146, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765015623, "oldest_key_time": 0, "file_creation_time": 1765016527, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1ec2a3a6-c7d0-4f1e-9923-5a3b782725ae", "db_session_id": "JMBO5KX1IJCJ8FWC64EX", "orig_file_number": 51, "seqno_to_time_mapping": "N/A"}} Dec 6 05:22:07 localhost ceph-mon[298582]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Dec 6 05:22:07 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:22:07.510573) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 18733015 bytes Dec 6 05:22:07 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:22:07.513793) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 213.7 rd, 201.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 18.7 +0.0 blob) out(17.9 +0.0 blob), read-write-amplify(142.5) write-amplify(69.2) OK, records in: 13678, records dropped: 532 output_compression: NoCompression Dec 6 05:22:07 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:22:07.513824) EVENT_LOG_v1 {"time_micros": 1765016527513809, "job": 28, "event": "compaction_finished", "compaction_time_micros": 92777, "compaction_time_cpu_micros": 47590, "output_level": 6, "num_output_files": 1, "total_output_size": 18733015, "num_input_records": 13678, "num_output_records": 13146, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Dec 6 05:22:07 localhost ceph-mon[298582]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548789/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 6 05:22:07 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016527514221, "job": 28, "event": "table_file_deletion", "file_number": 50} Dec 6 05:22:07 localhost ceph-mon[298582]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548789/store.db/000048.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 6 05:22:07 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016527516508, "job": 
28, "event": "table_file_deletion", "file_number": 48} Dec 6 05:22:07 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:22:07.417214) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:22:07 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:22:07.516593) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:22:07 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:22:07.516599) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:22:07 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:22:07.516601) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:22:07 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:22:07.516602) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:22:07 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:22:07.516604) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:22:07 localhost podman[332899]: 2025-12-06 10:22:07.519633945 +0000 UTC m=+0.166502991 container start 294ed2ade3cdaf5dc40b3998609fd207e11e78054502a44b21052a2751a92747 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0f63818b-46da-4610-917f-48a4c73bfa86, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:22:07 localhost dnsmasq[332917]: started, version 2.85 cachesize 150 Dec 6 05:22:07 localhost dnsmasq[332917]: DNS service limited to local subnets Dec 6 05:22:07 localhost 
dnsmasq[332917]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 6 05:22:07 localhost dnsmasq[332917]: warning: no upstream servers configured Dec 6 05:22:07 localhost dnsmasq-dhcp[332917]: DHCP, static leases only on 10.103.0.0, lease time 1d Dec 6 05:22:07 localhost dnsmasq[332917]: read /var/lib/neutron/dhcp/0f63818b-46da-4610-917f-48a4c73bfa86/addn_hosts - 0 addresses Dec 6 05:22:07 localhost dnsmasq-dhcp[332917]: read /var/lib/neutron/dhcp/0f63818b-46da-4610-917f-48a4c73bfa86/host Dec 6 05:22:07 localhost dnsmasq-dhcp[332917]: read /var/lib/neutron/dhcp/0f63818b-46da-4610-917f-48a4c73bfa86/opts Dec 6 05:22:07 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:22:07.684 263652 INFO neutron.agent.dhcp.agent [None req-c1f47ad4-f6b2-4230-a523-c11a694e1c6c - - - - - -] DHCP configuration for ports {'891ff69b-32bd-4c75-bfdb-8e56dbfa03a3'} is completed#033[00m Dec 6 05:22:08 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:22:08 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:22:08.940 263652 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:22:08Z, description=, device_id=2b50982b-9c62-4665-9d93-c7bc12ad6e52, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=705ae66d-542f-4ec2-ac8f-4e6b6578ea8b, ip_allocation=immediate, mac_address=fa:16:3e:e4:07:d4, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:22:04Z, description=, dns_domain=, id=0f63818b-46da-4610-917f-48a4c73bfa86, 
ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-1667240543, port_security_enabled=True, project_id=e82deaff368b4feea9fec0f06459a6ca, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=16595, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=3081, status=ACTIVE, subnets=['b8628baa-738e-4823-9d4e-66c46fc00679'], tags=[], tenant_id=e82deaff368b4feea9fec0f06459a6ca, updated_at=2025-12-06T10:22:05Z, vlan_transparent=None, network_id=0f63818b-46da-4610-917f-48a4c73bfa86, port_security_enabled=False, project_id=e82deaff368b4feea9fec0f06459a6ca, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3111, status=DOWN, tags=[], tenant_id=e82deaff368b4feea9fec0f06459a6ca, updated_at=2025-12-06T10:22:08Z on network 0f63818b-46da-4610-917f-48a4c73bfa86#033[00m Dec 6 05:22:08 localhost sshd[332918]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:22:09 localhost nova_compute[282193]: 2025-12-06 10:22:09.210 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:22:09 localhost systemd[1]: tmp-crun.bue4S9.mount: Deactivated successfully. 
Dec 6 05:22:09 localhost dnsmasq[332917]: read /var/lib/neutron/dhcp/0f63818b-46da-4610-917f-48a4c73bfa86/addn_hosts - 1 addresses Dec 6 05:22:09 localhost podman[332935]: 2025-12-06 10:22:09.230287956 +0000 UTC m=+0.111863710 container kill 294ed2ade3cdaf5dc40b3998609fd207e11e78054502a44b21052a2751a92747 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0f63818b-46da-4610-917f-48a4c73bfa86, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:22:09 localhost dnsmasq-dhcp[332917]: read /var/lib/neutron/dhcp/0f63818b-46da-4610-917f-48a4c73bfa86/host Dec 6 05:22:09 localhost dnsmasq-dhcp[332917]: read /var/lib/neutron/dhcp/0f63818b-46da-4610-917f-48a4c73bfa86/opts Dec 6 05:22:09 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:22:09.305 263652 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:22:09Z, description=, device_id=d826eb4e-7fd5-4cdb-9ce2-8323b958c600, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=1292f95c-a471-4d24-82b6-dc839c334a0e, ip_allocation=immediate, mac_address=fa:16:3e:f4:55:31, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:21:59Z, description=, dns_domain=, id=b212e2be-4a1a-42c0-a3cd-d5a930c0c30a, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersNegativeIpV6Test-test-network-1471924923, port_security_enabled=True, 
project_id=b4daafaf0e264da6a728bdd60c5d6377, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=56487, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=3055, status=ACTIVE, subnets=['d1ff6954-2d67-457c-b6ca-44990d6a79f2'], tags=[], tenant_id=b4daafaf0e264da6a728bdd60c5d6377, updated_at=2025-12-06T10:22:00Z, vlan_transparent=None, network_id=b212e2be-4a1a-42c0-a3cd-d5a930c0c30a, port_security_enabled=False, project_id=b4daafaf0e264da6a728bdd60c5d6377, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3112, status=DOWN, tags=[], tenant_id=b4daafaf0e264da6a728bdd60c5d6377, updated_at=2025-12-06T10:22:09Z on network b212e2be-4a1a-42c0-a3cd-d5a930c0c30a#033[00m Dec 6 05:22:09 localhost dnsmasq[332659]: read /var/lib/neutron/dhcp/b212e2be-4a1a-42c0-a3cd-d5a930c0c30a/addn_hosts - 1 addresses Dec 6 05:22:09 localhost dnsmasq-dhcp[332659]: read /var/lib/neutron/dhcp/b212e2be-4a1a-42c0-a3cd-d5a930c0c30a/host Dec 6 05:22:09 localhost podman[332974]: 2025-12-06 10:22:09.480215447 +0000 UTC m=+0.047331798 container kill 98149fe39736f2b06b84af45fc052baa53e53a5b040ce189b48bcd27e51b9b5b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b212e2be-4a1a-42c0-a3cd-d5a930c0c30a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125) Dec 6 05:22:09 localhost dnsmasq-dhcp[332659]: read /var/lib/neutron/dhcp/b212e2be-4a1a-42c0-a3cd-d5a930c0c30a/opts Dec 6 05:22:09 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:22:09.550 263652 INFO neutron.agent.dhcp.agent [None 
req-09e42bbe-5a3f-47a8-8c31-0b8c43871525 - - - - - -] DHCP configuration for ports {'705ae66d-542f-4ec2-ac8f-4e6b6578ea8b'} is completed#033[00m Dec 6 05:22:09 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:22:09.815 263652 INFO neutron.agent.dhcp.agent [None req-e4152a86-7c49-4f40-b4c4-9a9caddbab89 - - - - - -] DHCP configuration for ports {'1292f95c-a471-4d24-82b6-dc839c334a0e'} is completed#033[00m Dec 6 05:22:09 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:22:09.925 263652 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:22:08Z, description=, device_id=2b50982b-9c62-4665-9d93-c7bc12ad6e52, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=705ae66d-542f-4ec2-ac8f-4e6b6578ea8b, ip_allocation=immediate, mac_address=fa:16:3e:e4:07:d4, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:22:04Z, description=, dns_domain=, id=0f63818b-46da-4610-917f-48a4c73bfa86, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-1667240543, port_security_enabled=True, project_id=e82deaff368b4feea9fec0f06459a6ca, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=16595, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=3081, status=ACTIVE, subnets=['b8628baa-738e-4823-9d4e-66c46fc00679'], tags=[], tenant_id=e82deaff368b4feea9fec0f06459a6ca, updated_at=2025-12-06T10:22:05Z, vlan_transparent=None, network_id=0f63818b-46da-4610-917f-48a4c73bfa86, port_security_enabled=False, project_id=e82deaff368b4feea9fec0f06459a6ca, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, 
security_groups=[], standard_attr_id=3111, status=DOWN, tags=[], tenant_id=e82deaff368b4feea9fec0f06459a6ca, updated_at=2025-12-06T10:22:08Z on network 0f63818b-46da-4610-917f-48a4c73bfa86#033[00m Dec 6 05:22:09 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:22:09.978 263652 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:22:09Z, description=, device_id=d826eb4e-7fd5-4cdb-9ce2-8323b958c600, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=1292f95c-a471-4d24-82b6-dc839c334a0e, ip_allocation=immediate, mac_address=fa:16:3e:f4:55:31, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:21:59Z, description=, dns_domain=, id=b212e2be-4a1a-42c0-a3cd-d5a930c0c30a, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersNegativeIpV6Test-test-network-1471924923, port_security_enabled=True, project_id=b4daafaf0e264da6a728bdd60c5d6377, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=56487, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=3055, status=ACTIVE, subnets=['d1ff6954-2d67-457c-b6ca-44990d6a79f2'], tags=[], tenant_id=b4daafaf0e264da6a728bdd60c5d6377, updated_at=2025-12-06T10:22:00Z, vlan_transparent=None, network_id=b212e2be-4a1a-42c0-a3cd-d5a930c0c30a, port_security_enabled=False, project_id=b4daafaf0e264da6a728bdd60c5d6377, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3112, status=DOWN, tags=[], tenant_id=b4daafaf0e264da6a728bdd60c5d6377, updated_at=2025-12-06T10:22:09Z on network b212e2be-4a1a-42c0-a3cd-d5a930c0c30a#033[00m Dec 
6 05:22:10 localhost dnsmasq[332917]: read /var/lib/neutron/dhcp/0f63818b-46da-4610-917f-48a4c73bfa86/addn_hosts - 1 addresses Dec 6 05:22:10 localhost dnsmasq-dhcp[332917]: read /var/lib/neutron/dhcp/0f63818b-46da-4610-917f-48a4c73bfa86/host Dec 6 05:22:10 localhost dnsmasq-dhcp[332917]: read /var/lib/neutron/dhcp/0f63818b-46da-4610-917f-48a4c73bfa86/opts Dec 6 05:22:10 localhost podman[333028]: 2025-12-06 10:22:10.163250025 +0000 UTC m=+0.074723514 container kill 294ed2ade3cdaf5dc40b3998609fd207e11e78054502a44b21052a2751a92747 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0f63818b-46da-4610-917f-48a4c73bfa86, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125) Dec 6 05:22:10 localhost dnsmasq[332659]: read /var/lib/neutron/dhcp/b212e2be-4a1a-42c0-a3cd-d5a930c0c30a/addn_hosts - 1 addresses Dec 6 05:22:10 localhost dnsmasq-dhcp[332659]: read /var/lib/neutron/dhcp/b212e2be-4a1a-42c0-a3cd-d5a930c0c30a/host Dec 6 05:22:10 localhost dnsmasq-dhcp[332659]: read /var/lib/neutron/dhcp/b212e2be-4a1a-42c0-a3cd-d5a930c0c30a/opts Dec 6 05:22:10 localhost podman[333035]: 2025-12-06 10:22:10.196844972 +0000 UTC m=+0.084780142 container kill 98149fe39736f2b06b84af45fc052baa53e53a5b040ce189b48bcd27e51b9b5b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b212e2be-4a1a-42c0-a3cd-d5a930c0c30a, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, 
org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Dec 6 05:22:10 localhost nova_compute[282193]: 2025-12-06 10:22:10.307 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:22:10 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:22:10.542 263652 INFO neutron.agent.dhcp.agent [None req-c1d2e46e-b923-4709-a0e9-a869c04844fa - - - - - -] DHCP configuration for ports {'705ae66d-542f-4ec2-ac8f-4e6b6578ea8b'} is completed#033[00m Dec 6 05:22:10 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:22:10.788 263652 INFO neutron.agent.dhcp.agent [None req-5faa22a5-138a-4cec-b7c2-55a9e2c93096 - - - - - -] DHCP configuration for ports {'1292f95c-a471-4d24-82b6-dc839c334a0e'} is completed#033[00m Dec 6 05:22:11 localhost dnsmasq[332659]: read /var/lib/neutron/dhcp/b212e2be-4a1a-42c0-a3cd-d5a930c0c30a/addn_hosts - 0 addresses Dec 6 05:22:11 localhost podman[333087]: 2025-12-06 10:22:11.384852349 +0000 UTC m=+0.045375328 container kill 98149fe39736f2b06b84af45fc052baa53e53a5b040ce189b48bcd27e51b9b5b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b212e2be-4a1a-42c0-a3cd-d5a930c0c30a, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Dec 6 05:22:11 localhost dnsmasq-dhcp[332659]: read /var/lib/neutron/dhcp/b212e2be-4a1a-42c0-a3cd-d5a930c0c30a/host Dec 6 05:22:11 localhost dnsmasq-dhcp[332659]: read /var/lib/neutron/dhcp/b212e2be-4a1a-42c0-a3cd-d5a930c0c30a/opts Dec 6 05:22:11 localhost nova_compute[282193]: 2025-12-06 10:22:11.581 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:22:11 localhost ovn_controller[154851]: 2025-12-06T10:22:11Z|00489|binding|INFO|Releasing lport 27deb0c2-77bb-47c5-bae8-ebe8fdf091d3 from this chassis (sb_readonly=0) Dec 6 05:22:11 localhost ovn_controller[154851]: 2025-12-06T10:22:11Z|00490|binding|INFO|Setting lport 27deb0c2-77bb-47c5-bae8-ebe8fdf091d3 down in Southbound Dec 6 05:22:11 localhost kernel: device tap27deb0c2-77 left promiscuous mode Dec 6 05:22:11 localhost ovn_metadata_agent[160504]: 2025-12-06 10:22:11.598 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-b212e2be-4a1a-42c0-a3cd-d5a930c0c30a', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b212e2be-4a1a-42c0-a3cd-d5a930c0c30a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b4daafaf0e264da6a728bdd60c5d6377', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005548789.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=788a81a8-1f71-42fc-a0bb-d48b0228b6cf, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=27deb0c2-77bb-47c5-bae8-ebe8fdf091d3) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:22:11 localhost 
ovn_metadata_agent[160504]: 2025-12-06 10:22:11.600 160509 INFO neutron.agent.ovn.metadata.agent [-] Port 27deb0c2-77bb-47c5-bae8-ebe8fdf091d3 in datapath b212e2be-4a1a-42c0-a3cd-d5a930c0c30a unbound from our chassis#033[00m Dec 6 05:22:11 localhost ovn_metadata_agent[160504]: 2025-12-06 10:22:11.602 160509 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network b212e2be-4a1a-42c0-a3cd-d5a930c0c30a or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Dec 6 05:22:11 localhost ovn_metadata_agent[160504]: 2025-12-06 10:22:11.603 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[e42a30eb-d1aa-4bfb-a519-198dcf7174c5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:22:11 localhost nova_compute[282193]: 2025-12-06 10:22:11.608 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:22:13 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:22:13 localhost ovn_metadata_agent[160504]: 2025-12-06 10:22:13.354 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:6c:02', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:a8:2f:0c:cb:a1'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:22:13 localhost 
nova_compute[282193]: 2025-12-06 10:22:13.355 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:22:13 localhost ovn_metadata_agent[160504]: 2025-12-06 10:22:13.355 160509 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Dec 6 05:22:13 localhost systemd[1]: tmp-crun.0zYZXP.mount: Deactivated successfully. Dec 6 05:22:13 localhost dnsmasq[332917]: read /var/lib/neutron/dhcp/0f63818b-46da-4610-917f-48a4c73bfa86/addn_hosts - 0 addresses Dec 6 05:22:13 localhost dnsmasq-dhcp[332917]: read /var/lib/neutron/dhcp/0f63818b-46da-4610-917f-48a4c73bfa86/host Dec 6 05:22:13 localhost podman[333126]: 2025-12-06 10:22:13.632714274 +0000 UTC m=+0.046234175 container kill 294ed2ade3cdaf5dc40b3998609fd207e11e78054502a44b21052a2751a92747 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0f63818b-46da-4610-917f-48a4c73bfa86, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:22:13 localhost dnsmasq-dhcp[332917]: read /var/lib/neutron/dhcp/0f63818b-46da-4610-917f-48a4c73bfa86/opts Dec 6 05:22:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e. Dec 6 05:22:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094. 
Dec 6 05:22:13 localhost podman[333139]: 2025-12-06 10:22:13.710088158 +0000 UTC m=+0.053539697 container health_status 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, managed_by=edpm_ansible, config_id=edpm, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, architecture=x86_64, maintainer=Red Hat, Inc., version=9.6, container_name=openstack_network_exporter, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}) Dec 6 05:22:13 localhost podman[333139]: 2025-12-06 10:22:13.7281328 +0000 UTC m=+0.071584359 container exec_died 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, version=9.6, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, container_name=openstack_network_exporter, io.openshift.expose-services=, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc.) Dec 6 05:22:13 localhost systemd[1]: 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e.service: Deactivated successfully. Dec 6 05:22:13 localhost podman[333141]: 2025-12-06 10:22:13.773356432 +0000 UTC m=+0.115966705 container health_status bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, 
org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:22:13 localhost podman[333141]: 2025-12-06 10:22:13.779378126 +0000 UTC m=+0.121988379 container exec_died bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Dec 6 05:22:13 localhost 
systemd[1]: bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094.service: Deactivated successfully. Dec 6 05:22:13 localhost nova_compute[282193]: 2025-12-06 10:22:13.913 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:22:13 localhost kernel: device tap4430879c-2d left promiscuous mode Dec 6 05:22:13 localhost ovn_controller[154851]: 2025-12-06T10:22:13Z|00491|binding|INFO|Releasing lport 4430879c-2d53-4fe9-afc4-ecc4737ece3d from this chassis (sb_readonly=0) Dec 6 05:22:13 localhost ovn_controller[154851]: 2025-12-06T10:22:13Z|00492|binding|INFO|Setting lport 4430879c-2d53-4fe9-afc4-ecc4737ece3d down in Southbound Dec 6 05:22:13 localhost ovn_metadata_agent[160504]: 2025-12-06 10:22:13.933 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.103.0.2/28', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-0f63818b-46da-4610-917f-48a4c73bfa86', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0f63818b-46da-4610-917f-48a4c73bfa86', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e82deaff368b4feea9fec0f06459a6ca', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005548789.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c4628e25-25c8-4fdc-a3c4-cda346229624, chassis=[], 
tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=4430879c-2d53-4fe9-afc4-ecc4737ece3d) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:22:13 localhost ovn_metadata_agent[160504]: 2025-12-06 10:22:13.935 160509 INFO neutron.agent.ovn.metadata.agent [-] Port 4430879c-2d53-4fe9-afc4-ecc4737ece3d in datapath 0f63818b-46da-4610-917f-48a4c73bfa86 unbound from our chassis#033[00m Dec 6 05:22:13 localhost ovn_metadata_agent[160504]: 2025-12-06 10:22:13.939 160509 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0f63818b-46da-4610-917f-48a4c73bfa86, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 6 05:22:13 localhost ovn_metadata_agent[160504]: 2025-12-06 10:22:13.939 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[36f81f07-71a7-4ed9-aa2d-d35d2903324b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:22:13 localhost nova_compute[282193]: 2025-12-06 10:22:13.978 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:22:14 localhost nova_compute[282193]: 2025-12-06 10:22:14.213 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:22:14 localhost dnsmasq[332917]: exiting on receipt of SIGTERM Dec 6 05:22:14 localhost podman[333200]: 2025-12-06 10:22:14.523338968 +0000 UTC m=+0.064364679 container kill 294ed2ade3cdaf5dc40b3998609fd207e11e78054502a44b21052a2751a92747 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0f63818b-46da-4610-917f-48a4c73bfa86, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:22:14 localhost systemd[1]: libpod-294ed2ade3cdaf5dc40b3998609fd207e11e78054502a44b21052a2751a92747.scope: Deactivated successfully. Dec 6 05:22:14 localhost podman[333214]: 2025-12-06 10:22:14.598473805 +0000 UTC m=+0.058689455 container died 294ed2ade3cdaf5dc40b3998609fd207e11e78054502a44b21052a2751a92747 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0f63818b-46da-4610-917f-48a4c73bfa86, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Dec 6 05:22:14 localhost systemd[1]: var-lib-containers-storage-overlay-53d7169593b190df1c1e48dc9de962f6b6ad6b884e0167968ce6145110e64e36-merged.mount: Deactivated successfully. Dec 6 05:22:14 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-294ed2ade3cdaf5dc40b3998609fd207e11e78054502a44b21052a2751a92747-userdata-shm.mount: Deactivated successfully. 
Dec 6 05:22:14 localhost podman[333214]: 2025-12-06 10:22:14.686496295 +0000 UTC m=+0.146711885 container remove 294ed2ade3cdaf5dc40b3998609fd207e11e78054502a44b21052a2751a92747 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0f63818b-46da-4610-917f-48a4c73bfa86, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125) Dec 6 05:22:14 localhost systemd[1]: libpod-conmon-294ed2ade3cdaf5dc40b3998609fd207e11e78054502a44b21052a2751a92747.scope: Deactivated successfully. Dec 6 05:22:14 localhost systemd[1]: run-netns-qdhcp\x2d0f63818b\x2d46da\x2d4610\x2d917f\x2d48a4c73bfa86.mount: Deactivated successfully. Dec 6 05:22:14 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:22:14.939 263652 INFO neutron.agent.dhcp.agent [None req-ff50923f-6bc2-49b4-aee2-4c9179c6c120 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:22:14 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:22:14.966 263652 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:22:15 localhost nova_compute[282193]: 2025-12-06 10:22:15.310 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:22:15 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:22:15.507 263652 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:22:15 localhost ovn_controller[154851]: 2025-12-06T10:22:15Z|00493|binding|INFO|Releasing lport 4fb81ffd-e198-4628-9bd0-0c0f0c89c33a from this chassis (sb_readonly=0) Dec 6 05:22:15 localhost 
nova_compute[282193]: 2025-12-06 10:22:15.756 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:22:16 localhost systemd[1]: tmp-crun.J1nZ1R.mount: Deactivated successfully. Dec 6 05:22:16 localhost dnsmasq[332659]: exiting on receipt of SIGTERM Dec 6 05:22:16 localhost podman[333257]: 2025-12-06 10:22:16.108628157 +0000 UTC m=+0.079963336 container kill 98149fe39736f2b06b84af45fc052baa53e53a5b040ce189b48bcd27e51b9b5b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b212e2be-4a1a-42c0-a3cd-d5a930c0c30a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:22:16 localhost systemd[1]: libpod-98149fe39736f2b06b84af45fc052baa53e53a5b040ce189b48bcd27e51b9b5b.scope: Deactivated successfully. 
Dec 6 05:22:16 localhost podman[333271]: 2025-12-06 10:22:16.186846128 +0000 UTC m=+0.065044960 container died 98149fe39736f2b06b84af45fc052baa53e53a5b040ce189b48bcd27e51b9b5b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b212e2be-4a1a-42c0-a3cd-d5a930c0c30a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:22:16 localhost podman[333271]: 2025-12-06 10:22:16.219835556 +0000 UTC m=+0.098034338 container cleanup 98149fe39736f2b06b84af45fc052baa53e53a5b040ce189b48bcd27e51b9b5b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b212e2be-4a1a-42c0-a3cd-d5a930c0c30a, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Dec 6 05:22:16 localhost systemd[1]: libpod-conmon-98149fe39736f2b06b84af45fc052baa53e53a5b040ce189b48bcd27e51b9b5b.scope: Deactivated successfully. 
Dec 6 05:22:16 localhost podman[333273]: 2025-12-06 10:22:16.296483689 +0000 UTC m=+0.165808329 container remove 98149fe39736f2b06b84af45fc052baa53e53a5b040ce189b48bcd27e51b9b5b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b212e2be-4a1a-42c0-a3cd-d5a930c0c30a, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 6 05:22:16 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:22:16.326 263652 INFO neutron.agent.dhcp.agent [None req-401e84f8-0097-4620-913f-c43e89ccdc7b - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:22:16 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:22:16.327 263652 INFO neutron.agent.dhcp.agent [None req-401e84f8-0097-4620-913f-c43e89ccdc7b - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:22:16 localhost ceph-mon[298582]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #52. Immutable memtables: 0. 
Dec 6 05:22:16 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:22:16.578979) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Dec 6 05:22:16 localhost ceph-mon[298582]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 52 Dec 6 05:22:16 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016536579291, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 377, "num_deletes": 251, "total_data_size": 194848, "memory_usage": 203224, "flush_reason": "Manual Compaction"} Dec 6 05:22:16 localhost ceph-mon[298582]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #53: started Dec 6 05:22:16 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016536582614, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 53, "file_size": 125501, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 30231, "largest_seqno": 30603, "table_properties": {"data_size": 123249, "index_size": 363, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 6109, "raw_average_key_size": 19, "raw_value_size": 118707, "raw_average_value_size": 384, "num_data_blocks": 16, "num_entries": 309, "num_filter_entries": 309, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; 
zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765016528, "oldest_key_time": 1765016528, "file_creation_time": 1765016536, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1ec2a3a6-c7d0-4f1e-9923-5a3b782725ae", "db_session_id": "JMBO5KX1IJCJ8FWC64EX", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}} Dec 6 05:22:16 localhost ceph-mon[298582]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 3679 microseconds, and 1690 cpu microseconds. Dec 6 05:22:16 localhost ceph-mon[298582]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Dec 6 05:22:16 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:22:16.582673) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #53: 125501 bytes OK Dec 6 05:22:16 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:22:16.582698) [db/memtable_list.cc:519] [default] Level-0 commit table #53 started Dec 6 05:22:16 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:22:16.584328) [db/memtable_list.cc:722] [default] Level-0 commit table #53: memtable #1 done Dec 6 05:22:16 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:22:16.584352) EVENT_LOG_v1 {"time_micros": 1765016536584344, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Dec 6 05:22:16 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:22:16.584379) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Dec 6 05:22:16 localhost ceph-mon[298582]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 192318, prev total WAL file size 192318, number of live 
WAL files 2. Dec 6 05:22:16 localhost ceph-mon[298582]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548789/store.db/000049.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 6 05:22:16 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:22:16.585228) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003132323939' seq:72057594037927935, type:22 .. '7061786F73003132353531' seq:0, type:0; will stop at (end) Dec 6 05:22:16 localhost ceph-mon[298582]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00 Dec 6 05:22:16 localhost ceph-mon[298582]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [53(122KB)], [51(17MB)] Dec 6 05:22:16 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016536585283, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [53], "files_L6": [51], "score": -1, "input_data_size": 18858516, "oldest_snapshot_seqno": -1} Dec 6 05:22:16 localhost openstack_network_exporter[243110]: ERROR 10:22:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:22:16 localhost openstack_network_exporter[243110]: ERROR 10:22:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:22:16 localhost openstack_network_exporter[243110]: ERROR 10:22:16 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 6 05:22:16 localhost openstack_network_exporter[243110]: ERROR 10:22:16 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 6 05:22:16 localhost openstack_network_exporter[243110]: Dec 6 05:22:16 localhost 
openstack_network_exporter[243110]: ERROR 10:22:16 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 6 05:22:16 localhost openstack_network_exporter[243110]: Dec 6 05:22:16 localhost ceph-mon[298582]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #54: 12940 keys, 17664003 bytes, temperature: kUnknown Dec 6 05:22:16 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016536686258, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 54, "file_size": 17664003, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17593559, "index_size": 36999, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 32389, "raw_key_size": 350035, "raw_average_key_size": 27, "raw_value_size": 17376636, "raw_average_value_size": 1342, "num_data_blocks": 1359, "num_entries": 12940, "num_filter_entries": 12940, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765015623, "oldest_key_time": 0, "file_creation_time": 1765016536, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1ec2a3a6-c7d0-4f1e-9923-5a3b782725ae", "db_session_id": "JMBO5KX1IJCJ8FWC64EX", "orig_file_number": 54, "seqno_to_time_mapping": "N/A"}} Dec 6 05:22:16 localhost ceph-mon[298582]: rocksdb: [db/version_set.cc:4390] 
More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Dec 6 05:22:16 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:22:16.686587) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 17664003 bytes Dec 6 05:22:16 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:22:16.688587) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 186.6 rd, 174.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 17.9 +0.0 blob) out(16.8 +0.0 blob), read-write-amplify(291.0) write-amplify(140.7) OK, records in: 13455, records dropped: 515 output_compression: NoCompression Dec 6 05:22:16 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:22:16.688610) EVENT_LOG_v1 {"time_micros": 1765016536688599, "job": 30, "event": "compaction_finished", "compaction_time_micros": 101056, "compaction_time_cpu_micros": 48983, "output_level": 6, "num_output_files": 1, "total_output_size": 17664003, "num_input_records": 13455, "num_output_records": 12940, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Dec 6 05:22:16 localhost ceph-mon[298582]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548789/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 6 05:22:16 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016536688746, "job": 30, "event": "table_file_deletion", "file_number": 53} Dec 6 05:22:16 localhost ceph-mon[298582]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548789/store.db/000051.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 6 
05:22:16 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016536690610, "job": 30, "event": "table_file_deletion", "file_number": 51} Dec 6 05:22:16 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:22:16.585093) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:22:16 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:22:16.690634) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:22:16 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:22:16.690639) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:22:16 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:22:16.690640) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:22:16 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:22:16.690642) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:22:16 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:22:16.690643) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:22:17 localhost systemd[1]: var-lib-containers-storage-overlay-bf868e69c05e71759c1e3d850cc649a6050ae655aa7438d5cc3256a2e4a2d10a-merged.mount: Deactivated successfully. Dec 6 05:22:17 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-98149fe39736f2b06b84af45fc052baa53e53a5b040ce189b48bcd27e51b9b5b-userdata-shm.mount: Deactivated successfully. Dec 6 05:22:17 localhost systemd[1]: run-netns-qdhcp\x2db212e2be\x2d4a1a\x2d42c0\x2da3cd\x2dd5a930c0c30a.mount: Deactivated successfully. Dec 6 05:22:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6. Dec 6 05:22:17 localhost systemd[1]: tmp-crun.SqqHWT.mount: Deactivated successfully. 
Dec 6 05:22:17 localhost ovn_controller[154851]: 2025-12-06T10:22:17Z|00494|binding|INFO|Releasing lport 4fb81ffd-e198-4628-9bd0-0c0f0c89c33a from this chassis (sb_readonly=0) Dec 6 05:22:17 localhost podman[333302]: 2025-12-06 10:22:17.936671287 +0000 UTC m=+0.097657886 container health_status 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, 
org.label-schema.vendor=CentOS) Dec 6 05:22:17 localhost nova_compute[282193]: 2025-12-06 10:22:17.965 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:22:17 localhost podman[333302]: 2025-12-06 10:22:17.999391725 +0000 UTC m=+0.160378344 container exec_died 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, 
tcib_managed=true, config_id=multipathd, container_name=multipathd) Dec 6 05:22:18 localhost systemd[1]: 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6.service: Deactivated successfully. Dec 6 05:22:18 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:22:18 localhost ovn_metadata_agent[160504]: 2025-12-06 10:22:18.357 160509 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b142a5ef-fbed-4e92-aa78-e3ad080c6370, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 6 05:22:19 localhost nova_compute[282193]: 2025-12-06 10:22:19.251 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:22:19 localhost ovn_controller[154851]: 2025-12-06T10:22:19Z|00495|binding|INFO|Releasing lport 4fb81ffd-e198-4628-9bd0-0c0f0c89c33a from this chassis (sb_readonly=0) Dec 6 05:22:19 localhost nova_compute[282193]: 2025-12-06 10:22:19.964 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:22:20 localhost nova_compute[282193]: 2025-12-06 10:22:20.355 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:22:20 localhost dnsmasq[332273]: read /var/lib/neutron/dhcp/23332176-d495-43f0-b960-60f576e19db9/addn_hosts - 0 addresses Dec 6 05:22:20 localhost dnsmasq-dhcp[332273]: read /var/lib/neutron/dhcp/23332176-d495-43f0-b960-60f576e19db9/host Dec 6 05:22:20 localhost podman[333339]: 2025-12-06 10:22:20.815918631 +0000 UTC m=+0.045893063 container kill 
071dbc5c9b86072f2d5bda447e10edc83b5e25677868e8ab3267454575f41827 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-23332176-d495-43f0-b960-60f576e19db9, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:22:20 localhost dnsmasq-dhcp[332273]: read /var/lib/neutron/dhcp/23332176-d495-43f0-b960-60f576e19db9/opts Dec 6 05:22:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538. Dec 6 05:22:20 localhost podman[333352]: 2025-12-06 10:22:20.928561084 +0000 UTC m=+0.090459706 container health_status d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': 
{'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Dec 6 05:22:20 localhost podman[333352]: 2025-12-06 10:22:20.965334458 +0000 UTC m=+0.127233080 container exec_died d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 6 05:22:20 localhost ovn_controller[154851]: 2025-12-06T10:22:20Z|00496|binding|INFO|Releasing lport 72ee7f1b-132c-4087-bcf6-0dc2886ccb24 from this chassis 
(sb_readonly=0) Dec 6 05:22:20 localhost ovn_controller[154851]: 2025-12-06T10:22:20Z|00497|binding|INFO|Setting lport 72ee7f1b-132c-4087-bcf6-0dc2886ccb24 down in Southbound Dec 6 05:22:20 localhost kernel: device tap72ee7f1b-13 left promiscuous mode Dec 6 05:22:20 localhost nova_compute[282193]: 2025-12-06 10:22:20.978 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:22:20 localhost systemd[1]: d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538.service: Deactivated successfully. Dec 6 05:22:20 localhost ovn_metadata_agent[160504]: 2025-12-06 10:22:20.989 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-23332176-d495-43f0-b960-60f576e19db9', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-23332176-d495-43f0-b960-60f576e19db9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e82deaff368b4feea9fec0f06459a6ca', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005548789.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b8461d03-d92a-4a33-8802-d634072db402, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=72ee7f1b-132c-4087-bcf6-0dc2886ccb24) old=Port_Binding(up=[True], 
chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:22:20 localhost ovn_metadata_agent[160504]: 2025-12-06 10:22:20.991 160509 INFO neutron.agent.ovn.metadata.agent [-] Port 72ee7f1b-132c-4087-bcf6-0dc2886ccb24 in datapath 23332176-d495-43f0-b960-60f576e19db9 unbound from our chassis#033[00m Dec 6 05:22:20 localhost ovn_metadata_agent[160504]: 2025-12-06 10:22:20.994 160509 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 23332176-d495-43f0-b960-60f576e19db9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 6 05:22:20 localhost ovn_metadata_agent[160504]: 2025-12-06 10:22:20.995 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[335dcbf4-a45a-4da7-ae54-a11aaa510a93]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:22:21 localhost nova_compute[282193]: 2025-12-06 10:22:21.000 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:22:21 localhost dnsmasq[332273]: exiting on receipt of SIGTERM Dec 6 05:22:21 localhost podman[333400]: 2025-12-06 10:22:21.413110696 +0000 UTC m=+0.065142772 container kill 071dbc5c9b86072f2d5bda447e10edc83b5e25677868e8ab3267454575f41827 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-23332176-d495-43f0-b960-60f576e19db9, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0) Dec 6 05:22:21 localhost systemd[1]: 
libpod-071dbc5c9b86072f2d5bda447e10edc83b5e25677868e8ab3267454575f41827.scope: Deactivated successfully. Dec 6 05:22:21 localhost podman[333412]: 2025-12-06 10:22:21.48881479 +0000 UTC m=+0.060234741 container died 071dbc5c9b86072f2d5bda447e10edc83b5e25677868e8ab3267454575f41827 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-23332176-d495-43f0-b960-60f576e19db9, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0) Dec 6 05:22:21 localhost podman[333412]: 2025-12-06 10:22:21.52022597 +0000 UTC m=+0.091645891 container cleanup 071dbc5c9b86072f2d5bda447e10edc83b5e25677868e8ab3267454575f41827 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-23332176-d495-43f0-b960-60f576e19db9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:22:21 localhost systemd[1]: libpod-conmon-071dbc5c9b86072f2d5bda447e10edc83b5e25677868e8ab3267454575f41827.scope: Deactivated successfully. 
Dec 6 05:22:21 localhost podman[333414]: 2025-12-06 10:22:21.569288501 +0000 UTC m=+0.126333293 container remove 071dbc5c9b86072f2d5bda447e10edc83b5e25677868e8ab3267454575f41827 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-23332176-d495-43f0-b960-60f576e19db9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Dec 6 05:22:21 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:22:21.653 263652 INFO neutron.agent.dhcp.agent [None req-5da40bd6-5ea1-4d09-9398-a7019386d725 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:22:21 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:22:21.654 263652 INFO neutron.agent.dhcp.agent [None req-5da40bd6-5ea1-4d09-9398-a7019386d725 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:22:21 localhost systemd[1]: var-lib-containers-storage-overlay-988891ef40577becefdf38dadb1a51a6fd48e58dae4cb75c05349582a2ed8956-merged.mount: Deactivated successfully. Dec 6 05:22:21 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-071dbc5c9b86072f2d5bda447e10edc83b5e25677868e8ab3267454575f41827-userdata-shm.mount: Deactivated successfully. Dec 6 05:22:21 localhost systemd[1]: run-netns-qdhcp\x2d23332176\x2dd495\x2d43f0\x2db960\x2d60f576e19db9.mount: Deactivated successfully. 
Dec 6 05:22:21 localhost ovn_controller[154851]: 2025-12-06T10:22:21Z|00498|binding|INFO|Releasing lport 4fb81ffd-e198-4628-9bd0-0c0f0c89c33a from this chassis (sb_readonly=0) Dec 6 05:22:21 localhost nova_compute[282193]: 2025-12-06 10:22:21.910 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:22:23 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:22:23 localhost podman[241090]: time="2025-12-06T10:22:23Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 6 05:22:23 localhost podman[241090]: @ - - [06/Dec/2025:10:22:23 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156104 "" "Go-http-client/1.1" Dec 6 05:22:24 localhost podman[241090]: @ - - [06/Dec/2025:10:22:23 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19264 "" "Go-http-client/1.1" Dec 6 05:22:24 localhost nova_compute[282193]: 2025-12-06 10:22:24.358 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:22:25 localhost nova_compute[282193]: 2025-12-06 10:22:25.357 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:22:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5. 
Dec 6 05:22:25 localhost podman[333441]: 2025-12-06 10:22:25.915188109 +0000 UTC m=+0.077060786 container health_status 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:22:25 localhost podman[333441]: 2025-12-06 10:22:25.952479699 +0000 UTC m=+0.114352396 container exec_died 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, 
managed_by=edpm_ansible, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller) Dec 6 05:22:25 localhost systemd[1]: 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.service: Deactivated successfully. 
Dec 6 05:22:28 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:22:29 localhost nova_compute[282193]: 2025-12-06 10:22:29.384 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:22:30 localhost nova_compute[282193]: 2025-12-06 10:22:30.360 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:22:30 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e194 e194: 6 total, 6 up, 6 in Dec 6 05:22:31 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e195 e195: 6 total, 6 up, 6 in Dec 6 05:22:31 localhost ceph-osd[32665]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1. Dec 6 05:22:33 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e195 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:22:34 localhost nova_compute[282193]: 2025-12-06 10:22:34.409 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:22:35 localhost nova_compute[282193]: 2025-12-06 10:22:35.362 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:22:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999. Dec 6 05:22:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941. 
Dec 6 05:22:36 localhost podman[333466]: 2025-12-06 10:22:36.93193966 +0000 UTC m=+0.087300299 container health_status 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Dec 6 05:22:36 localhost podman[333466]: 2025-12-06 10:22:36.970017684 +0000 UTC 
m=+0.125378293 container exec_died 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent) Dec 6 05:22:36 localhost podman[333467]: 2025-12-06 10:22:36.991069828 +0000 UTC m=+0.142652692 container health_status b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 
(image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Dec 6 05:22:36 localhost systemd[1]: tmp-crun.cULO71.mount: Deactivated successfully. Dec 6 05:22:36 localhost systemd[1]: 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.service: Deactivated successfully. 
Dec 6 05:22:37 localhost podman[333467]: 2025-12-06 10:22:37.004055075 +0000 UTC m=+0.155637959 container exec_died b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Dec 6 05:22:37 localhost systemd[1]: b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941.service: Deactivated successfully. 
Dec 6 05:22:37 localhost nova_compute[282193]: 2025-12-06 10:22:37.181 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:22:37 localhost nova_compute[282193]: 2025-12-06 10:22:37.204 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 05:22:37 localhost nova_compute[282193]: 2025-12-06 10:22:37.205 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 05:22:37 localhost nova_compute[282193]: 2025-12-06 10:22:37.205 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 05:22:37 localhost nova_compute[282193]: 2025-12-06 10:22:37.206 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Auditing locally available compute resources for np0005548789.localdomain (node: np0005548789.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 6 05:22:37 localhost nova_compute[282193]: 2025-12-06 10:22:37.206 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - 
- - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 6 05:22:37 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e196 e196: 6 total, 6 up, 6 in Dec 6 05:22:37 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 6 05:22:37 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/4090728169' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 6 05:22:37 localhost nova_compute[282193]: 2025-12-06 10:22:37.657 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 6 05:22:37 localhost nova_compute[282193]: 2025-12-06 10:22:37.742 282197 DEBUG nova.virt.libvirt.driver [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 6 05:22:37 localhost nova_compute[282193]: 2025-12-06 10:22:37.743 282197 DEBUG nova.virt.libvirt.driver [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 6 05:22:37 localhost nova_compute[282193]: 2025-12-06 10:22:37.949 282197 WARNING nova.virt.libvirt.driver [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.
Dec 6 05:22:37 localhost nova_compute[282193]: 2025-12-06 10:22:37.950 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Hypervisor/Node resource view: name=np0005548789.localdomain free_ram=11175MB free_disk=41.83699035644531GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 6 05:22:37 localhost nova_compute[282193]: 2025-12-06 10:22:37.951 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 6 05:22:37 localhost nova_compute[282193]: 2025-12-06 10:22:37.951 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 6 05:22:38 localhost nova_compute[282193]: 2025-12-06 10:22:38.018 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Instance b7ed0a2e-9350-4933-9334-4e5e08d3e6aa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 6 05:22:38 localhost nova_compute[282193]: 2025-12-06 10:22:38.019 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 6 05:22:38 localhost nova_compute[282193]: 2025-12-06 10:22:38.019 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Final resource view: name=np0005548789.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 6 05:22:38 localhost nova_compute[282193]: 2025-12-06 10:22:38.069 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 6 05:22:38 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e196 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 6 05:22:38 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 6 05:22:38 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/3578013769' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 6 05:22:38 localhost nova_compute[282193]: 2025-12-06 10:22:38.527 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 6 05:22:38 localhost nova_compute[282193]: 2025-12-06 10:22:38.532 282197 DEBUG nova.compute.provider_tree [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Inventory has not changed in ProviderTree for provider: 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 6 05:22:38 localhost nova_compute[282193]: 2025-12-06 10:22:38.563 282197 DEBUG nova.scheduler.client.report [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Inventory has not changed for provider 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 6 05:22:38 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e197 e197: 6 total, 6 up, 6 in
Dec 6 05:22:38 localhost nova_compute[282193]: 2025-12-06 10:22:38.595 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Compute_service record updated for np0005548789.localdomain:np0005548789.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 6 05:22:38 localhost nova_compute[282193]: 2025-12-06 10:22:38.596 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.645s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 6 05:22:39 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 6 05:22:39 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/819637897' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 6 05:22:39 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 6 05:22:39 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/819637897' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 6 05:22:39 localhost nova_compute[282193]: 2025-12-06 10:22:39.447 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 6 05:22:40 localhost nova_compute[282193]: 2025-12-06 10:22:40.364 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 6 05:22:40 localhost nova_compute[282193]: 2025-12-06 10:22:40.594 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 6 05:22:40 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e198 e198: 6 total, 6 up, 6 in
Dec 6 05:22:41 localhost nova_compute[282193]: 2025-12-06 10:22:41.181 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 6 05:22:41 localhost nova_compute[282193]: 2025-12-06 10:22:41.182 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 6 05:22:41 localhost nova_compute[282193]: 2025-12-06 10:22:41.182 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 6 05:22:41 localhost nova_compute[282193]: 2025-12-06 10:22:41.277 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 6 05:22:41 localhost nova_compute[282193]: 2025-12-06 10:22:41.278 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquired lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 6 05:22:41 localhost nova_compute[282193]: 2025-12-06 10:22:41.278 282197 DEBUG nova.network.neutron [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 6 05:22:41 localhost nova_compute[282193]: 2025-12-06 10:22:41.279 282197 DEBUG nova.objects.instance [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b7ed0a2e-9350-4933-9334-4e5e08d3e6aa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 6 05:22:41 localhost nova_compute[282193]: 2025-12-06 10:22:41.844 282197 DEBUG nova.network.neutron [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Updating instance_info_cache with network_info: [{"id": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "address": "fa:16:3e:64:77:f3", "network": {"id": "652b6bdc-40ce-45b7-8aa5-3bca79987993", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.162", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "3d603431c0bb4967bafc7a0aa6108bfe", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86fc0b7a-fb", "ovs_interfaceid": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 6 05:22:41 localhost nova_compute[282193]: 2025-12-06 10:22:41.865 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Releasing lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 6 05:22:41 localhost nova_compute[282193]: 2025-12-06 10:22:41.865 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 6 05:22:42 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e199 e199: 6 total, 6 up, 6 in
Dec 6 05:22:43 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e199 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 6 05:22:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e.
Dec 6 05:22:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094.
Dec 6 05:22:43 localhost podman[333552]: 2025-12-06 10:22:43.936408732 +0000 UTC m=+0.090681213 container health_status 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, distribution-scope=public, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, vcs-type=git, managed_by=edpm_ansible, version=9.6, architecture=x86_64, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 6 05:22:43 localhost systemd[1]: tmp-crun.iTPYYx.mount: Deactivated successfully.
Dec 6 05:22:43 localhost podman[333552]: 2025-12-06 10:22:43.979136138 +0000 UTC m=+0.133408589 container exec_died 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, io.buildah.version=1.33.7, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter)
Dec 6 05:22:43 localhost systemd[1]: 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e.service: Deactivated successfully.
Dec 6 05:22:44 localhost podman[333553]: 2025-12-06 10:22:43.980875461 +0000 UTC m=+0.132478660 container health_status bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_id=edpm, container_name=ceilometer_agent_compute, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 6 05:22:44 localhost podman[333553]: 2025-12-06 10:22:44.063193967 +0000 UTC m=+0.214797136 container exec_died bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 6 05:22:44 localhost systemd[1]: bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094.service: Deactivated successfully.
Dec 6 05:22:44 localhost nova_compute[282193]: 2025-12-06 10:22:44.181 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 6 05:22:44 localhost nova_compute[282193]: 2025-12-06 10:22:44.182 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 6 05:22:44 localhost nova_compute[282193]: 2025-12-06 10:22:44.182 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 6 05:22:44 localhost nova_compute[282193]: 2025-12-06 10:22:44.182 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 6 05:22:44 localhost nova_compute[282193]: 2025-12-06 10:22:44.182 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 6 05:22:44 localhost nova_compute[282193]: 2025-12-06 10:22:44.476 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 6 05:22:44 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e200 e200: 6 total, 6 up, 6 in
Dec 6 05:22:45 localhost ovn_controller[154851]: 2025-12-06T10:22:45Z|00499|binding|INFO|Releasing lport 4fb81ffd-e198-4628-9bd0-0c0f0c89c33a from this chassis (sb_readonly=0)
Dec 6 05:22:45 localhost nova_compute[282193]: 2025-12-06 10:22:45.064 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 6 05:22:45 localhost dnsmasq[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 1 addresses
Dec 6 05:22:45 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host
Dec 6 05:22:45 localhost podman[333610]: 2025-12-06 10:22:45.071660845 +0000 UTC m=+0.075128567 container kill dd70518caed84fde347015325d6b35481123abc4a39df99f8bc9422c2468322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 6 05:22:45 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts
Dec 6 05:22:45 localhost nova_compute[282193]: 2025-12-06 10:22:45.367 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 6 05:22:45 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 6 05:22:45 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1317054491' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 6 05:22:45 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 6 05:22:45 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1317054491' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 6 05:22:45 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e201 e201: 6 total, 6 up, 6 in
Dec 6 05:22:45 localhost sshd[333632]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 05:22:46 localhost nova_compute[282193]: 2025-12-06 10:22:46.183 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 6 05:22:46 localhost nova_compute[282193]: 2025-12-06 10:22:46.184 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 6 05:22:46 localhost openstack_network_exporter[243110]: ERROR 10:22:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 6 05:22:46 localhost openstack_network_exporter[243110]: ERROR 10:22:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 6 05:22:46 localhost openstack_network_exporter[243110]: ERROR 10:22:46 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 6 05:22:46 localhost openstack_network_exporter[243110]: ERROR 10:22:46 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 6 05:22:46 localhost openstack_network_exporter[243110]:
Dec 6 05:22:46 localhost openstack_network_exporter[243110]: ERROR 10:22:46 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 6 05:22:46 localhost openstack_network_exporter[243110]:
Dec 6 05:22:47 localhost sshd[333634]: main: sshd: ssh-rsa algorithm is disabled
Dec 6 05:22:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:22:47.340 160509 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 6 05:22:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:22:47.341 160509 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 6 05:22:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:22:47.341 160509 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 6 05:22:47 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e202 e202: 6 total, 6 up, 6 in
Dec 6 05:22:48 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e202 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 6 05:22:48 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e203 e203: 6 total, 6 up, 6 in
Dec 6 05:22:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6.
Dec 6 05:22:48 localhost podman[333636]: 2025-12-06 10:22:48.638919112 +0000 UTC m=+0.066974808 container health_status 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 6 05:22:48 localhost podman[333636]: 2025-12-06 10:22:48.650152455 +0000 UTC m=+0.078208231 container exec_died 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 6 05:22:48 localhost systemd[1]: 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6.service: Deactivated successfully.
Dec 6 05:22:49 localhost nova_compute[282193]: 2025-12-06 10:22:49.178 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 6 05:22:49 localhost nova_compute[282193]: 2025-12-06 10:22:49.521 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 6 05:22:49 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e204 e204: 6 total, 6 up, 6 in
Dec 6 05:22:50 localhost nova_compute[282193]: 2025-12-06 10:22:50.369 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 6 05:22:50 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e205 e205: 6 total, 6 up, 6 in
Dec 6 05:22:51 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e206 e206: 6 total, 6 up, 6 in
Dec 6 05:22:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538.
Dec 6 05:22:51 localhost systemd[1]: tmp-crun.zK97n9.mount: Deactivated successfully.
Dec 6 05:22:51 localhost podman[333656]: 2025-12-06 10:22:51.942682441 +0000 UTC m=+0.106188847 container health_status d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors )
Dec 6 05:22:51 localhost podman[333656]: 2025-12-06 10:22:51.954256815 +0000 UTC m=+0.117763231 container exec_died d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 6 05:22:51 localhost systemd[1]: d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538.service: Deactivated successfully.
Dec 6 05:22:52 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e207 e207: 6 total, 6 up, 6 in
Dec 6 05:22:53 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 6 05:22:53 localhost podman[241090]: time="2025-12-06T10:22:53Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 6 05:22:53 localhost podman[241090]: @ - - [06/Dec/2025:10:22:53 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156104 "" "Go-http-client/1.1"
Dec 6 05:22:53 localhost podman[241090]: @ - - [06/Dec/2025:10:22:53 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19268 "" "Go-http-client/1.1"
Dec 6 05:22:54 localhost nova_compute[282193]: 2025-12-06 10:22:54.522 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 6 05:22:54 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e208 e208: 6 total, 6 up, 6 in
Dec 6 05:22:55 localhost ovn_metadata_agent[160504]: 2025-12-06 10:22:55.298 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=18, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:6c:02', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:a8:2f:0c:cb:a1'}, ipsec=False) old=SB_Global(nb_cfg=17) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 6 05:22:55 localhost ovn_metadata_agent[160504]: 2025-12-06 10:22:55.299 160509 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec 6 05:22:55 localhost nova_compute[282193]: 2025-12-06 10:22:55.298 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 6 05:22:55 localhost nova_compute[282193]: 2025-12-06 10:22:55.371 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 6 05:22:55 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 6 05:22:55 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3220423124' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 6 05:22:55 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 6 05:22:55 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.?
172.18.0.32:0/3220423124' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 6 05:22:55 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:22:55.520 263652 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:22:55Z, description=, device_id=d17ccc6d-49c1-4c04-b4b2-d05430f70927, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=8f530efb-7550-478d-addb-425f279de982, ip_allocation=immediate, mac_address=fa:16:3e:33:08:6a, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T08:43:55Z, description=, dns_domain=, id=8e238f59-5792-4ff4-95af-f993c8e9e14f, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=3d603431c0bb4967bafc7a0aa6108bfe, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['be7ff830-a3e2-49cd-a05f-63bc528d6d1c'], tags=[], tenant_id=3d603431c0bb4967bafc7a0aa6108bfe, updated_at=2025-12-06T08:44:01Z, vlan_transparent=None, network_id=8e238f59-5792-4ff4-95af-f993c8e9e14f, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3291, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-06T10:22:55Z on network 8e238f59-5792-4ff4-95af-f993c8e9e14f#033[00m
Dec 6 05:22:55 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:22:55.694 263652 INFO neutron.agent.linux.ip_lib [None req-a6857fea-71a4-4313-97cb-6e4af08d500f - - - - - -] Device tapec870270-76 cannot be used as it has no MAC address#033[00m
Dec 6 05:22:55 localhost nova_compute[282193]: 2025-12-06 10:22:55.720 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 6 05:22:55 localhost kernel: device tapec870270-76 entered promiscuous mode
Dec 6 05:22:55 localhost NetworkManager[5973]: [1765016575.7279] manager: (tapec870270-76): new Generic device (/org/freedesktop/NetworkManager/Devices/80)
Dec 6 05:22:55 localhost ovn_controller[154851]: 2025-12-06T10:22:55Z|00500|binding|INFO|Claiming lport ec870270-76fc-404f-9ac8-aae83a5c5051 for this chassis.
Dec 6 05:22:55 localhost ovn_controller[154851]: 2025-12-06T10:22:55Z|00501|binding|INFO|ec870270-76fc-404f-9ac8-aae83a5c5051: Claiming unknown
Dec 6 05:22:55 localhost systemd-udevd[333716]: Network interface NamePolicy= disabled on kernel command line.
Dec 6 05:22:55 localhost nova_compute[282193]: 2025-12-06 10:22:55.733 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 6 05:22:55 localhost ovn_metadata_agent[160504]: 2025-12-06 10:22:55.741 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-04e62072-8d37-46d4-a112-c923d93098a9', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-04e62072-8d37-46d4-a112-c923d93098a9', 'neutron:port_capabilities': '', 'neutron:port_name': '',
'neutron:project_id': '3006b6c88845443ab13998bd660d02f7', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0dafa4fe-04f5-4502-a649-2a574bf9c45c, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=ec870270-76fc-404f-9ac8-aae83a5c5051) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 6 05:22:55 localhost ovn_metadata_agent[160504]: 2025-12-06 10:22:55.743 160509 INFO neutron.agent.ovn.metadata.agent [-] Port ec870270-76fc-404f-9ac8-aae83a5c5051 in datapath 04e62072-8d37-46d4-a112-c923d93098a9 bound to our chassis#033[00m
Dec 6 05:22:55 localhost ovn_controller[154851]: 2025-12-06T10:22:55Z|00502|binding|INFO|Setting lport ec870270-76fc-404f-9ac8-aae83a5c5051 ovn-installed in OVS
Dec 6 05:22:55 localhost ovn_controller[154851]: 2025-12-06T10:22:55Z|00503|binding|INFO|Setting lport ec870270-76fc-404f-9ac8-aae83a5c5051 up in Southbound
Dec 6 05:22:55 localhost ovn_metadata_agent[160504]: 2025-12-06 10:22:55.745 160509 DEBUG neutron.agent.ovn.metadata.agent [-] Port 0f3fb41c-7a0b-40e3-8890-b8a5ead66801 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m
Dec 6 05:22:55 localhost ovn_metadata_agent[160504]: 2025-12-06 10:22:55.745 160509 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 04e62072-8d37-46d4-a112-c923d93098a9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec 6 05:22:55 localhost ovn_metadata_agent[160504]: 2025-12-06 10:22:55.746 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[91d1a665-0acb-4302-ac1a-8ba883c2be87]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec 6 05:22:55 localhost nova_compute[282193]: 2025-12-06 10:22:55.747 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 6 05:22:55 localhost dnsmasq[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 2 addresses
Dec 6 05:22:55 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host
Dec 6 05:22:55 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts
Dec 6 05:22:55 localhost podman[333700]: 2025-12-06 10:22:55.760158514 +0000 UTC m=+0.085030211 container kill dd70518caed84fde347015325d6b35481123abc4a39df99f8bc9422c2468322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 6 05:22:55 localhost journal[230404]: ethtool ioctl error on tapec870270-76: No such device
Dec 6 05:22:55 localhost journal[230404]: ethtool ioctl error on tapec870270-76: No such device
Dec 6 05:22:55 localhost nova_compute[282193]: 2025-12-06 10:22:55.770 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 6 05:22:55 localhost journal[230404]: ethtool ioctl error on tapec870270-76: No such device
Dec 6 05:22:55 localhost journal[230404]: ethtool ioctl error on tapec870270-76: No such device
Dec 6 05:22:55 localhost
journal[230404]: ethtool ioctl error on tapec870270-76: No such device
Dec 6 05:22:55 localhost journal[230404]: ethtool ioctl error on tapec870270-76: No such device
Dec 6 05:22:55 localhost journal[230404]: ethtool ioctl error on tapec870270-76: No such device
Dec 6 05:22:55 localhost journal[230404]: ethtool ioctl error on tapec870270-76: No such device
Dec 6 05:22:55 localhost nova_compute[282193]: 2025-12-06 10:22:55.812 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 6 05:22:55 localhost nova_compute[282193]: 2025-12-06 10:22:55.840 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 6 05:22:56 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:22:56.070 263652 INFO neutron.agent.dhcp.agent [None req-923278d6-42bc-4efe-acae-e825706e52e8 - - - - - -] DHCP configuration for ports {'8f530efb-7550-478d-addb-425f279de982'} is completed#033[00m
Dec 6 05:22:56 localhost nova_compute[282193]: 2025-12-06 10:22:56.367 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 6 05:22:56 localhost podman[333797]:
Dec 6 05:22:56 localhost podman[333797]: 2025-12-06 10:22:56.81829248 +0000 UTC m=+0.092649554 container create f0b3da8b2765503728b15ffd83b4b75c223326eadbfa4889f0b7b0340f71baba (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-04e62072-8d37-46d4-a112-c923d93098a9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 6 05:22:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.
Dec 6 05:22:56 localhost systemd[1]: Started libpod-conmon-f0b3da8b2765503728b15ffd83b4b75c223326eadbfa4889f0b7b0340f71baba.scope.
Dec 6 05:22:56 localhost systemd[1]: Started libcrun container.
Dec 6 05:22:56 localhost podman[333797]: 2025-12-06 10:22:56.775291005 +0000 UTC m=+0.049648129 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 6 05:22:56 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e17231f4b67de947b4d429d81a6f4334b12776ef1dd2b29f4fb039abcc7bc52/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 6 05:22:56 localhost podman[333797]: 2025-12-06 10:22:56.889030642 +0000 UTC m=+0.163387726 container init f0b3da8b2765503728b15ffd83b4b75c223326eadbfa4889f0b7b0340f71baba (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-04e62072-8d37-46d4-a112-c923d93098a9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0)
Dec 6 05:22:56 localhost dnsmasq[333826]: started, version 2.85 cachesize 150
Dec 6 05:22:56 localhost dnsmasq[333826]: DNS service limited to local subnets
Dec 6 05:22:56 localhost dnsmasq[333826]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 6 05:22:56 localhost dnsmasq[333826]: warning: no upstream servers configured
Dec 6 05:22:56 localhost dnsmasq-dhcp[333826]: DHCP, static leases only on 10.100.0.0, lease time 1d
Dec 6 05:22:56 localhost dnsmasq[333826]: read
/var/lib/neutron/dhcp/04e62072-8d37-46d4-a112-c923d93098a9/addn_hosts - 0 addresses
Dec 6 05:22:56 localhost dnsmasq-dhcp[333826]: read /var/lib/neutron/dhcp/04e62072-8d37-46d4-a112-c923d93098a9/host
Dec 6 05:22:56 localhost dnsmasq-dhcp[333826]: read /var/lib/neutron/dhcp/04e62072-8d37-46d4-a112-c923d93098a9/opts
Dec 6 05:22:56 localhost podman[333797]: 2025-12-06 10:22:56.949498189 +0000 UTC m=+0.223855263 container start f0b3da8b2765503728b15ffd83b4b75c223326eadbfa4889f0b7b0340f71baba (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-04e62072-8d37-46d4-a112-c923d93098a9, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true)
Dec 6 05:22:56 localhost podman[333810]: 2025-12-06 10:22:56.966600362 +0000 UTC m=+0.122261427 container health_status 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2)
Dec 6 05:22:57 localhost podman[333810]: 2025-12-06 10:22:57.013065923 +0000 UTC m=+0.168727018 container exec_died 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 6 05:22:57 localhost systemd[1]:
0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.service: Deactivated successfully.
Dec 6 05:22:57 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:22:57.125 263652 INFO neutron.agent.dhcp.agent [None req-fe6337ef-bb50-4310-a33d-1eb2350e643e - - - - - -] DHCP configuration for ports {'b7836405-7939-4345-bc2d-b9851667edb3'} is completed#033[00m
Dec 6 05:22:57 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:22:57.136 263652 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:22:56Z, description=, device_id=d17ccc6d-49c1-4c04-b4b2-d05430f70927, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=81d0b0ea-67b7-457e-bfbc-24d21f440b5e, ip_allocation=immediate, mac_address=fa:16:3e:7b:5d:08, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:22:53Z, description=, dns_domain=, id=04e62072-8d37-46d4-a112-c923d93098a9, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-TelemetryAlarmingAPIMysqlTest-657417851-network, port_security_enabled=True, project_id=3006b6c88845443ab13998bd660d02f7, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=9710, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=3279, status=ACTIVE, subnets=['d1dd1f6b-ed10-476e-b0e0-6361467c9e10'], tags=[], tenant_id=3006b6c88845443ab13998bd660d02f7, updated_at=2025-12-06T10:22:53Z, vlan_transparent=None, network_id=04e62072-8d37-46d4-a112-c923d93098a9, port_security_enabled=False, project_id=3006b6c88845443ab13998bd660d02f7, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3300, status=DOWN, tags=[], tenant_id=3006b6c88845443ab13998bd660d02f7, updated_at=2025-12-06T10:22:56Z on network 04e62072-8d37-46d4-a112-c923d93098a9#033[00m
Dec 6 05:22:57 localhost dnsmasq[333826]: read /var/lib/neutron/dhcp/04e62072-8d37-46d4-a112-c923d93098a9/addn_hosts - 1 addresses
Dec 6 05:22:57 localhost dnsmasq-dhcp[333826]: read /var/lib/neutron/dhcp/04e62072-8d37-46d4-a112-c923d93098a9/host
Dec 6 05:22:57 localhost dnsmasq-dhcp[333826]: read /var/lib/neutron/dhcp/04e62072-8d37-46d4-a112-c923d93098a9/opts
Dec 6 05:22:57 localhost podman[333856]: 2025-12-06 10:22:57.363287679 +0000 UTC m=+0.062972916 container kill f0b3da8b2765503728b15ffd83b4b75c223326eadbfa4889f0b7b0340f71baba (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-04e62072-8d37-46d4-a112-c923d93098a9, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 6 05:22:57 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e209 e209: 6 total, 6 up, 6 in
Dec 6 05:22:57 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:22:57.724 263652 INFO neutron.agent.dhcp.agent [None req-db279e16-a3b7-40aa-b550-d60523417811 - - - - - -] DHCP configuration for ports {'81d0b0ea-67b7-457e-bfbc-24d21f440b5e'} is completed#033[00m
Dec 6 05:22:58 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e209 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 6 05:22:58 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:22:58.945 263652 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=,
binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:22:56Z, description=, device_id=d17ccc6d-49c1-4c04-b4b2-d05430f70927, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=81d0b0ea-67b7-457e-bfbc-24d21f440b5e, ip_allocation=immediate, mac_address=fa:16:3e:7b:5d:08, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:22:53Z, description=, dns_domain=, id=04e62072-8d37-46d4-a112-c923d93098a9, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-TelemetryAlarmingAPIMysqlTest-657417851-network, port_security_enabled=True, project_id=3006b6c88845443ab13998bd660d02f7, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=9710, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=3279, status=ACTIVE, subnets=['d1dd1f6b-ed10-476e-b0e0-6361467c9e10'], tags=[], tenant_id=3006b6c88845443ab13998bd660d02f7, updated_at=2025-12-06T10:22:53Z, vlan_transparent=None, network_id=04e62072-8d37-46d4-a112-c923d93098a9, port_security_enabled=False, project_id=3006b6c88845443ab13998bd660d02f7, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3300, status=DOWN, tags=[], tenant_id=3006b6c88845443ab13998bd660d02f7, updated_at=2025-12-06T10:22:56Z on network 04e62072-8d37-46d4-a112-c923d93098a9#033[00m
Dec 6 05:22:59 localhost dnsmasq[333826]: read /var/lib/neutron/dhcp/04e62072-8d37-46d4-a112-c923d93098a9/addn_hosts - 1 addresses
Dec 6 05:22:59 localhost dnsmasq-dhcp[333826]: read /var/lib/neutron/dhcp/04e62072-8d37-46d4-a112-c923d93098a9/host
Dec 6 05:22:59 localhost podman[333895]: 2025-12-06 10:22:59.158635188 +0000 UTC m=+0.067958188 container kill f0b3da8b2765503728b15ffd83b4b75c223326eadbfa4889f0b7b0340f71baba (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-04e62072-8d37-46d4-a112-c923d93098a9, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 6 05:22:59 localhost dnsmasq-dhcp[333826]: read /var/lib/neutron/dhcp/04e62072-8d37-46d4-a112-c923d93098a9/opts
Dec 6 05:22:59 localhost systemd[1]: tmp-crun.pr6b0Q.mount: Deactivated successfully.
Dec 6 05:22:59 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:22:59.383 263652 INFO neutron.agent.dhcp.agent [None req-7394efdf-8f2c-40d1-afbc-c55308d3573f - - - - - -] DHCP configuration for ports {'81d0b0ea-67b7-457e-bfbc-24d21f440b5e'} is completed#033[00m
Dec 6 05:22:59 localhost nova_compute[282193]: 2025-12-06 10:22:59.526 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 6 05:23:00 localhost nova_compute[282193]: 2025-12-06 10:23:00.374 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 6 05:23:02 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e210 e210: 6 total, 6 up, 6 in
Dec 6 05:23:03 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e210 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 6 05:23:03 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e211 e211: 6 total, 6 up, 6 in
Dec 6 05:23:04 localhost ovn_metadata_agent[160504]: 2025-12-06 10:23:04.300 160509 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private,
record=b142a5ef-fbed-4e92-aa78-e3ad080c6370, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 6 05:23:04 localhost nova_compute[282193]: 2025-12-06 10:23:04.529 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 6 05:23:05 localhost nova_compute[282193]: 2025-12-06 10:23:05.407 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 6 05:23:05 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e212 e212: 6 total, 6 up, 6 in
Dec 6 05:23:05 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 6 05:23:05 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr'
Dec 6 05:23:06 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e213 e213: 6 total, 6 up, 6 in
Dec 6 05:23:07 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr'
Dec 6 05:23:07 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e214 e214: 6 total, 6 up, 6 in
Dec 6 05:23:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.
Dec 6 05:23:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941.
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.917 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'name': 'test', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'np0005548789.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'hostId': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228 Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.918 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.924 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.incoming.packets volume: 60 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.928 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'f82f0d4c-e0f9-41f9-85bd-60caefc79060', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 60, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:23:07.918876', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '8ef6eaa6-d28d-11f0-aaf2-fa163e118844', 'monotonic_time': 13006.168246417, 'message_signature': '09f51530a368551a372290c68c3a874ef5e62b9eb39f75e57300c76a5df3f503'}]}, 'timestamp': '2025-12-06 10:23:07.925824', '_unique_id': '9ffe9a6c20df420aa47bb6dd9712cfa1'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.928 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:23:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.928 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.928 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.928 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.928 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.928 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.928 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.928 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.928 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.928 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.928 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.928 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.928 12 ERROR oslo_messaging.notify.messaging Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.928 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.928 12 ERROR oslo_messaging.notify.messaging Dec 6 05:23:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.928 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.928 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.928 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.928 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.928 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 
05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.928 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.928 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.928 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.928 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.928 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:23:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.928 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.928 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.928 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.928 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.928 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.928 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.928 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.928 12 ERROR oslo_messaging.notify.messaging Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:23:07.930 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.930 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.outgoing.packets volume: 114 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.932 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'f3e68bf2-4e8a-4183-a041-c30944a9c07a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 114, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:23:07.930455', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 
'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '8ef7bb8e-d28d-11f0-aaf2-fa163e118844', 'monotonic_time': 13006.168246417, 'message_signature': '838d1aeb92eee4594c569a032bce97b0984877edf0810a616254330c16bcd708'}]}, 'timestamp': '2025-12-06 10:23:07.931023', '_unique_id': 'c12ce357c7a04ec58221e2612b4c633d'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.932 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.932 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.932 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.932 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.932 12 ERROR oslo_messaging.notify.messaging self._connection 
= self._establish_connection() Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.932 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.932 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.932 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.932 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.932 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.932 12 ERROR oslo_messaging.notify.messaging 
ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.932 12 ERROR oslo_messaging.notify.messaging Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.932 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.932 12 ERROR oslo_messaging.notify.messaging Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.932 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.932 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.932 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.932 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.932 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.932 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.932 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.932 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.932 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.932 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.932 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.932 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.932 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.932 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.932 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.932 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:23:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.932 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.932 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.932 12 ERROR oslo_messaging.notify.messaging Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.933 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.933 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.935 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '383fd15e-4b38-4892-9cdd-dbe7de181f5a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:23:07.933463', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '8ef8308c-d28d-11f0-aaf2-fa163e118844', 'monotonic_time': 13006.168246417, 'message_signature': 'f2f8b58ed7393922d611df1e60eca428c0f75c63881293de9a68c02ba7358b8d'}]}, 'timestamp': '2025-12-06 10:23:07.934026', '_unique_id': '978e699060f749baabaa25e167baac68'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.935 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:23:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.935 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.935 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.935 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.935 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.935 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.935 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.935 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.935 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.935 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.935 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.935 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.935 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.935 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.935 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.935 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.935 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.935 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.935 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.935 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.935 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.935 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.935 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.935 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.935 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.935 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.935 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.935 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.935 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.935 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.935 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.935 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.935 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.935 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.935 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.935 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.935 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.935 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.935 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.935 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.935 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.935 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.935 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.935 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.935 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.935 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.935 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.935 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.935 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.935 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.935 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.935 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.935 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.935 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.936 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.936 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.937 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.938 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'c79b969f-1916-46c4-a823-dce5e870cb3f', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:23:07.937082', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '8ef8c04c-d28d-11f0-aaf2-fa163e118844', 'monotonic_time': 13006.168246417, 'message_signature': '672d796e9eec7334ba30755ef2ca2b992bcb9512c52564e67eb851a1c63e89c9'}]}, 'timestamp': '2025-12-06 10:23:07.937682', '_unique_id': '22eb05bdaafc4cfdae3f5ce564d28449'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.938 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.938 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.938 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.938 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.938 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.938 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.938 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.938 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.938 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.938 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.938 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.938 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.938 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.938 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.938 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.938 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.938 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.938 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.938 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.938 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.938 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.938 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.938 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.938 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.938 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.938 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.938 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.938 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.938 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.938 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.938 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.938 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.939 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 6 05:23:07 localhost systemd[1]: tmp-crun.yCnqke.mount: Deactivated successfully.
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.969 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.970 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.972 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '3d366e4b-8bfe-40a1-8832-5a9cdc4f359f', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:23:07.940129', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '8efdb61a-d28d-11f0-aaf2-fa163e118844', 'monotonic_time': 13006.189501077, 'message_signature': 'a8414a8b1f7007f4db6c24be89797b7b0a499cb894df1d753a89b2b3b7ccf63d'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:23:07.940129', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '8efdd104-d28d-11f0-aaf2-fa163e118844', 'monotonic_time': 13006.189501077, 'message_signature': '2d7d9d31ea133854bc93efbf6718ea7362f11cdbf8abbd91bc1b68bb470b54c1'}]}, 'timestamp': '2025-12-06 10:23:07.970945', '_unique_id': '063bce6128ab43f68d295d3c38dd91fa'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.972 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.972 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.972 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.972 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.972 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.972 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.972 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.972 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.972 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.972 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.972 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.972 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.972 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.972 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.972 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.972 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.972 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.972 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.972 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.972 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.972 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.972 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.972 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.972 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.972 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.972 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.972 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.972 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.972 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.972 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.972 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.972 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.972 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.972 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.972 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.972 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.972 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.972 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.972 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.972 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.972 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.972 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.972 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.972 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.972 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.972 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.972 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.972 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.972 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.972 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.972 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.972 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.972 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.972 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:23:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:07.974 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 6 05:23:08 localhost systemd[1]: tmp-crun.9aeuN5.mount: Deactivated successfully.
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.006 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/cpu volume: 18760000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.008 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '737260c0-bb81-4379-af82-03dda64d20b9', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 18760000000, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'timestamp': '2025-12-06T10:23:07.974482', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': '8f03521e-d28d-11f0-aaf2-fa163e118844', 'monotonic_time': 13006.255417662, 'message_signature': 'e40e45acc04b2c3333443193188ae8840e109526b1cc0dd61fa6d39ab468539f'}]}, 'timestamp': '2025-12-06 10:23:08.006998', '_unique_id': '2c03ef6504854236b8f3adbe689d57f3'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.008 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.008 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.008 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.008 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.008 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.008 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.008 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.008 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.008 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.008 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.008 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.008 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.008 12 ERROR
oslo_messaging.notify.messaging conn.connect() Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.008 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.008 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.008 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.008 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.008 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.008 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.008 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.008 12 ERROR oslo_messaging.notify.messaging Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.008 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.008 12 ERROR oslo_messaging.notify.messaging Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.008 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.008 
12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.008 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.008 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.008 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.008 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.008 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.008 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.008 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.008 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.008 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:23:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.008 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.008 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.008 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.008 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.008 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.008 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.008 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.008 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.008 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.008 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 
10:23:08.008 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.008 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.008 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.008 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.008 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.008 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.008 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.008 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.008 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.008 12 ERROR oslo_messaging.notify.messaging Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.009 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no new resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.010 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.010 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.010 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:23:08 localhost podman[334000]: 2025-12-06 10:23:08.012469043 +0000 UTC m=+0.162190138 container health_status 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', 
'/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3) Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.012 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '576d7336-8d2c-475d-aa51-52bd0d5568a9', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:23:08.010213', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': 
{'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '8f03e6a2-d28d-11f0-aaf2-fa163e118844', 'monotonic_time': 13006.189501077, 'message_signature': 'e56893d2e43d29a111482c905e5bb5c0d2ce3bdf2d231a7664f064ab03bca74b'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:23:08.010213', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '8f03faac-d28d-11f0-aaf2-fa163e118844', 'monotonic_time': 13006.189501077, 'message_signature': '964d36e68aa36d69b33e022a3b2392a4ca2a35435d806b2c511ab85125bf10c5'}]}, 'timestamp': '2025-12-06 10:23:08.011199', '_unique_id': '8aca2530269f42a78d5d18db51f6cc95'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.012 12 
ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.012 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.012 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.012 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.012 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.012 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:23:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.012 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.012 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.012 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.012 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.012 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.012 12 ERROR oslo_messaging.notify.messaging Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.012 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 
10:23:08.012 12 ERROR oslo_messaging.notify.messaging Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.012 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.012 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.012 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.012 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.012 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.012 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.012 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.012 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.012 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.012 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.012 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.012 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.012 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.012 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.012 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.012 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.012 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.012 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:23:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.012 12 ERROR oslo_messaging.notify.messaging Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.015 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters Dec 6 05:23:08 localhost podman[334000]: 2025-12-06 10:23:08.043009137 +0000 UTC m=+0.192730292 container exec_died 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, 
io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2) Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.046 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.latency volume: 1252245154 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.047 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.latency volume: 27668224 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.049 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'aa19168c-7921-4908-91d0-422df314a7ba', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1252245154, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:23:08.015438', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 
'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '8f097ee6-d28d-11f0-aaf2-fa163e118844', 'monotonic_time': 13006.264808099, 'message_signature': 'c693f3a6f6557c0d715bf27264da16d69848d60c8eb19dbc9f6311e24e1a385c'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 27668224, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:23:08.015438', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '8f099548-d28d-11f0-aaf2-fa163e118844', 'monotonic_time': 13006.264808099, 'message_signature': '36d92b9c0d2929bc3989d037d7ee91ac8b691af9a409f3d303750a871749699c'}]}, 'timestamp': '2025-12-06 10:23:08.047981', '_unique_id': '1dcc22773f204a0b92d0af6e36c25a0e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:23:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.049 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.049 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.049 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.049 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.049 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.049 12 ERROR oslo_messaging.notify.messaging conn = 
self.transport.establish_connection() Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.049 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.049 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.049 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.049 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.049 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.049 12 ERROR oslo_messaging.notify.messaging Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.049 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 
05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.049 12 ERROR oslo_messaging.notify.messaging Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.049 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.049 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.049 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.049 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.049 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.049 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.049 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.049 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.049 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.049 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.049 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.049 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.049 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.049 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.049 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.049 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.049 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.049 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.049 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:23:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.049 12 ERROR oslo_messaging.notify.messaging Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.051 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.051 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.053 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'ed021326-8368-4bdb-9252-dc2f1d8fe584', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:23:08.051485', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 
'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '8f0a3520-d28d-11f0-aaf2-fa163e118844', 'monotonic_time': 13006.168246417, 'message_signature': '87eee7626835890fabafbcde5b8e5cef25f1b4a7cb5c8194c5eb872d8b86578e'}]}, 'timestamp': '2025-12-06 10:23:08.052053', '_unique_id': 'bdc13ecd3ac44209a2923315e63842d2'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.053 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.053 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.053 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.053 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", 
line 877, in _connection_factory Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.053 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.053 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.053 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.053 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.053 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.053 12 ERROR 
oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.053 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.053 12 ERROR oslo_messaging.notify.messaging Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.053 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.053 12 ERROR oslo_messaging.notify.messaging Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.053 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.053 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.053 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.053 12 ERROR 
oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.053 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.053 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.053 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.053 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.053 12 ERROR 
oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.053 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.053 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.053 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.053 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.053 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:23:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.053 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.053 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.053 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.053 12 ERROR oslo_messaging.notify.messaging Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.054 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.055 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.incoming.bytes volume: 6809 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:23:08 localhost systemd[1]: 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.service: Deactivated successfully. Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.057 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'b5878c55-fd86-4858-bda5-2c9bec5cf421', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 6809, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:23:08.055156', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '8f0ac5e4-d28d-11f0-aaf2-fa163e118844', 'monotonic_time': 13006.168246417, 'message_signature': 'bd1176061e2b57a3b5767340e8e9ab3887d66ec1d72b36efe4ddde6c189e9318'}]}, 'timestamp': '2025-12-06 10:23:08.055903', '_unique_id': '76da1030f7fb49f7874c83c9d19fb505'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.057 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:23:08.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.057 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.057 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.057 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.057 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.057 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.057 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.057 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.057 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.057 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.057 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.057 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.057 12 ERROR oslo_messaging.notify.messaging Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.057 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.057 12 ERROR oslo_messaging.notify.messaging Dec 6 05:23:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.057 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.057 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.057 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.057 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.057 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.057 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.057 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.057 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.057 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.057 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.057 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.057 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.057 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.057 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.057 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.057 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.057 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.057 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.059 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.059 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/memory.usage volume: 51.80859375 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.061 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '1035dcc2-7252-4e26-b44e-4feee5767eed', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 51.80859375, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'timestamp': '2025-12-06T10:23:08.059306', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': '8f0b6756-d28d-11f0-aaf2-fa163e118844', 'monotonic_time': 13006.255417662, 'message_signature': 'e5ff10afebb3d5ececfb6187c63817d5489eccc6cd0e63b00aac46bf669e14be'}]}, 'timestamp': '2025-12-06 10:23:08.060003', '_unique_id': '5603f39792ec49d59d02e927a7953ea0'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.061 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.061 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.061 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.061 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.061 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.061 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.061 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.061 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.061 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.061 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.061 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.061 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.061 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.061 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.061 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.061 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.061 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.061 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.061 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.061 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.061 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.061 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.061 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.061 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.061 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.061 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.061 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.061 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.061 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.061 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.061 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.061 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.061 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.061 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.061 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.061 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.061 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.061 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.061 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.061 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.061 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.061 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.061 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.061 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.061 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.061 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.061 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.061 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.061 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.061 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.061 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.061 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.061 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.061 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.062 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.063 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:23:08 localhost podman[334001]: 2025-12-06 10:23:07.963230908 +0000 UTC m=+0.112288363 container health_status b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible)
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.064 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '49b8350e-cd34-47fa-9326-25649f576eb5', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:23:08.063077', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '8f0bfb80-d28d-11f0-aaf2-fa163e118844', 'monotonic_time': 13006.168246417, 'message_signature': '32c06a3d41abadd87f3264113076ef666872b56f7baea29bf28df99372b5b408'}]}, 'timestamp': '2025-12-06 10:23:08.063720', '_unique_id': '284ee9f6834f49f3a193b4a72ba5ffd1'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.064 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.064 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.064 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.064 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.064 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.064 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.064 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.064 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.064 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.064 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.064 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.064 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.064 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.064 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.064 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.064 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.064 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.064 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.064 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.064 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.064 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.064 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.064 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.064 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.064 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.064 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.064 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.064 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.064 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.064 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.064 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.064 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.064 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.064 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.064 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.064 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.064 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.064 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.064 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.064 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.064 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.064 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.064 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.064 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.064 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.064 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.064 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.064 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.064 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.064 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.064 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.064 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.064 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.064 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.066 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.066 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.068 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '1435c678-3829-4179-a635-b5df441d0e95', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:23:08.066918', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '8f0c922a-d28d-11f0-aaf2-fa163e118844', 'monotonic_time': 13006.168246417, 'message_signature': '322890b6114a18501c4ecd1a849dc3a8491b7ae29b273733864ef72d3f1ff31b'}]}, 'timestamp': '2025-12-06 10:23:08.067575', '_unique_id': 'fe1f3cc759894826a02eec1a685b4cf6'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.068 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.068 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.068 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.068 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.068 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.068 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.068 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.068 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.068 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.068 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.068 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.068 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.068 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.068 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.068 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.068 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.068 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.068 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.068 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.068 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.068 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.068 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.068 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]:
2025-12-06 10:23:08.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.068 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.068 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.068 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.068 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.068 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.068 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 
450, in _reraise_as_library_errors Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.068 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.068 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.068 12 ERROR oslo_messaging.notify.messaging Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.070 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.070 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.bytes volume: 389120 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.070 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.072 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '02c640c9-b624-40c8-aea2-6bd16cc0b01e', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 389120, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:23:08.070477', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '8f0d17d6-d28d-11f0-aaf2-fa163e118844', 'monotonic_time': 13006.264808099, 'message_signature': 'c84f0046849d978cec5d161b774e4779b64114eaf6d3a315e6d8ca2a5179408d'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:23:08.070477', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '8f0d244c-d28d-11f0-aaf2-fa163e118844', 'monotonic_time': 13006.264808099, 'message_signature': '3d2587307ce70d1958e353fd5c4608c12e3a9a664ce0581e5ec687ef6c5b5279'}]}, 'timestamp': '2025-12-06 10:23:08.071158', '_unique_id': '157de13fe64941a18a7f8efbce4878f4'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.072 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.072 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.072 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.072 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.072 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.072 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.072 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.072 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.072 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.072 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.072 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.072 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.072 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.072 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.072 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.072 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:23:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.072 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.072 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.072 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.072 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.072 12 ERROR oslo_messaging.notify.messaging Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.072 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.072 12 ERROR oslo_messaging.notify.messaging Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.072 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.072 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.072 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.072 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.072 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.072 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.072 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.072 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.072 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.072 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.072 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.072 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.072 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.072 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:23:08.072 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.072 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.072 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.072 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.072 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.072 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.072 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.072 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.072 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.072 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.072 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.072 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.072 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.072 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.072 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.072 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.072 12 ERROR oslo_messaging.notify.messaging Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.073 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.073 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.requests volume: 1272 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.074 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.075 12 ERROR oslo_messaging.notify.messaging [-] Could 
not send notification to notifications. Payload={'message_id': '5af9a935-ab1c-44b5-894c-1a8405d4c064', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1272, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:23:08.073843', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '8f0d98fa-d28d-11f0-aaf2-fa163e118844', 'monotonic_time': 13006.264808099, 'message_signature': 'aa228fce003a18b90360adb1bd20d45ab3915347387a42c4aa8249a644031d44'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 124, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:23:08.073843', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 
'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '8f0da372-d28d-11f0-aaf2-fa163e118844', 'monotonic_time': 13006.264808099, 'message_signature': '9babc7710011ef8ab0f6f7403d94cbbeed1d7fd4792cc243a2853836acf11c4e'}]}, 'timestamp': '2025-12-06 10:23:08.074406', '_unique_id': '6645be711743407a86f25e01b01929f8'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.075 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.075 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.075 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.075 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.075 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.075 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.075 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.075 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.075 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.075 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.075 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.075 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.075 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.075 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.075 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.075 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:23:08 
localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.075 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.075 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.075 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.075 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.075 12 ERROR oslo_messaging.notify.messaging Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.075 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.075 12 ERROR oslo_messaging.notify.messaging Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.075 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.075 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.075 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.075 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 
10:23:08.075 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.075 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.075 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.075 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.075 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.075 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.075 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.075 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.075 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.075 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.075 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.075 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.075 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.075 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.075 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.075 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.075 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.075 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.075 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.075 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.075 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.075 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.075 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.075 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.075 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.075 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.075 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.075 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.075 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.latency volume: 1525105336 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.076 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.latency volume: 106716064 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.077 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'e9fc86c6-0b32-49a5-a0b6-fcefcb94808b', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1525105336, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:23:08.075854', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '8f0deb2a-d28d-11f0-aaf2-fa163e118844', 'monotonic_time': 13006.264808099, 'message_signature': '8ddc03dfe4b1bc88c817983fc24891d0d7dabe0d38f65de9699855df1914938a'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 106716064, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:23:08.075854', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '8f0df5de-d28d-11f0-aaf2-fa163e118844', 'monotonic_time': 13006.264808099, 'message_signature': 'd925d75bd14a139cc0b6f28aba0314a3ae46cc772c7476309917c5abe4551eb7'}]}, 'timestamp': '2025-12-06 10:23:08.076529', '_unique_id': '6a4efd20a0f04a6ea46ef624e7694581'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.077 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.077 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.077 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.077 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.077 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.077 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.077 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.077 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.077 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.077 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.077 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.077 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.077 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.077 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.077 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.077 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.077 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.077 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.077 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.077 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.077 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.077 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.077 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.077 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.077 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.077 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.077 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.077 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.077 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.077 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.077 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.077 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.077 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.077 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.077 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.077 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.077 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.077 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.077 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.077 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.077 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.077 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.077 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.077 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.077 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.077 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.077 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.077 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.077 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.077 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.077 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.077 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.077 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.077 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.078 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.078 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.078 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.079 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '056741f8-e175-4bdd-a505-6bdb7150dce7', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:23:08.078279', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '8f0e46ce-d28d-11f0-aaf2-fa163e118844', 'monotonic_time': 13006.189501077, 'message_signature': '5218dd92a8e0fe453bfd765100d7e8045323fd20dff9d96434757c6ccda9de2e'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:23:08.078279', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '8f0e513c-d28d-11f0-aaf2-fa163e118844', 'monotonic_time': 13006.189501077, 'message_signature': 'a6ddbe7fb5a175918b27797ec35b3f93da3777b52c91d05dac508c3f22a2e20d'}]}, 'timestamp': '2025-12-06 10:23:08.078876', '_unique_id': '1b56811025aa479fb5b2242e1f237311'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.079 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.079 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.079 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.079 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.079 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.079 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.079 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.079 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.079 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.079 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.079 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.079 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.079 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.079 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.079 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.079 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.079 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.079 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.079 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.079 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.079 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.079 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.079 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.079 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.079 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.079 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.079 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.079 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.079 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.079 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.079 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.079 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.079 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.079 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.079 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.079 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.079 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.079 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.079 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.079 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.079 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.079 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.079 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.079 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.079 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.079 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.079 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.079 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.079 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.079 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.079 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.079 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.079 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.079 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.080 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.080 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.080 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.requests volume: 47 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.080 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.081 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '23fbd32b-4367-4a27-b272-ee44da223c4a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 47, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:23:08.080439', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '8f0e9a0c-d28d-11f0-aaf2-fa163e118844', 'monotonic_time': 13006.264808099, 'message_signature': '46749e6d8a0977a2e8675da4d937c81d6bfbfff293c2a545d799de16b1711c2e'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:23:08.080439', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '8f0ea524-d28d-11f0-aaf2-fa163e118844', 'monotonic_time': 13006.264808099, 'message_signature': 'ae93e0946201eb989278f60715978475b93abbbd99a70817ca5bf2b07e74ccbf'}]}, 'timestamp': '2025-12-06 10:23:08.081061', '_unique_id': 'd59bb563716e4a7489ce1fad2a105073'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.081 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.081 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.081 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.081 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.081 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.081 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.081 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.081 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.081 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.081 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.081 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.081 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.081 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]:
2025-12-06 10:23:08.081 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.081 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.081 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.081 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.081 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.081 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.081 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.081 12 ERROR oslo_messaging.notify.messaging Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.081 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.081 12 ERROR oslo_messaging.notify.messaging Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.081 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.081 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.081 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.081 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.081 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.081 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.081 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.081 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.081 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.081 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.081 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.081 12 
ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.081 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.081 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.081 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.081 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.081 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.081 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.081 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.081 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.081 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.081 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.081 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.081 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.081 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.081 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.081 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.081 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.081 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.081 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.081 12 ERROR oslo_messaging.notify.messaging Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.082 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.082 12 DEBUG ceilometer.compute.pollsters [-] 
b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.bytes volume: 35560448 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.083 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.bytes volume: 2154496 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.084 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'f1d3a79d-d086-4fc1-bc6b-7094b75d06ab', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 35560448, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:23:08.082841', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 
'8f0ef9de-d28d-11f0-aaf2-fa163e118844', 'monotonic_time': 13006.264808099, 'message_signature': 'e5524067f8706396eac904f6fa5fd468c9186693a131c079da27be7959f6f1ea'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 2154496, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:23:08.082841', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '8f0f069a-d28d-11f0-aaf2-fa163e118844', 'monotonic_time': 13006.264808099, 'message_signature': '9c2be2848569175687788fbd4cbadccc3aff02f1ff55159e72f4b10129ddc6e4'}]}, 'timestamp': '2025-12-06 10:23:08.083508', '_unique_id': 'c180f16e561141008dfe879cc48b1a0d'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.084 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.084 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in 
_reraise_as_library_errors Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.084 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.084 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.084 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.084 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.084 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.084 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.084 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.084 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.084 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.084 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.084 12 ERROR 
oslo_messaging.notify.messaging conn.connect() Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.084 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.084 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.084 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.084 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.084 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.084 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.084 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.084 12 ERROR oslo_messaging.notify.messaging Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.084 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.084 12 ERROR oslo_messaging.notify.messaging Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.084 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.084 
12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.084 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.084 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.084 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.084 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.084 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.084 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.084 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.084 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.084 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:23:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.084 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.084 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.084 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.084 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.084 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.084 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.084 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.084 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.084 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.084 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 
10:23:08.084 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.084 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.084 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.084 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.084 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.084 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.084 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.084 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.084 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.084 12 ERROR oslo_messaging.notify.messaging Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.084 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 
10:23:08.085 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.086 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '226dd84e-683c-4f89-8ba5-8123a5c4c183', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:23:08.085050', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '8f0f505a-d28d-11f0-aaf2-fa163e118844', 'monotonic_time': 13006.168246417, 'message_signature': 
'90bc878095e56e6f3c091e488b7b325b1daef7a2f147ee8e7c8b6e28dc229b85'}]}, 'timestamp': '2025-12-06 10:23:08.085437', '_unique_id': '9cd103c741a14e269ef96def07a6cf15'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.086 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.086 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.086 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.086 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.086 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.086 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.086 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.086 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.086 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.086 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.086 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.086 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.086 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.086 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.086 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.086 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.086 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.086 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.086 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.086 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.086 12 ERROR 
oslo_messaging.notify.messaging Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.086 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.086 12 ERROR oslo_messaging.notify.messaging Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.086 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.086 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.086 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.086 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.086 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.086 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.086 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.086 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:23:08.086 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.086 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.086 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.086 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.086 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.086 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.086 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.086 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.086 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.086 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:23:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.086 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.086 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.086 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.086 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.086 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.086 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.086 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.086 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.086 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.086 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.086 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) 
from exc Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.086 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.086 12 ERROR oslo_messaging.notify.messaging Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.086 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.086 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.087 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.outgoing.bytes volume: 9770 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.088 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '5edad3b9-aa59-4615-9420-3e6d44582889', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 9770, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:23:08.087000', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '8f0f9c04-d28d-11f0-aaf2-fa163e118844', 'monotonic_time': 13006.168246417, 'message_signature': '34eee62ef5d80e90af4b4a1837023e1a143a983c9dc3aed4e3cc6d75319560b8'}]}, 'timestamp': '2025-12-06 10:23:08.087344', '_unique_id': '2210163cadd748358e21a9b1a953cf65'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.088 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:23:08.088 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.088 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.088 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.088 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.088 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.088 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.088 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.088 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.088 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.088 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.088 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.088 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.088 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.088 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.088 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.088 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.088 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.088 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.088 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.088 12 ERROR oslo_messaging.notify.messaging Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.088 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.088 12 ERROR oslo_messaging.notify.messaging Dec 6 05:23:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.088 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.088 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.088 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.088 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.088 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.088 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.088 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.088 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.088 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.088 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 
05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.088 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.088 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.088 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.088 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.088 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.088 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.088 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.088 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.088 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.088 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:23:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.088 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.088 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.088 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.088 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.088 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.088 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.088 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.088 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.088 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.088 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:23:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:23:08.088 12 ERROR oslo_messaging.notify.messaging Dec 6 05:23:08 localhost podman[334001]: 2025-12-06 
10:23:08.09742299 +0000 UTC m=+0.246480395 container exec_died b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Dec 6 05:23:08 localhost systemd[1]: b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941.service: Deactivated successfully. 
Dec 6 05:23:08 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e214 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:23:08 localhost dnsmasq[333826]: read /var/lib/neutron/dhcp/04e62072-8d37-46d4-a112-c923d93098a9/addn_hosts - 0 addresses Dec 6 05:23:08 localhost podman[334056]: 2025-12-06 10:23:08.845522688 +0000 UTC m=+0.058333834 container kill f0b3da8b2765503728b15ffd83b4b75c223326eadbfa4889f0b7b0340f71baba (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-04e62072-8d37-46d4-a112-c923d93098a9, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3) Dec 6 05:23:08 localhost dnsmasq-dhcp[333826]: read /var/lib/neutron/dhcp/04e62072-8d37-46d4-a112-c923d93098a9/host Dec 6 05:23:08 localhost dnsmasq-dhcp[333826]: read /var/lib/neutron/dhcp/04e62072-8d37-46d4-a112-c923d93098a9/opts Dec 6 05:23:09 localhost nova_compute[282193]: 2025-12-06 10:23:09.051 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:23:09 localhost ovn_controller[154851]: 2025-12-06T10:23:09Z|00504|binding|INFO|Releasing lport ec870270-76fc-404f-9ac8-aae83a5c5051 from this chassis (sb_readonly=0) Dec 6 05:23:09 localhost ovn_controller[154851]: 2025-12-06T10:23:09Z|00505|binding|INFO|Setting lport ec870270-76fc-404f-9ac8-aae83a5c5051 down in Southbound Dec 6 05:23:09 localhost kernel: device tapec870270-76 left promiscuous mode Dec 6 05:23:09 localhost ovn_metadata_agent[160504]: 2025-12-06 10:23:09.063 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: 
PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-04e62072-8d37-46d4-a112-c923d93098a9', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-04e62072-8d37-46d4-a112-c923d93098a9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3006b6c88845443ab13998bd660d02f7', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005548789.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0dafa4fe-04f5-4502-a649-2a574bf9c45c, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=ec870270-76fc-404f-9ac8-aae83a5c5051) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:23:09 localhost ovn_metadata_agent[160504]: 2025-12-06 10:23:09.065 160509 INFO neutron.agent.ovn.metadata.agent [-] Port ec870270-76fc-404f-9ac8-aae83a5c5051 in datapath 04e62072-8d37-46d4-a112-c923d93098a9 unbound from our chassis#033[00m Dec 6 05:23:09 localhost ovn_metadata_agent[160504]: 2025-12-06 10:23:09.068 160509 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 04e62072-8d37-46d4-a112-c923d93098a9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 6 05:23:09 localhost ovn_metadata_agent[160504]: 
2025-12-06 10:23:09.069 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[e841b348-bf95-4537-9e1d-6f9462d15d4f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:23:09 localhost nova_compute[282193]: 2025-12-06 10:23:09.079 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:23:09 localhost nova_compute[282193]: 2025-12-06 10:23:09.576 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:23:10 localhost dnsmasq[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 1 addresses Dec 6 05:23:10 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:23:10 localhost podman[334095]: 2025-12-06 10:23:10.085735449 +0000 UTC m=+0.061360337 container kill dd70518caed84fde347015325d6b35481123abc4a39df99f8bc9422c2468322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Dec 6 05:23:10 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:23:10 localhost ovn_controller[154851]: 2025-12-06T10:23:10Z|00506|binding|INFO|Releasing lport 4fb81ffd-e198-4628-9bd0-0c0f0c89c33a from this chassis (sb_readonly=0) Dec 6 05:23:10 localhost nova_compute[282193]: 2025-12-06 10:23:10.337 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:23:10 localhost nova_compute[282193]: 2025-12-06 10:23:10.410 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:23:10 localhost dnsmasq[333826]: exiting on receipt of SIGTERM Dec 6 05:23:10 localhost podman[334133]: 2025-12-06 10:23:10.786662925 +0000 UTC m=+0.064145592 container kill f0b3da8b2765503728b15ffd83b4b75c223326eadbfa4889f0b7b0340f71baba (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-04e62072-8d37-46d4-a112-c923d93098a9, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Dec 6 05:23:10 localhost systemd[1]: libpod-f0b3da8b2765503728b15ffd83b4b75c223326eadbfa4889f0b7b0340f71baba.scope: Deactivated successfully. 
Dec 6 05:23:10 localhost podman[334145]: 2025-12-06 10:23:10.85226813 +0000 UTC m=+0.052728633 container died f0b3da8b2765503728b15ffd83b4b75c223326eadbfa4889f0b7b0340f71baba (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-04e62072-8d37-46d4-a112-c923d93098a9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true) Dec 6 05:23:10 localhost podman[334145]: 2025-12-06 10:23:10.94319645 +0000 UTC m=+0.143656913 container cleanup f0b3da8b2765503728b15ffd83b4b75c223326eadbfa4889f0b7b0340f71baba (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-04e62072-8d37-46d4-a112-c923d93098a9, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:23:10 localhost systemd[1]: libpod-conmon-f0b3da8b2765503728b15ffd83b4b75c223326eadbfa4889f0b7b0340f71baba.scope: Deactivated successfully. 
Dec 6 05:23:10 localhost podman[334152]: 2025-12-06 10:23:10.966670367 +0000 UTC m=+0.155164403 container remove f0b3da8b2765503728b15ffd83b4b75c223326eadbfa4889f0b7b0340f71baba (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-04e62072-8d37-46d4-a112-c923d93098a9, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:23:11 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:23:11.002 263652 INFO neutron.agent.dhcp.agent [None req-454f3b01-48e3-4f7d-ae05-98866e14b70c - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:23:11 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:23:11.003 263652 INFO neutron.agent.dhcp.agent [None req-454f3b01-48e3-4f7d-ae05-98866e14b70c - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:23:11 localhost systemd[1]: var-lib-containers-storage-overlay-3e17231f4b67de947b4d429d81a6f4334b12776ef1dd2b29f4fb039abcc7bc52-merged.mount: Deactivated successfully. Dec 6 05:23:11 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f0b3da8b2765503728b15ffd83b4b75c223326eadbfa4889f0b7b0340f71baba-userdata-shm.mount: Deactivated successfully. Dec 6 05:23:11 localhost systemd[1]: run-netns-qdhcp\x2d04e62072\x2d8d37\x2d46d4\x2da112\x2dc923d93098a9.mount: Deactivated successfully. 
Dec 6 05:23:11 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e215 e215: 6 total, 6 up, 6 in Dec 6 05:23:12 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e216 e216: 6 total, 6 up, 6 in Dec 6 05:23:13 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:23:14 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:23:14.527 263652 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:23:14Z, description=, device_id=2b5489f4-b8c8-4de2-9a14-bc27f555fe1d, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=d9db14b8-6eec-454e-9bf9-05f99df47bdd, ip_allocation=immediate, mac_address=fa:16:3e:c6:34:ab, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T08:43:55Z, description=, dns_domain=, id=8e238f59-5792-4ff4-95af-f993c8e9e14f, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=3d603431c0bb4967bafc7a0aa6108bfe, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['be7ff830-a3e2-49cd-a05f-63bc528d6d1c'], tags=[], tenant_id=3d603431c0bb4967bafc7a0aa6108bfe, updated_at=2025-12-06T08:44:01Z, vlan_transparent=None, network_id=8e238f59-5792-4ff4-95af-f993c8e9e14f, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3389, status=DOWN, tags=[], 
tenant_id=, updated_at=2025-12-06T10:23:14Z on network 8e238f59-5792-4ff4-95af-f993c8e9e14f#033[00m Dec 6 05:23:14 localhost nova_compute[282193]: 2025-12-06 10:23:14.578 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:23:14 localhost dnsmasq[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 2 addresses Dec 6 05:23:14 localhost podman[334189]: 2025-12-06 10:23:14.775331382 +0000 UTC m=+0.067615029 container kill dd70518caed84fde347015325d6b35481123abc4a39df99f8bc9422c2468322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3) Dec 6 05:23:14 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:23:14 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:23:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e. Dec 6 05:23:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094. Dec 6 05:23:14 localhost systemd[1]: tmp-crun.zAwduk.mount: Deactivated successfully. 
Dec 6 05:23:14 localhost podman[334202]: 2025-12-06 10:23:14.894554485 +0000 UTC m=+0.083779022 container health_status 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, config_id=edpm, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, vendor=Red Hat, Inc., io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, vcs-type=git, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, version=9.6, name=ubi9-minimal, release=1755695350, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses 
microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc.) Dec 6 05:23:14 localhost podman[334202]: 2025-12-06 10:23:14.901042243 +0000 UTC m=+0.090266810 container exec_died 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, name=ubi9-minimal, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, distribution-scope=public, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) 
Dec 6 05:23:14 localhost systemd[1]: 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e.service: Deactivated successfully. Dec 6 05:23:14 localhost podman[334203]: 2025-12-06 10:23:14.9401901 +0000 UTC m=+0.122120943 container health_status bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Dec 6 
05:23:14 localhost podman[334203]: 2025-12-06 10:23:14.975134579 +0000 UTC m=+0.157065412 container exec_died bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS) Dec 6 05:23:14 localhost systemd[1]: bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094.service: Deactivated successfully. 
Dec 6 05:23:15 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:23:15.026 263652 INFO neutron.agent.dhcp.agent [None req-e80e3ecc-32d2-430e-9ed7-0cedd695fc20 - - - - - -] DHCP configuration for ports {'d9db14b8-6eec-454e-9bf9-05f99df47bdd'} is completed#033[00m Dec 6 05:23:15 localhost nova_compute[282193]: 2025-12-06 10:23:15.404 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:23:15 localhost nova_compute[282193]: 2025-12-06 10:23:15.411 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:23:16 localhost openstack_network_exporter[243110]: ERROR 10:23:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:23:16 localhost openstack_network_exporter[243110]: ERROR 10:23:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:23:16 localhost openstack_network_exporter[243110]: ERROR 10:23:16 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 6 05:23:16 localhost openstack_network_exporter[243110]: Dec 6 05:23:16 localhost openstack_network_exporter[243110]: ERROR 10:23:16 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 6 05:23:16 localhost openstack_network_exporter[243110]: ERROR 10:23:16 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 6 05:23:16 localhost openstack_network_exporter[243110]: Dec 6 05:23:17 localhost nova_compute[282193]: 2025-12-06 10:23:17.639 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:23:18 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 
full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:23:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6. Dec 6 05:23:18 localhost systemd[1]: tmp-crun.dStZFj.mount: Deactivated successfully. Dec 6 05:23:18 localhost podman[334246]: 2025-12-06 10:23:18.927490277 +0000 UTC m=+0.088511817 container health_status 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', 
'/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:23:18 localhost podman[334246]: 2025-12-06 10:23:18.945234189 +0000 UTC m=+0.106255729 container exec_died 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251125, managed_by=edpm_ansible, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true) Dec 6 05:23:18 localhost systemd[1]: 
44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6.service: Deactivated successfully. Dec 6 05:23:19 localhost nova_compute[282193]: 2025-12-06 10:23:19.619 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:23:19 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e217 e217: 6 total, 6 up, 6 in Dec 6 05:23:20 localhost nova_compute[282193]: 2025-12-06 10:23:20.414 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:23:20 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e218 e218: 6 total, 6 up, 6 in Dec 6 05:23:20 localhost sshd[334265]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:23:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538. Dec 6 05:23:22 localhost podman[334267]: 2025-12-06 10:23:22.428416793 +0000 UTC m=+0.100806853 container health_status d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', 
'--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Dec 6 05:23:22 localhost podman[334267]: 2025-12-06 10:23:22.441094341 +0000 UTC m=+0.113484411 container exec_died d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Dec 6 
05:23:22 localhost systemd[1]: d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538.service: Deactivated successfully. Dec 6 05:23:23 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:23:23 localhost podman[241090]: time="2025-12-06T10:23:23Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 6 05:23:23 localhost podman[241090]: @ - - [06/Dec/2025:10:23:23 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156104 "" "Go-http-client/1.1" Dec 6 05:23:23 localhost podman[241090]: @ - - [06/Dec/2025:10:23:23 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19269 "" "Go-http-client/1.1" Dec 6 05:23:24 localhost nova_compute[282193]: 2025-12-06 10:23:24.663 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:23:25 localhost nova_compute[282193]: 2025-12-06 10:23:25.416 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:23:27 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e219 e219: 6 total, 6 up, 6 in Dec 6 05:23:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5. 
Dec 6 05:23:27 localhost podman[334291]: 2025-12-06 10:23:27.95592055 +0000 UTC m=+0.107626461 container health_status 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true) Dec 6 05:23:28 localhost podman[334291]: 2025-12-06 10:23:28.020906956 +0000 UTC m=+0.172612797 container exec_died 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, 
managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Dec 6 05:23:28 localhost systemd[1]: 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.service: Deactivated successfully. 
Dec 6 05:23:28 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:23:29 localhost nova_compute[282193]: 2025-12-06 10:23:29.666 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:23:30 localhost nova_compute[282193]: 2025-12-06 10:23:30.419 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:23:31 localhost nova_compute[282193]: 2025-12-06 10:23:31.181 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:23:31 localhost nova_compute[282193]: 2025-12-06 10:23:31.182 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m Dec 6 05:23:31 localhost nova_compute[282193]: 2025-12-06 10:23:31.199 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m Dec 6 05:23:33 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:23:34 localhost nova_compute[282193]: 2025-12-06 10:23:34.701 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:23:34 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command 
mon_command({"prefix":"df", "format":"json"} v 0) Dec 6 05:23:34 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/685384975' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Dec 6 05:23:34 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Dec 6 05:23:34 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/685384975' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Dec 6 05:23:35 localhost nova_compute[282193]: 2025-12-06 10:23:35.421 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:23:38 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:23:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999. Dec 6 05:23:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941. 
Dec 6 05:23:38 localhost podman[334315]: 2025-12-06 10:23:38.935605788 +0000 UTC m=+0.093504369 container health_status 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:23:38 localhost podman[334316]: 2025-12-06 10:23:38.99357311 +0000 UTC 
m=+0.145504079 container health_status b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Dec 6 05:23:39 localhost podman[334316]: 2025-12-06 10:23:39.005284608 +0000 UTC m=+0.157215597 container exec_died b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, 
container_name=podman_exporter, maintainer=Navid Yaghoobi ) Dec 6 05:23:39 localhost systemd[1]: b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941.service: Deactivated successfully. Dec 6 05:23:39 localhost podman[334315]: 2025-12-06 10:23:39.021908476 +0000 UTC m=+0.179807057 container exec_died 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Dec 6 05:23:39 localhost systemd[1]: 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.service: Deactivated successfully. Dec 6 05:23:39 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Dec 6 05:23:39 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3601030272' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Dec 6 05:23:39 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Dec 6 05:23:39 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3601030272' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Dec 6 05:23:39 localhost nova_compute[282193]: 2025-12-06 10:23:39.200 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:23:39 localhost nova_compute[282193]: 2025-12-06 10:23:39.221 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 05:23:39 localhost nova_compute[282193]: 2025-12-06 10:23:39.221 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" acquired by 
"nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 05:23:39 localhost nova_compute[282193]: 2025-12-06 10:23:39.222 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 05:23:39 localhost nova_compute[282193]: 2025-12-06 10:23:39.222 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Auditing locally available compute resources for np0005548789.localdomain (node: np0005548789.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 6 05:23:39 localhost nova_compute[282193]: 2025-12-06 10:23:39.223 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 6 05:23:39 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e220 e220: 6 total, 6 up, 6 in Dec 6 05:23:39 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 6 05:23:39 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.107:0/860284658' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 6 05:23:39 localhost nova_compute[282193]: 2025-12-06 10:23:39.702 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:23:39 localhost nova_compute[282193]: 2025-12-06 10:23:39.710 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 6 05:23:39 localhost nova_compute[282193]: 2025-12-06 10:23:39.779 282197 DEBUG nova.virt.libvirt.driver [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 6 05:23:39 localhost nova_compute[282193]: 2025-12-06 10:23:39.779 282197 DEBUG nova.virt.libvirt.driver [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 6 05:23:39 localhost nova_compute[282193]: 2025-12-06 10:23:39.970 282197 WARNING nova.virt.libvirt.driver [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 6 05:23:39 localhost nova_compute[282193]: 2025-12-06 10:23:39.972 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Hypervisor/Node resource view: name=np0005548789.localdomain free_ram=11187MB free_disk=41.83699035644531GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": 
"1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 6 05:23:39 localhost nova_compute[282193]: 2025-12-06 10:23:39.972 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 05:23:39 localhost nova_compute[282193]: 2025-12-06 10:23:39.972 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 05:23:40 localhost nova_compute[282193]: 2025-12-06 10:23:40.278 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Instance b7ed0a2e-9350-4933-9334-4e5e08d3e6aa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 6 05:23:40 localhost nova_compute[282193]: 2025-12-06 10:23:40.279 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 6 05:23:40 localhost nova_compute[282193]: 2025-12-06 10:23:40.279 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Final resource view: name=np0005548789.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 6 05:23:40 localhost nova_compute[282193]: 2025-12-06 10:23:40.346 282197 DEBUG nova.scheduler.client.report [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Refreshing inventories for resource provider 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Dec 6 05:23:40 localhost nova_compute[282193]: 2025-12-06 10:23:40.423 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:23:40 localhost nova_compute[282193]: 2025-12-06 10:23:40.455 282197 DEBUG nova.scheduler.client.report [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Updating ProviderTree inventory for provider 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 
41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Dec 6 05:23:40 localhost nova_compute[282193]: 2025-12-06 10:23:40.455 282197 DEBUG nova.compute.provider_tree [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Updating inventory in ProviderTree for provider 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Dec 6 05:23:40 localhost nova_compute[282193]: 2025-12-06 10:23:40.486 282197 DEBUG nova.scheduler.client.report [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Refreshing aggregate associations for resource provider 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Dec 6 05:23:40 localhost nova_compute[282193]: 2025-12-06 10:23:40.507 282197 DEBUG nova.scheduler.client.report [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Refreshing trait associations for resource provider 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad, traits: 
HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_FDC,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_RESCUE_BFV,HW_CPU_X86_AVX2,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SHA,HW_CPU_X86_BMI2,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_CLMUL,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSSE3,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_AVX,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE4A,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AMD_SVM,HW_CPU_X86_FMA3,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_F16C,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_ABM,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_VIRTIO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Dec 6 05:23:40 localhost nova_compute[282193]: 2025-12-06 10:23:40.542 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 6 05:23:40 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e221 e221: 6 total, 6 up, 6 in Dec 6 05:23:40 localhost 
ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 6 05:23:40 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/2359987376' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 6 05:23:41 localhost nova_compute[282193]: 2025-12-06 10:23:41.012 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 6 05:23:41 localhost nova_compute[282193]: 2025-12-06 10:23:41.018 282197 DEBUG nova.compute.provider_tree [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Inventory has not changed in ProviderTree for provider: 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 6 05:23:41 localhost nova_compute[282193]: 2025-12-06 10:23:41.037 282197 DEBUG nova.scheduler.client.report [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Inventory has not changed for provider 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 6 05:23:41 localhost nova_compute[282193]: 2025-12-06 10:23:41.039 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Compute_service record updated for np0005548789.localdomain:np0005548789.localdomain 
_update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 6 05:23:41 localhost nova_compute[282193]: 2025-12-06 10:23:41.040 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.067s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 05:23:41 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e222 e222: 6 total, 6 up, 6 in Dec 6 05:23:42 localhost nova_compute[282193]: 2025-12-06 10:23:42.016 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:23:42 localhost nova_compute[282193]: 2025-12-06 10:23:42.017 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:23:42 localhost nova_compute[282193]: 2025-12-06 10:23:42.017 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 6 05:23:42 localhost nova_compute[282193]: 2025-12-06 10:23:42.018 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 6 05:23:42 localhost nova_compute[282193]: 2025-12-06 10:23:42.138 282197 DEBUG oslo_concurrency.lockutils [None 
req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 6 05:23:42 localhost nova_compute[282193]: 2025-12-06 10:23:42.139 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquired lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 6 05:23:42 localhost nova_compute[282193]: 2025-12-06 10:23:42.139 282197 DEBUG nova.network.neutron [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 6 05:23:42 localhost nova_compute[282193]: 2025-12-06 10:23:42.139 282197 DEBUG nova.objects.instance [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b7ed0a2e-9350-4933-9334-4e5e08d3e6aa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 6 05:23:42 localhost nova_compute[282193]: 2025-12-06 10:23:42.557 282197 DEBUG nova.network.neutron [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Updating instance_info_cache with network_info: [{"id": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "address": "fa:16:3e:64:77:f3", "network": {"id": "652b6bdc-40ce-45b7-8aa5-3bca79987993", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.162", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "3d603431c0bb4967bafc7a0aa6108bfe", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86fc0b7a-fb", "ovs_interfaceid": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 6 05:23:42 localhost nova_compute[282193]: 2025-12-06 10:23:42.581 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Releasing lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 6 05:23:42 localhost nova_compute[282193]: 2025-12-06 10:23:42.582 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 6 05:23:42 localhost sshd[334400]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:23:43 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e222 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:23:43 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e223 e223: 6 total, 6 up, 6 in Dec 6 05:23:44 localhost nova_compute[282193]: 2025-12-06 10:23:44.181 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks 
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:23:44 localhost nova_compute[282193]: 2025-12-06 10:23:44.182 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:23:44 localhost nova_compute[282193]: 2025-12-06 10:23:44.183 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:23:44 localhost nova_compute[282193]: 2025-12-06 10:23:44.183 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:23:44 localhost nova_compute[282193]: 2025-12-06 10:23:44.184 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 6 05:23:44 localhost nova_compute[282193]: 2025-12-06 10:23:44.751 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:23:45 localhost nova_compute[282193]: 2025-12-06 10:23:45.426 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:23:45 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:23:45.511 263652 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:23:45Z, description=, device_id=c9e741a0-1e78-4ba5-9ba8-789872d3aa4a, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=08370a59-6bd8-4ee2-99a3-75cdfb1fe917, ip_allocation=immediate, mac_address=fa:16:3e:35:f6:22, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T08:43:55Z, description=, dns_domain=, id=8e238f59-5792-4ff4-95af-f993c8e9e14f, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=3d603431c0bb4967bafc7a0aa6108bfe, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['be7ff830-a3e2-49cd-a05f-63bc528d6d1c'], tags=[], tenant_id=3d603431c0bb4967bafc7a0aa6108bfe, updated_at=2025-12-06T08:44:01Z, vlan_transparent=None, network_id=8e238f59-5792-4ff4-95af-f993c8e9e14f, port_security_enabled=False, project_id=, qos_network_policy_id=None, 
qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3497, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-06T10:23:45Z on network 8e238f59-5792-4ff4-95af-f993c8e9e14f#033[00m Dec 6 05:23:45 localhost dnsmasq[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 3 addresses Dec 6 05:23:45 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:23:45 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:23:45 localhost podman[334418]: 2025-12-06 10:23:45.762928755 +0000 UTC m=+0.061220582 container kill dd70518caed84fde347015325d6b35481123abc4a39df99f8bc9422c2468322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Dec 6 05:23:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e. Dec 6 05:23:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094. 
Dec 6 05:23:45 localhost podman[334432]: 2025-12-06 10:23:45.883837131 +0000 UTC m=+0.094898242 container health_status 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vendor=Red Hat, Inc., config_id=edpm, distribution-scope=public, io.buildah.version=1.33.7, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, maintainer=Red Hat, Inc., release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git) Dec 6 05:23:45 localhost podman[334432]: 2025-12-06 10:23:45.923205505 +0000 UTC m=+0.134266626 container exec_died 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.tags=minimal rhel9, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, managed_by=edpm_ansible, release=1755695350, container_name=openstack_network_exporter, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, architecture=x86_64, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Dec 6 05:23:45 localhost systemd[1]: tmp-crun.F2963D.mount: Deactivated successfully. Dec 6 05:23:45 localhost systemd[1]: 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e.service: Deactivated successfully. 
Dec 6 05:23:45 localhost podman[334434]: 2025-12-06 10:23:45.947822997 +0000 UTC m=+0.154766891 container health_status bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Dec 6 05:23:45 localhost podman[334434]: 2025-12-06 10:23:45.963208418 +0000 UTC m=+0.170152372 container exec_died 
bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Dec 6 05:23:45 localhost systemd[1]: bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094.service: Deactivated successfully. 
Dec 6 05:23:46 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:23:46.127 263652 INFO neutron.agent.dhcp.agent [None req-88eff2e5-fd08-4e50-8cf8-6ae1d862819f - - - - - -] DHCP configuration for ports {'08370a59-6bd8-4ee2-99a3-75cdfb1fe917'} is completed#033[00m Dec 6 05:23:46 localhost nova_compute[282193]: 2025-12-06 10:23:46.568 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:23:46 localhost openstack_network_exporter[243110]: ERROR 10:23:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:23:46 localhost openstack_network_exporter[243110]: ERROR 10:23:46 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 6 05:23:46 localhost openstack_network_exporter[243110]: ERROR 10:23:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:23:46 localhost openstack_network_exporter[243110]: ERROR 10:23:46 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 6 05:23:46 localhost openstack_network_exporter[243110]: Dec 6 05:23:46 localhost openstack_network_exporter[243110]: ERROR 10:23:46 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 6 05:23:46 localhost openstack_network_exporter[243110]: Dec 6 05:23:47 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e224 e224: 6 total, 6 up, 6 in Dec 6 05:23:47 localhost nova_compute[282193]: 2025-12-06 10:23:47.184 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:23:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:23:47.341 160509 DEBUG oslo_concurrency.lockutils [-] Acquiring 
lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 05:23:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:23:47.341 160509 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 05:23:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:23:47.342 160509 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 05:23:47 localhost sshd[334476]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:23:47 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e225 e225: 6 total, 6 up, 6 in Dec 6 05:23:48 localhost nova_compute[282193]: 2025-12-06 10:23:48.182 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:23:48 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e225 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:23:48 localhost nova_compute[282193]: 2025-12-06 10:23:48.976 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:23:49 localhost nova_compute[282193]: 2025-12-06 10:23:49.181 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task 
ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:23:49 localhost ovn_metadata_agent[160504]: 2025-12-06 10:23:49.653 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=19, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:6c:02', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:a8:2f:0c:cb:a1'}, ipsec=False) old=SB_Global(nb_cfg=18) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:23:49 localhost ovn_metadata_agent[160504]: 2025-12-06 10:23:49.654 160509 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Dec 6 05:23:49 localhost nova_compute[282193]: 2025-12-06 10:23:49.685 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:23:49 localhost nova_compute[282193]: 2025-12-06 10:23:49.762 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:23:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6. 
Dec 6 05:23:49 localhost podman[334477]: 2025-12-06 10:23:49.927812038 +0000 UTC m=+0.091307083 container health_status 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Dec 6 05:23:49 localhost podman[334477]: 2025-12-06 10:23:49.965746877 +0000 UTC m=+0.129241902 container exec_died 
44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125) Dec 6 05:23:49 localhost systemd[1]: 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6.service: Deactivated successfully. 
Dec 6 05:23:50 localhost nova_compute[282193]: 2025-12-06 10:23:50.428 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:23:51 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e226 e226: 6 total, 6 up, 6 in Dec 6 05:23:52 localhost nova_compute[282193]: 2025-12-06 10:23:52.195 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:23:52 localhost nova_compute[282193]: 2025-12-06 10:23:52.195 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m Dec 6 05:23:52 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e227 e227: 6 total, 6 up, 6 in Dec 6 05:23:52 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e228 e228: 6 total, 6 up, 6 in Dec 6 05:23:52 localhost ovn_metadata_agent[160504]: 2025-12-06 10:23:52.656 160509 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b142a5ef-fbed-4e92-aa78-e3ad080c6370, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '19'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 6 05:23:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538. 
Dec 6 05:23:52 localhost podman[334494]: 2025-12-06 10:23:52.912008772 +0000 UTC m=+0.074304902 container health_status d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 6 05:23:52 localhost podman[334494]: 2025-12-06 10:23:52.925136324 +0000 UTC m=+0.087432474 container exec_died d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Dec 6 05:23:52 localhost systemd[1]: d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538.service: Deactivated successfully. Dec 6 05:23:53 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Dec 6 05:23:53 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3355742361' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Dec 6 05:23:53 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Dec 6 05:23:53 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/3355742361' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Dec 6 05:23:53 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:23:53 localhost podman[241090]: time="2025-12-06T10:23:53Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 6 05:23:53 localhost podman[241090]: @ - - [06/Dec/2025:10:23:53 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156104 "" "Go-http-client/1.1" Dec 6 05:23:53 localhost podman[241090]: @ - - [06/Dec/2025:10:23:53 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19273 "" "Go-http-client/1.1" Dec 6 05:23:54 localhost nova_compute[282193]: 2025-12-06 10:23:54.807 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:23:55 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e229 e229: 6 total, 6 up, 6 in Dec 6 05:23:55 localhost nova_compute[282193]: 2025-12-06 10:23:55.429 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:23:57 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e230 e230: 6 total, 6 up, 6 in Dec 6 05:23:58 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Dec 6 05:23:58 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/2773610337' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Dec 6 05:23:58 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Dec 6 05:23:58 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2773610337' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Dec 6 05:23:58 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e230 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:23:58 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e231 e231: 6 total, 6 up, 6 in Dec 6 05:23:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5. Dec 6 05:23:58 localhost systemd[1]: tmp-crun.DZDjcn.mount: Deactivated successfully. 
Dec 6 05:23:58 localhost podman[334516]: 2025-12-06 10:23:58.926954868 +0000 UTC m=+0.092316693 container health_status 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true) Dec 6 05:23:58 localhost podman[334516]: 2025-12-06 10:23:58.96462259 +0000 UTC m=+0.129984445 container exec_died 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Dec 6 05:23:58 localhost systemd[1]: 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.service: Deactivated successfully. Dec 6 05:23:59 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e232 e232: 6 total, 6 up, 6 in Dec 6 05:23:59 localhost nova_compute[282193]: 2025-12-06 10:23:59.809 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:24:00 localhost nova_compute[282193]: 2025-12-06 10:24:00.432 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:24:01 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Dec 6 05:24:01 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/573079347' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Dec 6 05:24:02 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e233 e233: 6 total, 6 up, 6 in Dec 6 05:24:02 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Dec 6 05:24:02 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1694106485' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Dec 6 05:24:02 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Dec 6 05:24:02 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1694106485' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Dec 6 05:24:03 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e233 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:24:03 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e234 e234: 6 total, 6 up, 6 in Dec 6 05:24:04 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e235 e235: 6 total, 6 up, 6 in Dec 6 05:24:04 localhost nova_compute[282193]: 2025-12-06 10:24:04.813 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:24:05 localhost ovn_controller[154851]: 2025-12-06T10:24:05Z|00507|binding|INFO|Releasing lport 4fb81ffd-e198-4628-9bd0-0c0f0c89c33a from this chassis (sb_readonly=0) Dec 6 05:24:05 localhost nova_compute[282193]: 2025-12-06 10:24:05.409 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:24:05 localhost systemd[1]: tmp-crun.2xdW9r.mount: Deactivated 
successfully. Dec 6 05:24:05 localhost podman[334557]: 2025-12-06 10:24:05.412193739 +0000 UTC m=+0.075914881 container kill dd70518caed84fde347015325d6b35481123abc4a39df99f8bc9422c2468322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:24:05 localhost dnsmasq[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 2 addresses Dec 6 05:24:05 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:24:05 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:24:05 localhost nova_compute[282193]: 2025-12-06 10:24:05.434 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:24:06 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' Dec 6 05:24:06 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' Dec 6 05:24:06 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' Dec 6 05:24:06 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' Dec 6 05:24:06 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' Dec 6 05:24:06 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' Dec 6 05:24:07 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e236 e236: 6 total, 6 up, 6 in Dec 6 05:24:07 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e237 e237: 6 
total, 6 up, 6 in Dec 6 05:24:08 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:24:08 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 6 05:24:08 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' Dec 6 05:24:09 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e238 e238: 6 total, 6 up, 6 in Dec 6 05:24:09 localhost nova_compute[282193]: 2025-12-06 10:24:09.840 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:24:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999. Dec 6 05:24:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941. 
Dec 6 05:24:09 localhost podman[334721]: 2025-12-06 10:24:09.962469162 +0000 UTC m=+0.097875122 container health_status b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 6 05:24:09 localhost podman[334721]: 2025-12-06 10:24:09.972951313 +0000 UTC m=+0.108357283 container exec_died b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': 
['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Dec 6 05:24:09 localhost systemd[1]: b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941.service: Deactivated successfully. Dec 6 05:24:10 localhost systemd[1]: tmp-crun.ZPGAlS.mount: Deactivated successfully. Dec 6 05:24:10 localhost podman[334720]: 2025-12-06 10:24:10.063323545 +0000 UTC m=+0.198630662 container health_status 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS) Dec 6 05:24:10 localhost podman[334720]: 2025-12-06 10:24:10.094826409 +0000 UTC m=+0.230133516 container exec_died 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, 
tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Dec 6 05:24:10 localhost systemd[1]: 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.service: Deactivated successfully. Dec 6 05:24:10 localhost nova_compute[282193]: 2025-12-06 10:24:10.437 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:24:11 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e239 e239: 6 total, 6 up, 6 in Dec 6 05:24:12 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' Dec 6 05:24:12 localhost neutron_sriov_agent[256690]: 2025-12-06 10:24:12.599 2 INFO neutron.agent.securitygroups_rpc [None req-63e1cfe1-0be8-4c17-9453-907c82bfa210 b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] Security group rule updated ['d407968b-b8de-45cd-a244-3bf62d3c0357']#033[00m Dec 6 05:24:12 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e240 e240: 6 total, 6 up, 6 in Dec 6 05:24:13 localhost neutron_sriov_agent[256690]: 2025-12-06 10:24:13.350 2 INFO neutron.agent.securitygroups_rpc [None req-9218de2a-a054-45fc-bd21-8f037be37a59 b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] Security group rule updated ['d407968b-b8de-45cd-a244-3bf62d3c0357']#033[00m Dec 6 05:24:13 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:24:14 localhost nova_compute[282193]: 2025-12-06 10:24:14.843 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:24:15 localhost nova_compute[282193]: 2025-12-06 10:24:15.439 282197 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:24:15 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e241 e241: 6 total, 6 up, 6 in Dec 6 05:24:16 localhost openstack_network_exporter[243110]: ERROR 10:24:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:24:16 localhost openstack_network_exporter[243110]: ERROR 10:24:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:24:16 localhost openstack_network_exporter[243110]: ERROR 10:24:16 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 6 05:24:16 localhost openstack_network_exporter[243110]: ERROR 10:24:16 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 6 05:24:16 localhost openstack_network_exporter[243110]: Dec 6 05:24:16 localhost openstack_network_exporter[243110]: ERROR 10:24:16 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 6 05:24:16 localhost openstack_network_exporter[243110]: Dec 6 05:24:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e. Dec 6 05:24:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094. 
Dec 6 05:24:16 localhost podman[334762]: 2025-12-06 10:24:16.922668553 +0000 UTC m=+0.080050058 container health_status bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true) Dec 6 05:24:16 localhost podman[334762]: 2025-12-06 10:24:16.96147235 +0000 UTC m=+0.118853895 container exec_died 
bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3) Dec 6 05:24:16 localhost systemd[1]: bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094.service: Deactivated successfully. 
Dec 6 05:24:16 localhost podman[334761]: 2025-12-06 10:24:16.979091988 +0000 UTC m=+0.139934189 container health_status 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, maintainer=Red Hat, Inc., io.openshift.expose-services=, release=1755695350, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_id=edpm, io.openshift.tags=minimal rhel9, vcs-type=git, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, version=9.6, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container) Dec 6 05:24:16 localhost podman[334761]: 2025-12-06 10:24:16.989267589 +0000 UTC m=+0.150109770 container exec_died 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, description=The Universal 
Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, distribution-scope=public, architecture=x86_64, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=) Dec 6 05:24:17 localhost systemd[1]: 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e.service: Deactivated successfully. Dec 6 05:24:17 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e242 e242: 6 total, 6 up, 6 in Dec 6 05:24:18 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:24:19 localhost neutron_sriov_agent[256690]: 2025-12-06 10:24:19.082 2 INFO neutron.agent.securitygroups_rpc [req-6cf9c8b9-f82f-4729-827a-87ee94dc739b req-d70b880e-b383-4ad3-91f4-06f3e667f577 b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] Security group member updated ['d407968b-b8de-45cd-a244-3bf62d3c0357']#033[00m Dec 6 05:24:19 localhost nova_compute[282193]: 2025-12-06 10:24:19.890 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:24:20 localhost nova_compute[282193]: 2025-12-06 10:24:20.441 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:24:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6. 
Dec 6 05:24:20 localhost podman[334801]: 2025-12-06 10:24:20.923383958 +0000 UTC m=+0.085733871 container health_status 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd) Dec 6 05:24:20 localhost podman[334801]: 2025-12-06 10:24:20.961139302 +0000 UTC m=+0.123489225 container exec_died 
44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd) Dec 6 05:24:20 localhost systemd[1]: 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6.service: Deactivated successfully. 
Dec 6 05:24:22 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e243 e243: 6 total, 6 up, 6 in Dec 6 05:24:23 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:24:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538. Dec 6 05:24:23 localhost podman[241090]: time="2025-12-06T10:24:23Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 6 05:24:23 localhost podman[334821]: 2025-12-06 10:24:23.920102691 +0000 UTC m=+0.080983407 container health_status d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', 
'/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Dec 6 05:24:23 localhost podman[241090]: @ - - [06/Dec/2025:10:24:23 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156104 "" "Go-http-client/1.1" Dec 6 05:24:24 localhost podman[334821]: 2025-12-06 10:24:24.004199391 +0000 UTC m=+0.165080047 container exec_died d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Dec 6 05:24:24 localhost systemd[1]: d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538.service: Deactivated 
successfully. Dec 6 05:24:24 localhost podman[241090]: @ - - [06/Dec/2025:10:24:23 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19274 "" "Go-http-client/1.1" Dec 6 05:24:24 localhost nova_compute[282193]: 2025-12-06 10:24:24.935 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:24:25 localhost nova_compute[282193]: 2025-12-06 10:24:25.442 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:24:25 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:24:25.516 263652 INFO neutron.agent.linux.ip_lib [None req-cb670ead-bb02-4e2a-ad39-dac9c22fbd01 - - - - - -] Device tape3bcd567-2d cannot be used as it has no MAC address#033[00m Dec 6 05:24:25 localhost nova_compute[282193]: 2025-12-06 10:24:25.543 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:24:25 localhost kernel: device tape3bcd567-2d entered promiscuous mode Dec 6 05:24:25 localhost NetworkManager[5973]: [1765016665.5533] manager: (tape3bcd567-2d): new Generic device (/org/freedesktop/NetworkManager/Devices/81) Dec 6 05:24:25 localhost ovn_controller[154851]: 2025-12-06T10:24:25Z|00508|binding|INFO|Claiming lport e3bcd567-2db3-4d72-9cdb-24e14598df57 for this chassis. Dec 6 05:24:25 localhost ovn_controller[154851]: 2025-12-06T10:24:25Z|00509|binding|INFO|e3bcd567-2db3-4d72-9cdb-24e14598df57: Claiming unknown Dec 6 05:24:25 localhost nova_compute[282193]: 2025-12-06 10:24:25.554 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:24:25 localhost systemd-udevd[334854]: Network interface NamePolicy= disabled on kernel command line. 
Dec 6 05:24:25 localhost ovn_metadata_agent[160504]: 2025-12-06 10:24:25.564 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-67f05a6c-bdca-4c59-9049-edf7ed03aad0', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-67f05a6c-bdca-4c59-9049-edf7ed03aad0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0e98feac0e5947229c2baa6fc34be5fb', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8812a56d-06ed-4c18-98f2-54c75645cf8d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=e3bcd567-2db3-4d72-9cdb-24e14598df57) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:24:25 localhost ovn_metadata_agent[160504]: 2025-12-06 10:24:25.566 160509 INFO neutron.agent.ovn.metadata.agent [-] Port e3bcd567-2db3-4d72-9cdb-24e14598df57 in datapath 67f05a6c-bdca-4c59-9049-edf7ed03aad0 bound to our chassis#033[00m Dec 6 05:24:25 localhost ovn_metadata_agent[160504]: 2025-12-06 10:24:25.567 160509 DEBUG neutron.agent.ovn.metadata.agent [-] Port 7322d78b-d00d-4ce7-9ccc-685a4924e7d6 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Dec 6 05:24:25 localhost ovn_metadata_agent[160504]: 2025-12-06 10:24:25.567 160509 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 67f05a6c-bdca-4c59-9049-edf7ed03aad0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 6 05:24:25 localhost ovn_metadata_agent[160504]: 2025-12-06 10:24:25.568 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[a85cdc77-6d3e-4ce8-bf59-370f44e629c2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:24:25 localhost journal[230404]: ethtool ioctl error on tape3bcd567-2d: No such device Dec 6 05:24:25 localhost ovn_controller[154851]: 2025-12-06T10:24:25Z|00510|binding|INFO|Setting lport e3bcd567-2db3-4d72-9cdb-24e14598df57 ovn-installed in OVS Dec 6 05:24:25 localhost ovn_controller[154851]: 2025-12-06T10:24:25Z|00511|binding|INFO|Setting lport e3bcd567-2db3-4d72-9cdb-24e14598df57 up in Southbound Dec 6 05:24:25 localhost journal[230404]: ethtool ioctl error on tape3bcd567-2d: No such device Dec 6 05:24:25 localhost nova_compute[282193]: 2025-12-06 10:24:25.592 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:24:25 localhost journal[230404]: ethtool ioctl error on tape3bcd567-2d: No such device Dec 6 05:24:25 localhost journal[230404]: ethtool ioctl error on tape3bcd567-2d: No such device Dec 6 05:24:25 localhost journal[230404]: ethtool ioctl error on tape3bcd567-2d: No such device Dec 6 05:24:25 localhost journal[230404]: ethtool ioctl error on tape3bcd567-2d: No such device Dec 6 05:24:25 localhost journal[230404]: ethtool ioctl error on tape3bcd567-2d: No such device Dec 6 05:24:25 localhost journal[230404]: ethtool ioctl error on tape3bcd567-2d: No such device Dec 6 05:24:25 
localhost nova_compute[282193]: 2025-12-06 10:24:25.635 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:24:25 localhost nova_compute[282193]: 2025-12-06 10:24:25.662 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:24:25 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:24:25.850 263652 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:24:25Z, description=, device_id=7f27da2c-b860-4e59-a4ae-32a003c36a27, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=e4fc5ebd-720f-4d2a-ba2c-8ce2c6466147, ip_allocation=immediate, mac_address=fa:16:3e:94:89:c4, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T08:43:55Z, description=, dns_domain=, id=8e238f59-5792-4ff4-95af-f993c8e9e14f, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=3d603431c0bb4967bafc7a0aa6108bfe, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['be7ff830-a3e2-49cd-a05f-63bc528d6d1c'], tags=[], tenant_id=3d603431c0bb4967bafc7a0aa6108bfe, updated_at=2025-12-06T08:44:01Z, vlan_transparent=None, network_id=8e238f59-5792-4ff4-95af-f993c8e9e14f, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3554, status=DOWN, 
tags=[], tenant_id=, updated_at=2025-12-06T10:24:25Z on network 8e238f59-5792-4ff4-95af-f993c8e9e14f#033[00m Dec 6 05:24:26 localhost podman[334911]: 2025-12-06 10:24:26.09719384 +0000 UTC m=+0.061878752 container kill dd70518caed84fde347015325d6b35481123abc4a39df99f8bc9422c2468322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125) Dec 6 05:24:26 localhost dnsmasq[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 3 addresses Dec 6 05:24:26 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:24:26 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:24:26 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:24:26.404 263652 INFO neutron.agent.dhcp.agent [None req-d9acfb43-d0a8-47cf-baea-a62c7bd2849b - - - - - -] DHCP configuration for ports {'e4fc5ebd-720f-4d2a-ba2c-8ce2c6466147'} is completed#033[00m Dec 6 05:24:26 localhost nova_compute[282193]: 2025-12-06 10:24:26.554 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:24:26 localhost podman[334964]: Dec 6 05:24:26 localhost podman[334964]: 2025-12-06 10:24:26.618955069 +0000 UTC m=+0.060293953 container create ae534041aa594f21eccc75296424685ccea09523ed4cad3d4ba6b6620b5f4196 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-67f05a6c-bdca-4c59-9049-edf7ed03aad0, tcib_managed=true, 
io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:24:26 localhost systemd[1]: Started libpod-conmon-ae534041aa594f21eccc75296424685ccea09523ed4cad3d4ba6b6620b5f4196.scope. Dec 6 05:24:26 localhost systemd[1]: Started libcrun container. Dec 6 05:24:26 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b70f3ccd35a58f837d6cbb67993db9373957054834ea3aae99c93b78afe6bc66/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 6 05:24:26 localhost podman[334964]: 2025-12-06 10:24:26.678446128 +0000 UTC m=+0.119785012 container init ae534041aa594f21eccc75296424685ccea09523ed4cad3d4ba6b6620b5f4196 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-67f05a6c-bdca-4c59-9049-edf7ed03aad0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125) Dec 6 05:24:26 localhost podman[334964]: 2025-12-06 10:24:26.586269241 +0000 UTC m=+0.027608185 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 6 05:24:26 localhost podman[334964]: 2025-12-06 10:24:26.688830275 +0000 UTC m=+0.130169189 container start ae534041aa594f21eccc75296424685ccea09523ed4cad3d4ba6b6620b5f4196 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-67f05a6c-bdca-4c59-9049-edf7ed03aad0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS 
Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0) Dec 6 05:24:26 localhost dnsmasq[334982]: started, version 2.85 cachesize 150 Dec 6 05:24:26 localhost dnsmasq[334982]: DNS service limited to local subnets Dec 6 05:24:26 localhost dnsmasq[334982]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 6 05:24:26 localhost dnsmasq[334982]: warning: no upstream servers configured Dec 6 05:24:26 localhost dnsmasq-dhcp[334982]: DHCP, static leases only on 10.100.0.0, lease time 1d Dec 6 05:24:26 localhost dnsmasq[334982]: read /var/lib/neutron/dhcp/67f05a6c-bdca-4c59-9049-edf7ed03aad0/addn_hosts - 0 addresses Dec 6 05:24:26 localhost dnsmasq-dhcp[334982]: read /var/lib/neutron/dhcp/67f05a6c-bdca-4c59-9049-edf7ed03aad0/host Dec 6 05:24:26 localhost dnsmasq-dhcp[334982]: read /var/lib/neutron/dhcp/67f05a6c-bdca-4c59-9049-edf7ed03aad0/opts Dec 6 05:24:26 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:24:26.853 263652 INFO neutron.agent.dhcp.agent [None req-e782a239-46cb-4db6-9f7f-8bba64220849 - - - - - -] DHCP configuration for ports {'01afa461-016c-41d6-8fdd-371c7b3fb32f'} is completed#033[00m Dec 6 05:24:27 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:24:27.175 263652 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:24:27Z, description=, device_id=7f27da2c-b860-4e59-a4ae-32a003c36a27, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=40a72c0c-c9b6-4998-aa5c-35369f742816, 
ip_allocation=immediate, mac_address=fa:16:3e:a2:d6:41, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:24:23Z, description=, dns_domain=, id=67f05a6c-bdca-4c59-9049-edf7ed03aad0, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-TelemetryAlarmingNegativeTest-437399641-network, port_security_enabled=True, project_id=0e98feac0e5947229c2baa6fc34be5fb, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=24335, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=3547, status=ACTIVE, subnets=['e63ebf19-6289-435c-ac51-9391b3e6813a'], tags=[], tenant_id=0e98feac0e5947229c2baa6fc34be5fb, updated_at=2025-12-06T10:24:23Z, vlan_transparent=None, network_id=67f05a6c-bdca-4c59-9049-edf7ed03aad0, port_security_enabled=False, project_id=0e98feac0e5947229c2baa6fc34be5fb, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3555, status=DOWN, tags=[], tenant_id=0e98feac0e5947229c2baa6fc34be5fb, updated_at=2025-12-06T10:24:27Z on network 67f05a6c-bdca-4c59-9049-edf7ed03aad0#033[00m Dec 6 05:24:27 localhost dnsmasq[334982]: read /var/lib/neutron/dhcp/67f05a6c-bdca-4c59-9049-edf7ed03aad0/addn_hosts - 1 addresses Dec 6 05:24:27 localhost dnsmasq-dhcp[334982]: read /var/lib/neutron/dhcp/67f05a6c-bdca-4c59-9049-edf7ed03aad0/host Dec 6 05:24:27 localhost dnsmasq-dhcp[334982]: read /var/lib/neutron/dhcp/67f05a6c-bdca-4c59-9049-edf7ed03aad0/opts Dec 6 05:24:27 localhost podman[335000]: 2025-12-06 10:24:27.39497229 +0000 UTC m=+0.044941384 container kill ae534041aa594f21eccc75296424685ccea09523ed4cad3d4ba6b6620b5f4196 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-67f05a6c-bdca-4c59-9049-edf7ed03aad0, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true) Dec 6 05:24:27 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:24:27.610 263652 INFO neutron.agent.dhcp.agent [None req-7af9c5d0-c064-43f9-a4d0-629c62eae4a6 - - - - - -] DHCP configuration for ports {'40a72c0c-c9b6-4998-aa5c-35369f742816'} is completed#033[00m Dec 6 05:24:28 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:24:28.190 263652 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:24:27Z, description=, device_id=7f27da2c-b860-4e59-a4ae-32a003c36a27, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=40a72c0c-c9b6-4998-aa5c-35369f742816, ip_allocation=immediate, mac_address=fa:16:3e:a2:d6:41, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:24:23Z, description=, dns_domain=, id=67f05a6c-bdca-4c59-9049-edf7ed03aad0, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-TelemetryAlarmingNegativeTest-437399641-network, port_security_enabled=True, project_id=0e98feac0e5947229c2baa6fc34be5fb, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=24335, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=3547, status=ACTIVE, subnets=['e63ebf19-6289-435c-ac51-9391b3e6813a'], tags=[], tenant_id=0e98feac0e5947229c2baa6fc34be5fb, updated_at=2025-12-06T10:24:23Z, vlan_transparent=None, network_id=67f05a6c-bdca-4c59-9049-edf7ed03aad0, 
port_security_enabled=False, project_id=0e98feac0e5947229c2baa6fc34be5fb, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3555, status=DOWN, tags=[], tenant_id=0e98feac0e5947229c2baa6fc34be5fb, updated_at=2025-12-06T10:24:27Z on network 67f05a6c-bdca-4c59-9049-edf7ed03aad0#033[00m Dec 6 05:24:28 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:24:28 localhost dnsmasq[334982]: read /var/lib/neutron/dhcp/67f05a6c-bdca-4c59-9049-edf7ed03aad0/addn_hosts - 1 addresses Dec 6 05:24:28 localhost dnsmasq-dhcp[334982]: read /var/lib/neutron/dhcp/67f05a6c-bdca-4c59-9049-edf7ed03aad0/host Dec 6 05:24:28 localhost dnsmasq-dhcp[334982]: read /var/lib/neutron/dhcp/67f05a6c-bdca-4c59-9049-edf7ed03aad0/opts Dec 6 05:24:28 localhost podman[335037]: 2025-12-06 10:24:28.396315561 +0000 UTC m=+0.062127980 container kill ae534041aa594f21eccc75296424685ccea09523ed4cad3d4ba6b6620b5f4196 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-67f05a6c-bdca-4c59-9049-edf7ed03aad0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:24:28 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:24:28.667 263652 INFO neutron.agent.dhcp.agent [None req-55c32fdd-31e2-4be6-add9-457d8d03fafb - - - - - -] DHCP configuration for ports {'40a72c0c-c9b6-4998-aa5c-35369f742816'} is completed#033[00m Dec 6 05:24:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5. 
Dec 6 05:24:29 localhost podman[335059]: 2025-12-06 10:24:29.924242246 +0000 UTC m=+0.084943267 container health_status 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Dec 6 05:24:29 localhost nova_compute[282193]: 2025-12-06 10:24:29.973 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:24:29 localhost podman[335059]: 2025-12-06 10:24:29.988640634 +0000 UTC m=+0.149341655 container exec_died 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, 
tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible) Dec 6 05:24:30 localhost systemd[1]: 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.service: Deactivated successfully. Dec 6 05:24:30 localhost nova_compute[282193]: 2025-12-06 10:24:30.444 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:24:30 localhost ceph-mon[298582]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #55. Immutable memtables: 0. 
Dec 6 05:24:30 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:24:30.677825) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Dec 6 05:24:30 localhost ceph-mon[298582]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 55 Dec 6 05:24:30 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016670677942, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 2837, "num_deletes": 279, "total_data_size": 5386268, "memory_usage": 5462552, "flush_reason": "Manual Compaction"} Dec 6 05:24:30 localhost ceph-mon[298582]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #56: started Dec 6 05:24:30 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016670698044, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 56, "file_size": 3502146, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 30608, "largest_seqno": 33440, "table_properties": {"data_size": 3490518, "index_size": 7557, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3013, "raw_key_size": 26999, "raw_average_key_size": 22, "raw_value_size": 3466567, "raw_average_value_size": 2917, "num_data_blocks": 316, "num_entries": 1188, "num_filter_entries": 1188, "num_deletions": 279, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; 
max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765016537, "oldest_key_time": 1765016537, "file_creation_time": 1765016670, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1ec2a3a6-c7d0-4f1e-9923-5a3b782725ae", "db_session_id": "JMBO5KX1IJCJ8FWC64EX", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}} Dec 6 05:24:30 localhost ceph-mon[298582]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 20293 microseconds, and 8994 cpu microseconds. Dec 6 05:24:30 localhost ceph-mon[298582]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Dec 6 05:24:30 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:24:30.698119) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #56: 3502146 bytes OK Dec 6 05:24:30 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:24:30.698161) [db/memtable_list.cc:519] [default] Level-0 commit table #56 started Dec 6 05:24:30 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:24:30.700387) [db/memtable_list.cc:722] [default] Level-0 commit table #56: memtable #1 done Dec 6 05:24:30 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:24:30.700418) EVENT_LOG_v1 {"time_micros": 1765016670700408, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Dec 6 05:24:30 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:24:30.700448) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Dec 6 05:24:30 localhost ceph-mon[298582]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 5373078, prev total WAL file size 
5373078, number of live WAL files 2. Dec 6 05:24:30 localhost ceph-mon[298582]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548789/store.db/000052.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 6 05:24:30 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:24:30.702132) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003132353530' seq:72057594037927935, type:22 .. '7061786F73003132383032' seq:0, type:0; will stop at (end) Dec 6 05:24:30 localhost ceph-mon[298582]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00 Dec 6 05:24:30 localhost ceph-mon[298582]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [56(3420KB)], [54(16MB)] Dec 6 05:24:30 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016670702221, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [56], "files_L6": [54], "score": -1, "input_data_size": 21166149, "oldest_snapshot_seqno": -1} Dec 6 05:24:30 localhost ceph-mon[298582]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #57: 13564 keys, 19524991 bytes, temperature: kUnknown Dec 6 05:24:30 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016670810795, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 57, "file_size": 19524991, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 19448197, "index_size": 41813, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 33925, "raw_key_size": 364920, "raw_average_key_size": 26, "raw_value_size": 19218181, 
"raw_average_value_size": 1416, "num_data_blocks": 1551, "num_entries": 13564, "num_filter_entries": 13564, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765015623, "oldest_key_time": 0, "file_creation_time": 1765016670, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1ec2a3a6-c7d0-4f1e-9923-5a3b782725ae", "db_session_id": "JMBO5KX1IJCJ8FWC64EX", "orig_file_number": 57, "seqno_to_time_mapping": "N/A"}} Dec 6 05:24:30 localhost ceph-mon[298582]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Dec 6 05:24:30 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:24:30.811171) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 19524991 bytes Dec 6 05:24:30 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:24:30.813253) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 194.8 rd, 179.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 16.8 +0.0 blob) out(18.6 +0.0 blob), read-write-amplify(11.6) write-amplify(5.6) OK, records in: 14128, records dropped: 564 output_compression: NoCompression Dec 6 05:24:30 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:24:30.813284) EVENT_LOG_v1 {"time_micros": 1765016670813270, "job": 32, "event": "compaction_finished", "compaction_time_micros": 108670, "compaction_time_cpu_micros": 58244, "output_level": 6, "num_output_files": 1, "total_output_size": 19524991, "num_input_records": 14128, "num_output_records": 13564, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Dec 6 05:24:30 localhost ceph-mon[298582]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548789/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 6 05:24:30 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016670813899, "job": 32, "event": "table_file_deletion", "file_number": 56} Dec 6 05:24:30 localhost ceph-mon[298582]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548789/store.db/000054.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 6 05:24:30 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016670817168, "job": 
32, "event": "table_file_deletion", "file_number": 54} Dec 6 05:24:30 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:24:30.701885) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:24:30 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:24:30.817234) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:24:30 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:24:30.817241) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:24:30 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:24:30.817244) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:24:30 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:24:30.817248) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:24:30 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:24:30.817250) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:24:30 localhost dnsmasq[334982]: read /var/lib/neutron/dhcp/67f05a6c-bdca-4c59-9049-edf7ed03aad0/addn_hosts - 0 addresses Dec 6 05:24:30 localhost dnsmasq-dhcp[334982]: read /var/lib/neutron/dhcp/67f05a6c-bdca-4c59-9049-edf7ed03aad0/host Dec 6 05:24:30 localhost dnsmasq-dhcp[334982]: read /var/lib/neutron/dhcp/67f05a6c-bdca-4c59-9049-edf7ed03aad0/opts Dec 6 05:24:30 localhost podman[335099]: 2025-12-06 10:24:30.897838747 +0000 UTC m=+0.063019508 container kill ae534041aa594f21eccc75296424685ccea09523ed4cad3d4ba6b6620b5f4196 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-67f05a6c-bdca-4c59-9049-edf7ed03aad0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125) Dec 6 05:24:31 localhost nova_compute[282193]: 2025-12-06 10:24:31.133 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:24:31 localhost kernel: device tape3bcd567-2d left promiscuous mode Dec 6 05:24:31 localhost ovn_controller[154851]: 2025-12-06T10:24:31Z|00512|binding|INFO|Releasing lport e3bcd567-2db3-4d72-9cdb-24e14598df57 from this chassis (sb_readonly=0) Dec 6 05:24:31 localhost ovn_controller[154851]: 2025-12-06T10:24:31Z|00513|binding|INFO|Setting lport e3bcd567-2db3-4d72-9cdb-24e14598df57 down in Southbound Dec 6 05:24:31 localhost ovn_metadata_agent[160504]: 2025-12-06 10:24:31.149 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-67f05a6c-bdca-4c59-9049-edf7ed03aad0', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-67f05a6c-bdca-4c59-9049-edf7ed03aad0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0e98feac0e5947229c2baa6fc34be5fb', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005548789.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], 
datapath=8812a56d-06ed-4c18-98f2-54c75645cf8d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=e3bcd567-2db3-4d72-9cdb-24e14598df57) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:24:31 localhost ovn_metadata_agent[160504]: 2025-12-06 10:24:31.151 160509 INFO neutron.agent.ovn.metadata.agent [-] Port e3bcd567-2db3-4d72-9cdb-24e14598df57 in datapath 67f05a6c-bdca-4c59-9049-edf7ed03aad0 unbound from our chassis#033[00m Dec 6 05:24:31 localhost ovn_metadata_agent[160504]: 2025-12-06 10:24:31.154 160509 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 67f05a6c-bdca-4c59-9049-edf7ed03aad0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 6 05:24:31 localhost ovn_metadata_agent[160504]: 2025-12-06 10:24:31.154 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[c01ceaf8-2f99-4cbf-924c-a652ddb424e3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:24:31 localhost nova_compute[282193]: 2025-12-06 10:24:31.158 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:24:31 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch Dec 6 05:24:31 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/82abd4b2-157a-49c5-b0f6-995ee895ebc0/2859c761-f712-4409-870b-5e31b0c42ab8", "osd", "allow rw pool=manila_data namespace=fsvolumens_82abd4b2-157a-49c5-b0f6-995ee895ebc0", "mon", "allow r"], "format": "json"} : 
dispatch Dec 6 05:24:31 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/82abd4b2-157a-49c5-b0f6-995ee895ebc0/2859c761-f712-4409-870b-5e31b0c42ab8", "osd", "allow rw pool=manila_data namespace=fsvolumens_82abd4b2-157a-49c5-b0f6-995ee895ebc0", "mon", "allow r"], "format": "json"} : dispatch Dec 6 05:24:31 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/82abd4b2-157a-49c5-b0f6-995ee895ebc0/2859c761-f712-4409-870b-5e31b0c42ab8", "osd", "allow rw pool=manila_data namespace=fsvolumens_82abd4b2-157a-49c5-b0f6-995ee895ebc0", "mon", "allow r"], "format": "json"}]': finished Dec 6 05:24:32 localhost dnsmasq[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 2 addresses Dec 6 05:24:32 localhost podman[335137]: 2025-12-06 10:24:32.454613315 +0000 UTC m=+0.066572085 container kill dd70518caed84fde347015325d6b35481123abc4a39df99f8bc9422c2468322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Dec 6 05:24:32 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:24:32 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:24:32 localhost ovn_controller[154851]: 2025-12-06T10:24:32Z|00514|binding|INFO|Releasing lport 
4fb81ffd-e198-4628-9bd0-0c0f0c89c33a from this chassis (sb_readonly=0) Dec 6 05:24:32 localhost nova_compute[282193]: 2025-12-06 10:24:32.601 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:24:33 localhost podman[335174]: 2025-12-06 10:24:33.108881825 +0000 UTC m=+0.073735935 container kill ae534041aa594f21eccc75296424685ccea09523ed4cad3d4ba6b6620b5f4196 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-67f05a6c-bdca-4c59-9049-edf7ed03aad0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125) Dec 6 05:24:33 localhost dnsmasq[334982]: exiting on receipt of SIGTERM Dec 6 05:24:33 localhost systemd[1]: tmp-crun.xuFWvg.mount: Deactivated successfully. Dec 6 05:24:33 localhost systemd[1]: libpod-ae534041aa594f21eccc75296424685ccea09523ed4cad3d4ba6b6620b5f4196.scope: Deactivated successfully. 
Dec 6 05:24:33 localhost podman[335188]: 2025-12-06 10:24:33.186610181 +0000 UTC m=+0.060587383 container died ae534041aa594f21eccc75296424685ccea09523ed4cad3d4ba6b6620b5f4196 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-67f05a6c-bdca-4c59-9049-edf7ed03aad0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:24:33 localhost podman[335188]: 2025-12-06 10:24:33.220819286 +0000 UTC m=+0.094796438 container cleanup ae534041aa594f21eccc75296424685ccea09523ed4cad3d4ba6b6620b5f4196 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-67f05a6c-bdca-4c59-9049-edf7ed03aad0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Dec 6 05:24:33 localhost systemd[1]: libpod-conmon-ae534041aa594f21eccc75296424685ccea09523ed4cad3d4ba6b6620b5f4196.scope: Deactivated successfully. 
Dec 6 05:24:33 localhost podman[335190]: 2025-12-06 10:24:33.26837762 +0000 UTC m=+0.134513993 container remove ae534041aa594f21eccc75296424685ccea09523ed4cad3d4ba6b6620b5f4196 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-67f05a6c-bdca-4c59-9049-edf7ed03aad0, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:24:33 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:24:33.297 263652 INFO neutron.agent.dhcp.agent [None req-6072cad9-bf5b-4be9-9cac-a499c2717921 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:24:33 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:24:33.298 263652 INFO neutron.agent.dhcp.agent [None req-6072cad9-bf5b-4be9-9cac-a499c2717921 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:24:33 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:24:33 localhost systemd[1]: tmp-crun.WdbBJC.mount: Deactivated successfully. Dec 6 05:24:33 localhost systemd[1]: var-lib-containers-storage-overlay-b70f3ccd35a58f837d6cbb67993db9373957054834ea3aae99c93b78afe6bc66-merged.mount: Deactivated successfully. Dec 6 05:24:33 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ae534041aa594f21eccc75296424685ccea09523ed4cad3d4ba6b6620b5f4196-userdata-shm.mount: Deactivated successfully. Dec 6 05:24:33 localhost systemd[1]: run-netns-qdhcp\x2d67f05a6c\x2dbdca\x2d4c59\x2d9049\x2dedf7ed03aad0.mount: Deactivated successfully. 
Dec 6 05:24:35 localhost nova_compute[282193]: 2025-12-06 10:24:35.012 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:24:35 localhost nova_compute[282193]: 2025-12-06 10:24:35.446 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:24:37 localhost ceph-mon[298582]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #58. Immutable memtables: 0. Dec 6 05:24:37 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:24:37.665445) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Dec 6 05:24:37 localhost ceph-mon[298582]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 58 Dec 6 05:24:37 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016677665486, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 365, "num_deletes": 257, "total_data_size": 141375, "memory_usage": 148168, "flush_reason": "Manual Compaction"} Dec 6 05:24:37 localhost ceph-mon[298582]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #59: started Dec 6 05:24:37 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016677667873, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 59, "file_size": 91458, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 33445, "largest_seqno": 33805, "table_properties": {"data_size": 89299, "index_size": 270, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 5639, "raw_average_key_size": 18, 
"raw_value_size": 84804, "raw_average_value_size": 272, "num_data_blocks": 12, "num_entries": 311, "num_filter_entries": 311, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765016671, "oldest_key_time": 1765016671, "file_creation_time": 1765016677, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1ec2a3a6-c7d0-4f1e-9923-5a3b782725ae", "db_session_id": "JMBO5KX1IJCJ8FWC64EX", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}} Dec 6 05:24:37 localhost ceph-mon[298582]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 2452 microseconds, and 729 cpu microseconds. Dec 6 05:24:37 localhost ceph-mon[298582]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Dec 6 05:24:37 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:24:37.667897) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #59: 91458 bytes OK Dec 6 05:24:37 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:24:37.667918) [db/memtable_list.cc:519] [default] Level-0 commit table #59 started Dec 6 05:24:37 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:24:37.669374) [db/memtable_list.cc:722] [default] Level-0 commit table #59: memtable #1 done Dec 6 05:24:37 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:24:37.669469) EVENT_LOG_v1 {"time_micros": 1765016677669460, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Dec 6 05:24:37 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:24:37.669498) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Dec 6 05:24:37 localhost ceph-mon[298582]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 138863, prev total WAL file size 138863, number of live WAL files 2. Dec 6 05:24:37 localhost ceph-mon[298582]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548789/store.db/000055.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 6 05:24:37 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:24:37.670048) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0034323730' seq:72057594037927935, type:22 .. 
'6C6F676D0034353233' seq:0, type:0; will stop at (end) Dec 6 05:24:37 localhost ceph-mon[298582]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00 Dec 6 05:24:37 localhost ceph-mon[298582]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [59(89KB)], [57(18MB)] Dec 6 05:24:37 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016677670100, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [59], "files_L6": [57], "score": -1, "input_data_size": 19616449, "oldest_snapshot_seqno": -1} Dec 6 05:24:37 localhost ceph-mon[298582]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #60: 13345 keys, 19187539 bytes, temperature: kUnknown Dec 6 05:24:37 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016677760220, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 60, "file_size": 19187539, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 19112610, "index_size": 40460, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 33413, "raw_key_size": 361369, "raw_average_key_size": 27, "raw_value_size": 18886653, "raw_average_value_size": 1415, "num_data_blocks": 1487, "num_entries": 13345, "num_filter_entries": 13345, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; 
max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765015623, "oldest_key_time": 0, "file_creation_time": 1765016677, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1ec2a3a6-c7d0-4f1e-9923-5a3b782725ae", "db_session_id": "JMBO5KX1IJCJ8FWC64EX", "orig_file_number": 60, "seqno_to_time_mapping": "N/A"}} Dec 6 05:24:37 localhost ceph-mon[298582]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Dec 6 05:24:37 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:24:37.760619) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 19187539 bytes Dec 6 05:24:37 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:24:37.763933) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 217.4 rd, 212.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 18.6 +0.0 blob) out(18.3 +0.0 blob), read-write-amplify(424.3) write-amplify(209.8) OK, records in: 13875, records dropped: 530 output_compression: NoCompression Dec 6 05:24:37 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:24:37.763965) EVENT_LOG_v1 {"time_micros": 1765016677763951, "job": 34, "event": "compaction_finished", "compaction_time_micros": 90220, "compaction_time_cpu_micros": 45516, "output_level": 6, "num_output_files": 1, "total_output_size": 19187539, "num_input_records": 13875, "num_output_records": 13345, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Dec 6 05:24:37 localhost ceph-mon[298582]: rocksdb: [file/delete_scheduler.cc:74] Deleted file 
/var/lib/ceph/mon/ceph-np0005548789/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 6 05:24:37 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016677764137, "job": 34, "event": "table_file_deletion", "file_number": 59} Dec 6 05:24:37 localhost ceph-mon[298582]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548789/store.db/000057.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 6 05:24:37 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016677767736, "job": 34, "event": "table_file_deletion", "file_number": 57} Dec 6 05:24:37 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:24:37.669937) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:24:37 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:24:37.767853) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:24:37 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:24:37.767862) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:24:37 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:24:37.767864) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:24:37 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:24:37.767866) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:24:37 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:24:37.767868) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:24:38 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:24:38 localhost 
ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch Dec 6 05:24:39 localhost nova_compute[282193]: 2025-12-06 10:24:39.206 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:24:39 localhost nova_compute[282193]: 2025-12-06 10:24:39.234 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 05:24:39 localhost nova_compute[282193]: 2025-12-06 10:24:39.234 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 05:24:39 localhost nova_compute[282193]: 2025-12-06 10:24:39.235 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 05:24:39 localhost nova_compute[282193]: 2025-12-06 10:24:39.235 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Auditing locally available compute resources for np0005548789.localdomain (node: np0005548789.localdomain) update_available_resource 
/usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 6 05:24:39 localhost nova_compute[282193]: 2025-12-06 10:24:39.235 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 6 05:24:39 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 6 05:24:39 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/3756278959' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 6 05:24:39 localhost nova_compute[282193]: 2025-12-06 10:24:39.698 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 6 05:24:39 localhost nova_compute[282193]: 2025-12-06 10:24:39.773 282197 DEBUG nova.virt.libvirt.driver [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 6 05:24:39 localhost nova_compute[282193]: 2025-12-06 10:24:39.774 282197 DEBUG nova.virt.libvirt.driver [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 6 05:24:39 localhost nova_compute[282193]: 2025-12-06 10:24:39.954 282197 WARNING nova.virt.libvirt.driver [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] This host appears to have 
multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 6 05:24:39 localhost nova_compute[282193]: 2025-12-06 10:24:39.955 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Hypervisor/Node resource view: name=np0005548789.localdomain free_ram=11135MB free_disk=41.77423095703125GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", 
"address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 6 05:24:39 localhost nova_compute[282193]: 2025-12-06 10:24:39.956 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 05:24:39 localhost nova_compute[282193]: 2025-12-06 10:24:39.956 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 05:24:40 localhost nova_compute[282193]: 2025-12-06 10:24:40.017 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Instance b7ed0a2e-9350-4933-9334-4e5e08d3e6aa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 6 05:24:40 localhost nova_compute[282193]: 2025-12-06 10:24:40.018 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 6 05:24:40 localhost nova_compute[282193]: 2025-12-06 10:24:40.018 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Final resource view: name=np0005548789.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 6 05:24:40 localhost nova_compute[282193]: 2025-12-06 10:24:40.046 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:24:40 localhost nova_compute[282193]: 2025-12-06 10:24:40.064 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 6 05:24:40 localhost nova_compute[282193]: 2025-12-06 10:24:40.448 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:24:40 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 6 05:24:40 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.107:0/701765729' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 6 05:24:40 localhost nova_compute[282193]: 2025-12-06 10:24:40.465 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.401s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 6 05:24:40 localhost nova_compute[282193]: 2025-12-06 10:24:40.470 282197 DEBUG nova.compute.provider_tree [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Inventory has not changed in ProviderTree for provider: 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 6 05:24:40 localhost nova_compute[282193]: 2025-12-06 10:24:40.488 282197 DEBUG nova.scheduler.client.report [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Inventory has not changed for provider 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 6 05:24:40 localhost nova_compute[282193]: 2025-12-06 10:24:40.510 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Compute_service record updated for np0005548789.localdomain:np0005548789.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 6 05:24:40 localhost nova_compute[282193]: 2025-12-06 10:24:40.510 282197 DEBUG 
oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.554s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 05:24:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999. Dec 6 05:24:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941. Dec 6 05:24:40 localhost systemd[1]: tmp-crun.CwI0vW.mount: Deactivated successfully. Dec 6 05:24:40 localhost podman[335262]: 2025-12-06 10:24:40.935896653 +0000 UTC m=+0.095930343 container health_status 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125) Dec 6 05:24:40 localhost podman[335262]: 2025-12-06 10:24:40.96720743 +0000 UTC m=+0.127241120 container exec_died 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Dec 6 05:24:40 localhost podman[335263]: 2025-12-06 10:24:40.976318448 +0000 UTC m=+0.131584733 container health_status b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 6 05:24:40 localhost systemd[1]: 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.service: Deactivated successfully. 
Dec 6 05:24:40 localhost podman[335263]: 2025-12-06 10:24:40.989218023 +0000 UTC m=+0.144484358 container exec_died b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 6 05:24:41 localhost systemd[1]: b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941.service: Deactivated successfully. 
Dec 6 05:24:42 localhost nova_compute[282193]: 2025-12-06 10:24:42.481 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:24:42 localhost nova_compute[282193]: 2025-12-06 10:24:42.481 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:24:42 localhost nova_compute[282193]: 2025-12-06 10:24:42.481 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 6 05:24:42 localhost nova_compute[282193]: 2025-12-06 10:24:42.481 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 6 05:24:42 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-649576020", "format": "json"} : dispatch Dec 6 05:24:42 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-649576020", "caps": ["mds", "allow rw path=/volumes/_nogroup/4e35c7b0-6333-486a-9deb-d9473aa05e04/5578bf84-b0d0-4fb9-8cfc-a71276656281", "osd", "allow rw pool=manila_data namespace=fsvolumens_4e35c7b0-6333-486a-9deb-d9473aa05e04", "mon", "allow r"], "format": "json"} : dispatch Dec 6 05:24:42 localhost ceph-mon[298582]: 
from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-649576020", "caps": ["mds", "allow rw path=/volumes/_nogroup/4e35c7b0-6333-486a-9deb-d9473aa05e04/5578bf84-b0d0-4fb9-8cfc-a71276656281", "osd", "allow rw pool=manila_data namespace=fsvolumens_4e35c7b0-6333-486a-9deb-d9473aa05e04", "mon", "allow r"], "format": "json"} : dispatch Dec 6 05:24:42 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-649576020", "caps": ["mds", "allow rw path=/volumes/_nogroup/4e35c7b0-6333-486a-9deb-d9473aa05e04/5578bf84-b0d0-4fb9-8cfc-a71276656281", "osd", "allow rw pool=manila_data namespace=fsvolumens_4e35c7b0-6333-486a-9deb-d9473aa05e04", "mon", "allow r"], "format": "json"}]': finished Dec 6 05:24:42 localhost nova_compute[282193]: 2025-12-06 10:24:42.874 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 6 05:24:42 localhost nova_compute[282193]: 2025-12-06 10:24:42.875 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquired lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 6 05:24:42 localhost nova_compute[282193]: 2025-12-06 10:24:42.875 282197 DEBUG nova.network.neutron [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 6 05:24:42 localhost nova_compute[282193]: 2025-12-06 10:24:42.876 282197 DEBUG nova.objects.instance [None req-fd40418a-ae44-4351-b144-9b83228b14b4 
- - - - - -] Lazy-loading 'info_cache' on Instance uuid b7ed0a2e-9350-4933-9334-4e5e08d3e6aa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 6 05:24:43 localhost nova_compute[282193]: 2025-12-06 10:24:43.321 282197 DEBUG nova.network.neutron [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Updating instance_info_cache with network_info: [{"id": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "address": "fa:16:3e:64:77:f3", "network": {"id": "652b6bdc-40ce-45b7-8aa5-3bca79987993", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.162", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "3d603431c0bb4967bafc7a0aa6108bfe", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86fc0b7a-fb", "ovs_interfaceid": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 6 05:24:43 localhost nova_compute[282193]: 2025-12-06 10:24:43.339 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Releasing lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 6 05:24:43 localhost 
nova_compute[282193]: 2025-12-06 10:24:43.339 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 6 05:24:43 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:24:44 localhost nova_compute[282193]: 2025-12-06 10:24:44.181 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:24:45 localhost nova_compute[282193]: 2025-12-06 10:24:45.050 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:24:45 localhost nova_compute[282193]: 2025-12-06 10:24:45.181 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:24:45 localhost nova_compute[282193]: 2025-12-06 10:24:45.182 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 6 05:24:45 localhost nova_compute[282193]: 2025-12-06 10:24:45.450 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:24:46 localhost nova_compute[282193]: 2025-12-06 10:24:46.182 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:24:46 localhost nova_compute[282193]: 2025-12-06 10:24:46.183 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:24:46 localhost openstack_network_exporter[243110]: ERROR 10:24:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:24:46 localhost openstack_network_exporter[243110]: ERROR 10:24:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:24:46 localhost openstack_network_exporter[243110]: ERROR 10:24:46 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 6 05:24:46 localhost openstack_network_exporter[243110]: ERROR 10:24:46 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 6 05:24:46 localhost openstack_network_exporter[243110]: Dec 6 05:24:46 localhost openstack_network_exporter[243110]: ERROR 10:24:46 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 6 05:24:46 localhost openstack_network_exporter[243110]: Dec 6 05:24:47 localhost nova_compute[282193]: 2025-12-06 10:24:47.182 282197 
DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:24:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:24:47.342 160509 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 05:24:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:24:47.342 160509 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 05:24:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:24:47.343 160509 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 05:24:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e. Dec 6 05:24:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094. 
Dec 6 05:24:47 localhost podman[335303]: 2025-12-06 10:24:47.763400046 +0000 UTC m=+0.076377195 container health_status 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, release=1755695350, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, version=9.6, config_id=edpm, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}) Dec 6 05:24:47 localhost podman[335303]: 2025-12-06 10:24:47.806109051 +0000 UTC m=+0.119086160 container exec_died 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, architecture=x86_64, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, release=1755695350, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, maintainer=Red Hat, Inc.) 
Dec 6 05:24:47 localhost systemd[1]: 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e.service: Deactivated successfully. Dec 6 05:24:47 localhost podman[335304]: 2025-12-06 10:24:47.830079715 +0000 UTC m=+0.140215008 container health_status bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Dec 
6 05:24:47 localhost podman[335304]: 2025-12-06 10:24:47.871255754 +0000 UTC m=+0.181391037 container exec_died bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:24:47 localhost systemd[1]: bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094.service: Deactivated successfully. 
Dec 6 05:24:48 localhost nova_compute[282193]: 2025-12-06 10:24:48.182 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:24:48 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:24:49 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-649576020", "format": "json"} : dispatch Dec 6 05:24:49 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-649576020"} : dispatch Dec 6 05:24:49 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-649576020"} : dispatch Dec 6 05:24:49 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-649576020"}]': finished Dec 6 05:24:50 localhost nova_compute[282193]: 2025-12-06 10:24:50.104 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:24:50 localhost nova_compute[282193]: 2025-12-06 10:24:50.453 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:24:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6. 
Dec 6 05:24:51 localhost podman[335343]: 2025-12-06 10:24:51.928108884 +0000 UTC m=+0.087171905 container health_status 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible) Dec 6 05:24:51 localhost podman[335343]: 2025-12-06 10:24:51.964576219 +0000 UTC m=+0.123639220 container exec_died 
44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0) Dec 6 05:24:51 localhost systemd[1]: 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6.service: Deactivated successfully. 
Dec 6 05:24:52 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch Dec 6 05:24:52 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.Joe"} : dispatch Dec 6 05:24:52 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.Joe"} : dispatch Dec 6 05:24:52 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth rm", "entity": "client.Joe"}]': finished Dec 6 05:24:53 localhost nova_compute[282193]: 2025-12-06 10:24:53.176 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:24:53 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:24:53 localhost podman[241090]: time="2025-12-06T10:24:53Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 6 05:24:53 localhost podman[241090]: @ - - [06/Dec/2025:10:24:53 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156104 "" "Go-http-client/1.1" Dec 6 05:24:53 localhost podman[241090]: @ - - [06/Dec/2025:10:24:53 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19274 "" "Go-http-client/1.1" Dec 6 05:24:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538. 
Dec 6 05:24:54 localhost podman[335362]: 2025-12-06 10:24:54.919650652 +0000 UTC m=+0.076461069 container health_status d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 6 05:24:54 localhost podman[335362]: 2025-12-06 10:24:54.95327786 +0000 UTC m=+0.110088317 container exec_died d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Dec 6 05:24:54 localhost systemd[1]: d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538.service: Deactivated successfully. 
Dec 6 05:24:55 localhost nova_compute[282193]: 2025-12-06 10:24:55.155 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:24:55 localhost nova_compute[282193]: 2025-12-06 10:24:55.456 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:24:55 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.admin", "format": "json"} : dispatch Dec 6 05:24:55 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e244 e244: 6 total, 6 up, 6 in Dec 6 05:24:58 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:24:59 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch Dec 6 05:24:59 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/efbd58ea-e56f-4c21-9ed9-3d319ca403b8/413ff259-8ca8-446a-9a48-682d6b0aa9e8", "osd", "allow rw pool=manila_data namespace=fsvolumens_efbd58ea-e56f-4c21-9ed9-3d319ca403b8", "mon", "allow r"], "format": "json"} : dispatch Dec 6 05:24:59 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/efbd58ea-e56f-4c21-9ed9-3d319ca403b8/413ff259-8ca8-446a-9a48-682d6b0aa9e8", "osd", "allow rw pool=manila_data namespace=fsvolumens_efbd58ea-e56f-4c21-9ed9-3d319ca403b8", "mon", "allow r"], "format": "json"} : dispatch Dec 6 05:24:59 
localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/efbd58ea-e56f-4c21-9ed9-3d319ca403b8/413ff259-8ca8-446a-9a48-682d6b0aa9e8", "osd", "allow rw pool=manila_data namespace=fsvolumens_efbd58ea-e56f-4c21-9ed9-3d319ca403b8", "mon", "allow r"], "format": "json"}]': finished Dec 6 05:25:00 localhost nova_compute[282193]: 2025-12-06 10:25:00.203 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:25:00 localhost nova_compute[282193]: 2025-12-06 10:25:00.457 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:25:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5. Dec 6 05:25:00 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Dec 6 05:25:00 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/1792782973' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Dec 6 05:25:00 localhost podman[335385]: 2025-12-06 10:25:00.923099266 +0000 UTC m=+0.080651527 container health_status 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS) Dec 6 05:25:00 localhost podman[335385]: 2025-12-06 10:25:00.958158707 +0000 UTC m=+0.115710908 container exec_died 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, 
managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125) Dec 6 05:25:00 localhost systemd[1]: 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.service: Deactivated successfully. 
Dec 6 05:25:02 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e245 e245: 6 total, 6 up, 6 in Dec 6 05:25:02 localhost sshd[335410]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:25:02 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e246 e246: 6 total, 6 up, 6 in Dec 6 05:25:03 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:25:03 localhost ovn_controller[154851]: 2025-12-06T10:25:03Z|00515|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory Dec 6 05:25:05 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e247 e247: 6 total, 6 up, 6 in Dec 6 05:25:05 localhost nova_compute[282193]: 2025-12-06 10:25:05.236 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:25:05 localhost nova_compute[282193]: 2025-12-06 10:25:05.459 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:25:06 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e248 e248: 6 total, 6 up, 6 in Dec 6 05:25:06 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.917 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'name': 'test', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 
'np0005548789.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'hostId': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228 Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.917 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.922 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.925 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'b925e12c-8f7a-4a56-b202-66139d9c9b95', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:25:07.918206', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': 'd67d211a-d28d-11f0-aaf2-fa163e118844', 'monotonic_time': 13126.167576906, 'message_signature': '897727b654162bd2b81659852b18f9e11f8f9d0ef1a700b96fc3b8b708cd1ea1'}]}, 'timestamp': '2025-12-06 10:25:07.923559', '_unique_id': 'ab2551df3245447dad8c6f21d443b7a6'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.925 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:25:07.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.925 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.925 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.925 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.925 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.925 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.925 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.925 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.925 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.925 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.925 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.925 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.925 12 ERROR oslo_messaging.notify.messaging Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.925 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.925 12 ERROR oslo_messaging.notify.messaging Dec 6 05:25:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.925 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.925 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.925 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.925 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.925 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 
05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.925 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.925 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.925 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.925 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.925 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:25:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.925 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.925 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.925 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.925 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.925 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.925 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.925 12 ERROR oslo_messaging.notify.messaging Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:25:07.926 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.927 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.929 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'a92024f9-2596-4895-b026-47086aed0508', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:25:07.927188', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 
'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': 'd67dcc82-d28d-11f0-aaf2-fa163e118844', 'monotonic_time': 13126.167576906, 'message_signature': '1d2e115c52e3685e67629a04b03c14f85043a9abc1ca688bd854b033ab874e01'}]}, 'timestamp': '2025-12-06 10:25:07.927966', '_unique_id': 'ce9882cf58314bfc9f177026c7bb9d95'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.929 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.929 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.929 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.929 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.929 12 ERROR oslo_messaging.notify.messaging self._connection 
= self._establish_connection() Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.929 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.929 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.929 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.929 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.929 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.929 12 ERROR oslo_messaging.notify.messaging 
ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.929 12 ERROR oslo_messaging.notify.messaging Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.929 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.929 12 ERROR oslo_messaging.notify.messaging Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.929 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.929 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.929 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.929 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.929 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.929 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.929 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.929 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.929 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.929 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.929 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.929 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.929 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.929 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.929 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.929 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:25:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.929 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.929 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.929 12 ERROR oslo_messaging.notify.messaging Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.931 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.931 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.outgoing.packets volume: 114 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.933 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '956f3999-f245-444d-9c31-e077fa25b6f9', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 114, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:25:07.931328', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': 'd67e6dfe-d28d-11f0-aaf2-fa163e118844', 'monotonic_time': 13126.167576906, 'message_signature': '05c7562ff6453cb18a40ee8c644f68f9de6e84842bf8f2d59c73d403cc2bd407'}]}, 'timestamp': '2025-12-06 10:25:07.932093', '_unique_id': '0905f535123f433a9e1931319866840d'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.933 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:25:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.933 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.933 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.933 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.933 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.933 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.933 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.933 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.933 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.933 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.933 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.933 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.933 12 ERROR oslo_messaging.notify.messaging Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.933 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.933 12 ERROR oslo_messaging.notify.messaging Dec 6 05:25:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.933 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.933 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.933 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.933 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.933 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 
05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.933 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.933 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.933 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.933 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.933 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:25:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.933 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.933 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.933 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.933 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.933 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.933 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.933 12 ERROR oslo_messaging.notify.messaging Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:25:07.935 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.963 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.requests volume: 1272 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.963 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.965 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '962a4956-cf49-4007-b94a-89c02373e81e', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1272, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:25:07.935386', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 
'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'd683411c-d28d-11f0-aaf2-fa163e118844', 'monotonic_time': 13126.184789522, 'message_signature': 'b07aefd2e1f06c48472a99bc0ea0b880251a268fd2b879d21b6258618cb11753'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 124, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:25:07.935386', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'd6835a62-d28d-11f0-aaf2-fa163e118844', 'monotonic_time': 13126.184789522, 'message_signature': '110d672a7591052a4e84a62b148871efd73e31b62e83b20da4bcd2ac7675e807'}]}, 'timestamp': '2025-12-06 10:25:07.964247', '_unique_id': '381799248a8546a8b226c75dfdee0b38'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.965 12 ERROR oslo_messaging.notify.messaging 
Traceback (most recent call last): Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.965 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.965 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.965 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.965 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.965 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.965 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.965 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.965 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.965 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.965 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.965 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.965 12 ERROR oslo_messaging.notify.messaging Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.965 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.965 12 ERROR oslo_messaging.notify.messaging Dec 6 05:25:07 
localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.965 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.965 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.965 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.965 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.965 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection 
Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.965 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.965 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.965 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.965 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.965 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:25:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.965 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.965 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.965 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.965 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.965 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.965 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.965 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.965 12 ERROR oslo_messaging.notify.messaging Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:25:07.966 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.966 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '9461cfac-bca5-4e78-813e-b57831131898', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:25:07.966838', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': 
{'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': 'd683db2c-d28d-11f0-aaf2-fa163e118844', 'monotonic_time': 13126.167576906, 'message_signature': '7041223351134d255b9b5812a0f0797a0c62fabaf7997dd0d557e4e79a0e0526'}]}, 'timestamp': '2025-12-06 10:25:07.967625', '_unique_id': 'c490c7f2bd594a398d3f9cc9159d8885'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.969 12 ERROR 
oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 
10:25:07.969 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 
10:25:07.969 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 
450, in _reraise_as_library_errors Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.970 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.970 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.982 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.982 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '0511bbf3-c91f-4b89-9cc9-d175822794df', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:25:07.971186', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'd68631ba-d28d-11f0-aaf2-fa163e118844', 'monotonic_time': 13126.220589686, 'message_signature': '2bdbc7ab541148547c278428772c0483f6de46518313d2d0b30520af5826d6cb'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:25:07.971186', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 
'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'd6864a6a-d28d-11f0-aaf2-fa163e118844', 'monotonic_time': 13126.220589686, 'message_signature': '065ddbb9581953031525cb30d51f2eba9adf2152d1a2cc20676c306ef6e5d729'}]}, 'timestamp': '2025-12-06 10:25:07.983512', '_unique_id': 'f3437a49acf84d84850de85656474b0a'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:25:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.984 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.985 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.986 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.986 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.988 12 ERROR oslo_messaging.notify.messaging [-] Could not send 
notification to notifications. Payload={'message_id': '01f894a7-8ee5-4f72-8dc3-b6432fe00416', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:25:07.986067', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'd686c7c4-d28d-11f0-aaf2-fa163e118844', 'monotonic_time': 13126.220589686, 'message_signature': '559ac2cdaa7220becc7499069f16c1faae5cd28b48edda3f930ba764a96c8806'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:25:07.986067', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'd686e34e-d28d-11f0-aaf2-fa163e118844', 'monotonic_time': 13126.220589686, 'message_signature': 'c9fd59c22d8e3d2eb086d8b7771397def19568487354730ed7b538bf5ab34786'}]}, 'timestamp': '2025-12-06 10:25:07.987456', '_unique_id': '9d23cb45d2e7496ca10220270c08f5e8'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.988 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.988 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.988 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.988 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.988 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.988 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.988 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.988 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.988 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.988 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.988 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.988 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.988 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.988 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.988 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.988 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:25:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.988 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.988 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.988 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.988 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.988 12 ERROR oslo_messaging.notify.messaging Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.988 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.988 12 ERROR oslo_messaging.notify.messaging Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.988 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.988 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.988 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.988 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.988 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.988 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.988 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.988 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.988 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.988 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.988 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.988 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.988 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.988 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:25:07.988 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.988 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.988 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.988 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.988 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.988 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.988 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.988 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.988 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.988 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.988 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time(
Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.988 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.988 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.988 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.988 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.988 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.988 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:25:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:07.990 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.007 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/cpu volume: 19410000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.008 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '00bfe4e4-831c-49c9-82df-de08d5791999', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 19410000000, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'timestamp': '2025-12-06T10:25:07.990724', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': 'd68a0236-d28d-11f0-aaf2-fa163e118844', 'monotonic_time': 13126.256543495, 'message_signature': '823d35915aeaec99f753c2cffa3c452c6afff63461af047aaa6e8d37c581df44'}]}, 'timestamp': '2025-12-06 10:25:08.007937', '_unique_id': 'e2868c68993647f58938832042d064db'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.008 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.008 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.008 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.008 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.008 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.008 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.008 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.008 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.008 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.008 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.008 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.008 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.008 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.008 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.008 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.008 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.008 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.008 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.008 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.008 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.008 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.008 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.008 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.008 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.008 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.008 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.008 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.008 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.008 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.008 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.008 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.008 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.008 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.008 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.008 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.008 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.008 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.008 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.008 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.008 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.008 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.008 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.008 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.008 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.008 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.008 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.008 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.008 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.008 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.008 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.008 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.008 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.008 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.008 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.010 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.010 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.bytes volume: 389120 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.011 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.013 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '448431f2-fc71-4b09-8f86-b1d88ef1783a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 389120, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:25:08.010334', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'd68a7b6c-d28d-11f0-aaf2-fa163e118844', 'monotonic_time': 13126.184789522, 'message_signature': '3dbb515674291adae5e870f37ef531d37682d2c838cca2dab54ec54150678c6b'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:25:08.010334', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'd68a96ec-d28d-11f0-aaf2-fa163e118844', 'monotonic_time': 13126.184789522, 'message_signature': 'c7d2518eac638c7bf9f7037cf5d449162c191532809cb5675c9c152866ae8ea6'}]}, 'timestamp': '2025-12-06 10:25:08.011715', '_unique_id': 'c5ad99f035cb48adb0a7ecf1035d5d48'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.013 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.013 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.013 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.013 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.013 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.013 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.013 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.013 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.013 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.013 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.013 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.013 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.013 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.013 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.013 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.013 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.013 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.013 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.013 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.013 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.013 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.013 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.013 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.013 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.013 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.013 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.013 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.013 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.013 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.013 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.013 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.014 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.015 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/memory.usage volume: 51.80859375 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.017 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '7baed7c7-e8f9-40d8-941d-ffdd8bb1a58c', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 51.80859375, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'timestamp': '2025-12-06T10:25:08.015089', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': 'd68b35a2-d28d-11f0-aaf2-fa163e118844', 'monotonic_time': 13126.256543495, 'message_signature': 'b6efa812562efdec20ba95bd9fac7382422f9791e4ffb763f30c9e7c2f22587b'}]}, 'timestamp': '2025-12-06 10:25:08.015839', '_unique_id': 'f804975d28ab4038bd323ab65c88e7a5'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.017 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.017 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.017 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.017 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.017 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.017 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.017 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.017 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.017 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.017 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.017 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.017 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.017 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.017 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.017 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.017 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.017 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.017 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.017 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.017 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.017 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.017 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.017 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.017 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.017 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.017 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.017 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.017 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.017 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.017 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.017 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.018 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.019 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.019 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.latency volume: 1252245154 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.019 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.latency volume: 27668224 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.021 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications.
Payload={'message_id': '41dcf558-ea0b-48eb-b472-cb11cb368d17', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1252245154, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:25:08.019283', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'd68bd8c2-d28d-11f0-aaf2-fa163e118844', 'monotonic_time': 13126.184789522, 'message_signature': '3afe404702d83d536bdb846683969276e18149b00301cf9202d46c957e8a1368'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 27668224, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:25:08.019283', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'd68bf2d0-d28d-11f0-aaf2-fa163e118844', 'monotonic_time': 13126.184789522, 'message_signature': '78afbcf313bc15ecd2be6aba4b79f421b5253b4a1000dcc190f9d665c017cc28'}]}, 'timestamp': '2025-12-06 10:25:08.020611', '_unique_id': '2adee4b238364747b6a95b991437c3e5'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.021 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.021 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.021 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.021 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.021 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.021 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.021 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.021 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.021 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:25:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.021 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.021 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.021 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.021 12 ERROR oslo_messaging.notify.messaging Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.021 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.021 12 ERROR oslo_messaging.notify.messaging Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.021 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.021 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.021 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.021 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.021 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.021 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.021 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:25:08.021 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.021 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.021 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.021 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.021 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.021 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.021 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.021 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.021 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.021 12 ERROR oslo_messaging.notify.messaging Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.023 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.023 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.latency volume: 1525105336 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.024 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.latency volume: 106716064 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.026 12 ERROR oslo_messaging.notify.messaging [-] 
Could not send notification to notifications. Payload={'message_id': '20ed0f5c-631d-4b4a-888e-ad60b3e9e859', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1525105336, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:25:08.023892', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'd68c8d1c-d28d-11f0-aaf2-fa163e118844', 'monotonic_time': 13126.184789522, 'message_signature': '338bc8139188dfb396e229172a520eb5d5e3c3791ae5d2a8b2b9832df290fda2'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 106716064, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:25:08.023892', 'resource_metadata': {'display_name': 'test', 'name': 
'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'd68ca6da-d28d-11f0-aaf2-fa163e118844', 'monotonic_time': 13126.184789522, 'message_signature': '48d3f15e8a7854ac7b58de75a0f33867f35bd240eb08d17b2b5b894925c82b76'}]}, 'timestamp': '2025-12-06 10:25:08.025224', '_unique_id': '51c9b0c8d56d4251819eefb3b1697d71'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.026 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.026 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.026 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.026 12 
ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.026 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.026 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.026 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.026 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.026 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 
05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.026 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.026 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.026 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.026 12 ERROR oslo_messaging.notify.messaging Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.026 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.026 12 ERROR oslo_messaging.notify.messaging Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.026 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.026 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:25:08.026 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.026 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.026 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.026 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.026 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:25:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.026 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.026 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.026 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.026 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.026 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.026 12 
ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.026 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.026 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.026 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.026 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.026 12 ERROR oslo_messaging.notify.messaging Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.028 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.028 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.029 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.031 12 ERROR 
oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'ccabbba4-2762-4a94-b06e-4927b855240b', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:25:08.028901', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'd68d5120-d28d-11f0-aaf2-fa163e118844', 'monotonic_time': 13126.220589686, 'message_signature': 'e29b9ff363d5c7e34673c643940ae643becbe1249278c5297e8cf8e0ffdd5400'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:25:08.028901', 'resource_metadata': {'display_name': 'test', 
'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'd68d6aac-d28d-11f0-aaf2-fa163e118844', 'monotonic_time': 13126.220589686, 'message_signature': '1b1b854837377b497028b6c709b1802459d322997abbe7a9b583f8a04e2ddb6a'}]}, 'timestamp': '2025-12-06 10:25:08.030240', '_unique_id': '198a5d0389d64b5ebe3317bdf6973e09'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.031 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.031 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.031 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 
10:25:08.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.031 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.031 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.031 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.031 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.031 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in 
connect Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.031 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.031 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.031 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.031 12 ERROR oslo_messaging.notify.messaging Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.031 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.031 12 ERROR oslo_messaging.notify.messaging Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.031 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.031 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:25:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.031 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.031 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.031 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.031 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.031 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get 
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.031 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.031 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.031 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.031 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.031 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:25:08.031 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.031 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.031 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.031 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.031 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.031 12 ERROR oslo_messaging.notify.messaging Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.033 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.033 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.035 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '573b8532-3b0b-4f74-b978-f314924e7662', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:25:08.033528', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': 'd68e0732-d28d-11f0-aaf2-fa163e118844', 'monotonic_time': 13126.167576906, 'message_signature': '4b184117ccd42d0e6ca16ec23c0022390f5740d30d8333f448878224ad9682c9'}]}, 'timestamp': '2025-12-06 10:25:08.034282', '_unique_id': 'eab2e86a644a4b3dad2953991a94d497'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.035 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:25:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.035 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.035 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.035 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.035 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.035 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.035 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.035 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.035 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.035 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.035 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.035 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.035 12 ERROR oslo_messaging.notify.messaging Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.035 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.035 12 ERROR oslo_messaging.notify.messaging Dec 6 05:25:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.035 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.035 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.035 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.035 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.035 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 
05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.035 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.035 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.035 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.035 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.035 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:25:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.035 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.035 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.035 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.035 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.035 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.035 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.035 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.035 12 ERROR oslo_messaging.notify.messaging Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:25:08.036 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.037 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.bytes volume: 35560448 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.037 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.bytes volume: 2154496 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.039 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '0e25496a-3c75-4263-bc95-21e94d63e8ab', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 35560448, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:25:08.036995', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 
'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'd68e88b0-d28d-11f0-aaf2-fa163e118844', 'monotonic_time': 13126.184789522, 'message_signature': '0576acf62ef3b81c24c06470a1573d8aa32ee840f6b5eda8f38216f70739f3cb'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 2154496, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:25:08.036995', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'd68e9a8a-d28d-11f0-aaf2-fa163e118844', 'monotonic_time': 13126.184789522, 'message_signature': 'f3c46bd24147f2881e719e784f27e5bd44a50ee63c5ef6166e7a4ce6cff78dc2'}]}, 'timestamp': '2025-12-06 10:25:08.037940', '_unique_id': '89891ba3a2ee48c1b218d9934d2e83ea'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.039 12 ERROR oslo_messaging.notify.messaging 
Traceback (most recent call last): Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.039 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.039 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.039 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.039 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.039 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.039 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.039 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.039 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.039 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.039 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.039 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.039 12 ERROR oslo_messaging.notify.messaging Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.039 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.039 12 ERROR oslo_messaging.notify.messaging Dec 6 05:25:08 
localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.039 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.039 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.039 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.039 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.039 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.039 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.039 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.039 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.039 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.039 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.039 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.039 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.039 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.039 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.039 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.039 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.039 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.039 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.040 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.040 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.outgoing.bytes volume: 9770 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.041 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '3232cb76-7cac-4865-9620-64e17eb4d1e0', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 9770, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:25:08.040300', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': 'd68f05d8-d28d-11f0-aaf2-fa163e118844', 'monotonic_time': 13126.167576906, 'message_signature': '14884e2c26c8e0800bf6a37ea6ee5ffb9d9a0288651a93e5653fb0db214b102c'}]}, 'timestamp': '2025-12-06 10:25:08.040591', '_unique_id': '8e420f47705846a5b76de0884aaa2619'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.041 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.041 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.041 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.041 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.041 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.041 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.041 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.041 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.041 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.041 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.041 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.041 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.041 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.041 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.041 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.041 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.041 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.041 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.041 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.041 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.041 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.041 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.041 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.041 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.041 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.041 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.041 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.041 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.041 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.041 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.041 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.041 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.041 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.041 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.requests volume: 47 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.042 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.043 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'c432a824-7c66-4197-8a5f-eb143cc8e526', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 47, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:25:08.041924', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'd68f44f8-d28d-11f0-aaf2-fa163e118844', 'monotonic_time': 13126.184789522, 'message_signature': '52521593409f06159e553b998095fe6f5d20150a7529e982a4fcb437e08fac0b'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:25:08.041924', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'd68f4f0c-d28d-11f0-aaf2-fa163e118844', 'monotonic_time': 13126.184789522, 'message_signature': '5cf7a8165903342db7e9dfce8e306d5ccd90cc04e46547fd21640f896e43feed'}]}, 'timestamp': '2025-12-06 10:25:08.042446', '_unique_id': '4e931a1669454741933657ef59842b6d'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.043 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.043 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.043 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.043 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.043 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.043 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.043 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.043 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.043 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.043 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.043 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.043 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.043 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.043 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.043 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.043 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.043 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.043 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.043 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.043 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.043 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.043 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.043 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.043 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.043 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.043 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.043 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.043 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.043 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.043 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.043 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.043 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.043 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.043 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.incoming.packets volume: 60 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.044 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '9d3bccc0-7dfc-4341-97da-bce53448e9c1', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 60, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:25:08.043883', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': 'd68f91ba-d28d-11f0-aaf2-fa163e118844', 'monotonic_time': 13126.167576906, 'message_signature': '1a56a880af5b85ebb2658cd0db5c205ccc54d9b7559e9bdf315253e05de6d94e'}]}, 'timestamp': '2025-12-06 10:25:08.044171', '_unique_id': 'a0601cc87bd343ef85d2c953be80a36b'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.044 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.044 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.044 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.044 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.044 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.044 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.044 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.044 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.044 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.044 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.044 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.044 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.044 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.044 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:25:08 localhost
ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.044 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.044 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.044 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.044 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.044 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 
05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.044 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.044 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.044 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.044 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.044 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:25:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.044 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.044 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.044 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.044 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.044 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.044 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.044 12 ERROR oslo_messaging.notify.messaging Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:25:08.045 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.045 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.045 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.incoming.bytes volume: 6809 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.046 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '61647b61-c0ec-4cd8-9bfb-12e5a620aeb1', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 6809, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:25:08.045663', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 
'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': 'd68fd7ce-d28d-11f0-aaf2-fa163e118844', 'monotonic_time': 13126.167576906, 'message_signature': 'fc32cde20ee75b95bce29cff68950305f5917717adde4250deff8e41099aac70'}]}, 'timestamp': '2025-12-06 10:25:08.045964', '_unique_id': '2c3aca3d7e404b9699c911466c8a1d30'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.046 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.046 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.046 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.046 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.046 
12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.046 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.046 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.046 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.046 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.046 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in 
_connect Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.046 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.046 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.046 12 ERROR oslo_messaging.notify.messaging Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.046 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.046 12 ERROR oslo_messaging.notify.messaging Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.046 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.046 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.046 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:25:08 
localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.046 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.046 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.046 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.046 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.046 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:25:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.046 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.046 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.046 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.046 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.046 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.046 12 ERROR oslo_messaging.notify.messaging 
self.gen.throw(typ, value, traceback) Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.046 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.046 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.046 12 ERROR oslo_messaging.notify.messaging Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.047 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.047 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.047 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.048 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '754947cd-c7b9-425c-9c32-2a678144f30c', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:25:08.047393', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': 'd6901acc-d28d-11f0-aaf2-fa163e118844', 'monotonic_time': 13126.167576906, 'message_signature': 'b4730c2cc3f140d283307c79cef4770b53d57fa9e2f3677e577212407c483014'}]}, 'timestamp': '2025-12-06 10:25:08.047682', '_unique_id': 'c95974e073ef4843aa8640130f42c301'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.048 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:25:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.048 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.048 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.048 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.048 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.048 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.048 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.048 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.048 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.048 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.048 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.048 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.048 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.048 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.048 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.048 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.048 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.048 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.048 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.048 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.048 12 ERROR oslo_messaging.notify.messaging Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.048 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.048 12 ERROR oslo_messaging.notify.messaging Dec 6 05:25:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.048 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.048 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.048 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.048 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.048 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.048 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.048 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.048 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.048 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.048 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 
05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.048 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.048 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.048 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.048 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.048 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.048 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.048 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.048 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.048 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.048 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:25:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.048 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.048 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.048 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.048 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.048 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.048 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.048 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.048 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.048 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.048 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.048 12 ERROR oslo_messaging.notify.messaging Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:25:08.048 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.049 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.050 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'd02ee157-99ce-4535-815f-20b204f0eba5', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:25:08.049048', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': 
{'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': 'd6905c08-d28d-11f0-aaf2-fa163e118844', 'monotonic_time': 13126.167576906, 'message_signature': '5abd121c5f59bf75f2da1fbf95afd8a1b1fc8e81e375e745a565c3c63b4e8299'}]}, 'timestamp': '2025-12-06 10:25:08.049372', '_unique_id': '76c349fa9b4a4743875bfcd4932627fd'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.050 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.050 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.050 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.050 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.050 12 ERROR 
oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.050 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.050 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.050 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.050 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.050 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 
10:25:08.050 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.050 12 ERROR oslo_messaging.notify.messaging Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.050 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.050 12 ERROR oslo_messaging.notify.messaging Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.050 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.050 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.050 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.050 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 
10:25:08.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.050 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.050 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.050 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.050 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.050 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:25:08.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.050 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.050 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.050 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.050 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.050 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.050 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 
450, in _reraise_as_library_errors Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.050 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.050 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:25:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:25:08.050 12 ERROR oslo_messaging.notify.messaging Dec 6 05:25:08 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:25:09 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e249 e249: 6 total, 6 up, 6 in Dec 6 05:25:09 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 6 05:25:09 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' Dec 6 05:25:10 localhost nova_compute[282193]: 2025-12-06 10:25:10.275 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:25:10 localhost nova_compute[282193]: 2025-12-06 10:25:10.462 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:25:11 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Dec 6 05:25:11 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/4192589746' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Dec 6 05:25:11 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e250 e250: 6 total, 6 up, 6 in Dec 6 05:25:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999. Dec 6 05:25:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941. Dec 6 05:25:11 localhost systemd[1]: tmp-crun.cADQ2I.mount: Deactivated successfully. Dec 6 05:25:11 localhost podman[335499]: 2025-12-06 10:25:11.944778986 +0000 UTC m=+0.085843206 container health_status b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Dec 6 05:25:11 localhost podman[335498]: 2025-12-06 10:25:11.9563847 +0000 UTC m=+0.100329248 container health_status 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, 
org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_id=ovn_metadata_agent) Dec 6 05:25:11 localhost podman[335499]: 2025-12-06 10:25:11.977050763 +0000 UTC m=+0.118115073 container exec_died b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 
'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Dec 6 05:25:11 localhost systemd[1]: b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941.service: Deactivated successfully. Dec 6 05:25:12 localhost podman[335498]: 2025-12-06 10:25:12.036103408 +0000 UTC m=+0.180047956 container exec_died 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125) Dec 6 05:25:12 localhost systemd[1]: 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.service: Deactivated successfully. Dec 6 05:25:12 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch Dec 6 05:25:12 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.david"} : dispatch Dec 6 05:25:12 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.david"} : dispatch Dec 6 05:25:12 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth rm", "entity": "client.david"}]': finished Dec 6 05:25:12 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' Dec 6 05:25:12 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e251 e251: 6 total, 6 up, 6 in Dec 6 05:25:12 localhost systemd[1]: tmp-crun.qZbbfg.mount: Deactivated successfully. 
Dec 6 05:25:13 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:25:15 localhost nova_compute[282193]: 2025-12-06 10:25:15.322 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:25:15 localhost nova_compute[282193]: 2025-12-06 10:25:15.464 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:25:15 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e252 e252: 6 total, 6 up, 6 in Dec 6 05:25:16 localhost openstack_network_exporter[243110]: ERROR 10:25:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:25:16 localhost openstack_network_exporter[243110]: ERROR 10:25:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:25:16 localhost openstack_network_exporter[243110]: ERROR 10:25:16 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 6 05:25:16 localhost openstack_network_exporter[243110]: ERROR 10:25:16 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 6 05:25:16 localhost openstack_network_exporter[243110]: Dec 6 05:25:16 localhost openstack_network_exporter[243110]: ERROR 10:25:16 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 6 05:25:16 localhost openstack_network_exporter[243110]: Dec 6 05:25:17 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e253 e253: 6 total, 6 up, 6 in Dec 6 05:25:18 localhost sshd[335538]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:25:18 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 
full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:25:18 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e254 e254: 6 total, 6 up, 6 in Dec 6 05:25:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e. Dec 6 05:25:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094. Dec 6 05:25:18 localhost podman[335541]: 2025-12-06 10:25:18.922657185 +0000 UTC m=+0.078701016 container health_status 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, name=ubi9-minimal, managed_by=edpm_ansible, config_id=edpm, io.buildah.version=1.33.7, io.openshift.expose-services=, distribution-scope=public, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, version=9.6, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}) Dec 6 05:25:18 localhost podman[335541]: 2025-12-06 10:25:18.966233807 +0000 UTC m=+0.122277618 container exec_died 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, distribution-scope=public, release=1755695350, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, config_id=edpm, name=ubi9-minimal, io.openshift.tags=minimal rhel9, vcs-type=git, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Dec 6 05:25:18 localhost systemd[1]: tmp-crun.F68GqT.mount: Deactivated successfully. 
Dec 6 05:25:18 localhost systemd[1]: 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e.service: Deactivated successfully. Dec 6 05:25:18 localhost podman[335542]: 2025-12-06 10:25:18.992235692 +0000 UTC m=+0.143986903 container health_status bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Dec 
6 05:25:19 localhost podman[335542]: 2025-12-06 10:25:19.007326523 +0000 UTC m=+0.159077714 container exec_died bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:25:19 localhost systemd[1]: bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094.service: Deactivated successfully. 
Dec 6 05:25:20 localhost nova_compute[282193]: 2025-12-06 10:25:20.356 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:25:20 localhost nova_compute[282193]: 2025-12-06 10:25:20.466 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:25:20 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e255 e255: 6 total, 6 up, 6 in Dec 6 05:25:21 localhost nova_compute[282193]: 2025-12-06 10:25:21.846 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:25:21 localhost ovn_metadata_agent[160504]: 2025-12-06 10:25:21.846 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=20, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:6c:02', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:a8:2f:0c:cb:a1'}, ipsec=False) old=SB_Global(nb_cfg=19) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:25:21 localhost ovn_metadata_agent[160504]: 2025-12-06 10:25:21.847 160509 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Dec 6 05:25:21 localhost ovn_metadata_agent[160504]: 2025-12-06 10:25:21.849 160509 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b142a5ef-fbed-4e92-aa78-e3ad080c6370, col_values=(('external_ids', 
{'neutron:ovn-metadata-sb-cfg': '20'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 6 05:25:22 localhost neutron_sriov_agent[256690]: 2025-12-06 10:25:22.656 2 INFO neutron.agent.securitygroups_rpc [req-63e33143-79ba-4452-b217-6b4868995963 req-6d925882-c432-4b30-bcfa-4ea2e9401f50 b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] Security group member updated ['d407968b-b8de-45cd-a244-3bf62d3c0357']#033[00m Dec 6 05:25:22 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e256 e256: 6 total, 6 up, 6 in Dec 6 05:25:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6. Dec 6 05:25:22 localhost podman[335581]: 2025-12-06 10:25:22.953498552 +0000 UTC m=+0.080534363 container health_status 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:25:22 localhost podman[335581]: 2025-12-06 10:25:22.96619485 +0000 UTC m=+0.093230651 container exec_died 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Dec 6 05:25:22 localhost systemd[1]: 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6.service: Deactivated successfully. Dec 6 05:25:23 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:25:23 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e257 e257: 6 total, 6 up, 6 in Dec 6 05:25:23 localhost podman[241090]: time="2025-12-06T10:25:23Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 6 05:25:23 localhost podman[241090]: @ - - [06/Dec/2025:10:25:23 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156104 "" "Go-http-client/1.1" Dec 6 05:25:23 localhost podman[241090]: @ - - [06/Dec/2025:10:25:23 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19272 "" "Go-http-client/1.1" Dec 6 05:25:25 localhost nova_compute[282193]: 2025-12-06 10:25:25.359 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:25:25 localhost nova_compute[282193]: 2025-12-06 10:25:25.468 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:25:25 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e258 e258: 6 total, 6 up, 6 in Dec 6 05:25:25 localhost 
systemd[1]: Started /usr/bin/podman healthcheck run d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538. Dec 6 05:25:25 localhost podman[335600]: 2025-12-06 10:25:25.925280966 +0000 UTC m=+0.082142453 container health_status d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 6 05:25:25 localhost podman[335600]: 2025-12-06 10:25:25.933874338 +0000 UTC m=+0.090735845 container exec_died d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 6 05:25:25 localhost systemd[1]: d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538.service: Deactivated successfully. 
Dec 6 05:25:26 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e259 e259: 6 total, 6 up, 6 in Dec 6 05:25:27 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e260 e260: 6 total, 6 up, 6 in Dec 6 05:25:28 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:25:30 localhost nova_compute[282193]: 2025-12-06 10:25:30.363 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:25:30 localhost nova_compute[282193]: 2025-12-06 10:25:30.469 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:25:31 localhost ovn_controller[154851]: 2025-12-06T10:25:31Z|00516|binding|INFO|Releasing lport 4fb81ffd-e198-4628-9bd0-0c0f0c89c33a from this chassis (sb_readonly=0) Dec 6 05:25:31 localhost systemd[1]: tmp-crun.JybQrl.mount: Deactivated successfully. 
Dec 6 05:25:31 localhost dnsmasq[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 1 addresses Dec 6 05:25:31 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:25:31 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:25:31 localhost podman[335640]: 2025-12-06 10:25:31.460907511 +0000 UTC m=+0.074666904 container kill dd70518caed84fde347015325d6b35481123abc4a39df99f8bc9422c2468322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true) Dec 6 05:25:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5. 
Dec 6 05:25:31 localhost nova_compute[282193]: 2025-12-06 10:25:31.527 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:25:31 localhost podman[335653]: 2025-12-06 10:25:31.589694618 +0000 UTC m=+0.098993608 container health_status 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:25:31 localhost podman[335653]: 2025-12-06 10:25:31.654383655 +0000 UTC m=+0.163682655 container exec_died 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, 
tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, container_name=ovn_controller) Dec 6 05:25:31 localhost systemd[1]: 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.service: Deactivated successfully. 
Dec 6 05:25:32 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e261 e261: 6 total, 6 up, 6 in Dec 6 05:25:33 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e261 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:25:33 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e262 e262: 6 total, 6 up, 6 in Dec 6 05:25:35 localhost nova_compute[282193]: 2025-12-06 10:25:35.398 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:25:35 localhost nova_compute[282193]: 2025-12-06 10:25:35.472 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:25:35 localhost ceph-osd[32665]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2. Dec 6 05:25:36 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e263 e263: 6 total, 6 up, 6 in Dec 6 05:25:37 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e264 e264: 6 total, 6 up, 6 in Dec 6 05:25:38 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:25:38 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e265 e265: 6 total, 6 up, 6 in Dec 6 05:25:40 localhost nova_compute[282193]: 2025-12-06 10:25:40.181 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:25:40 localhost nova_compute[282193]: 2025-12-06 10:25:40.213 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "compute_resources" by 
"nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 05:25:40 localhost nova_compute[282193]: 2025-12-06 10:25:40.213 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 05:25:40 localhost nova_compute[282193]: 2025-12-06 10:25:40.214 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 05:25:40 localhost nova_compute[282193]: 2025-12-06 10:25:40.214 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Auditing locally available compute resources for np0005548789.localdomain (node: np0005548789.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 6 05:25:40 localhost nova_compute[282193]: 2025-12-06 10:25:40.214 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 6 05:25:40 localhost nova_compute[282193]: 2025-12-06 10:25:40.435 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:25:40 localhost nova_compute[282193]: 2025-12-06 10:25:40.474 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 
__log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:25:40 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 6 05:25:40 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/302515343' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 6 05:25:40 localhost nova_compute[282193]: 2025-12-06 10:25:40.662 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 6 05:25:40 localhost nova_compute[282193]: 2025-12-06 10:25:40.731 282197 DEBUG nova.virt.libvirt.driver [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 6 05:25:40 localhost nova_compute[282193]: 2025-12-06 10:25:40.732 282197 DEBUG nova.virt.libvirt.driver [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 6 05:25:40 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e266 e266: 6 total, 6 up, 6 in Dec 6 05:25:40 localhost nova_compute[282193]: 2025-12-06 10:25:40.938 282197 WARNING nova.virt.libvirt.driver [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 6 05:25:40 localhost nova_compute[282193]: 2025-12-06 10:25:40.939 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Hypervisor/Node resource view: name=np0005548789.localdomain free_ram=11138MB free_disk=41.83699035644531GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": 
"1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 6 05:25:40 localhost nova_compute[282193]: 2025-12-06 10:25:40.940 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 05:25:40 localhost nova_compute[282193]: 2025-12-06 10:25:40.940 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 05:25:41 localhost nova_compute[282193]: 2025-12-06 10:25:41.015 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Instance b7ed0a2e-9350-4933-9334-4e5e08d3e6aa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 6 05:25:41 localhost nova_compute[282193]: 2025-12-06 10:25:41.016 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 6 05:25:41 localhost nova_compute[282193]: 2025-12-06 10:25:41.016 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Final resource view: name=np0005548789.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 6 05:25:41 localhost nova_compute[282193]: 2025-12-06 10:25:41.064 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 6 05:25:41 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 6 05:25:41 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.107:0/2812440448' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 6 05:25:41 localhost nova_compute[282193]: 2025-12-06 10:25:41.467 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.403s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 6 05:25:41 localhost nova_compute[282193]: 2025-12-06 10:25:41.473 282197 DEBUG nova.compute.provider_tree [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Inventory has not changed in ProviderTree for provider: 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 6 05:25:41 localhost nova_compute[282193]: 2025-12-06 10:25:41.495 282197 DEBUG nova.scheduler.client.report [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Inventory has not changed for provider 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 6 05:25:41 localhost nova_compute[282193]: 2025-12-06 10:25:41.496 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Compute_service record updated for np0005548789.localdomain:np0005548789.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 6 05:25:41 localhost nova_compute[282193]: 2025-12-06 10:25:41.497 282197 DEBUG 
oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.557s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 05:25:41 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e267 e267: 6 total, 6 up, 6 in Dec 6 05:25:42 localhost nova_compute[282193]: 2025-12-06 10:25:42.492 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:25:42 localhost nova_compute[282193]: 2025-12-06 10:25:42.493 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:25:42 localhost nova_compute[282193]: 2025-12-06 10:25:42.493 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 6 05:25:42 localhost nova_compute[282193]: 2025-12-06 10:25:42.493 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 6 05:25:42 localhost nova_compute[282193]: 2025-12-06 10:25:42.573 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 6 05:25:42 localhost 
nova_compute[282193]: 2025-12-06 10:25:42.574 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquired lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 6 05:25:42 localhost nova_compute[282193]: 2025-12-06 10:25:42.574 282197 DEBUG nova.network.neutron [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 6 05:25:42 localhost nova_compute[282193]: 2025-12-06 10:25:42.574 282197 DEBUG nova.objects.instance [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b7ed0a2e-9350-4933-9334-4e5e08d3e6aa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 6 05:25:42 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e268 e268: 6 total, 6 up, 6 in Dec 6 05:25:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999. Dec 6 05:25:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941. 
Dec 6 05:25:42 localhost podman[335729]: 2025-12-06 10:25:42.908382561 +0000 UTC m=+0.068116233 container health_status 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Dec 6 05:25:42 localhost podman[335730]: 2025-12-06 10:25:42.976449161 +0000 UTC 
m=+0.129807948 container health_status b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Dec 6 05:25:42 localhost podman[335730]: 2025-12-06 10:25:42.987349515 +0000 UTC m=+0.140708362 container exec_died b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Dec 6 05:25:42 localhost systemd[1]: b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941.service: Deactivated successfully. Dec 6 05:25:43 localhost podman[335729]: 2025-12-06 10:25:43.041909873 +0000 UTC m=+0.201643555 container exec_died 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.schema-version=1.0, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3) Dec 6 05:25:43 localhost systemd[1]: 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.service: Deactivated successfully. Dec 6 05:25:43 localhost nova_compute[282193]: 2025-12-06 10:25:43.159 282197 DEBUG nova.network.neutron [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Updating instance_info_cache with network_info: [{"id": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "address": "fa:16:3e:64:77:f3", "network": {"id": "652b6bdc-40ce-45b7-8aa5-3bca79987993", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.162", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "3d603431c0bb4967bafc7a0aa6108bfe", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86fc0b7a-fb", "ovs_interfaceid": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 6 05:25:43 localhost nova_compute[282193]: 2025-12-06 10:25:43.174 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Releasing lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 6 05:25:43 localhost nova_compute[282193]: 2025-12-06 10:25:43.175 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 6 05:25:43 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:25:43 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e269 e269: 6 total, 6 up, 6 in Dec 6 05:25:44 localhost nova_compute[282193]: 2025-12-06 10:25:44.181 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:25:45 localhost nova_compute[282193]: 2025-12-06 10:25:45.461 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:25:45 localhost nova_compute[282193]: 2025-12-06 10:25:45.475 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:25:45 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Dec 6 05:25:45 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/1501997510' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Dec 6 05:25:45 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Dec 6 05:25:45 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1501997510' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Dec 6 05:25:46 localhost nova_compute[282193]: 2025-12-06 10:25:46.181 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:25:46 localhost nova_compute[282193]: 2025-12-06 10:25:46.181 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 6 05:25:46 localhost openstack_network_exporter[243110]: ERROR 10:25:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:25:46 localhost openstack_network_exporter[243110]: ERROR 10:25:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:25:46 localhost openstack_network_exporter[243110]: ERROR 10:25:46 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 6 05:25:46 localhost openstack_network_exporter[243110]: ERROR 10:25:46 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 6 05:25:46 localhost openstack_network_exporter[243110]: Dec 6 05:25:46 localhost openstack_network_exporter[243110]: ERROR 10:25:46 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 6 05:25:46 localhost openstack_network_exporter[243110]: Dec 6 05:25:47 localhost nova_compute[282193]: 2025-12-06 10:25:47.182 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:25:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:25:47.342 160509 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 05:25:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:25:47.343 160509 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 05:25:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:25:47.344 160509 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 05:25:47 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e270 e270: 6 total, 6 up, 6 in Dec 6 05:25:48 localhost nova_compute[282193]: 2025-12-06 10:25:48.182 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:25:48 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:25:49 localhost nova_compute[282193]: 2025-12-06 10:25:49.182 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:25:49 localhost nova_compute[282193]: 2025-12-06 10:25:49.183 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:25:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e. Dec 6 05:25:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094. 
Dec 6 05:25:49 localhost podman[335769]: 2025-12-06 10:25:49.929566754 +0000 UTC m=+0.084205435 container health_status bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:25:49 localhost podman[335769]: 2025-12-06 10:25:49.940945472 +0000 UTC m=+0.095584093 container exec_died 
bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251125, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2) Dec 6 05:25:49 localhost systemd[1]: bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094.service: Deactivated successfully. 
Dec 6 05:25:49 localhost podman[335768]: 2025-12-06 10:25:49.985885995 +0000 UTC m=+0.144688493 container health_status 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, config_id=edpm, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, version=9.6, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., name=ubi9-minimal, io.openshift.expose-services=, architecture=x86_64, distribution-scope=public) Dec 6 05:25:50 localhost podman[335768]: 2025-12-06 10:25:50.003200255 +0000 UTC m=+0.162002793 container exec_died 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': 
'/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, distribution-scope=public, version=9.6, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, name=ubi9-minimal, container_name=openstack_network_exporter, maintainer=Red Hat, Inc.) 
Dec 6 05:25:50 localhost systemd[1]: 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e.service: Deactivated successfully. Dec 6 05:25:50 localhost nova_compute[282193]: 2025-12-06 10:25:50.476 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:25:50 localhost nova_compute[282193]: 2025-12-06 10:25:50.478 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:25:50 localhost nova_compute[282193]: 2025-12-06 10:25:50.478 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 6 05:25:50 localhost nova_compute[282193]: 2025-12-06 10:25:50.478 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:25:50 localhost nova_compute[282193]: 2025-12-06 10:25:50.495 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:25:50 localhost nova_compute[282193]: 2025-12-06 10:25:50.496 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:25:52 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e271 e271: 6 total, 6 up, 6 in Dec 6 05:25:53 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:25:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6. 
Dec 6 05:25:53 localhost podman[335806]: 2025-12-06 10:25:53.920381136 +0000 UTC m=+0.077667556 container health_status 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_managed=true, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Dec 6 05:25:53 localhost podman[241090]: time="2025-12-06T10:25:53Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 6 
05:25:53 localhost podman[241090]: @ - - [06/Dec/2025:10:25:53 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156104 "" "Go-http-client/1.1" Dec 6 05:25:54 localhost podman[335806]: 2025-12-06 10:25:54.053298308 +0000 UTC m=+0.210584728 container exec_died 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, 
tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:25:54 localhost systemd[1]: 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6.service: Deactivated successfully. Dec 6 05:25:54 localhost podman[241090]: @ - - [06/Dec/2025:10:25:53 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19279 "" "Go-http-client/1.1" Dec 6 05:25:55 localhost nova_compute[282193]: 2025-12-06 10:25:55.497 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:25:55 localhost nova_compute[282193]: 2025-12-06 10:25:55.499 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:25:55 localhost nova_compute[282193]: 2025-12-06 10:25:55.499 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 6 05:25:55 localhost nova_compute[282193]: 2025-12-06 10:25:55.500 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:25:55 localhost nova_compute[282193]: 2025-12-06 10:25:55.537 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:25:55 localhost nova_compute[282193]: 2025-12-06 10:25:55.538 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:25:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538. 
Dec 6 05:25:56 localhost podman[335824]: 2025-12-06 10:25:56.927622011 +0000 UTC m=+0.085534026 container health_status d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 6 05:25:56 localhost podman[335824]: 2025-12-06 10:25:56.938317398 +0000 UTC m=+0.096229423 container exec_died d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 6 05:25:56 localhost systemd[1]: d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538.service: Deactivated successfully. 
Dec 6 05:25:58 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:26:00 localhost nova_compute[282193]: 2025-12-06 10:26:00.539 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:26:00 localhost nova_compute[282193]: 2025-12-06 10:26:00.541 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:26:00 localhost nova_compute[282193]: 2025-12-06 10:26:00.541 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 6 05:26:00 localhost nova_compute[282193]: 2025-12-06 10:26:00.541 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:26:00 localhost nova_compute[282193]: 2025-12-06 10:26:00.572 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:26:00 localhost nova_compute[282193]: 2025-12-06 10:26:00.572 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:26:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5. 
Dec 6 05:26:01 localhost podman[335848]: 2025-12-06 10:26:01.908861217 +0000 UTC m=+0.069776813 container health_status 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Dec 6 05:26:01 localhost podman[335848]: 2025-12-06 10:26:01.972234475 +0000 UTC m=+0.133150081 container exec_died 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, 
org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible) Dec 6 05:26:01 localhost systemd[1]: 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.service: Deactivated successfully. 
Dec 6 05:26:02 localhost ovn_controller[154851]: 2025-12-06T10:26:02Z|00517|memory_trim|INFO|Detected inactivity (last active 30000 ms ago): trimming memory Dec 6 05:26:03 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:26:05 localhost nova_compute[282193]: 2025-12-06 10:26:05.573 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:26:05 localhost nova_compute[282193]: 2025-12-06 10:26:05.575 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:26:05 localhost nova_compute[282193]: 2025-12-06 10:26:05.575 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 6 05:26:05 localhost nova_compute[282193]: 2025-12-06 10:26:05.575 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:26:05 localhost nova_compute[282193]: 2025-12-06 10:26:05.619 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:26:05 localhost nova_compute[282193]: 2025-12-06 10:26:05.619 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:26:08 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:26:10 localhost nova_compute[282193]: 2025-12-06 10:26:10.621 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:26:10 localhost nova_compute[282193]: 2025-12-06 10:26:10.623 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:26:10 localhost nova_compute[282193]: 2025-12-06 10:26:10.623 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 6 05:26:10 localhost nova_compute[282193]: 2025-12-06 10:26:10.623 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:26:10 localhost nova_compute[282193]: 2025-12-06 10:26:10.666 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:26:10 localhost nova_compute[282193]: 2025-12-06 10:26:10.667 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:26:10 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 6 05:26:10 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' Dec 6 05:26:12 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' Dec 6 05:26:13 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:26:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999. 
Dec 6 05:26:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941. Dec 6 05:26:13 localhost systemd[1]: tmp-crun.AyJZvv.mount: Deactivated successfully. Dec 6 05:26:13 localhost podman[335959]: 2025-12-06 10:26:13.930165047 +0000 UTC m=+0.089760554 container health_status b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 6 05:26:13 localhost podman[335958]: 2025-12-06 10:26:13.975569115 +0000 UTC m=+0.135203424 container health_status 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': 
['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 6 05:26:13 localhost podman[335958]: 2025-12-06 10:26:13.983119356 +0000 UTC m=+0.142753665 container exec_died 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 
'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:26:13 localhost systemd[1]: 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.service: Deactivated successfully. 
Dec 6 05:26:14 localhost podman[335959]: 2025-12-06 10:26:14.0267832 +0000 UTC m=+0.186378647 container exec_died b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Dec 6 05:26:14 localhost systemd[1]: b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941.service: Deactivated successfully. 
Dec 6 05:26:15 localhost nova_compute[282193]: 2025-12-06 10:26:15.668 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:26:15 localhost nova_compute[282193]: 2025-12-06 10:26:15.670 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:26:15 localhost nova_compute[282193]: 2025-12-06 10:26:15.670 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 6 05:26:15 localhost nova_compute[282193]: 2025-12-06 10:26:15.670 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:26:15 localhost nova_compute[282193]: 2025-12-06 10:26:15.682 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:26:15 localhost nova_compute[282193]: 2025-12-06 10:26:15.683 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:26:15 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e272 e272: 6 total, 6 up, 6 in Dec 6 05:26:16 localhost openstack_network_exporter[243110]: ERROR 10:26:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:26:16 localhost openstack_network_exporter[243110]: ERROR 10:26:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:26:16 localhost openstack_network_exporter[243110]: ERROR 10:26:16 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 6 05:26:16 localhost 
openstack_network_exporter[243110]: ERROR 10:26:16 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 6 05:26:16 localhost openstack_network_exporter[243110]: Dec 6 05:26:16 localhost openstack_network_exporter[243110]: ERROR 10:26:16 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 6 05:26:16 localhost openstack_network_exporter[243110]: Dec 6 05:26:18 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e272 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:26:20 localhost nova_compute[282193]: 2025-12-06 10:26:20.684 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:26:20 localhost nova_compute[282193]: 2025-12-06 10:26:20.686 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:26:20 localhost nova_compute[282193]: 2025-12-06 10:26:20.686 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 6 05:26:20 localhost nova_compute[282193]: 2025-12-06 10:26:20.686 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:26:20 localhost nova_compute[282193]: 2025-12-06 10:26:20.687 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:26:20 localhost nova_compute[282193]: 2025-12-06 10:26:20.689 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:26:20 localhost systemd[1]: Started /usr/bin/podman 
healthcheck run 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e. Dec 6 05:26:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094. Dec 6 05:26:20 localhost podman[336001]: 2025-12-06 10:26:20.927645017 +0000 UTC m=+0.082655647 container health_status 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, release=1755695350, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.openshift.expose-services=, build-date=2025-08-20T13:12:41, config_id=edpm, description=The 
Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, architecture=x86_64, distribution-scope=public, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter) Dec 6 05:26:20 localhost podman[336001]: 2025-12-06 10:26:20.937740696 +0000 UTC m=+0.092751346 container exec_died 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, distribution-scope=public, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-type=git, version=9.6, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, container_name=openstack_network_exporter, config_id=edpm, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9) Dec 6 05:26:20 localhost systemd[1]: 
10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e.service: Deactivated successfully. Dec 6 05:26:20 localhost systemd[1]: tmp-crun.HXT9wa.mount: Deactivated successfully. Dec 6 05:26:20 localhost podman[336002]: 2025-12-06 10:26:20.996941495 +0000 UTC m=+0.148581032 container health_status bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, 
tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true) Dec 6 05:26:21 localhost podman[336002]: 2025-12-06 10:26:21.011140319 +0000 UTC m=+0.162779856 container exec_died bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Dec 6 05:26:21 localhost systemd[1]: 
bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094.service: Deactivated successfully. Dec 6 05:26:22 localhost nova_compute[282193]: 2025-12-06 10:26:22.247 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:26:22 localhost ovn_metadata_agent[160504]: 2025-12-06 10:26:22.247 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=21, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:6c:02', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:a8:2f:0c:cb:a1'}, ipsec=False) old=SB_Global(nb_cfg=20) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:26:22 localhost ovn_metadata_agent[160504]: 2025-12-06 10:26:22.249 160509 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Dec 6 05:26:22 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e273 e273: 6 total, 6 up, 6 in Dec 6 05:26:22 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:26:22.784 263652 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:26:22Z, description=, device_id=b0de0aa3-0513-45a7-a160-43d6176211a5, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=37156bdd-58f7-4be9-babe-eb430466a407, ip_allocation=immediate, mac_address=fa:16:3e:32:c5:fe, name=, 
network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T08:43:55Z, description=, dns_domain=, id=8e238f59-5792-4ff4-95af-f993c8e9e14f, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=3d603431c0bb4967bafc7a0aa6108bfe, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['be7ff830-a3e2-49cd-a05f-63bc528d6d1c'], tags=[], tenant_id=3d603431c0bb4967bafc7a0aa6108bfe, updated_at=2025-12-06T08:44:01Z, vlan_transparent=None, network_id=8e238f59-5792-4ff4-95af-f993c8e9e14f, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3753, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-06T10:26:22Z on network 8e238f59-5792-4ff4-95af-f993c8e9e14f#033[00m Dec 6 05:26:22 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.eve49", "format": "json"} : dispatch Dec 6 05:26:22 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/601ad3a1-8738-4a72-911d-38595abebd4b/a5c5bd7f-cc74-4f03-8395-ee4b429e02e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_601ad3a1-8738-4a72-911d-38595abebd4b", "mon", "allow r"], "format": "json"} : dispatch Dec 6 05:26:22 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw 
path=/volumes/_nogroup/601ad3a1-8738-4a72-911d-38595abebd4b/a5c5bd7f-cc74-4f03-8395-ee4b429e02e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_601ad3a1-8738-4a72-911d-38595abebd4b", "mon", "allow r"], "format": "json"} : dispatch Dec 6 05:26:22 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/601ad3a1-8738-4a72-911d-38595abebd4b/a5c5bd7f-cc74-4f03-8395-ee4b429e02e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_601ad3a1-8738-4a72-911d-38595abebd4b", "mon", "allow r"], "format": "json"}]': finished Dec 6 05:26:23 localhost dnsmasq[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 2 addresses Dec 6 05:26:23 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:26:23 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:26:23 localhost podman[336055]: 2025-12-06 10:26:23.0119524 +0000 UTC m=+0.060037336 container kill dd70518caed84fde347015325d6b35481123abc4a39df99f8bc9422c2468322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true) Dec 6 05:26:23 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:26:23.324 263652 INFO neutron.agent.dhcp.agent [None req-956f96c8-497f-4ab0-b32b-a1d0ba7fc1d8 - - - - - -] DHCP configuration for ports {'37156bdd-58f7-4be9-babe-eb430466a407'} is completed#033[00m Dec 6 05:26:23 localhost ceph-mon[298582]: 
mon.np0005548789@1(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:26:23 localhost nova_compute[282193]: 2025-12-06 10:26:23.547 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:26:23 localhost podman[241090]: time="2025-12-06T10:26:23Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 6 05:26:23 localhost podman[241090]: @ - - [06/Dec/2025:10:26:23 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156104 "" "Go-http-client/1.1" Dec 6 05:26:23 localhost podman[241090]: @ - - [06/Dec/2025:10:26:23 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19282 "" "Go-http-client/1.1" Dec 6 05:26:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6. 
Dec 6 05:26:24 localhost podman[336076]: 2025-12-06 10:26:24.913673953 +0000 UTC m=+0.078467269 container health_status 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd) Dec 6 05:26:24 localhost podman[336076]: 2025-12-06 10:26:24.924877656 +0000 UTC m=+0.089670992 container exec_died 
44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd) Dec 6 05:26:24 localhost systemd[1]: 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6.service: Deactivated successfully. 
Dec 6 05:26:25 localhost nova_compute[282193]: 2025-12-06 10:26:25.728 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:26:25 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.eve48", "format": "json"} : dispatch Dec 6 05:26:26 localhost nova_compute[282193]: 2025-12-06 10:26:26.684 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:26:26 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/601ad3a1-8738-4a72-911d-38595abebd4b/a5c5bd7f-cc74-4f03-8395-ee4b429e02e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_601ad3a1-8738-4a72-911d-38595abebd4b", "mon", "allow r"], "format": "json"} : dispatch Dec 6 05:26:26 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/601ad3a1-8738-4a72-911d-38595abebd4b/a5c5bd7f-cc74-4f03-8395-ee4b429e02e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_601ad3a1-8738-4a72-911d-38595abebd4b", "mon", "allow r"], "format": "json"} : dispatch Dec 6 05:26:26 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/601ad3a1-8738-4a72-911d-38595abebd4b/a5c5bd7f-cc74-4f03-8395-ee4b429e02e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_601ad3a1-8738-4a72-911d-38595abebd4b", "mon", "allow r"], "format": "json"}]': finished Dec 6 05:26:26 localhost sshd[336095]: main: sshd: ssh-rsa algorithm is 
disabled Dec 6 05:26:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538. Dec 6 05:26:27 localhost podman[336097]: 2025-12-06 10:26:27.925452198 +0000 UTC m=+0.084975688 container health_status d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 6 05:26:27 localhost podman[336097]: 2025-12-06 10:26:27.962340657 +0000 UTC m=+0.121864157 container exec_died d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, 
maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 6 05:26:27 localhost systemd[1]: d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538.service: Deactivated successfully. 
Dec 6 05:26:28 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:26:28 localhost sshd[336120]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:26:29 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.eve48", "format": "json"} : dispatch Dec 6 05:26:29 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.eve48"} : dispatch Dec 6 05:26:29 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.eve48"} : dispatch Dec 6 05:26:29 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth rm", "entity": "client.eve48"}]': finished Dec 6 05:26:29 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:26:29.830 263652 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:26:29Z, description=, device_id=a662b0a7-3c9a-4ddd-a647-8d57adfa246a, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=7dff28d2-a5fd-4739-bfc4-65f4b8d30daf, ip_allocation=immediate, mac_address=fa:16:3e:6a:c4:3f, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T08:43:55Z, description=, dns_domain=, id=8e238f59-5792-4ff4-95af-f993c8e9e14f, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=3d603431c0bb4967bafc7a0aa6108bfe, provider:network_type=flat, 
provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['be7ff830-a3e2-49cd-a05f-63bc528d6d1c'], tags=[], tenant_id=3d603431c0bb4967bafc7a0aa6108bfe, updated_at=2025-12-06T08:44:01Z, vlan_transparent=None, network_id=8e238f59-5792-4ff4-95af-f993c8e9e14f, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3771, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-06T10:26:29Z on network 8e238f59-5792-4ff4-95af-f993c8e9e14f#033[00m Dec 6 05:26:30 localhost dnsmasq[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 3 addresses Dec 6 05:26:30 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:26:30 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:26:30 localhost podman[336140]: 2025-12-06 10:26:30.051745915 +0000 UTC m=+0.048277946 container kill dd70518caed84fde347015325d6b35481123abc4a39df99f8bc9422c2468322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3) Dec 6 05:26:30 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:26:30.313 263652 INFO neutron.agent.dhcp.agent [None req-b56938b1-71a8-44d2-b92d-4085c8a23c16 - - - - - -] DHCP configuration for ports {'7dff28d2-a5fd-4739-bfc4-65f4b8d30daf'} is completed#033[00m Dec 6 05:26:30 localhost 
neutron_dhcp_agent[263648]: 2025-12-06 10:26:30.435 263652 INFO neutron.agent.linux.ip_lib [None req-23c56141-c409-4b4d-b753-26898ad4086b - - - - - -] Device tap68ac2d58-05 cannot be used as it has no MAC address#033[00m Dec 6 05:26:30 localhost nova_compute[282193]: 2025-12-06 10:26:30.440 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:26:30 localhost nova_compute[282193]: 2025-12-06 10:26:30.460 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:26:30 localhost kernel: device tap68ac2d58-05 entered promiscuous mode Dec 6 05:26:30 localhost ovn_controller[154851]: 2025-12-06T10:26:30Z|00518|binding|INFO|Claiming lport 68ac2d58-053a-483f-a60b-4eaff9e97708 for this chassis. Dec 6 05:26:30 localhost nova_compute[282193]: 2025-12-06 10:26:30.471 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:26:30 localhost ovn_controller[154851]: 2025-12-06T10:26:30Z|00519|binding|INFO|68ac2d58-053a-483f-a60b-4eaff9e97708: Claiming unknown Dec 6 05:26:30 localhost systemd-udevd[336171]: Network interface NamePolicy= disabled on kernel command line. 
Dec 6 05:26:30 localhost NetworkManager[5973]: [1765016790.4754] manager: (tap68ac2d58-05): new Generic device (/org/freedesktop/NetworkManager/Devices/82) Dec 6 05:26:30 localhost ovn_metadata_agent[160504]: 2025-12-06 10:26:30.485 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-ffefea15-4edb-43a1-a498-6e71b5510aec', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ffefea15-4edb-43a1-a498-6e71b5510aec', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0680825af8a248e9b1a46d099dbba654', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a90362af-d474-4111-8af9-afb889638492, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=68ac2d58-053a-483f-a60b-4eaff9e97708) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:26:30 localhost ovn_metadata_agent[160504]: 2025-12-06 10:26:30.488 160509 INFO neutron.agent.ovn.metadata.agent [-] Port 68ac2d58-053a-483f-a60b-4eaff9e97708 in datapath ffefea15-4edb-43a1-a498-6e71b5510aec bound to our chassis#033[00m Dec 6 05:26:30 localhost ovn_metadata_agent[160504]: 2025-12-06 10:26:30.490 160509 DEBUG neutron.agent.ovn.metadata.agent [-] Port 
b858d185-a8c6-4173-bbf1-f74fdc623898 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Dec 6 05:26:30 localhost ovn_metadata_agent[160504]: 2025-12-06 10:26:30.490 160509 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ffefea15-4edb-43a1-a498-6e71b5510aec, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 6 05:26:30 localhost ovn_metadata_agent[160504]: 2025-12-06 10:26:30.491 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[8f265c37-4e87-416f-8a65-0a9b9673168d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:26:30 localhost ovn_controller[154851]: 2025-12-06T10:26:30Z|00520|binding|INFO|Setting lport 68ac2d58-053a-483f-a60b-4eaff9e97708 ovn-installed in OVS Dec 6 05:26:30 localhost ovn_controller[154851]: 2025-12-06T10:26:30Z|00521|binding|INFO|Setting lport 68ac2d58-053a-483f-a60b-4eaff9e97708 up in Southbound Dec 6 05:26:30 localhost nova_compute[282193]: 2025-12-06 10:26:30.517 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:26:30 localhost nova_compute[282193]: 2025-12-06 10:26:30.554 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:26:30 localhost nova_compute[282193]: 2025-12-06 10:26:30.620 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:26:30 localhost nova_compute[282193]: 2025-12-06 10:26:30.726 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:26:30 localhost 
nova_compute[282193]: 2025-12-06 10:26:30.732 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:26:31 localhost sshd[336205]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:26:31 localhost ovn_metadata_agent[160504]: 2025-12-06 10:26:31.252 160509 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b142a5ef-fbed-4e92-aa78-e3ad080c6370, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '21'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 6 05:26:31 localhost podman[336228]: Dec 6 05:26:31 localhost podman[336228]: 2025-12-06 10:26:31.429280554 +0000 UTC m=+0.077726517 container create 47a2771951379052c135febd1bc2cced535b242e476bebe9cf722bd66e2e1a23 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ffefea15-4edb-43a1-a498-6e71b5510aec, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:26:31 localhost podman[336228]: 2025-12-06 10:26:31.381203565 +0000 UTC m=+0.029649568 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 6 05:26:31 localhost systemd[1]: Started libpod-conmon-47a2771951379052c135febd1bc2cced535b242e476bebe9cf722bd66e2e1a23.scope. Dec 6 05:26:31 localhost systemd[1]: Started libcrun container. 
Dec 6 05:26:31 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c812ec028dfa41ac696173d138a7fa185692627e0817d322fac67fcf1f33ade3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 6 05:26:31 localhost podman[336228]: 2025-12-06 10:26:31.516266803 +0000 UTC m=+0.164712776 container init 47a2771951379052c135febd1bc2cced535b242e476bebe9cf722bd66e2e1a23 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ffefea15-4edb-43a1-a498-6e71b5510aec, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125) Dec 6 05:26:31 localhost podman[336228]: 2025-12-06 10:26:31.527388123 +0000 UTC m=+0.175834086 container start 47a2771951379052c135febd1bc2cced535b242e476bebe9cf722bd66e2e1a23 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ffefea15-4edb-43a1-a498-6e71b5510aec, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:26:31 localhost dnsmasq[336247]: started, version 2.85 cachesize 150 Dec 6 05:26:31 localhost dnsmasq[336247]: DNS service limited to local subnets Dec 6 05:26:31 localhost dnsmasq[336247]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 6 05:26:31 localhost dnsmasq[336247]: warning: no upstream servers configured Dec 
6 05:26:31 localhost dnsmasq-dhcp[336247]: DHCP, static leases only on 10.100.0.0, lease time 1d Dec 6 05:26:31 localhost dnsmasq[336247]: read /var/lib/neutron/dhcp/ffefea15-4edb-43a1-a498-6e71b5510aec/addn_hosts - 0 addresses Dec 6 05:26:31 localhost dnsmasq-dhcp[336247]: read /var/lib/neutron/dhcp/ffefea15-4edb-43a1-a498-6e71b5510aec/host Dec 6 05:26:31 localhost dnsmasq-dhcp[336247]: read /var/lib/neutron/dhcp/ffefea15-4edb-43a1-a498-6e71b5510aec/opts Dec 6 05:26:31 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:26:31.585 263652 INFO neutron.agent.dhcp.agent [None req-69b7eedd-053f-4c4d-a599-b5b626a2a1c3 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:26:31Z, description=, device_id=a662b0a7-3c9a-4ddd-a647-8d57adfa246a, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=cef4873b-e89f-4614-b320-d769cf02dd7f, ip_allocation=immediate, mac_address=fa:16:3e:35:bb:c4, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:26:28Z, description=, dns_domain=, id=ffefea15-4edb-43a1-a498-6e71b5510aec, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-TelemetryAlarmingAPIAdminTest-1539709692-network, port_security_enabled=True, project_id=0680825af8a248e9b1a46d099dbba654, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=45640, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=3765, status=ACTIVE, subnets=['bd32e7e6-ec38-47a8-b41d-1da05e64c200'], tags=[], tenant_id=0680825af8a248e9b1a46d099dbba654, updated_at=2025-12-06T10:26:28Z, vlan_transparent=None, network_id=ffefea15-4edb-43a1-a498-6e71b5510aec, port_security_enabled=False, 
project_id=0680825af8a248e9b1a46d099dbba654, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3773, status=DOWN, tags=[], tenant_id=0680825af8a248e9b1a46d099dbba654, updated_at=2025-12-06T10:26:31Z on network ffefea15-4edb-43a1-a498-6e71b5510aec#033[00m Dec 6 05:26:31 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:26:31.684 263652 INFO neutron.agent.dhcp.agent [None req-ae6b865b-7fbb-4a77-afb3-33f4bb273864 - - - - - -] DHCP configuration for ports {'afad7adb-a4ea-42b9-8e21-3426fb577880'} is completed#033[00m Dec 6 05:26:31 localhost dnsmasq[336247]: read /var/lib/neutron/dhcp/ffefea15-4edb-43a1-a498-6e71b5510aec/addn_hosts - 1 addresses Dec 6 05:26:31 localhost podman[336263]: 2025-12-06 10:26:31.849156549 +0000 UTC m=+0.066840164 container kill 47a2771951379052c135febd1bc2cced535b242e476bebe9cf722bd66e2e1a23 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ffefea15-4edb-43a1-a498-6e71b5510aec, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125) Dec 6 05:26:31 localhost dnsmasq-dhcp[336247]: read /var/lib/neutron/dhcp/ffefea15-4edb-43a1-a498-6e71b5510aec/host Dec 6 05:26:31 localhost dnsmasq-dhcp[336247]: read /var/lib/neutron/dhcp/ffefea15-4edb-43a1-a498-6e71b5510aec/opts Dec 6 05:26:32 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:26:32.148 263652 INFO neutron.agent.dhcp.agent [None req-fc5a40ea-4e7e-47c2-8c2a-f83a7f141851 - - - - - -] DHCP configuration for ports {'cef4873b-e89f-4614-b320-d769cf02dd7f'} is completed#033[00m Dec 6 05:26:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5. Dec 6 05:26:32 localhost podman[336285]: 2025-12-06 10:26:32.424948411 +0000 UTC m=+0.088975021 container health_status 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Dec 6 05:26:32 localhost podman[336285]: 2025-12-06 10:26:32.487670718 +0000 UTC m=+0.151697388 container exec_died 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, 
org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, config_id=ovn_controller, org.label-schema.build-date=20251125) Dec 6 05:26:32 localhost systemd[1]: 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.service: Deactivated successfully. 
Dec 6 05:26:32 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:26:32.614 263652 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:26:31Z, description=, device_id=a662b0a7-3c9a-4ddd-a647-8d57adfa246a, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=cef4873b-e89f-4614-b320-d769cf02dd7f, ip_allocation=immediate, mac_address=fa:16:3e:35:bb:c4, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:26:28Z, description=, dns_domain=, id=ffefea15-4edb-43a1-a498-6e71b5510aec, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-TelemetryAlarmingAPIAdminTest-1539709692-network, port_security_enabled=True, project_id=0680825af8a248e9b1a46d099dbba654, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=45640, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=3765, status=ACTIVE, subnets=['bd32e7e6-ec38-47a8-b41d-1da05e64c200'], tags=[], tenant_id=0680825af8a248e9b1a46d099dbba654, updated_at=2025-12-06T10:26:28Z, vlan_transparent=None, network_id=ffefea15-4edb-43a1-a498-6e71b5510aec, port_security_enabled=False, project_id=0680825af8a248e9b1a46d099dbba654, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3773, status=DOWN, tags=[], tenant_id=0680825af8a248e9b1a46d099dbba654, updated_at=2025-12-06T10:26:31Z on network ffefea15-4edb-43a1-a498-6e71b5510aec#033[00m Dec 6 05:26:32 localhost systemd[1]: tmp-crun.gIozBd.mount: Deactivated successfully. 
Dec 6 05:26:32 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.eve47", "format": "json"} : dispatch Dec 6 05:26:32 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/601ad3a1-8738-4a72-911d-38595abebd4b/a5c5bd7f-cc74-4f03-8395-ee4b429e02e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_601ad3a1-8738-4a72-911d-38595abebd4b", "mon", "allow r"], "format": "json"} : dispatch Dec 6 05:26:32 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/601ad3a1-8738-4a72-911d-38595abebd4b/a5c5bd7f-cc74-4f03-8395-ee4b429e02e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_601ad3a1-8738-4a72-911d-38595abebd4b", "mon", "allow r"], "format": "json"} : dispatch Dec 6 05:26:32 localhost dnsmasq[336247]: read /var/lib/neutron/dhcp/ffefea15-4edb-43a1-a498-6e71b5510aec/addn_hosts - 1 addresses Dec 6 05:26:32 localhost dnsmasq-dhcp[336247]: read /var/lib/neutron/dhcp/ffefea15-4edb-43a1-a498-6e71b5510aec/host Dec 6 05:26:32 localhost dnsmasq-dhcp[336247]: read /var/lib/neutron/dhcp/ffefea15-4edb-43a1-a498-6e71b5510aec/opts Dec 6 05:26:32 localhost podman[336326]: 2025-12-06 10:26:32.852484529 +0000 UTC m=+0.071161906 container kill 47a2771951379052c135febd1bc2cced535b242e476bebe9cf722bd66e2e1a23 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ffefea15-4edb-43a1-a498-6e71b5510aec, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, 
maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true) Dec 6 05:26:33 localhost sshd[336348]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:26:33 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:26:33.138 263652 INFO neutron.agent.dhcp.agent [None req-6c7e05cb-c87c-45cf-9829-f53da3377085 - - - - - -] DHCP configuration for ports {'cef4873b-e89f-4614-b320-d769cf02dd7f'} is completed#033[00m Dec 6 05:26:33 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:26:33 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/601ad3a1-8738-4a72-911d-38595abebd4b/a5c5bd7f-cc74-4f03-8395-ee4b429e02e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_601ad3a1-8738-4a72-911d-38595abebd4b", "mon", "allow r"], "format": "json"}]': finished Dec 6 05:26:33 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Dec 6 05:26:33 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch Dec 6 05:26:33 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw 
path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch Dec 6 05:26:33 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"}]': finished Dec 6 05:26:34 localhost sshd[336350]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:26:35 localhost nova_compute[282193]: 2025-12-06 10:26:35.733 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:26:35 localhost nova_compute[282193]: 2025-12-06 10:26:35.735 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:26:35 localhost nova_compute[282193]: 2025-12-06 10:26:35.735 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 6 05:26:35 localhost nova_compute[282193]: 2025-12-06 10:26:35.736 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:26:35 localhost nova_compute[282193]: 2025-12-06 10:26:35.774 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:26:35 localhost nova_compute[282193]: 2025-12-06 10:26:35.775 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE 
_transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:26:36 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.eve47", "format": "json"} : dispatch Dec 6 05:26:36 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.eve47"} : dispatch Dec 6 05:26:36 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.eve47"} : dispatch Dec 6 05:26:36 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth rm", "entity": "client.eve47"}]': finished Dec 6 05:26:37 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Dec 6 05:26:37 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Dec 6 05:26:37 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Dec 6 05:26:37 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Dec 6 05:26:38 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:26:38 localhost ovn_controller[154851]: 2025-12-06T10:26:38Z|00522|binding|INFO|Releasing lport 4fb81ffd-e198-4628-9bd0-0c0f0c89c33a from this chassis (sb_readonly=0) Dec 6 05:26:38 localhost dnsmasq[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 2 addresses Dec 6 05:26:38 localhost 
dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:26:38 localhost podman[336367]: 2025-12-06 10:26:38.415925204 +0000 UTC m=+0.053280320 container kill dd70518caed84fde347015325d6b35481123abc4a39df99f8bc9422c2468322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Dec 6 05:26:38 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:26:38 localhost nova_compute[282193]: 2025-12-06 10:26:38.428 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:26:39 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Dec 6 05:26:39 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3136332362' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Dec 6 05:26:39 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Dec 6 05:26:39 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/3136332362' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Dec 6 05:26:39 localhost sshd[336388]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:26:40 localhost dnsmasq[336247]: read /var/lib/neutron/dhcp/ffefea15-4edb-43a1-a498-6e71b5510aec/addn_hosts - 0 addresses Dec 6 05:26:40 localhost dnsmasq-dhcp[336247]: read /var/lib/neutron/dhcp/ffefea15-4edb-43a1-a498-6e71b5510aec/host Dec 6 05:26:40 localhost dnsmasq-dhcp[336247]: read /var/lib/neutron/dhcp/ffefea15-4edb-43a1-a498-6e71b5510aec/opts Dec 6 05:26:40 localhost podman[336407]: 2025-12-06 10:26:40.141048048 +0000 UTC m=+0.069046852 container kill 47a2771951379052c135febd1bc2cced535b242e476bebe9cf722bd66e2e1a23 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ffefea15-4edb-43a1-a498-6e71b5510aec, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Dec 6 05:26:40 localhost systemd[1]: tmp-crun.8pw93Y.mount: Deactivated successfully. 
Dec 6 05:26:40 localhost nova_compute[282193]: 2025-12-06 10:26:40.334 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:26:40 localhost kernel: device tap68ac2d58-05 left promiscuous mode Dec 6 05:26:40 localhost ovn_controller[154851]: 2025-12-06T10:26:40Z|00523|binding|INFO|Releasing lport 68ac2d58-053a-483f-a60b-4eaff9e97708 from this chassis (sb_readonly=0) Dec 6 05:26:40 localhost ovn_controller[154851]: 2025-12-06T10:26:40Z|00524|binding|INFO|Setting lport 68ac2d58-053a-483f-a60b-4eaff9e97708 down in Southbound Dec 6 05:26:40 localhost ovn_metadata_agent[160504]: 2025-12-06 10:26:40.345 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-ffefea15-4edb-43a1-a498-6e71b5510aec', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ffefea15-4edb-43a1-a498-6e71b5510aec', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0680825af8a248e9b1a46d099dbba654', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005548789.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a90362af-d474-4111-8af9-afb889638492, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=68ac2d58-053a-483f-a60b-4eaff9e97708) 
old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:26:40 localhost ovn_metadata_agent[160504]: 2025-12-06 10:26:40.347 160509 INFO neutron.agent.ovn.metadata.agent [-] Port 68ac2d58-053a-483f-a60b-4eaff9e97708 in datapath ffefea15-4edb-43a1-a498-6e71b5510aec unbound from our chassis#033[00m Dec 6 05:26:40 localhost ovn_metadata_agent[160504]: 2025-12-06 10:26:40.349 160509 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ffefea15-4edb-43a1-a498-6e71b5510aec, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 6 05:26:40 localhost ovn_metadata_agent[160504]: 2025-12-06 10:26:40.350 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[d59372a6-5103-4169-bb88-82a28f394172]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:26:40 localhost nova_compute[282193]: 2025-12-06 10:26:40.353 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:26:40 localhost nova_compute[282193]: 2025-12-06 10:26:40.815 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:26:41 localhost nova_compute[282193]: 2025-12-06 10:26:41.177 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:26:41 localhost nova_compute[282193]: 2025-12-06 10:26:41.180 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks 
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:26:41 localhost nova_compute[282193]: 2025-12-06 10:26:41.181 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 6 05:26:41 localhost nova_compute[282193]: 2025-12-06 10:26:41.181 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 6 05:26:41 localhost nova_compute[282193]: 2025-12-06 10:26:41.386 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 6 05:26:41 localhost nova_compute[282193]: 2025-12-06 10:26:41.386 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquired lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 6 05:26:41 localhost nova_compute[282193]: 2025-12-06 10:26:41.387 282197 DEBUG nova.network.neutron [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 6 05:26:41 localhost nova_compute[282193]: 2025-12-06 10:26:41.387 282197 DEBUG nova.objects.instance [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b7ed0a2e-9350-4933-9334-4e5e08d3e6aa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m 
Dec 6 05:26:41 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.eve49", "format": "json"} : dispatch Dec 6 05:26:41 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.eve49"} : dispatch Dec 6 05:26:41 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.eve49"} : dispatch Dec 6 05:26:41 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth rm", "entity": "client.eve49"}]': finished Dec 6 05:26:41 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Dec 6 05:26:41 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow r pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch Dec 6 05:26:41 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow r pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch Dec 6 05:26:41 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r 
path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow r pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"}]': finished Dec 6 05:26:41 localhost ovn_controller[154851]: 2025-12-06T10:26:41Z|00525|binding|INFO|Releasing lport 4fb81ffd-e198-4628-9bd0-0c0f0c89c33a from this chassis (sb_readonly=0) Dec 6 05:26:41 localhost nova_compute[282193]: 2025-12-06 10:26:41.561 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:26:41 localhost dnsmasq[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 1 addresses Dec 6 05:26:41 localhost podman[336447]: 2025-12-06 10:26:41.582145451 +0000 UTC m=+0.071760125 container kill dd70518caed84fde347015325d6b35481123abc4a39df99f8bc9422c2468322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:26:41 localhost systemd[1]: tmp-crun.VtVGRz.mount: Deactivated successfully. 
Dec 6 05:26:41 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:26:41 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:26:41 localhost nova_compute[282193]: 2025-12-06 10:26:41.946 282197 DEBUG nova.network.neutron [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Updating instance_info_cache with network_info: [{"id": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "address": "fa:16:3e:64:77:f3", "network": {"id": "652b6bdc-40ce-45b7-8aa5-3bca79987993", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.162", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "3d603431c0bb4967bafc7a0aa6108bfe", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86fc0b7a-fb", "ovs_interfaceid": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 6 05:26:41 localhost nova_compute[282193]: 2025-12-06 10:26:41.977 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Releasing lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 6 05:26:41 localhost nova_compute[282193]: 2025-12-06 10:26:41.978 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 6 05:26:41 localhost nova_compute[282193]: 2025-12-06 10:26:41.978 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:26:42 localhost nova_compute[282193]: 2025-12-06 10:26:42.038 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 05:26:42 localhost nova_compute[282193]: 2025-12-06 10:26:42.038 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 05:26:42 localhost nova_compute[282193]: 2025-12-06 10:26:42.038 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 05:26:42 localhost nova_compute[282193]: 2025-12-06 10:26:42.039 282197 DEBUG nova.compute.resource_tracker [None 
req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Auditing locally available compute resources for np0005548789.localdomain (node: np0005548789.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 6 05:26:42 localhost nova_compute[282193]: 2025-12-06 10:26:42.039 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 6 05:26:42 localhost dnsmasq[336247]: exiting on receipt of SIGTERM Dec 6 05:26:42 localhost podman[336485]: 2025-12-06 10:26:42.152941649 +0000 UTC m=+0.055749405 container kill 47a2771951379052c135febd1bc2cced535b242e476bebe9cf722bd66e2e1a23 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ffefea15-4edb-43a1-a498-6e71b5510aec, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:26:42 localhost systemd[1]: libpod-47a2771951379052c135febd1bc2cced535b242e476bebe9cf722bd66e2e1a23.scope: Deactivated successfully. 
Dec 6 05:26:42 localhost podman[336500]: 2025-12-06 10:26:42.210461978 +0000 UTC m=+0.045639497 container died 47a2771951379052c135febd1bc2cced535b242e476bebe9cf722bd66e2e1a23 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ffefea15-4edb-43a1-a498-6e71b5510aec, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true) Dec 6 05:26:42 localhost podman[336500]: 2025-12-06 10:26:42.289283867 +0000 UTC m=+0.124461356 container cleanup 47a2771951379052c135febd1bc2cced535b242e476bebe9cf722bd66e2e1a23 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ffefea15-4edb-43a1-a498-6e71b5510aec, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Dec 6 05:26:42 localhost systemd[1]: libpod-conmon-47a2771951379052c135febd1bc2cced535b242e476bebe9cf722bd66e2e1a23.scope: Deactivated successfully. 
Dec 6 05:26:42 localhost podman[336508]: 2025-12-06 10:26:42.312382583 +0000 UTC m=+0.127462297 container remove 47a2771951379052c135febd1bc2cced535b242e476bebe9cf722bd66e2e1a23 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ffefea15-4edb-43a1-a498-6e71b5510aec, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Dec 6 05:26:42 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:26:42.350 263652 INFO neutron.agent.dhcp.agent [None req-d0afb8d4-33d0-41be-ad79-ef55f87f4a4d - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:26:42 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:26:42.375 263652 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:26:42 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 6 05:26:42 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.107:0/1615917791' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 6 05:26:42 localhost nova_compute[282193]: 2025-12-06 10:26:42.499 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 6 05:26:42 localhost nova_compute[282193]: 2025-12-06 10:26:42.571 282197 DEBUG nova.virt.libvirt.driver [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 6 05:26:42 localhost systemd[1]: var-lib-containers-storage-overlay-c812ec028dfa41ac696173d138a7fa185692627e0817d322fac67fcf1f33ade3-merged.mount: Deactivated successfully. Dec 6 05:26:42 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-47a2771951379052c135febd1bc2cced535b242e476bebe9cf722bd66e2e1a23-userdata-shm.mount: Deactivated successfully. Dec 6 05:26:42 localhost systemd[1]: run-netns-qdhcp\x2dffefea15\x2d4edb\x2d43a1\x2da498\x2d6e71b5510aec.mount: Deactivated successfully. Dec 6 05:26:42 localhost nova_compute[282193]: 2025-12-06 10:26:42.571 282197 DEBUG nova.virt.libvirt.driver [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 6 05:26:42 localhost nova_compute[282193]: 2025-12-06 10:26:42.757 282197 WARNING nova.virt.libvirt.driver [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 6 05:26:42 localhost nova_compute[282193]: 2025-12-06 10:26:42.759 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Hypervisor/Node resource view: name=np0005548789.localdomain free_ram=11128MB free_disk=41.83699035644531GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": 
"1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 6 05:26:42 localhost nova_compute[282193]: 2025-12-06 10:26:42.760 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 05:26:42 localhost nova_compute[282193]: 2025-12-06 10:26:42.760 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 05:26:42 localhost nova_compute[282193]: 2025-12-06 10:26:42.840 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Instance b7ed0a2e-9350-4933-9334-4e5e08d3e6aa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 6 05:26:42 localhost nova_compute[282193]: 2025-12-06 10:26:42.841 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 6 05:26:42 localhost nova_compute[282193]: 2025-12-06 10:26:42.841 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Final resource view: name=np0005548789.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 6 05:26:42 localhost ceph-mgr[288591]: client.0 ms_handle_reset on v2:172.18.0.108:6810/3354697053 Dec 6 05:26:42 localhost nova_compute[282193]: 2025-12-06 10:26:42.886 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 6 05:26:43 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 6 05:26:43 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.107:0/4068335658' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 6 05:26:43 localhost nova_compute[282193]: 2025-12-06 10:26:43.326 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 6 05:26:43 localhost nova_compute[282193]: 2025-12-06 10:26:43.333 282197 DEBUG nova.compute.provider_tree [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Inventory has not changed in ProviderTree for provider: 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 6 05:26:43 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:26:43 localhost nova_compute[282193]: 2025-12-06 10:26:43.394 282197 DEBUG nova.scheduler.client.report [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Inventory has not changed for provider 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 6 05:26:43 localhost nova_compute[282193]: 2025-12-06 10:26:43.396 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Compute_service record updated for np0005548789.localdomain:np0005548789.localdomain 
_update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 6 05:26:43 localhost nova_compute[282193]: 2025-12-06 10:26:43.396 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.636s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 05:26:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999. Dec 6 05:26:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941. Dec 6 05:26:44 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Dec 6 05:26:44 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Dec 6 05:26:44 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Dec 6 05:26:44 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Dec 6 05:26:44 localhost systemd[1]: tmp-crun.fLHIIJ.mount: Deactivated successfully. Dec 6 05:26:44 localhost ceph-mon[298582]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #61. Immutable memtables: 0. 
Dec 6 05:26:44 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:26:44.928682) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Dec 6 05:26:44 localhost ceph-mon[298582]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 61 Dec 6 05:26:44 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016804928743, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 2661, "num_deletes": 266, "total_data_size": 4649519, "memory_usage": 4706728, "flush_reason": "Manual Compaction"} Dec 6 05:26:44 localhost ceph-mon[298582]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #62: started Dec 6 05:26:44 localhost podman[336572]: 2025-12-06 10:26:44.937999903 +0000 UTC m=+0.096139519 container health_status 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent) Dec 6 05:26:44 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016804946481, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 62, "file_size": 3017632, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 33810, "largest_seqno": 36466, "table_properties": {"data_size": 3007152, "index_size": 6537, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2885, "raw_key_size": 25415, "raw_average_key_size": 22, "raw_value_size": 2984935, "raw_average_value_size": 2613, "num_data_blocks": 280, "num_entries": 1142, "num_filter_entries": 1142, "num_deletions": 266, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; 
use_zstd_dict_trainer=1; ", "creation_time": 1765016677, "oldest_key_time": 1765016677, "file_creation_time": 1765016804, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1ec2a3a6-c7d0-4f1e-9923-5a3b782725ae", "db_session_id": "JMBO5KX1IJCJ8FWC64EX", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}} Dec 6 05:26:44 localhost ceph-mon[298582]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 17864 microseconds, and 7678 cpu microseconds. Dec 6 05:26:44 localhost ceph-mon[298582]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Dec 6 05:26:44 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:26:44.946543) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #62: 3017632 bytes OK Dec 6 05:26:44 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:26:44.946573) [db/memtable_list.cc:519] [default] Level-0 commit table #62 started Dec 6 05:26:44 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:26:44.948089) [db/memtable_list.cc:722] [default] Level-0 commit table #62: memtable #1 done Dec 6 05:26:44 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:26:44.948110) EVENT_LOG_v1 {"time_micros": 1765016804948103, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Dec 6 05:26:44 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:26:44.948136) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Dec 6 05:26:44 localhost ceph-mon[298582]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 4637022, prev total WAL file size 4637022, number of live WAL files 2. 
Dec 6 05:26:44 localhost ceph-mon[298582]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548789/store.db/000058.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 6 05:26:44 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:26:44.949202) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003132383031' seq:72057594037927935, type:22 .. '7061786F73003133303533' seq:0, type:0; will stop at (end) Dec 6 05:26:44 localhost ceph-mon[298582]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00 Dec 6 05:26:44 localhost ceph-mon[298582]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [62(2946KB)], [60(18MB)] Dec 6 05:26:44 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016804949290, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [62], "files_L6": [60], "score": -1, "input_data_size": 22205171, "oldest_snapshot_seqno": -1} Dec 6 05:26:44 localhost podman[336572]: 2025-12-06 10:26:44.976472089 +0000 UTC m=+0.134611665 container exec_died 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 
'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3) Dec 6 05:26:44 localhost systemd[1]: 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.service: Deactivated successfully. 
Dec 6 05:26:45 localhost ceph-mon[298582]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #63: 13940 keys, 20512656 bytes, temperature: kUnknown Dec 6 05:26:45 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016805051167, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 63, "file_size": 20512656, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 20432352, "index_size": 44363, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 34885, "raw_key_size": 375669, "raw_average_key_size": 26, "raw_value_size": 20194722, "raw_average_value_size": 1448, "num_data_blocks": 1642, "num_entries": 13940, "num_filter_entries": 13940, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765015623, "oldest_key_time": 0, "file_creation_time": 1765016804, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1ec2a3a6-c7d0-4f1e-9923-5a3b782725ae", "db_session_id": "JMBO5KX1IJCJ8FWC64EX", "orig_file_number": 63, "seqno_to_time_mapping": "N/A"}} Dec 6 05:26:45 localhost ceph-mon[298582]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Dec 6 05:26:45 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:26:45.051647) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 20512656 bytes Dec 6 05:26:45 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:26:45.053527) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 217.6 rd, 201.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.9, 18.3 +0.0 blob) out(19.6 +0.0 blob), read-write-amplify(14.2) write-amplify(6.8) OK, records in: 14487, records dropped: 547 output_compression: NoCompression Dec 6 05:26:45 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:26:45.053558) EVENT_LOG_v1 {"time_micros": 1765016805053545, "job": 36, "event": "compaction_finished", "compaction_time_micros": 102034, "compaction_time_cpu_micros": 48090, "output_level": 6, "num_output_files": 1, "total_output_size": 20512656, "num_input_records": 14487, "num_output_records": 13940, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Dec 6 05:26:45 localhost ceph-mon[298582]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548789/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 6 05:26:45 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016805054164, "job": 36, "event": "table_file_deletion", "file_number": 62} Dec 6 05:26:45 localhost ceph-mon[298582]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548789/store.db/000060.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 6 05:26:45 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016805057050, "job": 
36, "event": "table_file_deletion", "file_number": 60} Dec 6 05:26:45 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:26:44.949084) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:26:45 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:26:45.057182) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:26:45 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:26:45.057192) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:26:45 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:26:45.057196) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:26:45 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:26:45.057199) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:26:45 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:26:45.057210) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:26:45 localhost podman[336573]: 2025-12-06 10:26:45.071388741 +0000 UTC m=+0.227877667 container health_status b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': 
'/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Dec 6 05:26:45 localhost podman[336573]: 2025-12-06 10:26:45.109238227 +0000 UTC m=+0.265727123 container exec_died b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Dec 6 05:26:45 localhost systemd[1]: b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941.service: Deactivated successfully. 
Dec 6 05:26:45 localhost nova_compute[282193]: 2025-12-06 10:26:45.817 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:26:46 localhost nova_compute[282193]: 2025-12-06 10:26:46.599 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:26:46 localhost openstack_network_exporter[243110]: ERROR 10:26:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:26:46 localhost openstack_network_exporter[243110]: ERROR 10:26:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:26:46 localhost openstack_network_exporter[243110]: ERROR 10:26:46 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 6 05:26:46 localhost openstack_network_exporter[243110]: ERROR 10:26:46 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 6 05:26:46 localhost openstack_network_exporter[243110]: Dec 6 05:26:46 localhost openstack_network_exporter[243110]: ERROR 10:26:46 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 6 05:26:46 localhost openstack_network_exporter[243110]: Dec 6 05:26:47 localhost nova_compute[282193]: 2025-12-06 10:26:47.180 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:26:47 localhost nova_compute[282193]: 2025-12-06 10:26:47.181 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running 
periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:26:47 localhost nova_compute[282193]: 2025-12-06 10:26:47.181 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 6 05:26:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:26:47.343 160509 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 05:26:47 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Dec 6 05:26:47 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch Dec 6 05:26:47 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch Dec 6 05:26:47 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth get-or-create", 
"entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"}]': finished Dec 6 05:26:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:26:47.345 160509 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 05:26:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:26:47.346 160509 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 05:26:48 localhost nova_compute[282193]: 2025-12-06 10:26:48.181 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:26:48 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:26:49 localhost nova_compute[282193]: 2025-12-06 10:26:49.181 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:26:49 localhost nova_compute[282193]: 2025-12-06 10:26:49.182 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running 
periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:26:50 localhost nova_compute[282193]: 2025-12-06 10:26:50.823 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:26:50 localhost nova_compute[282193]: 2025-12-06 10:26:50.825 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:26:50 localhost nova_compute[282193]: 2025-12-06 10:26:50.825 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 6 05:26:50 localhost nova_compute[282193]: 2025-12-06 10:26:50.825 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:26:50 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Dec 6 05:26:50 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Dec 6 05:26:50 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Dec 6 05:26:50 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Dec 6 05:26:50 localhost nova_compute[282193]: 2025-12-06 10:26:50.850 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:26:50 localhost 
nova_compute[282193]: 2025-12-06 10:26:50.850 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:26:51 localhost sshd[336612]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:26:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e. Dec 6 05:26:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094. Dec 6 05:26:51 localhost systemd[1]: tmp-crun.X2s6LK.mount: Deactivated successfully. Dec 6 05:26:51 localhost podman[336614]: 2025-12-06 10:26:51.917370971 +0000 UTC m=+0.078909173 container health_status 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vendor=Red Hat, Inc., name=ubi9-minimal, version=9.6, io.buildah.version=1.33.7, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, config_id=edpm) Dec 6 05:26:51 localhost podman[336615]: 2025-12-06 10:26:51.96971328 +0000 UTC m=+0.130472639 container health_status bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 
'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, tcib_managed=true) Dec 6 05:26:51 localhost podman[336614]: 2025-12-06 10:26:51.987096272 +0000 UTC m=+0.148634474 container exec_died 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, architecture=x86_64, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, io.openshift.tags=minimal rhel9, version=9.6, io.openshift.expose-services=, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) 
Dec 6 05:26:52 localhost systemd[1]: 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e.service: Deactivated successfully. Dec 6 05:26:52 localhost podman[336615]: 2025-12-06 10:26:52.010218779 +0000 UTC m=+0.170978138 container exec_died bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=edpm) Dec 6 05:26:52 localhost 
systemd[1]: bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094.service: Deactivated successfully. Dec 6 05:26:53 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:26:53 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Dec 6 05:26:53 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow r pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch Dec 6 05:26:53 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow r pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch Dec 6 05:26:53 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow r pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"}]': finished Dec 6 05:26:53 localhost podman[241090]: time="2025-12-06T10:26:53Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 6 05:26:53 localhost podman[241090]: @ 
- - [06/Dec/2025:10:26:53 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156104 "" "Go-http-client/1.1" Dec 6 05:26:53 localhost podman[241090]: @ - - [06/Dec/2025:10:26:53 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19281 "" "Go-http-client/1.1" Dec 6 05:26:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6. Dec 6 05:26:55 localhost nova_compute[282193]: 2025-12-06 10:26:55.850 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:26:55 localhost podman[336653]: 2025-12-06 10:26:55.916514619 +0000 UTC m=+0.081530693 container health_status 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', 
'/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible) Dec 6 05:26:55 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e274 e274: 6 total, 6 up, 6 in Dec 6 05:26:55 localhost podman[336653]: 2025-12-06 10:26:55.933363584 +0000 UTC m=+0.098379668 container exec_died 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', 
'/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Dec 6 05:26:55 localhost systemd[1]: 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6.service: Deactivated successfully. Dec 6 05:26:56 localhost nova_compute[282193]: 2025-12-06 10:26:56.177 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:26:57 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Dec 6 05:26:57 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Dec 6 05:26:57 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Dec 6 05:26:57 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Dec 6 05:26:58 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:26:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538. Dec 6 05:26:58 localhost podman[336672]: 2025-12-06 10:26:58.918896227 +0000 UTC m=+0.081685798 container health_status d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 6 05:26:58 localhost podman[336672]: 2025-12-06 10:26:58.9281746 +0000 UTC m=+0.090964221 container exec_died d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Dec 6 05:26:58 localhost systemd[1]: d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538.service: Deactivated successfully. 
Dec 6 05:27:00 localhost nova_compute[282193]: 2025-12-06 10:27:00.853 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:27:00 localhost nova_compute[282193]: 2025-12-06 10:27:00.855 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:27:00 localhost nova_compute[282193]: 2025-12-06 10:27:00.855 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 6 05:27:00 localhost nova_compute[282193]: 2025-12-06 10:27:00.855 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:27:00 localhost nova_compute[282193]: 2025-12-06 10:27:00.888 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:27:00 localhost nova_compute[282193]: 2025-12-06 10:27:00.889 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:27:01 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Dec 6 05:27:01 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch Dec 6 05:27:01 
localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch Dec 6 05:27:01 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"}]': finished Dec 6 05:27:02 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e275 e275: 6 total, 6 up, 6 in Dec 6 05:27:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5. 
Dec 6 05:27:02 localhost podman[336695]: 2025-12-06 10:27:02.901046544 +0000 UTC m=+0.068334729 container health_status 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:27:02 localhost podman[336695]: 2025-12-06 10:27:02.93723081 +0000 UTC m=+0.104518945 container exec_died 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': 
'/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible) Dec 6 05:27:02 localhost systemd[1]: 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.service: Deactivated successfully. 
Dec 6 05:27:03 localhost ceph-mon[298582]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Dec 6 05:27:03 localhost ceph-mon[298582]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.0 total, 600.0 interval#012Cumulative writes: 4936 writes, 36K keys, 4936 commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.06 MB/s#012Cumulative WAL: 4936 writes, 4936 syncs, 1.00 writes per sync, written: 0.06 GB, 0.06 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2414 writes, 12K keys, 2414 commit groups, 1.0 writes per commit group, ingest: 19.94 MB, 0.03 MB/s#012Interval WAL: 2414 writes, 2414 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 150.5 0.31 0.09 18 0.017 0 0 0.0 0.0#012 L6 1/0 19.56 MB 0.0 0.3 0.0 0.3 0.3 0.0 0.0 6.6 184.8 170.0 1.79 0.82 17 0.105 220K 8763 0.0 0.0#012 Sum 1/0 19.56 MB 0.0 0.3 0.0 0.3 0.3 0.1 0.0 7.6 157.8 167.2 2.10 0.91 35 0.060 220K 8763 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.2 0.0 0.2 0.2 0.0 0.0 12.6 172.7 173.8 1.02 0.47 18 0.057 123K 4774 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low 0/0 0.00 KB 0.0 0.3 0.0 0.3 0.3 0.0 0.0 0.0 184.8 170.0 1.79 0.82 17 0.105 220K 8763 0.0 0.0#012High 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 151.7 0.30 0.09 17 0.018 0 0 0.0 0.0#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.7 0.00 0.00 1 0.002 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.045, interval 0.014#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.34 GB write, 0.29 MB/s write, 0.32 GB read, 0.28 MB/s read, 2.1 seconds#012Interval compaction: 0.17 GB write, 0.30 MB/s write, 0.17 GB read, 0.29 MB/s read, 1.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55b608f29350#2 capacity: 304.00 MB usage: 29.19 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 0.000187 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(1507,27.77 MB,9.13637%) FilterBlock(35,637.55 KB,0.204804%) IndexBlock(35,809.30 KB,0.259977%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] ** Dec 6 05:27:03 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:27:04 localhost ceph-mon[298582]: from='mgr.27020 
172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Dec 6 05:27:04 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Dec 6 05:27:04 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Dec 6 05:27:04 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Dec 6 05:27:05 localhost nova_compute[282193]: 2025-12-06 10:27:05.889 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:27:05 localhost nova_compute[282193]: 2025-12-06 10:27:05.890 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:27:07 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Dec 6 05:27:07 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow r pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch Dec 6 05:27:07 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r 
path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow r pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch Dec 6 05:27:07 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow r pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"}]': finished Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.917 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'name': 'test', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'np0005548789.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'hostId': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228 Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.918 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.952 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.latency volume: 1525105336 _stats_to_sample 
/usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.952 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.latency volume: 106716064 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.956 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '85264f60-dfcf-45e9-b02d-6c58e64ed5f8', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1525105336, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:27:07.918804', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '1e082692-d28e-11f0-aaf2-fa163e118844', 'monotonic_time': 13246.168172242, 'message_signature': 
'a3576550d98e6f807c07cff5eafadf7c7c0cd22818514687b86d7f635300b1dd'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 106716064, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:27:07.918804', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '1e083e8e-d28e-11f0-aaf2-fa163e118844', 'monotonic_time': 13246.168172242, 'message_signature': 'a9f7511a3611e3389d35d8f4cb2aa8cef10e32477c9c492065b9025d51eee028'}]}, 'timestamp': '2025-12-06 10:27:07.953341', '_unique_id': '6548ab37caa44896a3e441395b54200d'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.956 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 
10:27:07.956 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.956 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.956 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.956 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.956 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.956 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:27:07.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.956 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.956 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.956 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.956 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.956 12 ERROR oslo_messaging.notify.messaging Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.956 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.956 12 ERROR oslo_messaging.notify.messaging Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.956 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.956 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.956 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.956 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.956 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.956 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.956 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.956 12 
ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.956 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.956 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.956 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.956 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.956 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.956 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.956 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.956 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.956 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.956 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.956 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.956 12 ERROR oslo_messaging.notify.messaging Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.957 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.961 12 DEBUG ceilometer.compute.pollsters [-] 
b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.outgoing.bytes volume: 9770 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.963 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '867a32c1-230e-4261-b478-8db1aa677516', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 9770, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:27:07.957850', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '1e099ffe-d28e-11f0-aaf2-fa163e118844', 'monotonic_time': 13246.207233986, 'message_signature': 'a997ce121e5f9a4c37ee1c1e08f9f2b1ba0a74d229db1c522cb2c6a2b707c048'}]}, 'timestamp': '2025-12-06 
10:27:07.962412', '_unique_id': 'fd6cac175ecf41e8a9cf4b9b18090ead'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.963 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.963 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.963 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.963 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.963 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 
6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.963 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.963 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.963 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.963 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.963 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.963 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.963 12 ERROR oslo_messaging.notify.messaging Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 
10:27:07.963 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.963 12 ERROR oslo_messaging.notify.messaging Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.963 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.963 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.963 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.963 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.963 12 ERROR oslo_messaging.notify.messaging with 
self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.963 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.963 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.963 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.963 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.963 12 ERROR 
oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.963 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.963 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.963 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.963 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.963 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.963 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:27:07.963 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.963 12 ERROR oslo_messaging.notify.messaging Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.964 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.981 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/memory.usage volume: 51.80859375 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.983 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '13a8426c-9603-45e3-8171-b50d577e2adf', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 51.80859375, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'timestamp': '2025-12-06T10:27:07.965003', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 
'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': '1e0ca42e-d28e-11f0-aaf2-fa163e118844', 'monotonic_time': 13246.230851807, 'message_signature': '9ae9493121eb363d7c193745e9be7b1462243af51fdeb7220639d121e39bbdb6'}]}, 'timestamp': '2025-12-06 10:27:07.982169', '_unique_id': '27ed04da12614bfd8726906b5b75d625'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.983 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.983 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.983 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.983 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.983 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.983 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.983 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.983 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 
6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.983 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.983 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.983 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.983 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.983 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.983 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.983 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.983 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.983 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.983 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.983 12 ERROR oslo_messaging.notify.messaging 
self.sock.connect(sa) Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.983 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.983 12 ERROR oslo_messaging.notify.messaging Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.983 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.983 12 ERROR oslo_messaging.notify.messaging Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.983 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.983 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.983 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.983 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.983 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.983 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.983 12 ERROR oslo_messaging.notify.messaging return self._send(target, 
ctxt, message, Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.983 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.983 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.983 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.983 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.983 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.983 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.983 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.983 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.983 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.983 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, 
purpose, retry=retry) Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.983 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.983 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.983 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.983 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.983 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.983 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.983 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.983 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.983 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.983 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.983 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.983 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.983 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.983 12 ERROR oslo_messaging.notify.messaging Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.984 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.996 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:27:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.997 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.998 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '6f68680a-4119-46ff-b712-8d4361e1564d', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:27:07.984659', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '1e0ef1d4-d28e-11f0-aaf2-fa163e118844', 'monotonic_time': 13246.234016774, 'message_signature': 'e49d73d77690333a29182738ccf65be96e5be71646ad7b0e1671650ab08c1c62'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:27:07.984659', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 
'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '1e0f0304-d28e-11f0-aaf2-fa163e118844', 'monotonic_time': 13246.234016774, 'message_signature': 'e277e75aa012cdd8011ecb16f9b22906207857807b2dc8ce793a42b18fd5df94'}]}, 'timestamp': '2025-12-06 10:27:07.997671', '_unique_id': '6c2f61afca1149df81edb49509620390'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.998 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.998 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.998 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.998 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.998 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.998 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.998 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.998 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.998 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:27:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.998 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.998 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.998 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.998 12 ERROR oslo_messaging.notify.messaging Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.998 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.998 12 ERROR oslo_messaging.notify.messaging Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.998 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.998 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.998 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.998 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.998 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.998 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.998 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:27:07.998 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.998 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.998 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.998 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.998 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.998 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.998 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.998 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.998 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.998 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:07.998 12 ERROR oslo_messaging.notify.messaging Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.000 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.000 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.002 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '3753812e-b680-493f-88c6-d943489a863f', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:27:08.000209', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '1e0f797e-d28e-11f0-aaf2-fa163e118844', 'monotonic_time': 13246.207233986, 'message_signature': 'db496a0c6fbf3cba63079e963316c63dda7132712dcb036df0b932fdfdd4ca0b'}]}, 'timestamp': '2025-12-06 10:27:08.000952', '_unique_id': 'a3d80efd36484c88bb4fc367d6d9ee88'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.002 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:27:08.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.002 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.002 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.002 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.002 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.002 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.002 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.002 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.002 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.002 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.002 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.002 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.002 12 ERROR oslo_messaging.notify.messaging Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.002 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.002 12 ERROR oslo_messaging.notify.messaging Dec 6 05:27:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.002 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.002 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.002 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.002 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.002 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 
05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.002 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.002 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.002 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.002 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.002 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:27:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.002 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.002 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.002 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.002 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.002 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.002 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.002 12 ERROR oslo_messaging.notify.messaging Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:27:08.003 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.003 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.005 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '910868b5-f21d-4e75-b267-21964c638e64', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:27:08.003358', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '1e0ff336-d28e-11f0-aaf2-fa163e118844', 'monotonic_time': 13246.207233986, 'message_signature': 'e7483b56e6dca24d12b2da7bf077bc5d3ce5967ec9118a5c908c785fcc29576f'}]}, 'timestamp': '2025-12-06 10:27:08.004165', '_unique_id': 'e69b39fcce444f92a6bd1336c2eb1879'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.005 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.005 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.005 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.005 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.005 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.005 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.005 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.005 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.005 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.005 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.005 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.005 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.005 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.005 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.005 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.005 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.005 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.005 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.005 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.005 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.005 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.005 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.005 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.005 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.005 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.005 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.005 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.005 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.005 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.005 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.005 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.005 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.005 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.005 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.005 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.005 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.005 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.005 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.005 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.005 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.005 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.005 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.005 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.005 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.005 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.005 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.005 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.005 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.005 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.005 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.005 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.005 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.005 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.005 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.006 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.006 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.008 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications.
Payload={'message_id': '7aeb8b29-0a18-4b49-abed-6ae3ded58cc5', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:27:08.006517', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '1e106f00-d28e-11f0-aaf2-fa163e118844', 'monotonic_time': 13246.207233986, 'message_signature': 'bc3b81d29adf8fd47925e97a1bd213a931d19bd0659e10a368015a2624405503'}]}, 'timestamp': '2025-12-06 10:27:08.007025', '_unique_id': 'c067dd640f3d46b3b1b8884a382b6478'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]:
2025-12-06 10:27:08.009 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.009 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/cpu volume: 20050000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.011 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '866c67cf-450d-4b1b-95e8-c1ffb327d00e', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 20050000000, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'timestamp': '2025-12-06T10:27:08.009567', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': '1e10e7aa-d28e-11f0-aaf2-fa163e118844', 'monotonic_time': 13246.230851807, 'message_signature': 'e2e43a21e36ab1063ddf8280b0a601985a5b2f21603fdc9764219295ce0045aa'}]}, 'timestamp': '2025-12-06 10:27:08.010141', '_unique_id': 'e047c119de0241449b84e113662bcb97'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
from exc Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.011 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.011 12 ERROR oslo_messaging.notify.messaging Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.012 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.012 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.incoming.bytes volume: 6809 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.013 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '4838331d-19d6-4a72-b451-defe4480fb48', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 6809, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:27:08.012341', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 
'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '1e11510e-d28e-11f0-aaf2-fa163e118844', 'monotonic_time': 13246.207233986, 'message_signature': '03aee9e81ede44c5cde2ecbc3cce0e2a855e4f23b130d21eec8fb85bbe018fc4'}]}, 'timestamp': '2025-12-06 10:27:08.012842', '_unique_id': 'a88a13d5a99f47259ba531a1c67f7cdd'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.013 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.013 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.013 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.013 12 ERROR oslo_messaging.notify.messaging return 
fun(*args, **kwargs) Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.013 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.013 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.013 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.013 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.013 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.013 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.013 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.013 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.013 12 ERROR oslo_messaging.notify.messaging Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.013 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.013 12 ERROR oslo_messaging.notify.messaging Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.013 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.013 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.013 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.013 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.013 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.013 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.013 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.013 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.013 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.013 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.013 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.013 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.013 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.013 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.013 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 
05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.013 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.013 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.013 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.013 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.013 12 ERROR oslo_messaging.notify.messaging Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.015 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.015 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.requests volume: 1272 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.016 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.017 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'a5c576ae-7e0c-438c-88d5-e9c15c47c69e', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1272, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:27:08.015497', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '1e11d2a0-d28e-11f0-aaf2-fa163e118844', 'monotonic_time': 13246.168172242, 'message_signature': '95f7487fa3ec6a22adec1dec68e55373c1c51ceeba2dea40e94faf9b0943d5ad'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 124, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:27:08.015497', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '1e11e5ba-d28e-11f0-aaf2-fa163e118844', 'monotonic_time': 13246.168172242, 'message_signature': 'd6b0043f3a4608db01c34669cb557b9c2b62293cf9d37ac5db0059bec9079fbe'}]}, 'timestamp': '2025-12-06 10:27:08.016590', '_unique_id': 'c71c50b0111d4c6fa0edb7cdfee6fb03'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.017 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.017 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.017 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.017 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.017 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.017 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.017 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.017 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.017 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:27:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.017 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.017 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.017 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.017 12 ERROR oslo_messaging.notify.messaging Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.017 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.017 12 ERROR oslo_messaging.notify.messaging Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.017 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.017 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.017 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.017 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.017 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.017 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.017 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:27:08.017 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.017 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.017 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.017 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.017 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.017 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time(
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.017 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.017 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.017 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.017 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.018 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.019 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.019 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.021 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '174d54c8-aa41-4235-996d-e43f0fcb406f', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:27:08.019012', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '1e1255c2-d28e-11f0-aaf2-fa163e118844', 'monotonic_time': 13246.234016774, 'message_signature': 'cafb781cff58f6b87a74d3042878a52a11747febdaea610d76d468382cddfa93'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:27:08.019012', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '1e126652-d28e-11f0-aaf2-fa163e118844', 'monotonic_time': 13246.234016774, 'message_signature': '27cb764f76dd424e7dd2f9c677d8d243597a3f59e8f2224cd494568f8aa6b5ff'}]}, 'timestamp': '2025-12-06 10:27:08.019985', '_unique_id': 'be896d7ab607426387285bf3348dd51b'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.021 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.021 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.021 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.021 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.021 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.021 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.021 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.021 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.021 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.021 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.021 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.021 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.021 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.021 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.021 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.021 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.021 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.021 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.021 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.021 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.021 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.021 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.021 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.021 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.021 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.021 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.021 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.021 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.021 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.021 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.021 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.021 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.022 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.022 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.022 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.022 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.024 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '6a52b6b9-8247-4258-9166-0ab31f1721a7', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:27:08.022492', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '1e12ddda-d28e-11f0-aaf2-fa163e118844', 'monotonic_time': 13246.234016774, 'message_signature': 'a8500b8466108073ba409556e7f8237af0673a55f5f5eee4a3f02bf6ed30006d'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:27:08.022492', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '1e12ef8c-d28e-11f0-aaf2-fa163e118844', 'monotonic_time': 13246.234016774, 'message_signature': '14d88c1a159f7c4c48155c35dfd522dabfcfe36d642c8ad71be9f06fc3c9da7b'}]}, 'timestamp': '2025-12-06 10:27:08.023388', '_unique_id': '641d44e08bdb43ff99fbe65592443127'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.024 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.024 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.024 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.024 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.024 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.024 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.024 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.024 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.024 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.024 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.024 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.024 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.024 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.024 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.024 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.024 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.024 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.024 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.024 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.024 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.024 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.024 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.024 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.024 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.024 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.024 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.024 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.024 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.024 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.024 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.024 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.024 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.025 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.025 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.incoming.packets volume: 60 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.027 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'd18f8e21-bf3c-40b4-8507-ea52d6ec0c3c', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 60, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:27:08.025905', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '1e136534-d28e-11f0-aaf2-fa163e118844', 'monotonic_time': 13246.207233986, 'message_signature': 'e1b6685b6d05a2123a39168918db1c57ca3868e0cd27d1fbfbf71ced32d2f66d'}]}, 'timestamp': '2025-12-06 10:27:08.026480', '_unique_id': 'b8d509a304aa4bd4a1822409835808db'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.027 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.027 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.027 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.027 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.027 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.027 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.027 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.027 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.027 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.027 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.027 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.027 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.027 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.027 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.027 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.027 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.027 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.027 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.027 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.027 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.027 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.027 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.027 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.027 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.027 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.027 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.027 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.027 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.027 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.027 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.027 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:27:08 localhost
ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.027 12 ERROR oslo_messaging.notify.messaging Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.028 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.028 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.029 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.030 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '6afd2bfc-d1c7-4ffc-9856-8d2fb7b6388f', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:27:08.029094', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '1e13df50-d28e-11f0-aaf2-fa163e118844', 'monotonic_time': 13246.207233986, 'message_signature': '95f3228ac3bb591ef7ab585010f118e5e1bd04f24c87ca92f217f558d0cd9926'}]}, 'timestamp': '2025-12-06 10:27:08.029581', '_unique_id': '081ae50691d746238765831a19f4a326'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.030 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:27:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.030 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.030 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.030 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.030 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.030 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.030 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.030 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.030 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.030 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.030 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.030 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.030 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.030 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.030 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.030 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.030 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.030 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.030 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.030 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.030 12 ERROR oslo_messaging.notify.messaging Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.030 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.030 12 ERROR oslo_messaging.notify.messaging Dec 6 05:27:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.030 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.030 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.030 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.030 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.030 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.030 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.030 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.030 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.030 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.030 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 
05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.030 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.030 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.030 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.030 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.030 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.030 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.030 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.030 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.030 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.030 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:27:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.030 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.030 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.030 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.030 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.030 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.030 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.030 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.030 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.030 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.030 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.030 12 ERROR oslo_messaging.notify.messaging Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:27:08.031 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.032 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.033 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '3dfed8df-8184-4ead-8400-0ab18247545d', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:27:08.031976', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': 
{'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '1e1450ca-d28e-11f0-aaf2-fa163e118844', 'monotonic_time': 13246.207233986, 'message_signature': '1038b0afa147cc79e9d94f039d5b194cbcd2cbfa9a9d145cefa054fc225339e6'}]}, 'timestamp': '2025-12-06 10:27:08.032475', '_unique_id': 'e4bccc1d08d2481bbebcab8a6167b032'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.033 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.033 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.033 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.033 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.033 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.033 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.033 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.033 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.033 12 ERROR 
oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.033 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.033 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.033 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.033 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.033 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.033 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.033 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.033 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.033 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.033 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 
10:27:08.033 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.033 12 ERROR oslo_messaging.notify.messaging Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.033 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.033 12 ERROR oslo_messaging.notify.messaging Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.033 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.033 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.033 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.033 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.033 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.033 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.033 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 
10:27:08.033 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.033 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.033 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.033 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.033 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.033 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.033 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.033 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.033 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.033 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:27:08.033 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.033 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.033 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.033 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.033 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.033 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.033 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.033 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.033 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.033 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.033 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 
450, in _reraise_as_library_errors Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.033 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.033 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.033 12 ERROR oslo_messaging.notify.messaging Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.034 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.034 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.bytes volume: 35560448 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.035 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.bytes volume: 2154496 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.036 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '613736a5-c6a0-4686-8928-56320f4018d8', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 35560448, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:27:08.034844', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '1e14c08c-d28e-11f0-aaf2-fa163e118844', 'monotonic_time': 13246.168172242, 'message_signature': 'e82f08b18017d22e64c691f236e7510fc4a2c9c7cd4574f7c9fb31a51eff7534'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 2154496, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:27:08.034844', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '1e14d338-d28e-11f0-aaf2-fa163e118844', 'monotonic_time': 13246.168172242, 'message_signature': '544458b8de334c98488eccc64bdf7d12e259889f4e024c29f8bddc8c01bde7d4'}]}, 'timestamp': '2025-12-06 10:27:08.035834', '_unique_id': 'd82aad46034f4e1cbc3b19da414fb196'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.036 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.036 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.036 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.036 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.036 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.036 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.036 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.036 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.036 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:27:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.036 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.036 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.036 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.036 12 ERROR oslo_messaging.notify.messaging Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.036 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.036 12 ERROR oslo_messaging.notify.messaging Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.036 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.036 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.036 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.036 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.036 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.036 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.036 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:27:08.036 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.036 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.036 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.036 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.036 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.036 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.036 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.036 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.036 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.036 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.036 12 ERROR oslo_messaging.notify.messaging Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.037 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.037 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.outgoing.packets volume: 114 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.038 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'a33b9273-b0d8-4ec6-8913-3f96948e6ace', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 114, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:27:08.037723', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '1e152e28-d28e-11f0-aaf2-fa163e118844', 'monotonic_time': 13246.207233986, 'message_signature': 'bfe418d5f95a64b49f9dae0f9d00c21c34099270736acd538f30e0f296a06adb'}]}, 'timestamp': '2025-12-06 10:27:08.038040', '_unique_id': '43fbfe29dfe148e5a0c3d5737a7f2bb9'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.038 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:27:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.038 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.038 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.038 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.038 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.038 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.038 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.038 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.038 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.038 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.038 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.038 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.038 12 ERROR oslo_messaging.notify.messaging Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.038 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.038 12 ERROR oslo_messaging.notify.messaging Dec 6 05:27:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.038 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.038 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.038 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.038 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.038 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 
05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.038 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.038 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.038 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.038 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.038 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:27:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.038 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.038 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.038 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.038 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.038 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.038 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.038 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.038 12 ERROR oslo_messaging.notify.messaging Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:27:08.039 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.039 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.039 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.requests volume: 47 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.039 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.040 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'b263b353-7ebe-4ac3-a368-052a2db4e559', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 47, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:27:08.039462', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '1e1570cc-d28e-11f0-aaf2-fa163e118844', 'monotonic_time': 13246.168172242, 'message_signature': '86bb5e8a630acff906c3657f45d24506a671746a9b34815fcdc4bb02e19dc8a7'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:27:08.039462', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '1e157c8e-d28e-11f0-aaf2-fa163e118844', 'monotonic_time': 13246.168172242, 'message_signature': '3b53eedb589c77e98199109f9c53607e4719293e04f852dcec34f1e7e8757c89'}]}, 'timestamp': '2025-12-06 10:27:08.040033', '_unique_id': '956eb24934c4432e9f74b8de9522a5dc'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.040 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.040 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.040 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.040 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.040 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.040 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.040 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.040 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.040 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.040 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.040 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.040 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.040 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.040 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.040 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.040 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.040 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.040 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.040 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.040 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.040 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.040 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.040 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.040 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.040 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.040 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.040 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.040 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.040 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.040 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.040 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.040 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.040 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.041 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.041 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.042 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '2fdd5db0-e2e8-45c1-a99e-e531c210f691', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:27:08.041383', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '1e15bd20-d28e-11f0-aaf2-fa163e118844', 'monotonic_time': 13246.207233986, 'message_signature': '396b374410854e4c42c9c40eb983ab908ad805a419c3f90d49bc3b500bb3e583'}]}, 'timestamp': '2025-12-06 10:27:08.041724', '_unique_id': '49a4404d795f4032812d4e782b9cebea'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.042 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.042 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.042 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.042 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.042 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.042 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.042 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.042 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.042 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.042 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.042 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.042 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.042 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.042 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.042 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.042 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.042 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.042 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.042 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.042 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.042 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.042 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.042 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.042 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.042 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.042 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.042 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.042 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.042 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.042 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.042 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.042 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.042 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.042 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.042 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.042 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.042 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.042 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.042 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.042 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.042 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.042 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.042 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.042 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.042 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.042 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.042 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.042 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.042 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.042 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.042 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.042 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.042 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.042 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.043 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.043 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.latency volume: 1252245154 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.043 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.latency volume: 27668224 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.044 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'b2c0d7a6-b838-42ca-a8ab-b991cca0f829', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1252245154, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:27:08.043188', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '1e16029e-d28e-11f0-aaf2-fa163e118844', 'monotonic_time': 13246.168172242, 'message_signature': '6fcef92ff38e30a0cc29b0215d600838da06d8d1e4762f08eb64650565f54873'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 27668224, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:27:08.043188', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '1e160cee-d28e-11f0-aaf2-fa163e118844', 'monotonic_time': 13246.168172242, 'message_signature': '35b2b08fbd4c51694302e1d610706aca092a096bde5034efb8093cec5a97aa4f'}]}, 'timestamp': '2025-12-06 10:27:08.043737', '_unique_id': '2bf2305b8892477b9e07462126789069'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.044 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.044 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.044 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.044 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.044 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.044 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.044 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.044 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.044 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.044 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.044 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.044 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.044 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.044 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.044 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.044 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.044 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.044 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.044 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.044 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.044 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.044 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.044 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.044 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.044 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.044 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.044 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.044 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.044 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.044 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.044 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.044 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.045 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.045 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.bytes volume: 389120 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.045 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.046 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'dfbc9585-1232-4dd8-9e9e-fa6739d1917d', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 389120, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:27:08.045198', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '1e16510e-d28e-11f0-aaf2-fa163e118844', 'monotonic_time': 13246.168172242, 'message_signature': '0acb28d51658a390ca2df6b30b302e3a41ab6c009fe358b097e566c3dbc87641'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:27:08.045198', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '1e165b40-d28e-11f0-aaf2-fa163e118844', 'monotonic_time': 13246.168172242, 'message_signature': '2962a1dd3f638e5fceb838a51689dbbcfac8a4cc91c56f64d24394f42f573c75'}]}, 'timestamp': '2025-12-06 10:27:08.045728', '_unique_id': 'd3d0110f5f91422787d5984cea26ca1f'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.046 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.046 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.046 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.046 12 ERROR oslo_messaging.notify.messaging
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.046 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.046 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.046 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.046 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.046 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:27:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.046 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.046 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.046 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.046 12 ERROR oslo_messaging.notify.messaging Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.046 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.046 12 ERROR oslo_messaging.notify.messaging Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.046 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.046 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.046 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.046 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.046 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.046 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.046 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:27:08.046 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.046 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.046 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.046 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.046 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.046 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.046 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.046 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.046 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.046 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.046 12 ERROR oslo_messaging.notify.messaging Dec 6 05:27:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:27:08.047 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 6 05:27:08 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:27:10 localhost nova_compute[282193]: 2025-12-06 10:27:10.891 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:27:10 localhost nova_compute[282193]: 2025-12-06 10:27:10.893 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:27:10 localhost nova_compute[282193]: 
2025-12-06 10:27:10.893 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 6 05:27:10 localhost nova_compute[282193]: 2025-12-06 10:27:10.893 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:27:10 localhost nova_compute[282193]: 2025-12-06 10:27:10.926 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:27:10 localhost nova_compute[282193]: 2025-12-06 10:27:10.928 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:27:10 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e276 e276: 6 total, 6 up, 6 in Dec 6 05:27:11 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Dec 6 05:27:11 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Dec 6 05:27:11 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Dec 6 05:27:11 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Dec 6 05:27:11 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 6 05:27:11 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' Dec 6 05:27:12 localhost 
ovn_controller[154851]: 2025-12-06T10:27:12Z|00526|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory Dec 6 05:27:13 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' Dec 6 05:27:13 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e276 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:27:14 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Dec 6 05:27:14 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch Dec 6 05:27:14 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch Dec 6 05:27:14 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"}]': finished Dec 6 05:27:14 localhost sshd[336807]: main: sshd: ssh-rsa algorithm is disabled Dec 6 
05:27:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999. Dec 6 05:27:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941. Dec 6 05:27:15 localhost nova_compute[282193]: 2025-12-06 10:27:15.928 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:27:15 localhost systemd[1]: tmp-crun.bsbKKc.mount: Deactivated successfully. Dec 6 05:27:15 localhost podman[336809]: 2025-12-06 10:27:15.994677981 +0000 UTC m=+0.140060523 container health_status b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Dec 6 05:27:16 localhost podman[336809]: 2025-12-06 10:27:16.008207254 +0000 UTC m=+0.153589836 container exec_died b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, 
config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Dec 6 05:27:16 localhost podman[336808]: 2025-12-06 10:27:15.960880837 +0000 UTC m=+0.106777074 container health_status 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Dec 6 05:27:16 localhost podman[336808]: 2025-12-06 10:27:16.044426672 +0000 UTC m=+0.190322909 container exec_died 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', 
'/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3) Dec 6 05:27:16 localhost systemd[1]: 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.service: Deactivated successfully. Dec 6 05:27:16 localhost systemd[1]: b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941.service: Deactivated successfully. Dec 6 05:27:16 localhost openstack_network_exporter[243110]: ERROR 10:27:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:27:16 localhost openstack_network_exporter[243110]: ERROR 10:27:16 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 6 05:27:16 localhost openstack_network_exporter[243110]: ERROR 10:27:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:27:16 localhost openstack_network_exporter[243110]: ERROR 10:27:16 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 6 05:27:16 localhost openstack_network_exporter[243110]: Dec 6 05:27:16 localhost openstack_network_exporter[243110]: ERROR 10:27:16 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 6 05:27:16 localhost openstack_network_exporter[243110]: Dec 6 05:27:17 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e277 e277: 6 total, 6 up, 6 in Dec 6 05:27:18 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e277 _set_new_cache_sizes 
cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:27:18 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Dec 6 05:27:18 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Dec 6 05:27:18 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Dec 6 05:27:18 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Dec 6 05:27:18 localhost sshd[336849]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:27:20 localhost nova_compute[282193]: 2025-12-06 10:27:20.931 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:27:20 localhost nova_compute[282193]: 2025-12-06 10:27:20.933 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:27:20 localhost nova_compute[282193]: 2025-12-06 10:27:20.933 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 6 05:27:20 localhost nova_compute[282193]: 2025-12-06 10:27:20.934 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:27:20 localhost nova_compute[282193]: 2025-12-06 10:27:20.966 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:27:20 localhost 
nova_compute[282193]: 2025-12-06 10:27:20.967 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:27:21 localhost sshd[336851]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:27:21 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Dec 6 05:27:21 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow r pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch Dec 6 05:27:21 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow r pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch Dec 6 05:27:21 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow r pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"}]': finished Dec 6 05:27:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e. 
Dec 6 05:27:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094.
Dec 6 05:27:22 localhost podman[336854]: 2025-12-06 10:27:22.922665938 +0000 UTC m=+0.078946765 container health_status bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=edpm)
Dec 6 05:27:22 localhost systemd[1]: tmp-crun.WZ8lCD.mount: Deactivated successfully.
Dec 6 05:27:22 localhost podman[336854]: 2025-12-06 10:27:22.967171958 +0000 UTC m=+0.123452825 container exec_died bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 6 05:27:22 localhost systemd[1]: bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094.service: Deactivated successfully.
Dec 6 05:27:23 localhost podman[336853]: 2025-12-06 10:27:22.974521763 +0000 UTC m=+0.134313277 container health_status 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, vcs-type=git, config_id=edpm, release=1755695350, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6)
Dec 6 05:27:23 localhost podman[336853]: 2025-12-06 10:27:23.054666913 +0000 UTC m=+0.214458417 container exec_died 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, io.buildah.version=1.33.7, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager.
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9) Dec 6 05:27:23 localhost systemd[1]: 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e.service: Deactivated successfully. 
Dec 6 05:27:23 localhost sshd[336890]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:27:23 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:27:23 localhost podman[241090]: time="2025-12-06T10:27:23Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 6 05:27:23 localhost podman[241090]: @ - - [06/Dec/2025:10:27:23 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156104 "" "Go-http-client/1.1" Dec 6 05:27:23 localhost podman[241090]: @ - - [06/Dec/2025:10:27:23 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19275 "" "Go-http-client/1.1" Dec 6 05:27:24 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Dec 6 05:27:24 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Dec 6 05:27:24 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Dec 6 05:27:24 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Dec 6 05:27:25 localhost sshd[336892]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:27:25 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e278 e278: 6 total, 6 up, 6 in Dec 6 05:27:25 localhost nova_compute[282193]: 2025-12-06 10:27:25.969 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4995-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:27:25 localhost nova_compute[282193]: 2025-12-06 10:27:25.971 
282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:27:25 localhost nova_compute[282193]: 2025-12-06 10:27:25.971 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 6 05:27:25 localhost nova_compute[282193]: 2025-12-06 10:27:25.971 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:27:26 localhost nova_compute[282193]: 2025-12-06 10:27:26.014 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:27:26 localhost nova_compute[282193]: 2025-12-06 10:27:26.014 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:27:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6. 
Dec 6 05:27:26 localhost podman[336894]: 2025-12-06 10:27:26.923060494 +0000 UTC m=+0.080321466 container health_status 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:27:26 localhost podman[336894]: 2025-12-06 10:27:26.939142545 +0000 UTC m=+0.096403517 container exec_died 
44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3) Dec 6 05:27:26 localhost systemd[1]: 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6.service: Deactivated successfully. 
Dec 6 05:27:27 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Dec 6 05:27:27 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch Dec 6 05:27:27 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch Dec 6 05:27:28 localhost sshd[336912]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:27:28 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:27:28 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"}]': finished Dec 6 05:27:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538. 
Dec 6 05:27:29 localhost podman[336914]: 2025-12-06 10:27:29.920571993 +0000 UTC m=+0.081152182 container health_status d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Dec 6 05:27:29 localhost podman[336914]: 2025-12-06 10:27:29.929079043 +0000 UTC m=+0.089659192 container exec_died d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 
'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 6 05:27:29 localhost systemd[1]: d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538.service: Deactivated successfully. 
Dec 6 05:27:30 localhost ovn_metadata_agent[160504]: 2025-12-06 10:27:30.519 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=22, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:6c:02', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:a8:2f:0c:cb:a1'}, ipsec=False) old=SB_Global(nb_cfg=21) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:27:30 localhost nova_compute[282193]: 2025-12-06 10:27:30.519 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:27:30 localhost ovn_metadata_agent[160504]: 2025-12-06 10:27:30.521 160509 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Dec 6 05:27:31 localhost nova_compute[282193]: 2025-12-06 10:27:31.050 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:27:31 localhost nova_compute[282193]: 2025-12-06 10:27:31.052 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:27:31 localhost auditd[725]: Audit daemon rotating log files Dec 6 05:27:32 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Dec 6 05:27:32 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": 
"auth rm", "entity": "client.alice_bob"} : dispatch Dec 6 05:27:32 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Dec 6 05:27:32 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Dec 6 05:27:32 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e279 e279: 6 total, 6 up, 6 in Dec 6 05:27:33 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:27:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5. Dec 6 05:27:33 localhost podman[336938]: 2025-12-06 10:27:33.930580519 +0000 UTC m=+0.087834825 container health_status 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible) Dec 6 05:27:33 localhost podman[336938]: 2025-12-06 10:27:33.966786596 +0000 UTC m=+0.124040852 container exec_died 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Dec 6 05:27:33 localhost systemd[1]: 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.service: Deactivated successfully. 
Dec 6 05:27:34 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-956934797", "format": "json"} : dispatch Dec 6 05:27:34 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-956934797", "caps": ["mds", "allow rw path=/volumes/_nogroup/39a0fb23-501d-479d-b543-1f708ea4574a/3e5f0a88-786f-4b8b-a999-6c8148201aa5", "osd", "allow rw pool=manila_data namespace=fsvolumens_39a0fb23-501d-479d-b543-1f708ea4574a", "mon", "allow r"], "format": "json"} : dispatch Dec 6 05:27:34 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-956934797", "caps": ["mds", "allow rw path=/volumes/_nogroup/39a0fb23-501d-479d-b543-1f708ea4574a/3e5f0a88-786f-4b8b-a999-6c8148201aa5", "osd", "allow rw pool=manila_data namespace=fsvolumens_39a0fb23-501d-479d-b543-1f708ea4574a", "mon", "allow r"], "format": "json"} : dispatch Dec 6 05:27:34 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-956934797", "caps": ["mds", "allow rw path=/volumes/_nogroup/39a0fb23-501d-479d-b543-1f708ea4574a/3e5f0a88-786f-4b8b-a999-6c8148201aa5", "osd", "allow rw pool=manila_data namespace=fsvolumens_39a0fb23-501d-479d-b543-1f708ea4574a", "mon", "allow r"], "format": "json"}]': finished Dec 6 05:27:34 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Dec 6 05:27:34 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow 
r path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow r pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch Dec 6 05:27:34 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow r pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch Dec 6 05:27:34 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow r pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"}]': finished Dec 6 05:27:36 localhost nova_compute[282193]: 2025-12-06 10:27:36.053 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:27:36 localhost nova_compute[282193]: 2025-12-06 10:27:36.056 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:27:37 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-956934797", "format": "json"} : dispatch Dec 6 05:27:37 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-956934797"} : dispatch Dec 6 05:27:37 localhost ceph-mon[298582]: from='mgr.27020 ' 
entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-956934797"} : dispatch Dec 6 05:27:37 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-956934797"}]': finished Dec 6 05:27:38 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:27:38 localhost ovn_metadata_agent[160504]: 2025-12-06 10:27:38.522 160509 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b142a5ef-fbed-4e92-aa78-e3ad080c6370, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '22'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 6 05:27:38 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Dec 6 05:27:38 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Dec 6 05:27:38 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Dec 6 05:27:38 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Dec 6 05:27:39 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Dec 6 05:27:39 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/1777930601' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Dec 6 05:27:39 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Dec 6 05:27:39 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1777930601' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Dec 6 05:27:41 localhost nova_compute[282193]: 2025-12-06 10:27:41.056 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4996-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:27:41 localhost nova_compute[282193]: 2025-12-06 10:27:41.058 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:27:41 localhost nova_compute[282193]: 2025-12-06 10:27:41.058 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 6 05:27:41 localhost nova_compute[282193]: 2025-12-06 10:27:41.059 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:27:41 localhost nova_compute[282193]: 2025-12-06 10:27:41.095 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:27:41 localhost nova_compute[282193]: 2025-12-06 10:27:41.095 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:27:41 localhost nova_compute[282193]: 2025-12-06 10:27:41.181 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] 
Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:27:41 localhost nova_compute[282193]: 2025-12-06 10:27:41.181 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 6 05:27:41 localhost nova_compute[282193]: 2025-12-06 10:27:41.182 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 6 05:27:41 localhost nova_compute[282193]: 2025-12-06 10:27:41.304 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 6 05:27:41 localhost nova_compute[282193]: 2025-12-06 10:27:41.305 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquired lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 6 05:27:41 localhost nova_compute[282193]: 2025-12-06 10:27:41.305 282197 DEBUG nova.network.neutron [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 6 05:27:41 localhost nova_compute[282193]: 2025-12-06 10:27:41.305 282197 DEBUG nova.objects.instance [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b7ed0a2e-9350-4933-9334-4e5e08d3e6aa 
obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 6 05:27:41 localhost nova_compute[282193]: 2025-12-06 10:27:41.734 282197 DEBUG nova.network.neutron [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Updating instance_info_cache with network_info: [{"id": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "address": "fa:16:3e:64:77:f3", "network": {"id": "652b6bdc-40ce-45b7-8aa5-3bca79987993", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.162", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "3d603431c0bb4967bafc7a0aa6108bfe", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86fc0b7a-fb", "ovs_interfaceid": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 6 05:27:41 localhost nova_compute[282193]: 2025-12-06 10:27:41.759 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Releasing lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 6 05:27:41 localhost nova_compute[282193]: 2025-12-06 10:27:41.759 282197 DEBUG nova.compute.manager [None 
req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 6 05:27:41 localhost nova_compute[282193]: 2025-12-06 10:27:41.760 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:27:41 localhost nova_compute[282193]: 2025-12-06 10:27:41.780 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 05:27:41 localhost nova_compute[282193]: 2025-12-06 10:27:41.781 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 05:27:41 localhost nova_compute[282193]: 2025-12-06 10:27:41.781 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 05:27:41 localhost nova_compute[282193]: 2025-12-06 10:27:41.781 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Auditing locally available compute resources for np0005548789.localdomain (node: np0005548789.localdomain) update_available_resource 
/usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 6 05:27:41 localhost nova_compute[282193]: 2025-12-06 10:27:41.782 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 6 05:27:41 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Dec 6 05:27:41 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch Dec 6 05:27:41 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch Dec 6 05:27:41 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"}]': finished Dec 6 05:27:42 localhost ceph-mon[298582]: 
mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 6 05:27:42 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/585070944' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 6 05:27:42 localhost nova_compute[282193]: 2025-12-06 10:27:42.224 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 6 05:27:42 localhost nova_compute[282193]: 2025-12-06 10:27:42.300 282197 DEBUG nova.virt.libvirt.driver [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 6 05:27:42 localhost nova_compute[282193]: 2025-12-06 10:27:42.301 282197 DEBUG nova.virt.libvirt.driver [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 6 05:27:42 localhost nova_compute[282193]: 2025-12-06 10:27:42.498 282197 WARNING nova.virt.libvirt.driver [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 6 05:27:42 localhost nova_compute[282193]: 2025-12-06 10:27:42.499 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Hypervisor/Node resource view: name=np0005548789.localdomain free_ram=11112MB free_disk=41.83699035644531GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": 
"1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 6 05:27:42 localhost nova_compute[282193]: 2025-12-06 10:27:42.500 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 05:27:42 localhost nova_compute[282193]: 2025-12-06 10:27:42.500 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 05:27:42 localhost nova_compute[282193]: 2025-12-06 10:27:42.592 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Instance b7ed0a2e-9350-4933-9334-4e5e08d3e6aa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 6 05:27:42 localhost nova_compute[282193]: 2025-12-06 10:27:42.592 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 6 05:27:42 localhost nova_compute[282193]: 2025-12-06 10:27:42.593 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Final resource view: name=np0005548789.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 6 05:27:42 localhost nova_compute[282193]: 2025-12-06 10:27:42.643 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 6 05:27:43 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 6 05:27:43 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.107:0/3710319401' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 6 05:27:43 localhost nova_compute[282193]: 2025-12-06 10:27:43.121 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 6 05:27:43 localhost nova_compute[282193]: 2025-12-06 10:27:43.127 282197 DEBUG nova.compute.provider_tree [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Inventory has not changed in ProviderTree for provider: 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 6 05:27:43 localhost nova_compute[282193]: 2025-12-06 10:27:43.153 282197 DEBUG nova.scheduler.client.report [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Inventory has not changed for provider 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 6 05:27:43 localhost nova_compute[282193]: 2025-12-06 10:27:43.155 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Compute_service record updated for np0005548789.localdomain:np0005548789.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 6 05:27:43 localhost nova_compute[282193]: 2025-12-06 10:27:43.156 282197 DEBUG 
oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.656s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 05:27:43 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:27:44 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-956934797", "format": "json"} : dispatch Dec 6 05:27:45 localhost nova_compute[282193]: 2025-12-06 10:27:45.152 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:27:45 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-956934797", "caps": ["mds", "allow rw path=/volumes/_nogroup/5588df5a-ffa6-47fc-b21c-3740e4fe0c02/c27ff085-a577-4abe-b26a-9dc51132e3c0", "osd", "allow rw pool=manila_data namespace=fsvolumens_5588df5a-ffa6-47fc-b21c-3740e4fe0c02", "mon", "allow r"], "format": "json"} : dispatch Dec 6 05:27:45 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-956934797", "caps": ["mds", "allow rw path=/volumes/_nogroup/5588df5a-ffa6-47fc-b21c-3740e4fe0c02/c27ff085-a577-4abe-b26a-9dc51132e3c0", "osd", "allow rw pool=manila_data namespace=fsvolumens_5588df5a-ffa6-47fc-b21c-3740e4fe0c02", "mon", "allow r"], "format": "json"} : dispatch Dec 6 05:27:45 localhost ceph-mon[298582]: 
from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-956934797", "caps": ["mds", "allow rw path=/volumes/_nogroup/5588df5a-ffa6-47fc-b21c-3740e4fe0c02/c27ff085-a577-4abe-b26a-9dc51132e3c0", "osd", "allow rw pool=manila_data namespace=fsvolumens_5588df5a-ffa6-47fc-b21c-3740e4fe0c02", "mon", "allow r"], "format": "json"}]': finished Dec 6 05:27:45 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Dec 6 05:27:45 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Dec 6 05:27:45 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Dec 6 05:27:45 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Dec 6 05:27:46 localhost nova_compute[282193]: 2025-12-06 10:27:46.096 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:27:46 localhost nova_compute[282193]: 2025-12-06 10:27:46.098 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:27:46 localhost nova_compute[282193]: 2025-12-06 10:27:46.098 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 6 05:27:46 localhost nova_compute[282193]: 2025-12-06 10:27:46.099 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition 
/usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:27:46 localhost nova_compute[282193]: 2025-12-06 10:27:46.137 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:27:46 localhost nova_compute[282193]: 2025-12-06 10:27:46.138 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:27:46 localhost nova_compute[282193]: 2025-12-06 10:27:46.181 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:27:46 localhost openstack_network_exporter[243110]: ERROR 10:27:46 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 6 05:27:46 localhost openstack_network_exporter[243110]: ERROR 10:27:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:27:46 localhost openstack_network_exporter[243110]: ERROR 10:27:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:27:46 localhost openstack_network_exporter[243110]: ERROR 10:27:46 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 6 05:27:46 localhost openstack_network_exporter[243110]: Dec 6 05:27:46 localhost openstack_network_exporter[243110]: ERROR 10:27:46 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 6 05:27:46 localhost openstack_network_exporter[243110]: Dec 6 05:27:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999. 
Dec 6 05:27:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941. Dec 6 05:27:46 localhost podman[337008]: 2025-12-06 10:27:46.936427356 +0000 UTC m=+0.089497327 container health_status 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes 
Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:27:46 localhost podman[337008]: 2025-12-06 10:27:46.944221824 +0000 UTC m=+0.097291795 container exec_died 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Dec 6 05:27:46 localhost systemd[1]: 
5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.service: Deactivated successfully. Dec 6 05:27:47 localhost systemd[1]: tmp-crun.lsqYPN.mount: Deactivated successfully. Dec 6 05:27:47 localhost podman[337009]: 2025-12-06 10:27:47.0484498 +0000 UTC m=+0.198185689 container health_status b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Dec 6 05:27:47 localhost podman[337009]: 2025-12-06 10:27:47.061637943 +0000 UTC m=+0.211373802 container exec_died b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 
'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Dec 6 05:27:47 localhost systemd[1]: b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941.service: Deactivated successfully. Dec 6 05:27:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:27:47.344 160509 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 05:27:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:27:47.345 160509 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 05:27:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:27:47.346 160509 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 05:27:47 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-956934797", "format": "json"} : dispatch Dec 6 05:27:47 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-956934797"} : dispatch Dec 6 05:27:47 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", 
"entity": "client.tempest-cephx-id-956934797"} : dispatch Dec 6 05:27:47 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-956934797"}]': finished Dec 6 05:27:48 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:27:48 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Dec 6 05:27:48 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow r pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch Dec 6 05:27:48 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow r pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch Dec 6 05:27:48 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow r pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"}]': finished Dec 6 05:27:49 localhost nova_compute[282193]: 2025-12-06 
10:27:49.180 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:27:49 localhost nova_compute[282193]: 2025-12-06 10:27:49.181 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:27:49 localhost nova_compute[282193]: 2025-12-06 10:27:49.182 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:27:49 localhost nova_compute[282193]: 2025-12-06 10:27:49.182 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 6 05:27:51 localhost nova_compute[282193]: 2025-12-06 10:27:51.188 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:27:51 localhost nova_compute[282193]: 2025-12-06 10:27:51.189 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:27:51 localhost nova_compute[282193]: 2025-12-06 10:27:51.190 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:27:51 localhost nova_compute[282193]: 2025-12-06 10:27:51.191 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:27:51 localhost nova_compute[282193]: 2025-12-06 10:27:51.191 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5052 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 6 05:27:51 localhost nova_compute[282193]: 2025-12-06 10:27:51.191 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:27:51 localhost nova_compute[282193]: 2025-12-06 10:27:51.192 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:27:51 localhost nova_compute[282193]: 2025-12-06 10:27:51.194 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 
20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:27:51 localhost ceph-osd[31726]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2. Dec 6 05:27:51 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Dec 6 05:27:51 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Dec 6 05:27:51 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Dec 6 05:27:51 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Dec 6 05:27:53 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:27:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e. Dec 6 05:27:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094. Dec 6 05:27:53 localhost podman[241090]: time="2025-12-06T10:27:53Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 6 05:27:53 localhost podman[241090]: @ - - [06/Dec/2025:10:27:53 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156104 "" "Go-http-client/1.1" Dec 6 05:27:53 localhost systemd[1]: tmp-crun.0nafcR.mount: Deactivated successfully. Dec 6 05:27:54 localhost systemd[1]: tmp-crun.HBBVU2.mount: Deactivated successfully. 
Dec 6 05:27:54 localhost podman[337048]: 2025-12-06 10:27:54.022203466 +0000 UTC m=+0.171150073 container health_status 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-type=git, release=1755695350, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible) Dec 6 05:27:54 localhost podman[337048]: 2025-12-06 10:27:54.033543422 +0000 UTC m=+0.182489999 container exec_died 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, name=ubi9-minimal, release=1755695350, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=minimal rhel9, version=9.6, 
build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., config_id=edpm, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., architecture=x86_64, container_name=openstack_network_exporter, vcs-type=git) Dec 6 05:27:54 localhost systemd[1]: 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e.service: Deactivated successfully. Dec 6 05:27:54 localhost podman[337049]: 2025-12-06 10:27:53.985002429 +0000 UTC m=+0.131372677 container health_status bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3) Dec 6 05:27:54 localhost podman[241090]: @ - - [06/Dec/2025:10:27:53 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19277 "" "Go-http-client/1.1" Dec 6 05:27:54 localhost podman[337049]: 2025-12-06 10:27:54.11719889 +0000 UTC m=+0.263569218 container exec_died bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute) Dec 6 05:27:54 localhost systemd[1]: bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094.service: Deactivated successfully. Dec 6 05:27:54 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-956934797", "format": "json"} : dispatch Dec 6 05:27:54 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-956934797", "caps": ["mds", "allow rw path=/volumes/_nogroup/1591869a-dd1e-4430-a3b9-ab5fdaf392f8/647cfa54-e318-4c22-b8b7-7c0ead2d396a", "osd", "allow rw pool=manila_data namespace=fsvolumens_1591869a-dd1e-4430-a3b9-ab5fdaf392f8", "mon", "allow r"], "format": "json"} : dispatch Dec 6 05:27:54 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-956934797", "caps": ["mds", "allow rw path=/volumes/_nogroup/1591869a-dd1e-4430-a3b9-ab5fdaf392f8/647cfa54-e318-4c22-b8b7-7c0ead2d396a", "osd", "allow rw pool=manila_data namespace=fsvolumens_1591869a-dd1e-4430-a3b9-ab5fdaf392f8", "mon", "allow r"], "format": "json"} : dispatch Dec 6 05:27:54 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-956934797", "caps": ["mds", "allow rw path=/volumes/_nogroup/1591869a-dd1e-4430-a3b9-ab5fdaf392f8/647cfa54-e318-4c22-b8b7-7c0ead2d396a", "osd", "allow rw pool=manila_data namespace=fsvolumens_1591869a-dd1e-4430-a3b9-ab5fdaf392f8", "mon", "allow r"], "format": "json"}]': finished Dec 6 
05:27:54 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Dec 6 05:27:54 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch Dec 6 05:27:54 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch Dec 6 05:27:54 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"}]': finished Dec 6 05:27:56 localhost nova_compute[282193]: 2025-12-06 10:27:56.195 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:27:56 localhost nova_compute[282193]: 2025-12-06 10:27:56.197 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:27:56 localhost nova_compute[282193]: 2025-12-06 10:27:56.198 282197 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 6 05:27:56 localhost nova_compute[282193]: 2025-12-06 10:27:56.198 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:27:56 localhost nova_compute[282193]: 2025-12-06 10:27:56.231 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:27:56 localhost nova_compute[282193]: 2025-12-06 10:27:56.232 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:27:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6. Dec 6 05:27:57 localhost systemd[1]: tmp-crun.RH67zk.mount: Deactivated successfully. 
Dec 6 05:27:57 localhost podman[337085]: 2025-12-06 10:27:57.925707284 +0000 UTC m=+0.086874157 container health_status 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd) Dec 6 05:27:57 localhost podman[337085]: 2025-12-06 10:27:57.940407115 +0000 UTC m=+0.101574028 container exec_died 
44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Dec 6 05:27:57 localhost systemd[1]: 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6.service: Deactivated successfully. 
Dec 6 05:27:57 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-956934797", "format": "json"} : dispatch Dec 6 05:27:57 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-956934797"} : dispatch Dec 6 05:27:57 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-956934797"} : dispatch Dec 6 05:27:57 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-956934797"}]': finished Dec 6 05:27:57 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Dec 6 05:27:57 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Dec 6 05:27:57 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Dec 6 05:27:57 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Dec 6 05:27:58 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:28:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538. 
Dec 6 05:28:00 localhost podman[337105]: 2025-12-06 10:28:00.925582905 +0000 UTC m=+0.082068049 container health_status d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Dec 6 05:28:00 localhost podman[337105]: 2025-12-06 10:28:00.936348275 +0000 UTC m=+0.092833399 container exec_died d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 
'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 6 05:28:00 localhost systemd[1]: d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538.service: Deactivated successfully. 
Dec 6 05:28:01 localhost nova_compute[282193]: 2025-12-06 10:28:01.233 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:28:01 localhost nova_compute[282193]: 2025-12-06 10:28:01.234 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:28:01 localhost nova_compute[282193]: 2025-12-06 10:28:01.234 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 6 05:28:01 localhost nova_compute[282193]: 2025-12-06 10:28:01.235 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:28:01 localhost nova_compute[282193]: 2025-12-06 10:28:01.271 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:28:01 localhost nova_compute[282193]: 2025-12-06 10:28:01.272 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:28:01 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-956934797", "format": "json"} : dispatch Dec 6 05:28:01 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-956934797", "caps": ["mds", "allow rw path=/volumes/_nogroup/b113f2fd-9e34-49b1-8d3c-8099c23d423a/eeefc94a-19bd-4e4c-8e5e-d71eabbfaa73", "osd", "allow rw pool=manila_data namespace=fsvolumens_b113f2fd-9e34-49b1-8d3c-8099c23d423a", "mon", "allow r"], "format": 
"json"} : dispatch Dec 6 05:28:01 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-956934797", "caps": ["mds", "allow rw path=/volumes/_nogroup/b113f2fd-9e34-49b1-8d3c-8099c23d423a/eeefc94a-19bd-4e4c-8e5e-d71eabbfaa73", "osd", "allow rw pool=manila_data namespace=fsvolumens_b113f2fd-9e34-49b1-8d3c-8099c23d423a", "mon", "allow r"], "format": "json"} : dispatch Dec 6 05:28:01 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-956934797", "caps": ["mds", "allow rw path=/volumes/_nogroup/b113f2fd-9e34-49b1-8d3c-8099c23d423a/eeefc94a-19bd-4e4c-8e5e-d71eabbfaa73", "osd", "allow rw pool=manila_data namespace=fsvolumens_b113f2fd-9e34-49b1-8d3c-8099c23d423a", "mon", "allow r"], "format": "json"}]': finished Dec 6 05:28:01 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Dec 6 05:28:01 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow r pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch Dec 6 05:28:01 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow r pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch Dec 6 05:28:01 
localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow r pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"}]': finished Dec 6 05:28:03 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:28:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5. Dec 6 05:28:04 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-956934797", "format": "json"} : dispatch Dec 6 05:28:04 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-956934797"} : dispatch Dec 6 05:28:04 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-956934797"} : dispatch Dec 6 05:28:04 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-956934797"}]': finished Dec 6 05:28:04 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Dec 6 05:28:04 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Dec 6 05:28:04 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' 
cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Dec 6 05:28:04 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Dec 6 05:28:04 localhost systemd[1]: tmp-crun.z9fq4k.mount: Deactivated successfully. Dec 6 05:28:04 localhost podman[337128]: 2025-12-06 10:28:04.916148941 +0000 UTC m=+0.081854983 container health_status 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, container_name=ovn_controller) Dec 6 05:28:04 localhost podman[337128]: 2025-12-06 10:28:04.958124599 +0000 UTC m=+0.123830691 container exec_died 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 
(image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Dec 6 05:28:04 localhost systemd[1]: 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.service: Deactivated successfully. 
Dec 6 05:28:06 localhost nova_compute[282193]: 2025-12-06 10:28:06.272 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:28:06 localhost nova_compute[282193]: 2025-12-06 10:28:06.274 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:28:06 localhost nova_compute[282193]: 2025-12-06 10:28:06.274 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 6 05:28:06 localhost nova_compute[282193]: 2025-12-06 10:28:06.275 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:28:06 localhost nova_compute[282193]: 2025-12-06 10:28:06.318 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:28:06 localhost nova_compute[282193]: 2025-12-06 10:28:06.319 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:28:06 localhost sshd[337153]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:28:07 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-956934797", "format": "json"} : dispatch Dec 6 05:28:07 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-956934797", "caps": ["mds", "allow rw path=/volumes/_nogroup/b113f2fd-9e34-49b1-8d3c-8099c23d423a/eeefc94a-19bd-4e4c-8e5e-d71eabbfaa73", "osd", "allow rw pool=manila_data 
namespace=fsvolumens_b113f2fd-9e34-49b1-8d3c-8099c23d423a", "mon", "allow r"], "format": "json"} : dispatch Dec 6 05:28:07 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-956934797", "caps": ["mds", "allow rw path=/volumes/_nogroup/b113f2fd-9e34-49b1-8d3c-8099c23d423a/eeefc94a-19bd-4e4c-8e5e-d71eabbfaa73", "osd", "allow rw pool=manila_data namespace=fsvolumens_b113f2fd-9e34-49b1-8d3c-8099c23d423a", "mon", "allow r"], "format": "json"} : dispatch Dec 6 05:28:07 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-956934797", "caps": ["mds", "allow rw path=/volumes/_nogroup/b113f2fd-9e34-49b1-8d3c-8099c23d423a/eeefc94a-19bd-4e4c-8e5e-d71eabbfaa73", "osd", "allow rw pool=manila_data namespace=fsvolumens_b113f2fd-9e34-49b1-8d3c-8099c23d423a", "mon", "allow r"], "format": "json"}]': finished Dec 6 05:28:08 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:28:08 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Dec 6 05:28:08 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch Dec 6 05:28:08 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": 
"client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch Dec 6 05:28:08 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"}]': finished Dec 6 05:28:10 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-956934797", "format": "json"} : dispatch Dec 6 05:28:10 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-956934797"} : dispatch Dec 6 05:28:10 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-956934797"} : dispatch Dec 6 05:28:10 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-956934797"}]': finished Dec 6 05:28:11 localhost nova_compute[282193]: 2025-12-06 10:28:11.320 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4996-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:28:11 localhost nova_compute[282193]: 2025-12-06 10:28:11.322 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:28:11 localhost nova_compute[282193]: 2025-12-06 10:28:11.323 282197 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 6 05:28:11 localhost nova_compute[282193]: 2025-12-06 10:28:11.323 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:28:11 localhost nova_compute[282193]: 2025-12-06 10:28:11.487 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:28:11 localhost nova_compute[282193]: 2025-12-06 10:28:11.488 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:28:11 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Dec 6 05:28:11 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Dec 6 05:28:11 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Dec 6 05:28:11 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Dec 6 05:28:13 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 6 05:28:13 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' Dec 6 05:28:13 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 
318767104 Dec 6 05:28:14 localhost sshd[337239]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:28:14 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-956934797", "format": "json"} : dispatch Dec 6 05:28:14 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-956934797", "caps": ["mds", "allow rw path=/volumes/_nogroup/b113f2fd-9e34-49b1-8d3c-8099c23d423a/eeefc94a-19bd-4e4c-8e5e-d71eabbfaa73", "osd", "allow rw pool=manila_data namespace=fsvolumens_b113f2fd-9e34-49b1-8d3c-8099c23d423a", "mon", "allow r"], "format": "json"} : dispatch Dec 6 05:28:14 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-956934797", "caps": ["mds", "allow rw path=/volumes/_nogroup/b113f2fd-9e34-49b1-8d3c-8099c23d423a/eeefc94a-19bd-4e4c-8e5e-d71eabbfaa73", "osd", "allow rw pool=manila_data namespace=fsvolumens_b113f2fd-9e34-49b1-8d3c-8099c23d423a", "mon", "allow r"], "format": "json"} : dispatch Dec 6 05:28:14 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-956934797", "caps": ["mds", "allow rw path=/volumes/_nogroup/b113f2fd-9e34-49b1-8d3c-8099c23d423a/eeefc94a-19bd-4e4c-8e5e-d71eabbfaa73", "osd", "allow rw pool=manila_data namespace=fsvolumens_b113f2fd-9e34-49b1-8d3c-8099c23d423a", "mon", "allow r"], "format": "json"}]': finished Dec 6 05:28:15 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Dec 6 05:28:15 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' 
cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow r pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch Dec 6 05:28:15 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow r pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch Dec 6 05:28:15 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow r pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"}]': finished Dec 6 05:28:16 localhost nova_compute[282193]: 2025-12-06 10:28:16.489 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:28:16 localhost nova_compute[282193]: 2025-12-06 10:28:16.491 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:28:16 localhost nova_compute[282193]: 2025-12-06 10:28:16.492 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 6 05:28:16 localhost nova_compute[282193]: 2025-12-06 10:28:16.492 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE 
_transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:28:16 localhost nova_compute[282193]: 2025-12-06 10:28:16.526 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:28:16 localhost nova_compute[282193]: 2025-12-06 10:28:16.527 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:28:16 localhost openstack_network_exporter[243110]: ERROR 10:28:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:28:16 localhost openstack_network_exporter[243110]: ERROR 10:28:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:28:16 localhost openstack_network_exporter[243110]: ERROR 10:28:16 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 6 05:28:16 localhost openstack_network_exporter[243110]: ERROR 10:28:16 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 6 05:28:16 localhost openstack_network_exporter[243110]: Dec 6 05:28:16 localhost openstack_network_exporter[243110]: ERROR 10:28:16 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 6 05:28:16 localhost openstack_network_exporter[243110]: Dec 6 05:28:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999. Dec 6 05:28:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941. 
Dec 6 05:28:17 localhost podman[337241]: 2025-12-06 10:28:17.775254298 +0000 UTC m=+0.102042801 container health_status 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Dec 6 05:28:17 localhost podman[337241]: 2025-12-06 10:28:17.809259711 +0000 UTC 
m=+0.136048214 container exec_died 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true) Dec 6 05:28:17 localhost systemd[1]: 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.service: Deactivated successfully. 
Dec 6 05:28:17 localhost podman[337242]: 2025-12-06 10:28:17.81899549 +0000 UTC m=+0.145566457 container health_status b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Dec 6 05:28:17 localhost podman[337242]: 2025-12-06 10:28:17.903242945 +0000 UTC m=+0.229813892 container exec_died b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': 
['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Dec 6 05:28:17 localhost systemd[1]: b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941.service: Deactivated successfully. Dec 6 05:28:18 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' Dec 6 05:28:18 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-956934797", "format": "json"} : dispatch Dec 6 05:28:18 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-956934797"} : dispatch Dec 6 05:28:18 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-956934797"} : dispatch Dec 6 05:28:18 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-956934797"}]': finished Dec 6 05:28:18 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Dec 6 05:28:18 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Dec 6 05:28:18 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Dec 6 05:28:18 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Dec 6 05:28:18 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e279 
_set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:28:21 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-956934797", "format": "json"} : dispatch Dec 6 05:28:21 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-956934797", "caps": ["mds", "allow rw path=/volumes/_nogroup/b113f2fd-9e34-49b1-8d3c-8099c23d423a/eeefc94a-19bd-4e4c-8e5e-d71eabbfaa73", "osd", "allow rw pool=manila_data namespace=fsvolumens_b113f2fd-9e34-49b1-8d3c-8099c23d423a", "mon", "allow r"], "format": "json"} : dispatch Dec 6 05:28:21 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-956934797", "caps": ["mds", "allow rw path=/volumes/_nogroup/b113f2fd-9e34-49b1-8d3c-8099c23d423a/eeefc94a-19bd-4e4c-8e5e-d71eabbfaa73", "osd", "allow rw pool=manila_data namespace=fsvolumens_b113f2fd-9e34-49b1-8d3c-8099c23d423a", "mon", "allow r"], "format": "json"} : dispatch Dec 6 05:28:21 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-956934797", "caps": ["mds", "allow rw path=/volumes/_nogroup/b113f2fd-9e34-49b1-8d3c-8099c23d423a/eeefc94a-19bd-4e4c-8e5e-d71eabbfaa73", "osd", "allow rw pool=manila_data namespace=fsvolumens_b113f2fd-9e34-49b1-8d3c-8099c23d423a", "mon", "allow r"], "format": "json"}]': finished Dec 6 05:28:21 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Dec 6 05:28:21 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' 
entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch Dec 6 05:28:21 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch Dec 6 05:28:21 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"}]': finished Dec 6 05:28:21 localhost nova_compute[282193]: 2025-12-06 10:28:21.529 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4995-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:28:21 localhost nova_compute[282193]: 2025-12-06 10:28:21.531 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:28:21 localhost nova_compute[282193]: 2025-12-06 10:28:21.531 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 6 05:28:21 localhost nova_compute[282193]: 2025-12-06 10:28:21.531 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog 
[-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:28:21 localhost nova_compute[282193]: 2025-12-06 10:28:21.564 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:28:21 localhost nova_compute[282193]: 2025-12-06 10:28:21.565 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:28:23 localhost sshd[337282]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:28:23 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:28:23 localhost podman[241090]: time="2025-12-06T10:28:23Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 6 05:28:23 localhost podman[241090]: @ - - [06/Dec/2025:10:28:23 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156104 "" "Go-http-client/1.1" Dec 6 05:28:23 localhost podman[241090]: @ - - [06/Dec/2025:10:28:23 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19279 "" "Go-http-client/1.1" Dec 6 05:28:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e. Dec 6 05:28:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094. 
Dec 6 05:28:24 localhost podman[337285]: 2025-12-06 10:28:24.890025021 +0000 UTC m=+0.088179897 container health_status bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Dec 6 05:28:24 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth 
get", "entity": "client.tempest-cephx-id-956934797", "format": "json"} : dispatch
Dec 6 05:28:24 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-956934797"} : dispatch
Dec 6 05:28:24 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-956934797"} : dispatch
Dec 6 05:28:24 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-956934797"}]': finished
Dec 6 05:28:24 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 6 05:28:24 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec 6 05:28:24 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec 6 05:28:24 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Dec 6 05:28:24 localhost podman[337284]: 2025-12-06 10:28:24.934914478 +0000 UTC m=+0.137195900 container health_status 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, distribution-scope=public, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, name=ubi9-minimal, release=1755695350, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, architecture=x86_64, version=9.6, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 6 05:28:24 localhost podman[337284]: 2025-12-06 10:28:24.951094044 +0000 UTC m=+0.153375476 container exec_died 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, version=9.6, maintainer=Red Hat, Inc., vcs-type=git, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, name=ubi9-minimal, release=1755695350, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 6 05:28:24 localhost systemd[1]: 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e.service: Deactivated successfully.
Dec 6 05:28:25 localhost podman[337285]: 2025-12-06 10:28:25.003590115 +0000 UTC m=+0.201744991 container exec_died bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 6 05:28:25 localhost systemd[1]: bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094.service: Deactivated successfully.
Dec 6 05:28:26 localhost nova_compute[282193]: 2025-12-06 10:28:26.566 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 6 05:28:26 localhost nova_compute[282193]: 2025-12-06 10:28:26.596 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 6 05:28:26 localhost nova_compute[282193]: 2025-12-06 10:28:26.596 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5031 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Dec 6 05:28:26 localhost nova_compute[282193]: 2025-12-06 10:28:26.596 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec 6 05:28:26 localhost nova_compute[282193]: 2025-12-06 10:28:26.598 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 6 05:28:26 localhost nova_compute[282193]: 2025-12-06 10:28:26.598 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec 6 05:28:28 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 6 05:28:28 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 6 05:28:28 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow r pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch
Dec 6 05:28:28 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow r pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch
Dec 6 05:28:28 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow r pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"}]': finished
Dec 6 05:28:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6.
Dec 6 05:28:28 localhost podman[337324]: 2025-12-06 10:28:28.928695714 +0000 UTC m=+0.085209565 container health_status 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 6 05:28:28 localhost podman[337324]: 2025-12-06 10:28:28.939293719 +0000 UTC m=+0.095807580 container exec_died 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 6 05:28:28 localhost systemd[1]: 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6.service: Deactivated successfully.
Dec 6 05:28:31 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 6 05:28:31 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec 6 05:28:31 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec 6 05:28:31 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Dec 6 05:28:31 localhost nova_compute[282193]: 2025-12-06 10:28:31.599 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 6 05:28:31 localhost nova_compute[282193]: 2025-12-06 10:28:31.600 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 6 05:28:31 localhost nova_compute[282193]: 2025-12-06 10:28:31.600 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Dec 6 05:28:31 localhost nova_compute[282193]: 2025-12-06 10:28:31.600 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec 6 05:28:31 localhost nova_compute[282193]: 2025-12-06 10:28:31.627 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 6 05:28:31 localhost nova_compute[282193]: 2025-12-06 10:28:31.628 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec 6 05:28:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538.
Dec 6 05:28:31 localhost podman[337344]: 2025-12-06 10:28:31.931113623 +0000 UTC m=+0.082159022 container health_status d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 6 05:28:31 localhost podman[337344]: 2025-12-06 10:28:31.944154933 +0000 UTC m=+0.095200322 container exec_died d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 6 05:28:31 localhost systemd[1]: d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538.service: Deactivated successfully.
Dec 6 05:28:32 localhost nova_compute[282193]: 2025-12-06 10:28:32.510 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 6 05:28:32 localhost ovn_metadata_agent[160504]: 2025-12-06 10:28:32.510 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=23, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:6c:02', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:a8:2f:0c:cb:a1'}, ipsec=False) old=SB_Global(nb_cfg=22) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec 6 05:28:32 localhost ovn_metadata_agent[160504]: 2025-12-06 10:28:32.512 160509 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec 6 05:28:33 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 6 05:28:34 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 6 05:28:34 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch
Dec 6 05:28:34 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch
Dec 6 05:28:34 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"}]': finished
Dec 6 05:28:35 localhost ovn_metadata_agent[160504]: 2025-12-06 10:28:35.515 160509 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b142a5ef-fbed-4e92-aa78-e3ad080c6370, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '23'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec 6 05:28:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.
Dec 6 05:28:35 localhost podman[337367]: 2025-12-06 10:28:35.927058536 +0000 UTC m=+0.090697205 container health_status 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Dec 6 05:28:35 localhost podman[337367]: 2025-12-06 10:28:35.996337101 +0000 UTC m=+0.159975790 container exec_died 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251125, container_name=ovn_controller, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 6 05:28:36 localhost systemd[1]: 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.service: Deactivated successfully.
Dec 6 05:28:36 localhost nova_compute[282193]: 2025-12-06 10:28:36.666 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 6 05:28:38 localhost nova_compute[282193]: 2025-12-06 10:28:38.181 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 6 05:28:38 localhost nova_compute[282193]: 2025-12-06 10:28:38.181 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec 6 05:28:38 localhost nova_compute[282193]: 2025-12-06 10:28:38.208 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec 6 05:28:38 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 6 05:28:38 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 6 05:28:38 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Dec 6 05:28:38 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Dec 6 05:28:38 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Dec 6 05:28:41 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 6 05:28:41 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow r pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch
Dec 6 05:28:41 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow r pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch
Dec 6 05:28:41 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow r pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"}]': finished
Dec 6 05:28:41 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:28:41.518 263652 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:28:41Z, description=, device_id=2b4c1404-e334-461c-914c-573c510f7280, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=1f3176da-af83-4428-90ea-651fd6a69b2f, ip_allocation=immediate, mac_address=fa:16:3e:ad:3a:34, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T08:43:55Z, description=, dns_domain=, id=8e238f59-5792-4ff4-95af-f993c8e9e14f, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=3d603431c0bb4967bafc7a0aa6108bfe, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['be7ff830-a3e2-49cd-a05f-63bc528d6d1c'], tags=[], tenant_id=3d603431c0bb4967bafc7a0aa6108bfe, updated_at=2025-12-06T08:44:01Z, vlan_transparent=None, network_id=8e238f59-5792-4ff4-95af-f993c8e9e14f, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3885, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-06T10:28:41Z on network 8e238f59-5792-4ff4-95af-f993c8e9e14f#033[00m
Dec 6 05:28:41 localhost nova_compute[282193]: 2025-12-06 10:28:41.668 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 6 05:28:41 localhost nova_compute[282193]: 2025-12-06 10:28:41.670 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec 6 05:28:41 localhost nova_compute[282193]: 2025-12-06 10:28:41.671 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Dec 6 05:28:41 localhost nova_compute[282193]: 2025-12-06 10:28:41.671 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec 6 05:28:41 localhost nova_compute[282193]: 2025-12-06 10:28:41.696 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec 6 05:28:41 localhost nova_compute[282193]: 2025-12-06 10:28:41.697 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec 6 05:28:42 localhost nova_compute[282193]: 2025-12-06 10:28:42.209 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec 6 05:28:42 localhost nova_compute[282193]: 2025-12-06 10:28:42.234 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec 6 05:28:42 localhost nova_compute[282193]: 2025-12-06 10:28:42.235 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec 6 05:28:42 localhost nova_compute[282193]: 2025-12-06 10:28:42.235 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec 6 05:28:42 localhost nova_compute[282193]: 2025-12-06 10:28:42.235 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Auditing locally available compute resources for np0005548789.localdomain (node: np0005548789.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec 6 05:28:42 localhost nova_compute[282193]: 2025-12-06 10:28:42.236 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec 6 05:28:42 localhost podman[337412]: 2025-12-06 10:28:42.414257493 +0000 UTC m=+0.062791317 container kill dd70518caed84fde347015325d6b35481123abc4a39df99f8bc9422c2468322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec 6 05:28:42 localhost dnsmasq[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 2 addresses
Dec 6 05:28:42 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host
Dec 6 05:28:42 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts
Dec 6 05:28:42 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 6 05:28:42 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/1883521674' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 6 05:28:42 localhost nova_compute[282193]: 2025-12-06 10:28:42.716 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec 6 05:28:42 localhost nova_compute[282193]: 2025-12-06 10:28:42.789 282197 DEBUG nova.virt.libvirt.driver [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 6 05:28:42 localhost nova_compute[282193]: 2025-12-06 10:28:42.790 282197 DEBUG nova.virt.libvirt.driver [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec 6 05:28:42 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:28:42.801 263652 INFO neutron.agent.dhcp.agent [None req-05cfcc95-78bf-45e4-872f-210f4aae8746 - - - - - -] DHCP configuration for ports {'1f3176da-af83-4428-90ea-651fd6a69b2f'} is completed#033[00m
Dec 6 05:28:43 localhost nova_compute[282193]: 2025-12-06 10:28:43.011 282197 WARNING nova.virt.libvirt.driver [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] This host appears to have multiple sockets per NUMA node.
The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 6 05:28:43 localhost nova_compute[282193]: 2025-12-06 10:28:43.013 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Hypervisor/Node resource view: name=np0005548789.localdomain free_ram=11098MB free_disk=41.83699035644531GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": 
"1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 6 05:28:43 localhost nova_compute[282193]: 2025-12-06 10:28:43.013 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 05:28:43 localhost nova_compute[282193]: 2025-12-06 10:28:43.014 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 05:28:43 localhost nova_compute[282193]: 2025-12-06 10:28:43.091 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:28:43 localhost nova_compute[282193]: 2025-12-06 10:28:43.315 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Instance b7ed0a2e-9350-4933-9334-4e5e08d3e6aa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 6 05:28:43 localhost nova_compute[282193]: 2025-12-06 10:28:43.316 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 6 05:28:43 localhost nova_compute[282193]: 2025-12-06 10:28:43.316 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Final resource view: name=np0005548789.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 6 05:28:43 localhost nova_compute[282193]: 2025-12-06 10:28:43.380 282197 DEBUG nova.scheduler.client.report [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Refreshing inventories for resource provider 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Dec 6 05:28:43 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:28:43 localhost nova_compute[282193]: 2025-12-06 10:28:43.472 282197 DEBUG nova.scheduler.client.report [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Updating ProviderTree inventory for provider 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 
'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Dec 6 05:28:43 localhost nova_compute[282193]: 2025-12-06 10:28:43.472 282197 DEBUG nova.compute.provider_tree [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Updating inventory in ProviderTree for provider 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Dec 6 05:28:43 localhost nova_compute[282193]: 2025-12-06 10:28:43.486 282197 DEBUG nova.scheduler.client.report [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Refreshing aggregate associations for resource provider 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Dec 6 05:28:43 localhost nova_compute[282193]: 2025-12-06 10:28:43.506 282197 DEBUG nova.scheduler.client.report [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Refreshing trait associations for resource provider 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad, traits: 
HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_FDC,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_RESCUE_BFV,HW_CPU_X86_AVX2,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SHA,HW_CPU_X86_BMI2,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_CLMUL,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSSE3,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_AVX,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE4A,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AMD_SVM,HW_CPU_X86_FMA3,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_F16C,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_ABM,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_VIRTIO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Dec 6 05:28:43 localhost nova_compute[282193]: 2025-12-06 10:28:43.539 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 6 05:28:43 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix": "df", "format": 
"json"} v 0) Dec 6 05:28:43 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/3637666271' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 6 05:28:43 localhost nova_compute[282193]: 2025-12-06 10:28:43.972 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 6 05:28:43 localhost nova_compute[282193]: 2025-12-06 10:28:43.978 282197 DEBUG nova.compute.provider_tree [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Inventory has not changed in ProviderTree for provider: 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 6 05:28:44 localhost nova_compute[282193]: 2025-12-06 10:28:44.012 282197 DEBUG nova.scheduler.client.report [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Inventory has not changed for provider 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 6 05:28:44 localhost nova_compute[282193]: 2025-12-06 10:28:44.014 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Compute_service record updated for np0005548789.localdomain:np0005548789.localdomain _update_available_resource 
/usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 6 05:28:44 localhost nova_compute[282193]: 2025-12-06 10:28:44.015 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 05:28:44 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Dec 6 05:28:44 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Dec 6 05:28:44 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Dec 6 05:28:44 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Dec 6 05:28:44 localhost nova_compute[282193]: 2025-12-06 10:28:44.983 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:28:44 localhost nova_compute[282193]: 2025-12-06 10:28:44.984 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:28:44 localhost nova_compute[282193]: 2025-12-06 10:28:44.984 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Starting heal 
instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 6 05:28:44 localhost nova_compute[282193]: 2025-12-06 10:28:44.984 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 6 05:28:45 localhost nova_compute[282193]: 2025-12-06 10:28:45.081 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 6 05:28:45 localhost nova_compute[282193]: 2025-12-06 10:28:45.082 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquired lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 6 05:28:45 localhost nova_compute[282193]: 2025-12-06 10:28:45.082 282197 DEBUG nova.network.neutron [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 6 05:28:45 localhost nova_compute[282193]: 2025-12-06 10:28:45.083 282197 DEBUG nova.objects.instance [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b7ed0a2e-9350-4933-9334-4e5e08d3e6aa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 6 05:28:45 localhost nova_compute[282193]: 2025-12-06 10:28:45.620 282197 DEBUG nova.network.neutron [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Updating instance_info_cache with 
network_info: [{"id": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "address": "fa:16:3e:64:77:f3", "network": {"id": "652b6bdc-40ce-45b7-8aa5-3bca79987993", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.162", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "3d603431c0bb4967bafc7a0aa6108bfe", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86fc0b7a-fb", "ovs_interfaceid": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 6 05:28:45 localhost nova_compute[282193]: 2025-12-06 10:28:45.658 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Releasing lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 6 05:28:45 localhost nova_compute[282193]: 2025-12-06 10:28:45.658 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 6 05:28:46 localhost nova_compute[282193]: 2025-12-06 10:28:46.571 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog 
[-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:28:46 localhost openstack_network_exporter[243110]: ERROR 10:28:46 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 6 05:28:46 localhost openstack_network_exporter[243110]: ERROR 10:28:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:28:46 localhost openstack_network_exporter[243110]: ERROR 10:28:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:28:46 localhost openstack_network_exporter[243110]: ERROR 10:28:46 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 6 05:28:46 localhost openstack_network_exporter[243110]: Dec 6 05:28:46 localhost openstack_network_exporter[243110]: ERROR 10:28:46 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 6 05:28:46 localhost openstack_network_exporter[243110]: Dec 6 05:28:46 localhost nova_compute[282193]: 2025-12-06 10:28:46.700 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:28:46 localhost nova_compute[282193]: 2025-12-06 10:28:46.702 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:28:47 localhost nova_compute[282193]: 2025-12-06 10:28:47.180 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:28:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:28:47.345 160509 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by 
"neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 05:28:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:28:47.346 160509 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 05:28:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:28:47.347 160509 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 05:28:48 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:28:48 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Dec 6 05:28:48 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch Dec 6 05:28:48 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data 
namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch Dec 6 05:28:48 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"}]': finished Dec 6 05:28:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999. Dec 6 05:28:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941. Dec 6 05:28:48 localhost podman[337476]: 2025-12-06 10:28:48.935207868 +0000 UTC m=+0.095411518 container health_status 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:28:48 localhost podman[337476]: 2025-12-06 10:28:48.970339746 +0000 UTC m=+0.130543336 container exec_died 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', 
'/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Dec 6 05:28:48 localhost systemd[1]: tmp-crun.gmgSA1.mount: Deactivated successfully. Dec 6 05:28:48 localhost systemd[1]: 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.service: Deactivated successfully. Dec 6 05:28:49 localhost podman[337477]: 2025-12-06 10:28:49.002005587 +0000 UTC m=+0.158958438 container health_status b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Dec 6 05:28:49 localhost podman[337477]: 2025-12-06 10:28:49.015344437 +0000 UTC m=+0.172297328 container exec_died b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 6 05:28:49 localhost systemd[1]: b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941.service: Deactivated successfully. 
Dec 6 05:28:49 localhost nova_compute[282193]: 2025-12-06 10:28:49.181 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:28:49 localhost nova_compute[282193]: 2025-12-06 10:28:49.182 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:28:49 localhost ceph-mon[298582]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #64. Immutable memtables: 0. Dec 6 05:28:49 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:28:49.602485) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Dec 6 05:28:49 localhost ceph-mon[298582]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 64 Dec 6 05:28:49 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016929602559, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 2758, "num_deletes": 254, "total_data_size": 3125500, "memory_usage": 3182664, "flush_reason": "Manual Compaction"} Dec 6 05:28:49 localhost ceph-mon[298582]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #65: started Dec 6 05:28:49 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016929616524, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 65, "file_size": 2002207, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 36472, "largest_seqno": 
39224, "table_properties": {"data_size": 1992001, "index_size": 6139, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3013, "raw_key_size": 26470, "raw_average_key_size": 22, "raw_value_size": 1969480, "raw_average_value_size": 1637, "num_data_blocks": 266, "num_entries": 1203, "num_filter_entries": 1203, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765016805, "oldest_key_time": 1765016805, "file_creation_time": 1765016929, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1ec2a3a6-c7d0-4f1e-9923-5a3b782725ae", "db_session_id": "JMBO5KX1IJCJ8FWC64EX", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}} Dec 6 05:28:49 localhost ceph-mon[298582]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 14151 microseconds, and 6437 cpu microseconds. Dec 6 05:28:49 localhost ceph-mon[298582]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Dec 6 05:28:49 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:28:49.616643) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #65: 2002207 bytes OK Dec 6 05:28:49 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:28:49.616701) [db/memtable_list.cc:519] [default] Level-0 commit table #65 started Dec 6 05:28:49 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:28:49.618955) [db/memtable_list.cc:722] [default] Level-0 commit table #65: memtable #1 done Dec 6 05:28:49 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:28:49.618977) EVENT_LOG_v1 {"time_micros": 1765016929618970, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Dec 6 05:28:49 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:28:49.619004) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Dec 6 05:28:49 localhost ceph-mon[298582]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 3112336, prev total WAL file size 3112336, number of live WAL files 2. Dec 6 05:28:49 localhost ceph-mon[298582]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548789/store.db/000061.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 6 05:28:49 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:28:49.620175) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003133303532' seq:72057594037927935, type:22 .. 
'7061786F73003133333034' seq:0, type:0; will stop at (end) Dec 6 05:28:49 localhost ceph-mon[298582]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00 Dec 6 05:28:49 localhost ceph-mon[298582]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [65(1955KB)], [63(19MB)] Dec 6 05:28:49 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016929620239, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [65], "files_L6": [63], "score": -1, "input_data_size": 22514863, "oldest_snapshot_seqno": -1} Dec 6 05:28:49 localhost ceph-mon[298582]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #66: 14612 keys, 20699727 bytes, temperature: kUnknown Dec 6 05:28:49 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016929723856, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 66, "file_size": 20699727, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 20614414, "index_size": 47708, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 36549, "raw_key_size": 391733, "raw_average_key_size": 26, "raw_value_size": 20364613, "raw_average_value_size": 1393, "num_data_blocks": 1774, "num_entries": 14612, "num_filter_entries": 14612, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; 
max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765015623, "oldest_key_time": 0, "file_creation_time": 1765016929, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1ec2a3a6-c7d0-4f1e-9923-5a3b782725ae", "db_session_id": "JMBO5KX1IJCJ8FWC64EX", "orig_file_number": 66, "seqno_to_time_mapping": "N/A"}} Dec 6 05:28:49 localhost ceph-mon[298582]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Dec 6 05:28:49 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:28:49.724170) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 20699727 bytes Dec 6 05:28:49 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:28:49.726043) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 217.1 rd, 199.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 19.6 +0.0 blob) out(19.7 +0.0 blob), read-write-amplify(21.6) write-amplify(10.3) OK, records in: 15143, records dropped: 531 output_compression: NoCompression Dec 6 05:28:49 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:28:49.726072) EVENT_LOG_v1 {"time_micros": 1765016929726058, "job": 38, "event": "compaction_finished", "compaction_time_micros": 103712, "compaction_time_cpu_micros": 51170, "output_level": 6, "num_output_files": 1, "total_output_size": 20699727, "num_input_records": 15143, "num_output_records": 14612, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Dec 6 05:28:49 localhost ceph-mon[298582]: rocksdb: [file/delete_scheduler.cc:74] Deleted file 
/var/lib/ceph/mon/ceph-np0005548789/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 6 05:28:49 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016929726463, "job": 38, "event": "table_file_deletion", "file_number": 65} Dec 6 05:28:49 localhost ceph-mon[298582]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548789/store.db/000063.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 6 05:28:49 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016929729063, "job": 38, "event": "table_file_deletion", "file_number": 63} Dec 6 05:28:49 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:28:49.620044) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:28:49 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:28:49.729192) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:28:49 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:28:49.729201) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:28:49 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:28:49.729204) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:28:49 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:28:49.729207) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:28:49 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:28:49.729210) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:28:51 localhost nova_compute[282193]: 2025-12-06 10:28:51.197 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task 
ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:28:51 localhost nova_compute[282193]: 2025-12-06 10:28:51.197 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:28:51 localhost nova_compute[282193]: 2025-12-06 10:28:51.198 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 6 05:28:51 localhost dnsmasq[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 1 addresses Dec 6 05:28:51 localhost podman[337535]: 2025-12-06 10:28:51.563043824 +0000 UTC m=+0.066206643 container kill dd70518caed84fde347015325d6b35481123abc4a39df99f8bc9422c2468322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2) Dec 6 05:28:51 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:28:51 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:28:51 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch 
Dec 6 05:28:51 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Dec 6 05:28:51 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Dec 6 05:28:51 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Dec 6 05:28:51 localhost nova_compute[282193]: 2025-12-06 10:28:51.718 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4996-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:28:51 localhost nova_compute[282193]: 2025-12-06 10:28:51.720 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:28:51 localhost nova_compute[282193]: 2025-12-06 10:28:51.720 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5019 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 6 05:28:51 localhost nova_compute[282193]: 2025-12-06 10:28:51.721 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:28:51 localhost nova_compute[282193]: 2025-12-06 10:28:51.721 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:28:51 localhost nova_compute[282193]: 2025-12-06 10:28:51.725 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:28:51 localhost ovn_controller[154851]: 2025-12-06T10:28:51Z|00527|binding|INFO|Releasing lport 4fb81ffd-e198-4628-9bd0-0c0f0c89c33a from this chassis 
(sb_readonly=0) Dec 6 05:28:51 localhost nova_compute[282193]: 2025-12-06 10:28:51.818 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:28:52 localhost nova_compute[282193]: 2025-12-06 10:28:52.181 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:28:52 localhost nova_compute[282193]: 2025-12-06 10:28:52.181 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m Dec 6 05:28:53 localhost nova_compute[282193]: 2025-12-06 10:28:53.208 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:28:53 localhost nova_compute[282193]: 2025-12-06 10:28:53.208 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:28:53 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:28:53 localhost podman[241090]: time="2025-12-06T10:28:53Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 6 05:28:53 localhost podman[241090]: @ - - [06/Dec/2025:10:28:53 +0000] "GET 
/v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156104 "" "Go-http-client/1.1" Dec 6 05:28:53 localhost podman[241090]: @ - - [06/Dec/2025:10:28:53 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19280 "" "Go-http-client/1.1" Dec 6 05:28:54 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Dec 6 05:28:54 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow r pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch Dec 6 05:28:54 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow r pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch Dec 6 05:28:54 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow r pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"}]': finished Dec 6 05:28:55 localhost nova_compute[282193]: 2025-12-06 10:28:55.521 282197 DEBUG oslo_service.periodic_task [None 
req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:28:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e. Dec 6 05:28:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094. Dec 6 05:28:55 localhost podman[337556]: 2025-12-06 10:28:55.913856365 +0000 UTC m=+0.072346880 container health_status 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, version=9.6, build-date=2025-08-20T13:12:41, distribution-scope=public, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, vcs-type=git, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, container_name=openstack_network_exporter, io.openshift.expose-services=, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, release=1755695350) Dec 6 05:28:55 localhost podman[337556]: 2025-12-06 10:28:55.921424608 +0000 UTC m=+0.079915203 container exec_died 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, vcs-type=git, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., 
architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, container_name=openstack_network_exporter, 
io.openshift.expose-services=, release=1755695350) Dec 6 05:28:55 localhost systemd[1]: 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e.service: Deactivated successfully. Dec 6 05:28:55 localhost podman[337557]: 2025-12-06 10:28:55.982953396 +0000 UTC m=+0.138387278 container health_status bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true) Dec 6 05:28:55 localhost podman[337557]: 2025-12-06 10:28:55.992330153 +0000 UTC m=+0.147764055 container exec_died bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Dec 6 05:28:56 localhost systemd[1]: 
bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094.service: Deactivated successfully. Dec 6 05:28:56 localhost nova_compute[282193]: 2025-12-06 10:28:56.767 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:28:57 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Dec 6 05:28:57 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Dec 6 05:28:57 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Dec 6 05:28:57 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Dec 6 05:28:58 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:28:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6. 
Dec 6 05:28:59 localhost podman[337596]: 2025-12-06 10:28:59.910381526 +0000 UTC m=+0.081390048 container health_status 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3) Dec 6 05:28:59 localhost podman[337596]: 2025-12-06 10:28:59.925167599 +0000 UTC m=+0.096176121 container exec_died 
44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, config_id=multipathd, io.buildah.version=1.41.3) Dec 6 05:28:59 localhost systemd[1]: 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6.service: Deactivated successfully. 
Dec 6 05:29:01 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Dec 6 05:29:01 localhost nova_compute[282193]: 2025-12-06 10:29:01.232 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:29:01 localhost nova_compute[282193]: 2025-12-06 10:29:01.802 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:29:01 localhost nova_compute[282193]: 2025-12-06 10:29:01.804 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:29:01 localhost nova_compute[282193]: 2025-12-06 10:29:01.804 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5036 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 6 05:29:01 localhost nova_compute[282193]: 2025-12-06 10:29:01.805 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:29:01 localhost nova_compute[282193]: 2025-12-06 10:29:01.806 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:29:01 localhost nova_compute[282193]: 2025-12-06 10:29:01.807 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:29:02 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": 
"auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch Dec 6 05:29:02 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch Dec 6 05:29:02 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"}]': finished Dec 6 05:29:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538. 
Dec 6 05:29:02 localhost podman[337615]: 2025-12-06 10:29:02.927896117 +0000 UTC m=+0.089487127 container health_status d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 6 05:29:02 localhost podman[337615]: 2025-12-06 10:29:02.935291014 +0000 UTC m=+0.096882044 container exec_died d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Dec 6 05:29:02 localhost systemd[1]: d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538.service: Deactivated successfully. 
Dec 6 05:29:03 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:29:04 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Dec 6 05:29:04 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Dec 6 05:29:04 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Dec 6 05:29:04 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Dec 6 05:29:06 localhost nova_compute[282193]: 2025-12-06 10:29:06.810 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:29:06 localhost nova_compute[282193]: 2025-12-06 10:29:06.811 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:29:06 localhost nova_compute[282193]: 2025-12-06 10:29:06.812 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 6 05:29:06 localhost nova_compute[282193]: 2025-12-06 10:29:06.812 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:29:06 localhost nova_compute[282193]: 2025-12-06 10:29:06.846 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 
6 05:29:06 localhost nova_compute[282193]: 2025-12-06 10:29:06.846 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:29:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5. Dec 6 05:29:06 localhost podman[337638]: 2025-12-06 10:29:06.946554985 +0000 UTC m=+0.081101899 container health_status 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, tcib_managed=true) Dec 6 05:29:06 localhost podman[337638]: 2025-12-06 10:29:06.985430428 +0000 UTC m=+0.119977352 container exec_died 
0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125) Dec 6 05:29:07 localhost systemd[1]: 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.service: Deactivated successfully. 
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.917 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'name': 'test', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'np0005548789.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'hostId': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228 Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.918 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.941 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.latency volume: 1525105336 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.942 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.latency volume: 106716064 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.944 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '7a7a5fcb-b411-45d4-91b4-49bfbf5faf8c', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1525105336, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:29:07.918623', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '658d1234-d28e-11f0-aaf2-fa163e118844', 'monotonic_time': 13366.167983894, 'message_signature': 'b334fe7de1022e4eb9f546f48486192f7d59a8adbb836d7513ab7764a9d0c944'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 106716064, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:29:07.918623', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '658d26ac-d28e-11f0-aaf2-fa163e118844', 'monotonic_time': 13366.167983894, 'message_signature': '8054b2c4acd4a5ac1aaccab06758509bbc2044e9de642ac145d5da0cfd77951a'}]}, 'timestamp': '2025-12-06 10:29:07.942616', '_unique_id': '5ac0da0aee134467b98abc350513b062'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.944 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.944 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.944 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.944 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.944 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.944 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.944 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.944 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.944 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:29:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.944 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.944 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.944 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.944 12 ERROR oslo_messaging.notify.messaging Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.944 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.944 12 ERROR oslo_messaging.notify.messaging Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.944 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.944 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.944 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.944 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.944 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.944 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.944 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:29:07.944 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.944 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.944 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.944 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.944 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.944 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.944 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.944 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.944 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.944 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.944 12 ERROR oslo_messaging.notify.messaging Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.945 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.945 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.946 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.latency volume: 1252245154 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.946 12 DEBUG ceilometer.compute.pollsters [-] 
b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.latency volume: 27668224 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.948 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '087b0cf6-5c20-4819-9fb0-9883416a8388', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1252245154, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:29:07.946203', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '658dc90e-d28e-11f0-aaf2-fa163e118844', 'monotonic_time': 13366.167983894, 'message_signature': '66b114c322cee4a6ce4b8f805b44a43220ee80c5ca6711039cd3b51c61ed1728'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 27668224, 
'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:29:07.946203', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '658dde4e-d28e-11f0-aaf2-fa163e118844', 'monotonic_time': 13366.167983894, 'message_signature': '8d22cfd7127d99c29258b77bbc712cccd52759b75af96ecffe5566c69204b449'}]}, 'timestamp': '2025-12-06 10:29:07.947355', '_unique_id': '27726bba2cd24315826d2d17ac1e7198'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.948 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.948 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.948 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.948 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.948 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.948 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.948 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.948 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:29:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.948 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.948 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.948 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.948 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.948 12 ERROR oslo_messaging.notify.messaging Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.948 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.948 12 ERROR oslo_messaging.notify.messaging Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.948 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.948 12 ERROR oslo_messaging.notify.messaging 
self.transport._send_notification(target, ctxt, message, Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.948 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.948 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.948 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.948 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.948 
12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.948 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.948 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.948 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.948 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.948 12 ERROR oslo_messaging.notify.messaging 
self._ensure_connection(*args, **kwargs) Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.948 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.948 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.948 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.948 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.948 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.948 12 ERROR oslo_messaging.notify.messaging Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.950 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.950 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:29:07.959 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.960 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.961 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '6c9d1ba7-428e-46b8-bbb6-2f1cd9739551', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:29:07.950453', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 
'disk_name': 'vda'}, 'message_id': '658fd230-d28e-11f0-aaf2-fa163e118844', 'monotonic_time': 13366.199861453, 'message_signature': '8a0ece50415883060a3668f616a5b25baf7fd3827fc5234fb38ab0a2db405e26'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:29:07.950453', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '658fe022-d28e-11f0-aaf2-fa163e118844', 'monotonic_time': 13366.199861453, 'message_signature': 'e7cb433905f8ee3562d53273784a81a66649af638e02768e92849f55b0b6be64'}]}, 'timestamp': '2025-12-06 10:29:07.960421', '_unique_id': '0c9598aa134e4fed932ac7f050e629b7'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.961 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.961 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.961 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.961 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.961 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.961 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.961 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:29:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.961 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.961 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.961 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.961 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.961 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.961 12 ERROR oslo_messaging.notify.messaging Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.961 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.961 12 ERROR oslo_messaging.notify.messaging Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.961 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:29:07 
localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.961 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.961 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.961 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.961 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.961 12 ERROR oslo_messaging.notify.messaging return 
rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.961 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.961 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.961 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.961 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.961 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( 
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.961 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.961 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.961 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.961 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.961 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.961 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.961 12 ERROR oslo_messaging.notify.messaging Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.962 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters 
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.965 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.966 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'f2806ffc-5a9d-4bc9-8383-8a8194908ffa', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:29:07.962138', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '6590bb8c-d28e-11f0-aaf2-fa163e118844', 'monotonic_time': 13366.21149235, 
'message_signature': '98e80dc50c5bd045eedd23b5881810a0e4d90a8845de5a94ea54b5b169bae601'}]}, 'timestamp': '2025-12-06 10:29:07.966043', '_unique_id': '12e3a7ad5b6a4544a59eca66393cd688'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.966 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.966 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.966 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.966 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.966 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.966 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.966 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.966 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.966 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.966 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.966 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.966 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.966 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.966 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.966 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.966 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.966 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.966 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.966 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.966 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:29:07.966 12 ERROR oslo_messaging.notify.messaging Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.966 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.966 12 ERROR oslo_messaging.notify.messaging Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.966 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.966 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.966 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.966 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.966 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.966 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.966 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.966 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:29:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.966 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.966 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.966 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.966 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.966 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.966 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.966 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.966 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.966 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.966 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.966 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.966 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.966 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.966 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.966 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.966 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.966 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.966 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.966 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.966 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.966 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.966 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.966 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.967 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.983 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/memory.usage volume: 51.80859375 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.985 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '3e80c309-c564-496a-ab98-fe5f475e8c68', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 51.80859375, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'timestamp': '2025-12-06T10:29:07.967512', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': '65937bd8-d28e-11f0-aaf2-fa163e118844', 'monotonic_time': 13366.232812474, 'message_signature': 'cec775fc671789b15da071168dd4d6ceb4816d07ff3954fc2fed55d1d0b0397a'}]}, 'timestamp': '2025-12-06 10:29:07.984096', '_unique_id': '4a0837264e53450eb9b21a0a1f55f0b9'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.985 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.985 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.985 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.985 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.985 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.985 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.985 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.985 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.985 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.985 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.985 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.985 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.985 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.985 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.985 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.985 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.985 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.985 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.985 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.985 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.985 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.985 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.985 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.985 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.985 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.985 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.985 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.985 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.985 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.985 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.985 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.985 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.985 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.985 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.985 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.985 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.985 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.985 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.985 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.985 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.985 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.985 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.985 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.985 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.985 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.985 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.985 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.985 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.985 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.985 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.985 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.985 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.985 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.985 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.986 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.986 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.987 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'fe9d9cc7-e1ea-4c77-a159-c473e10dab7f', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:29:07.986279', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '6593e1fe-d28e-11f0-aaf2-fa163e118844', 'monotonic_time': 13366.21149235, 'message_signature': 'b3442ed5174017c9b8ebea29b4c36053dfaa3b82bc8b5efe1849259767ed642c'}]}, 'timestamp': '2025-12-06 10:29:07.986700', '_unique_id': '32a447c27fb54a46b5d9ee0f1594638e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.987 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.987 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.987 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.987 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.987 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.987 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.987 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.987 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.987 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.987 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.987 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.987 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.987 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.987 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.987 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.987 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.987 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.987 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.987 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.987 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.987 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.987 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.987 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.987 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.987 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.987 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.987 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.987 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.987 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.987 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.987 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.987 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.987 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.987 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.987 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.987 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.987 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.987 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.987 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.987 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.987 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.987 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.987 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.987 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.987 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.987 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.987 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.987 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.987 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.987 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.987 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.987 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.987 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.987 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.988 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.988 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.outgoing.packets volume: 114 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.989 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'fafa9e4f-b83e-435c-ac31-32b42aafc8b2', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 114, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:29:07.988605', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '65943dde-d28e-11f0-aaf2-fa163e118844', 'monotonic_time': 13366.21149235, 'message_signature': 'acef3873af564a433ba5fe86097e9e2baf8e038fe05d2a0097f1398b9b4346ff'}]}, 'timestamp': '2025-12-06 10:29:07.989048', '_unique_id': 'b0e83e76bac74a1f9d1f0293b4b0d4e6'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.989 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.989 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.989 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.989 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.989 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.989 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.989 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.989 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.989 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.989 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.989 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.989 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.989 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.989 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.989 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.989 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.989 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.989 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.989 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.989 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.989 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.989 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.989 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.989 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.989 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.989 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.989 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.989 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.989 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.989 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.989 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.989 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.989 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.989 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.989 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.989 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.989 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.989 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.989 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.989 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.989 12 ERROR oslo_messaging.notify.messaging
File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.989 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.989 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.989 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.989 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.989 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.989 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.989 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.989 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.989 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.989 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:29:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.989 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.989 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.989 12 ERROR oslo_messaging.notify.messaging Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.990 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.990 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.992 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '4ca4637d-7b8c-41a9-8b7c-55dabef42938', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:29:07.990930', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '65949784-d28e-11f0-aaf2-fa163e118844', 'monotonic_time': 13366.21149235, 'message_signature': '1124270376220935bd081fbd0269ce8c13c364684b07a970bd1173ac82595625'}]}, 'timestamp': '2025-12-06 10:29:07.991343', '_unique_id': '21d331b549864e0ea874c87cddf4931e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.992 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:29:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.992 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.992 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.992 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.992 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.992 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.992 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.992 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.992 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.992 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.992 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.992 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.992 12 ERROR oslo_messaging.notify.messaging Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.992 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.992 12 ERROR oslo_messaging.notify.messaging Dec 6 05:29:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.992 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.992 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.992 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.992 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.992 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 
05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.992 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.992 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.992 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.992 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.992 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:29:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.992 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.992 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.992 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.992 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.992 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.992 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.992 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.992 12 ERROR oslo_messaging.notify.messaging Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:29:07.993 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.993 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.993 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.994 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '4ab05cbc-3e97-41e3-b962-7e4e19e75988', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:29:07.993259', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 
'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '6594f242-d28e-11f0-aaf2-fa163e118844', 'monotonic_time': 13366.199861453, 'message_signature': 'a1391f20f6dd30eef93b013558fefe94a529375b769aaad51e7e394af0662148'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:29:07.993259', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '65950160-d28e-11f0-aaf2-fa163e118844', 'monotonic_time': 13366.199861453, 'message_signature': '3f5681ae9bae10d2d6c3128d59c94fb073a6c271a690690c2ea31ddc777fcedb'}]}, 'timestamp': '2025-12-06 10:29:07.994030', '_unique_id': '56887e24f51d45039b3b585da41cea85'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.994 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call 
last): Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.994 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.994 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.994 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.994 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.994 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.994 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.994 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.994 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.994 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.994 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.994 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.994 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.994 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.994 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.994 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.994 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.994 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:29:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.994 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.994 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.994 12 ERROR oslo_messaging.notify.messaging Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.994 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.994 12 ERROR oslo_messaging.notify.messaging Dec 6 05:29:08 
localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.994 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.994 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.994 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.994 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.994 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.994 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.994 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.994 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.994 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.994 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection 
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.994 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.994 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.994 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.994 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.994 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.994 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.994 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.994 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.994 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.994 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:29:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.994 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.994 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.994 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.994 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.994 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.994 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.994 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.994 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.994 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.994 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.994 12 ERROR oslo_messaging.notify.messaging Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:29:07.995 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.995 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.incoming.bytes volume: 6809 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.997 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'e11e87d9-c22f-4468-a32f-ab6627414c40', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 6809, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:29:07.995927', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': 
None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '65955ae8-d28e-11f0-aaf2-fa163e118844', 'monotonic_time': 13366.21149235, 'message_signature': 'f9062f3eaaee030ce961c231fee3f944c820afaa1679842491b76cef16ddebd7'}]}, 'timestamp': '2025-12-06 10:29:07.996360', '_unique_id': 'cde9772fcc874d0ba9781a44e1811685'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.997 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.997 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.997 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.997 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.997 12 ERROR oslo_messaging.notify.messaging self._connection = 
self._establish_connection() Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.997 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.997 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.997 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.997 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.997 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.997 12 ERROR oslo_messaging.notify.messaging 
ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.997 12 ERROR oslo_messaging.notify.messaging Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.997 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.997 12 ERROR oslo_messaging.notify.messaging Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.997 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.997 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.997 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.997 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.997 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.997 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.997 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.997 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.997 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.997 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.997 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.997 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.997 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.997 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.997 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.997 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:29:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.997 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.997 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.997 12 ERROR oslo_messaging.notify.messaging Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.998 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.998 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.bytes volume: 35560448 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.998 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.bytes volume: 2154496 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.999 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '342e2349-a7c7-4e4f-a23f-dd9465c07a53', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 35560448, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:29:07.998252', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '6595b574-d28e-11f0-aaf2-fa163e118844', 'monotonic_time': 13366.167983894, 'message_signature': 'c5a940fd7ea5d91f36d271b082fc2943eed641702bf613ed1e6644db5ee4f959'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 2154496, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:29:07.998252', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '6595c4a6-d28e-11f0-aaf2-fa163e118844', 'monotonic_time': 13366.167983894, 'message_signature': '4c5f8e85a702e42f6ed775a9503ef79fcce057c836cb1e464a48887b3f1326ed'}]}, 'timestamp': '2025-12-06 10:29:07.999021', '_unique_id': '633b4fc87e7f417d956a9bfd07569ad6'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.999 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.999 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.999 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.999 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.999 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.999 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.999 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.999 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.999 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:29:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.999 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.999 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.999 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.999 12 ERROR oslo_messaging.notify.messaging Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.999 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.999 12 ERROR oslo_messaging.notify.messaging Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.999 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.999 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.999 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.999 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.999 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.999 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.999 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:29:07.999 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.999 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.999 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.999 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.999 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.999 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.999 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.999 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.999 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:07.999 12 ERROR oslo_messaging.notify.messaging Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.000 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.000 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.outgoing.bytes volume: 9770 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.002 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'cfcdc643-9e7c-4d98-b125-ec71ea78ae8f', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 9770, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:29:08.000959', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '65961f78-d28e-11f0-aaf2-fa163e118844', 'monotonic_time': 13366.21149235, 'message_signature': '583f53d6d6aa5290571589b2fb77340ac77799bbd7205918c0762bd2d0038d50'}]}, 'timestamp': '2025-12-06 10:29:08.001377', '_unique_id': 'c6284f5da7b04a59bd973e30ea195c60'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.002 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:29:08.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.002 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.002 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.002 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.002 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.002 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.002 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.002 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.002 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.002 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.002 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.002 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.002 12 ERROR oslo_messaging.notify.messaging Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.002 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.002 12 ERROR oslo_messaging.notify.messaging Dec 6 05:29:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.002 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.002 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.002 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.002 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.002 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 
05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.002 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.002 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.002 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.002 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.002 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:29:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.002 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.002 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.002 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.002 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.002 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.002 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.002 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.002 12 ERROR oslo_messaging.notify.messaging Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:29:08.003 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.003 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/cpu volume: 20700000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.004 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '071912ff-5efe-4f6d-bdd0-5cc8db7b8d6d', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 20700000000, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'timestamp': '2025-12-06T10:29:08.003472', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': '65968116-d28e-11f0-aaf2-fa163e118844', 'monotonic_time': 13366.232812474, 'message_signature': 
'ae697f0a731b0a86d8f372b656c6752917cb1cbb04a4315c859c0036e2dce979'}]}, 'timestamp': '2025-12-06 10:29:08.003880', '_unique_id': '6b73474de01a470baa7b0a4805fd96f9'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.004 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.004 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.004 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.004 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.004 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.004 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.004 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.004 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.004 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.004 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.004 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.004 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.004 12 ERROR 
oslo_messaging.notify.messaging Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.004 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.004 12 ERROR oslo_messaging.notify.messaging Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.004 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.004 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.004 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.004 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:29:08.004 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.004 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.004 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.004 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.004 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:29:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.004 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.004 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.004 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.004 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.004 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.004 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.004 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) 
from exc Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.004 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.004 12 ERROR oslo_messaging.notify.messaging Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.005 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.005 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.requests volume: 47 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.006 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.007 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '1529245b-ecb1-444d-bd7d-dfec61174941', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 47, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:29:08.005750', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '6596db84-d28e-11f0-aaf2-fa163e118844', 'monotonic_time': 13366.167983894, 'message_signature': '3ce05acad23084007ffb304b0a24f614a07e574989b8809ef247121d980eaea3'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:29:08.005750', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '6596e9d0-d28e-11f0-aaf2-fa163e118844', 'monotonic_time': 13366.167983894, 'message_signature': 'ebaf8c7494a2bb140ddf9b84ccacafca626c74ca08b65fe09a54575f974ffc7c'}]}, 'timestamp': '2025-12-06 10:29:08.006535', '_unique_id': '10e1eee3fbc4467da6e07442933ace69'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.007 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.007 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.007 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.007 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.007 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.007 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.007 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.007 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.007 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:29:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.007 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.007 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.007 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.007 12 ERROR oslo_messaging.notify.messaging Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.007 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.007 12 ERROR oslo_messaging.notify.messaging Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.007 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.007 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.007 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.007 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.007 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.007 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.007 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:29:08.007 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.007 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.007 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.007 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.007 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.007 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.007 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.007 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.007 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.007 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.007 12 ERROR oslo_messaging.notify.messaging Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.008 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.008 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.bytes volume: 389120 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.008 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.010 12 ERROR oslo_messaging.notify.messaging [-] Could not 
send notification to notifications. Payload={'message_id': 'a44ab393-6220-49dc-9521-07cf1858d7a5', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 389120, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:29:08.008427', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '6597434e-d28e-11f0-aaf2-fa163e118844', 'monotonic_time': 13366.167983894, 'message_signature': 'a5353723a3c168279a561d127c8b5211f2ed6cd76433b4e9c7efe0c10e8924fa'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:29:08.008427', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '6597538e-d28e-11f0-aaf2-fa163e118844', 'monotonic_time': 13366.167983894, 'message_signature': '02ef1f5571d70bd64209dc23f42799b39ecf1acd3016221a954607e6a8b0c252'}]}, 'timestamp': '2025-12-06 10:29:08.009237', '_unique_id': '69f9a811c24c4dde810c03aeea1c08f1'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.010 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.010 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.010 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.010 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.010 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.010 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.010 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.010 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.010 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:29:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.010 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.010 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.010 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.010 12 ERROR oslo_messaging.notify.messaging Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.010 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.010 12 ERROR oslo_messaging.notify.messaging Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.010 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.010 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.010 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.010 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.010 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.010 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.010 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:29:08.010 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.010 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.010 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.010 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.010 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.010 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.010 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.010 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.010 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.010 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.010 12 ERROR oslo_messaging.notify.messaging Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.011 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.011 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.incoming.packets volume: 60 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.012 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'aae60064-0ad2-4dd9-b4b1-6b7e833b5434', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 60, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:29:08.011146', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '6597acf8-d28e-11f0-aaf2-fa163e118844', 'monotonic_time': 13366.21149235, 'message_signature': '181771abf698e61d48f498c96bcb77f38135275e8d41080c729deb280b7a8152'}]}, 'timestamp': '2025-12-06 10:29:08.011551', '_unique_id': '5040e0d1932c42cfa5381363b193a7db'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.012 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:29:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.012 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.012 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.012 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.012 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.012 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.012 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.012 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.012 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.012 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.012 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.012 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.012 12 ERROR oslo_messaging.notify.messaging Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.012 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.012 12 ERROR oslo_messaging.notify.messaging Dec 6 05:29:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.012 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.012 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.012 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.012 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.012 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 
05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.012 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.012 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.012 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.012 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.012 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:29:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.012 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.012 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.012 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.012 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.012 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.012 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.012 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.012 12 ERROR oslo_messaging.notify.messaging Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:29:08.013 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.013 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.requests volume: 1272 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.013 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.015 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '5d4025d0-9444-4e1b-8dd1-b89e497aaded', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1272, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:29:08.013441', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 
'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '6598068a-d28e-11f0-aaf2-fa163e118844', 'monotonic_time': 13366.167983894, 'message_signature': '62c7cc59650d59fd6b85e20292e956dc3725d3beb09e27535f76eeb2c2a3dd21'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 124, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:29:08.013441', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '65981670-d28e-11f0-aaf2-fa163e118844', 'monotonic_time': 13366.167983894, 'message_signature': 'b229d03425bc8978e52237887110f08dbf451f03932613888e3ffdc1ba538d0b'}]}, 'timestamp': '2025-12-06 10:29:08.014225', '_unique_id': '1156e8ea2edc44dfbc6275c27c0118c5'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.015 12 ERROR oslo_messaging.notify.messaging 
Traceback (most recent call last): Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.015 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.015 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.015 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.015 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.015 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.015 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.015 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.015 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.015 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.015 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.015 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.015 12 ERROR oslo_messaging.notify.messaging Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.015 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.015 12 ERROR oslo_messaging.notify.messaging Dec 6 05:29:08 
localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.015 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.015 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.015 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.015 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.015 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.015 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.015 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.015 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.015 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.015 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.015 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.015 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.015 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.015 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.015 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.015 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.015 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.015 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.016 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.016 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.016 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.017 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '95be7a3a-c2fd-4342-8663-96a27bb11a53', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:29:08.016241', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '659874c6-d28e-11f0-aaf2-fa163e118844', 'monotonic_time': 13366.21149235, 'message_signature': '08eccc366aae6257342e55678004b87bc311cbcc1ff80757976c97935e7c10b3'}]}, 'timestamp': '2025-12-06 10:29:08.016662', '_unique_id': '27fe8cf776fd470ba52f0b35fd163364'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.017 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.017 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.017 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.017 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.017 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.017 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.017 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.017 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.017 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.017 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.017 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.017 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.017 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.017 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.017 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.017 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.017 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.017 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.017 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.017 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.017 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.017 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.017 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.017 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.017 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.017 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.017 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.017 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.017 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.017 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.017 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.017 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.018 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.018 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.019 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.020 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'dc029b07-0357-4fb1-b138-84358ab0e5b0', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:29:08.018685', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': '6598d51a-d28e-11f0-aaf2-fa163e118844', 'monotonic_time': 13366.199861453, 'message_signature': '7fc6a8f9b9e2974315bdd546fd107859d88d168f5a4882893407cb1f823da9c4'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:29:08.018685', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': '6598e456-d28e-11f0-aaf2-fa163e118844', 'monotonic_time': 13366.199861453, 'message_signature': '28707555586c8f3d0db31eaac4709a1aca714b5d9179bdf8bd62d1f42ce3be0f'}]}, 'timestamp': '2025-12-06 10:29:08.019506', '_unique_id': '7afdf52faeff47d6b678ef994473f2f3'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.020 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.020 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.020 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.020 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.020 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.020 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.020 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.020 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.020 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.020 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.020 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.020 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.020 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.020 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.020 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.020 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.020 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.020 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.020 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.020 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.020 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.020 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.020 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.020 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.020 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.020 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.020 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.020 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.020 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.020 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.020 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.020 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.021 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.021 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.021 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.022 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '52d1f3cb-3ac5-4917-b260-3718a4fde6ee', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:29:08.021669', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '659949b4-d28e-11f0-aaf2-fa163e118844', 'monotonic_time': 13366.21149235, 'message_signature': '87c5077fec81bff787167f18759787f7b8d74b4551106c20679f2eaa7934b802'}]}, 'timestamp': '2025-12-06 10:29:08.022128', '_unique_id': '8ebcad51a2c94dadbac5a61604ea2aec'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.022 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.022 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.022 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.022 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.022 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.022 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.022 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.022 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.022 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.022 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.022 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.022 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.022 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.022 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:29:08 localhost
ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.022 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.022 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.022 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.022 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.022 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 
05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.022 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.022 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.022 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.022 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.022 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:29:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.022 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.022 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.022 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.022 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.022 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.022 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.022 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.022 12 ERROR oslo_messaging.notify.messaging Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:29:08.023 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.024 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.025 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'bb80391e-fa4a-4edf-bdbc-2395414eaf5a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:29:08.024006', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 
'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': '6599a40e-d28e-11f0-aaf2-fa163e118844', 'monotonic_time': 13366.21149235, 'message_signature': 'cdf1bd4937c990e7db1988fd7f31e44c0cd4d3ee9ce1f778ae4c5deb47cf72c2'}]}, 'timestamp': '2025-12-06 10:29:08.024458', '_unique_id': '23fd8847fe9e4c838f970894cdc9b497'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.025 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.025 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.025 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.025 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.025 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.025 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.025 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.025 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.025 12 ERROR oslo_messaging.notify.messaging self._connection 
= self._establish_connection() Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.025 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.025 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.025 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.025 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.025 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.025 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.025 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.025 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.025 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.025 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.025 12 ERROR oslo_messaging.notify.messaging 
ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.025 12 ERROR oslo_messaging.notify.messaging Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.025 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.025 12 ERROR oslo_messaging.notify.messaging Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.025 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.025 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.025 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.025 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.025 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.025 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.025 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.025 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.025 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.025 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.025 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.025 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.025 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.025 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.025 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.025 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.025 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.025 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.025 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.025 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.025 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.025 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.025 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.025 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.025 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.025 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.025 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.025 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:29:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.025 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.025 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:29:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:29:08.025 12 ERROR oslo_messaging.notify.messaging Dec 6 05:29:08 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:29:08 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Dec 6 05:29:08 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow r pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch Dec 6 05:29:08 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow r pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch Dec 6 05:29:08 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r 
path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow r pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"}]': finished Dec 6 05:29:11 localhost nova_compute[282193]: 2025-12-06 10:29:11.060 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:29:11 localhost nova_compute[282193]: 2025-12-06 10:29:11.149 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Triggering sync for uuid b7ed0a2e-9350-4933-9334-4e5e08d3e6aa _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m Dec 6 05:29:11 localhost nova_compute[282193]: 2025-12-06 10:29:11.150 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" by "nova.compute.manager.ComputeManager._sync_power_states.._sync..query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 05:29:11 localhost nova_compute[282193]: 2025-12-06 10:29:11.150 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" acquired by "nova.compute.manager.ComputeManager._sync_power_states.._sync..query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 05:29:11 localhost nova_compute[282193]: 2025-12-06 10:29:11.174 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" "released" by 
"nova.compute.manager.ComputeManager._sync_power_states.._sync..query_driver_power_state_and_sync" :: held 0.024s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 05:29:11 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Dec 6 05:29:11 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Dec 6 05:29:11 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Dec 6 05:29:11 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Dec 6 05:29:11 localhost nova_compute[282193]: 2025-12-06 10:29:11.847 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:29:11 localhost nova_compute[282193]: 2025-12-06 10:29:11.850 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:29:11 localhost nova_compute[282193]: 2025-12-06 10:29:11.850 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 6 05:29:11 localhost nova_compute[282193]: 2025-12-06 10:29:11.850 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:29:11 localhost nova_compute[282193]: 2025-12-06 10:29:11.895 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:29:11 localhost nova_compute[282193]: 2025-12-06 10:29:11.896 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:29:13 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:29:13 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 6 05:29:13 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' Dec 6 05:29:14 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch Dec 6 05:29:14 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch Dec 6 05:29:14 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch Dec 6 05:29:14 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow 
rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"}]': finished Dec 6 05:29:16 localhost openstack_network_exporter[243110]: ERROR 10:29:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:29:16 localhost openstack_network_exporter[243110]: ERROR 10:29:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:29:16 localhost openstack_network_exporter[243110]: ERROR 10:29:16 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 6 05:29:16 localhost openstack_network_exporter[243110]: ERROR 10:29:16 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 6 05:29:16 localhost openstack_network_exporter[243110]: Dec 6 05:29:16 localhost openstack_network_exporter[243110]: ERROR 10:29:16 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 6 05:29:16 localhost openstack_network_exporter[243110]: Dec 6 05:29:16 localhost nova_compute[282193]: 2025-12-06 10:29:16.897 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:29:16 localhost nova_compute[282193]: 2025-12-06 10:29:16.899 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:29:16 localhost nova_compute[282193]: 2025-12-06 10:29:16.899 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 6 05:29:16 localhost nova_compute[282193]: 2025-12-06 10:29:16.899 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 
tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:29:16 localhost nova_compute[282193]: 2025-12-06 10:29:16.938 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:29:16 localhost nova_compute[282193]: 2025-12-06 10:29:16.938 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:29:18 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' Dec 6 05:29:18 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:29:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999. Dec 6 05:29:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941. 
Dec 6 05:29:19 localhost podman[337747]: 2025-12-06 10:29:19.950165594 +0000 UTC m=+0.098336638 container health_status 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent) Dec 6 05:29:19 localhost podman[337747]: 2025-12-06 10:29:19.961159231 +0000 UTC 
m=+0.109330305 container exec_died 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS) Dec 6 05:29:19 localhost systemd[1]: 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.service: Deactivated successfully. 
Dec 6 05:29:20 localhost podman[337748]: 2025-12-06 10:29:20.056718723 +0000 UTC m=+0.202281397 container health_status b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Dec 6 05:29:20 localhost podman[337748]: 2025-12-06 10:29:20.065461071 +0000 UTC m=+0.211023765 container exec_died b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': 
['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Dec 6 05:29:20 localhost systemd[1]: b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941.service: Deactivated successfully. Dec 6 05:29:21 localhost nova_compute[282193]: 2025-12-06 10:29:21.939 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:29:21 localhost nova_compute[282193]: 2025-12-06 10:29:21.941 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:29:22 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch Dec 6 05:29:22 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b,allow rw path=/volumes/_nogroup/ac611655-851d-48c2-9d00-93668f6ff5e1/35c0f9c9-609d-4a42-9e75-505856ad36fb", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723,allow rw pool=manila_data namespace=fsvolumens_ac611655-851d-48c2-9d00-93668f6ff5e1"]} : dispatch Dec 6 05:29:22 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b,allow rw path=/volumes/_nogroup/ac611655-851d-48c2-9d00-93668f6ff5e1/35c0f9c9-609d-4a42-9e75-505856ad36fb", "osd", "allow rw pool=manila_data 
namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723,allow rw pool=manila_data namespace=fsvolumens_ac611655-851d-48c2-9d00-93668f6ff5e1"]} : dispatch Dec 6 05:29:22 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b,allow rw path=/volumes/_nogroup/ac611655-851d-48c2-9d00-93668f6ff5e1/35c0f9c9-609d-4a42-9e75-505856ad36fb", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723,allow rw pool=manila_data namespace=fsvolumens_ac611655-851d-48c2-9d00-93668f6ff5e1"]}]': finished Dec 6 05:29:22 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch Dec 6 05:29:22 localhost ovn_controller[154851]: 2025-12-06T10:29:22Z|00528|memory_trim|INFO|Detected inactivity (last active 30008 ms ago): trimming memory Dec 6 05:29:22 localhost sshd[337789]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:29:23 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:29:23 localhost podman[241090]: time="2025-12-06T10:29:23Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 6 05:29:23 localhost podman[241090]: @ - - [06/Dec/2025:10:29:23 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156104 "" "Go-http-client/1.1" Dec 6 05:29:23 localhost podman[241090]: @ - - [06/Dec/2025:10:29:23 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19281 "" "Go-http-client/1.1" Dec 6 05:29:26 localhost ceph-mon[298582]: from='mgr.27020 
172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch Dec 6 05:29:26 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723"]} : dispatch Dec 6 05:29:26 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723"]} : dispatch Dec 6 05:29:26 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723"]}]': finished Dec 6 05:29:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e. Dec 6 05:29:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094. 
Dec 6 05:29:26 localhost podman[337791]: 2025-12-06 10:29:26.931847004 +0000 UTC m=+0.080933894 container health_status 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, vcs-type=git, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., name=ubi9-minimal, container_name=openstack_network_exporter, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, 
com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, config_id=edpm, managed_by=edpm_ansible, distribution-scope=public) Dec 6 05:29:26 localhost nova_compute[282193]: 2025-12-06 10:29:26.942 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:29:26 localhost nova_compute[282193]: 2025-12-06 10:29:26.944 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:29:26 localhost nova_compute[282193]: 2025-12-06 10:29:26.944 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 6 05:29:26 localhost nova_compute[282193]: 2025-12-06 10:29:26.945 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:29:26 localhost nova_compute[282193]: 2025-12-06 10:29:26.978 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:29:26 localhost nova_compute[282193]: 2025-12-06 10:29:26.979 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition 
/usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:29:27 localhost podman[337792]: 2025-12-06 10:29:27.018029538 +0000 UTC m=+0.164424265 container health_status bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:29:27 localhost podman[337791]: 2025-12-06 10:29:27.028547061 +0000 
UTC m=+0.177633961 container exec_died 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, config_id=edpm, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.33.7, name=ubi9-minimal, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, distribution-scope=public, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64) Dec 6 05:29:27 localhost systemd[1]: 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e.service: Deactivated successfully. Dec 6 05:29:27 localhost podman[337792]: 2025-12-06 10:29:27.083349543 +0000 UTC m=+0.229744230 container exec_died bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', 
'/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm) Dec 6 05:29:27 localhost systemd[1]: bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094.service: Deactivated successfully. Dec 6 05:29:28 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:29:29 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch Dec 6 05:29:29 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.bob"} : dispatch Dec 6 05:29:29 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.bob"} : dispatch Dec 6 05:29:29 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth rm", "entity": "client.bob"}]': finished Dec 6 05:29:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6. Dec 6 05:29:30 localhost systemd[1]: tmp-crun.x8ViG9.mount: Deactivated successfully. 
Dec 6 05:29:30 localhost podman[337832]: 2025-12-06 10:29:30.952652619 +0000 UTC m=+0.116052622 container health_status 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd) Dec 6 05:29:30 localhost podman[337832]: 2025-12-06 10:29:30.965652417 +0000 UTC m=+0.129052440 container exec_died 
44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:29:30 localhost systemd[1]: 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6.service: Deactivated successfully. 
Dec 6 05:29:31 localhost nova_compute[282193]: 2025-12-06 10:29:31.980 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:29:31 localhost nova_compute[282193]: 2025-12-06 10:29:31.982 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:29:31 localhost nova_compute[282193]: 2025-12-06 10:29:31.983 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 6 05:29:31 localhost nova_compute[282193]: 2025-12-06 10:29:31.983 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:29:32 localhost nova_compute[282193]: 2025-12-06 10:29:32.013 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:29:32 localhost nova_compute[282193]: 2025-12-06 10:29:32.013 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:29:33 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:29:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538. 
Dec 6 05:29:33 localhost podman[337851]: 2025-12-06 10:29:33.926077898 +0000 UTC m=+0.088664541 container health_status d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Dec 6 05:29:33 localhost podman[337851]: 2025-12-06 10:29:33.940272044 +0000 UTC m=+0.102858657 container exec_died d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 
'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 6 05:29:33 localhost systemd[1]: d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538.service: Deactivated successfully. 
Dec 6 05:29:37 localhost nova_compute[282193]: 2025-12-06 10:29:37.014 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:29:37 localhost nova_compute[282193]: 2025-12-06 10:29:37.040 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:29:37 localhost nova_compute[282193]: 2025-12-06 10:29:37.041 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5027 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 6 05:29:37 localhost nova_compute[282193]: 2025-12-06 10:29:37.041 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:29:37 localhost nova_compute[282193]: 2025-12-06 10:29:37.042 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:29:37 localhost nova_compute[282193]: 2025-12-06 10:29:37.043 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:29:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5. 
Dec 6 05:29:37 localhost podman[337874]: 2025-12-06 10:29:37.923068511 +0000 UTC m=+0.079180050 container health_status 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Dec 6 05:29:37 localhost podman[337874]: 2025-12-06 10:29:37.96572278 +0000 UTC m=+0.121834319 container exec_died 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, 
container_name=ovn_controller, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Dec 6 05:29:37 localhost systemd[1]: 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.service: Deactivated successfully. 
Dec 6 05:29:38 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:29:40 localhost ovn_metadata_agent[160504]: 2025-12-06 10:29:40.689 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=24, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:6c:02', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:a8:2f:0c:cb:a1'}, ipsec=False) old=SB_Global(nb_cfg=23) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:29:40 localhost ovn_metadata_agent[160504]: 2025-12-06 10:29:40.690 160509 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Dec 6 05:29:40 localhost nova_compute[282193]: 2025-12-06 10:29:40.723 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:29:42 localhost nova_compute[282193]: 2025-12-06 10:29:42.076 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:29:43 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:29:43 localhost ovn_metadata_agent[160504]: 2025-12-06 10:29:43.692 160509 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, 
record=b142a5ef-fbed-4e92-aa78-e3ad080c6370, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '24'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 6 05:29:44 localhost nova_compute[282193]: 2025-12-06 10:29:44.180 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:29:44 localhost nova_compute[282193]: 2025-12-06 10:29:44.181 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:29:44 localhost nova_compute[282193]: 2025-12-06 10:29:44.218 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 05:29:44 localhost nova_compute[282193]: 2025-12-06 10:29:44.218 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 05:29:44 localhost nova_compute[282193]: 2025-12-06 10:29:44.218 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 05:29:44 localhost nova_compute[282193]: 2025-12-06 10:29:44.219 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Auditing locally available compute resources for np0005548789.localdomain (node: np0005548789.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 6 05:29:44 localhost nova_compute[282193]: 2025-12-06 10:29:44.220 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 6 05:29:44 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 6 05:29:44 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.107:0/1217810012' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 6 05:29:44 localhost nova_compute[282193]: 2025-12-06 10:29:44.670 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 6 05:29:44 localhost nova_compute[282193]: 2025-12-06 10:29:44.761 282197 DEBUG nova.virt.libvirt.driver [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 6 05:29:44 localhost nova_compute[282193]: 2025-12-06 10:29:44.762 282197 DEBUG nova.virt.libvirt.driver [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 6 05:29:44 localhost nova_compute[282193]: 2025-12-06 10:29:44.998 282197 WARNING nova.virt.libvirt.driver [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 6 05:29:45 localhost nova_compute[282193]: 2025-12-06 10:29:45.000 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Hypervisor/Node resource view: name=np0005548789.localdomain free_ram=11098MB free_disk=41.83699035644531GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": 
"1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 6 05:29:45 localhost nova_compute[282193]: 2025-12-06 10:29:45.000 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 05:29:45 localhost nova_compute[282193]: 2025-12-06 10:29:45.001 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 05:29:45 localhost nova_compute[282193]: 2025-12-06 10:29:45.126 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Instance b7ed0a2e-9350-4933-9334-4e5e08d3e6aa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 6 05:29:45 localhost nova_compute[282193]: 2025-12-06 10:29:45.127 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 6 05:29:45 localhost nova_compute[282193]: 2025-12-06 10:29:45.127 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Final resource view: name=np0005548789.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 6 05:29:45 localhost nova_compute[282193]: 2025-12-06 10:29:45.166 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 6 05:29:45 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 6 05:29:45 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.107:0/3042594742' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 6 05:29:45 localhost nova_compute[282193]: 2025-12-06 10:29:45.618 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 6 05:29:45 localhost nova_compute[282193]: 2025-12-06 10:29:45.624 282197 DEBUG nova.compute.provider_tree [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Inventory has not changed in ProviderTree for provider: 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 6 05:29:45 localhost nova_compute[282193]: 2025-12-06 10:29:45.650 282197 DEBUG nova.scheduler.client.report [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Inventory has not changed for provider 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 6 05:29:45 localhost nova_compute[282193]: 2025-12-06 10:29:45.653 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Compute_service record updated for np0005548789.localdomain:np0005548789.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 6 05:29:45 localhost nova_compute[282193]: 2025-12-06 10:29:45.653 282197 DEBUG 
oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.652s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 05:29:46 localhost openstack_network_exporter[243110]: ERROR 10:29:46 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 6 05:29:46 localhost openstack_network_exporter[243110]: ERROR 10:29:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:29:46 localhost openstack_network_exporter[243110]: ERROR 10:29:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:29:46 localhost openstack_network_exporter[243110]: ERROR 10:29:46 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 6 05:29:46 localhost openstack_network_exporter[243110]: Dec 6 05:29:46 localhost openstack_network_exporter[243110]: ERROR 10:29:46 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 6 05:29:46 localhost openstack_network_exporter[243110]: Dec 6 05:29:46 localhost nova_compute[282193]: 2025-12-06 10:29:46.653 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:29:46 localhost nova_compute[282193]: 2025-12-06 10:29:46.654 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 6 05:29:46 localhost nova_compute[282193]: 2025-12-06 10:29:46.655 282197 DEBUG nova.compute.manager [None 
req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 6 05:29:47 localhost nova_compute[282193]: 2025-12-06 10:29:47.078 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4996-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:29:47 localhost nova_compute[282193]: 2025-12-06 10:29:47.081 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:29:47 localhost nova_compute[282193]: 2025-12-06 10:29:47.081 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 6 05:29:47 localhost nova_compute[282193]: 2025-12-06 10:29:47.082 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:29:47 localhost nova_compute[282193]: 2025-12-06 10:29:47.106 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:29:47 localhost nova_compute[282193]: 2025-12-06 10:29:47.108 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:29:47 localhost nova_compute[282193]: 2025-12-06 10:29:47.188 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 6 05:29:47 localhost nova_compute[282193]: 2025-12-06 10:29:47.189 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - 
- - -] Acquired lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 6 05:29:47 localhost nova_compute[282193]: 2025-12-06 10:29:47.189 282197 DEBUG nova.network.neutron [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 6 05:29:47 localhost nova_compute[282193]: 2025-12-06 10:29:47.190 282197 DEBUG nova.objects.instance [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b7ed0a2e-9350-4933-9334-4e5e08d3e6aa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 6 05:29:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:29:47.346 160509 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 05:29:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:29:47.347 160509 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 05:29:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:29:47.348 160509 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 05:29:47 localhost nova_compute[282193]: 2025-12-06 10:29:47.712 282197 DEBUG nova.network.neutron [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - 
- - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Updating instance_info_cache with network_info: [{"id": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "address": "fa:16:3e:64:77:f3", "network": {"id": "652b6bdc-40ce-45b7-8aa5-3bca79987993", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.162", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "3d603431c0bb4967bafc7a0aa6108bfe", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86fc0b7a-fb", "ovs_interfaceid": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 6 05:29:47 localhost nova_compute[282193]: 2025-12-06 10:29:47.741 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Releasing lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 6 05:29:47 localhost nova_compute[282193]: 2025-12-06 10:29:47.742 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 6 05:29:47 
localhost nova_compute[282193]: 2025-12-06 10:29:47.742 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:29:47 localhost ceph-mon[298582]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #67. Immutable memtables: 0. Dec 6 05:29:47 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:29:47.941672) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Dec 6 05:29:47 localhost ceph-mon[298582]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 67 Dec 6 05:29:47 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016987941733, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 1148, "num_deletes": 257, "total_data_size": 1153329, "memory_usage": 1177440, "flush_reason": "Manual Compaction"} Dec 6 05:29:47 localhost ceph-mon[298582]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #68: started Dec 6 05:29:47 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016987950922, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 68, "file_size": 741105, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 39230, "largest_seqno": 40372, "table_properties": {"data_size": 736493, "index_size": 2083, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 11423, "raw_average_key_size": 19, "raw_value_size": 726591, "raw_average_value_size": 1268, "num_data_blocks": 92, "num_entries": 573, 
"num_filter_entries": 573, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765016930, "oldest_key_time": 1765016930, "file_creation_time": 1765016987, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1ec2a3a6-c7d0-4f1e-9923-5a3b782725ae", "db_session_id": "JMBO5KX1IJCJ8FWC64EX", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}} Dec 6 05:29:47 localhost ceph-mon[298582]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 9296 microseconds, and 3253 cpu microseconds. Dec 6 05:29:47 localhost ceph-mon[298582]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Dec 6 05:29:47 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:29:47.950969) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #68: 741105 bytes OK Dec 6 05:29:47 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:29:47.950992) [db/memtable_list.cc:519] [default] Level-0 commit table #68 started Dec 6 05:29:47 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:29:47.953416) [db/memtable_list.cc:722] [default] Level-0 commit table #68: memtable #1 done Dec 6 05:29:47 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:29:47.953439) EVENT_LOG_v1 {"time_micros": 1765016987953432, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Dec 6 05:29:47 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:29:47.953461) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Dec 6 05:29:47 localhost ceph-mon[298582]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 1147508, prev total WAL file size 1147832, number of live WAL files 2. Dec 6 05:29:47 localhost ceph-mon[298582]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548789/store.db/000064.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 6 05:29:47 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:29:47.954409) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0034353232' seq:72057594037927935, type:22 .. 
'6C6F676D0034373735' seq:0, type:0; will stop at (end) Dec 6 05:29:47 localhost ceph-mon[298582]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00 Dec 6 05:29:47 localhost ceph-mon[298582]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [68(723KB)], [66(19MB)] Dec 6 05:29:47 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016987954504, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [68], "files_L6": [66], "score": -1, "input_data_size": 21440832, "oldest_snapshot_seqno": -1} Dec 6 05:29:48 localhost ceph-mon[298582]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #69: 14650 keys, 21308274 bytes, temperature: kUnknown Dec 6 05:29:48 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016988068274, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 69, "file_size": 21308274, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 21221249, "index_size": 49292, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 36677, "raw_key_size": 393797, "raw_average_key_size": 26, "raw_value_size": 20969385, "raw_average_value_size": 1431, "num_data_blocks": 1837, "num_entries": 14650, "num_filter_entries": 14650, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; 
max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765015623, "oldest_key_time": 0, "file_creation_time": 1765016987, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1ec2a3a6-c7d0-4f1e-9923-5a3b782725ae", "db_session_id": "JMBO5KX1IJCJ8FWC64EX", "orig_file_number": 69, "seqno_to_time_mapping": "N/A"}} Dec 6 05:29:48 localhost ceph-mon[298582]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Dec 6 05:29:48 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:29:48.068716) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 21308274 bytes Dec 6 05:29:48 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:29:48.070655) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 188.2 rd, 187.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 19.7 +0.0 blob) out(20.3 +0.0 blob), read-write-amplify(57.7) write-amplify(28.8) OK, records in: 15185, records dropped: 535 output_compression: NoCompression Dec 6 05:29:48 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:29:48.070685) EVENT_LOG_v1 {"time_micros": 1765016988070671, "job": 40, "event": "compaction_finished", "compaction_time_micros": 113906, "compaction_time_cpu_micros": 56197, "output_level": 6, "num_output_files": 1, "total_output_size": 21308274, "num_input_records": 15185, "num_output_records": 14650, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Dec 6 05:29:48 localhost ceph-mon[298582]: rocksdb: [file/delete_scheduler.cc:74] Deleted file 
/var/lib/ceph/mon/ceph-np0005548789/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 6 05:29:48 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016988071117, "job": 40, "event": "table_file_deletion", "file_number": 68} Dec 6 05:29:48 localhost ceph-mon[298582]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548789/store.db/000066.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 6 05:29:48 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016988074073, "job": 40, "event": "table_file_deletion", "file_number": 66} Dec 6 05:29:48 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:29:47.954261) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:29:48 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:29:48.074145) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:29:48 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:29:48.074154) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:29:48 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:29:48.074158) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:29:48 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:29:48.074163) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:29:48 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:29:48.074167) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:29:48 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:29:49 localhost 
nova_compute[282193]: 2025-12-06 10:29:49.181 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:29:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999. Dec 6 05:29:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941. Dec 6 05:29:50 localhost podman[337944]: 2025-12-06 10:29:50.916143459 +0000 UTC m=+0.071634920 container health_status b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Dec 6 05:29:50 localhost podman[337944]: 2025-12-06 10:29:50.928231089 +0000 UTC m=+0.083722550 container exec_died b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, 
config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 6 05:29:50 localhost systemd[1]: b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941.service: Deactivated successfully. Dec 6 05:29:51 localhost podman[337943]: 2025-12-06 10:29:51.011584257 +0000 UTC m=+0.166926653 container health_status 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', 
'/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125) Dec 6 05:29:51 localhost podman[337943]: 2025-12-06 10:29:51.016816898 +0000 UTC m=+0.172159294 container exec_died 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true) Dec 6 05:29:51 localhost systemd[1]: 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.service: Deactivated successfully. Dec 6 05:29:51 localhost nova_compute[282193]: 2025-12-06 10:29:51.181 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:29:52 localhost nova_compute[282193]: 2025-12-06 10:29:52.109 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4996-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:29:52 localhost nova_compute[282193]: 2025-12-06 10:29:52.111 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:29:52 localhost nova_compute[282193]: 2025-12-06 10:29:52.111 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 6 05:29:52 localhost nova_compute[282193]: 2025-12-06 10:29:52.111 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition 
/usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:29:52 localhost nova_compute[282193]: 2025-12-06 10:29:52.141 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:29:52 localhost nova_compute[282193]: 2025-12-06 10:29:52.142 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:29:53 localhost sshd[337984]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:29:53 localhost nova_compute[282193]: 2025-12-06 10:29:53.180 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:29:53 localhost nova_compute[282193]: 2025-12-06 10:29:53.180 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:29:53 localhost nova_compute[282193]: 2025-12-06 10:29:53.181 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 6 05:29:53 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:29:53 localhost podman[241090]: time="2025-12-06T10:29:53Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 6 05:29:53 localhost podman[241090]: @ - - [06/Dec/2025:10:29:53 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156104 "" "Go-http-client/1.1" Dec 6 05:29:53 localhost podman[241090]: @ - - [06/Dec/2025:10:29:53 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19271 "" "Go-http-client/1.1" Dec 6 05:29:55 localhost nova_compute[282193]: 2025-12-06 10:29:55.181 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:29:56 localhost sshd[337986]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:29:57 localhost nova_compute[282193]: 2025-12-06 10:29:57.142 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:29:57 localhost nova_compute[282193]: 2025-12-06 10:29:57.144 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:29:57 localhost nova_compute[282193]: 2025-12-06 10:29:57.145 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 6 05:29:57 localhost nova_compute[282193]: 2025-12-06 10:29:57.145 282197 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:29:57 localhost nova_compute[282193]: 2025-12-06 10:29:57.178 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:29:57 localhost nova_compute[282193]: 2025-12-06 10:29:57.179 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:29:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e. Dec 6 05:29:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094. Dec 6 05:29:57 localhost podman[337988]: 2025-12-06 10:29:57.940833856 +0000 UTC m=+0.092487638 container health_status 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, architecture=x86_64, config_id=edpm, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public) Dec 6 05:29:57 localhost podman[337988]: 2025-12-06 10:29:57.979297646 +0000 UTC m=+0.130951378 container exec_died 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, managed_by=edpm_ansible, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, config_id=edpm, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, distribution-scope=public, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41) Dec 6 05:29:58 localhost podman[337989]: 2025-12-06 10:29:58.001234629 +0000 UTC m=+0.147815576 container health_status bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, 
org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible) Dec 6 05:29:58 localhost systemd[1]: 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e.service: Deactivated successfully. 
Dec 6 05:29:58 localhost podman[337989]: 2025-12-06 10:29:58.039285497 +0000 UTC m=+0.185866474 container exec_died bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:29:58 localhost systemd[1]: bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094.service: Deactivated successfully. 
Dec 6 05:29:58 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:30:00 localhost ceph-mon[298582]: overall HEALTH_WARN 1 stray daemon(s) not managed by cephadm; 1 stray host(s) with 1 daemon(s) not managed by cephadm Dec 6 05:30:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6. Dec 6 05:30:02 localhost podman[338028]: 2025-12-06 10:30:02.066892249 +0000 UTC m=+0.078258462 container health_status 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, 
managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.license=GPLv2) Dec 6 05:30:02 localhost podman[338028]: 2025-12-06 10:30:02.077303419 +0000 UTC m=+0.088669612 container exec_died 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, 
managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:30:02 localhost systemd[1]: 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6.service: Deactivated successfully. Dec 6 05:30:02 localhost nova_compute[282193]: 2025-12-06 10:30:02.180 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4996-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:30:02 localhost nova_compute[282193]: 2025-12-06 10:30:02.181 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:30:02 localhost nova_compute[282193]: 2025-12-06 10:30:02.181 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 6 05:30:02 localhost nova_compute[282193]: 2025-12-06 10:30:02.182 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:30:02 localhost nova_compute[282193]: 2025-12-06 10:30:02.213 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:30:02 localhost nova_compute[282193]: 2025-12-06 10:30:02.214 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:30:03 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:30:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538. 
Dec 6 05:30:04 localhost podman[338045]: 2025-12-06 10:30:04.899061364 +0000 UTC m=+0.064539571 container health_status d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 6 05:30:04 localhost podman[338045]: 2025-12-06 10:30:04.935218363 +0000 UTC m=+0.100696480 container exec_died d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Dec 6 05:30:04 localhost systemd[1]: d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538.service: Deactivated successfully. 
Dec 6 05:30:07 localhost nova_compute[282193]: 2025-12-06 10:30:07.215 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:30:07 localhost nova_compute[282193]: 2025-12-06 10:30:07.217 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:30:07 localhost nova_compute[282193]: 2025-12-06 10:30:07.217 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 6 05:30:07 localhost nova_compute[282193]: 2025-12-06 10:30:07.217 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:30:07 localhost nova_compute[282193]: 2025-12-06 10:30:07.266 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:30:07 localhost nova_compute[282193]: 2025-12-06 10:30:07.266 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:30:08 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:30:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5. 
Dec 6 05:30:08 localhost podman[338068]: 2025-12-06 10:30:08.895790719 +0000 UTC m=+0.061238869 container health_status 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:30:08 localhost podman[338068]: 2025-12-06 10:30:08.931503405 +0000 UTC m=+0.096951565 container exec_died 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': 
'/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller) Dec 6 05:30:08 localhost systemd[1]: 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.service: Deactivated successfully. 
Dec 6 05:30:09 localhost ceph-osd[31726]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Dec 6 05:30:09 localhost ceph-osd[31726]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 9000.1 total, 600.0 interval#012Cumulative writes: 24K writes, 90K keys, 24K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.01 MB/s#012Cumulative WAL: 24K writes, 8386 syncs, 2.88 writes per sync, written: 0.07 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 11K writes, 42K keys, 11K commit groups, 1.0 writes per commit group, ingest: 35.98 MB, 0.06 MB/s#012Interval WAL: 11K writes, 4381 syncs, 2.51 writes per sync, written: 0.04 GB, 0.06 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Dec 6 05:30:10 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e280 e280: 6 total, 6 up, 6 in Dec 6 05:30:12 localhost nova_compute[282193]: 2025-12-06 10:30:12.308 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:30:12 localhost nova_compute[282193]: 2025-12-06 10:30:12.310 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:30:12 localhost nova_compute[282193]: 2025-12-06 10:30:12.310 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5043 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 6 05:30:12 localhost nova_compute[282193]: 2025-12-06 10:30:12.310 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:30:12 localhost nova_compute[282193]: 2025-12-06 10:30:12.311 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:30:12 
localhost nova_compute[282193]: 2025-12-06 10:30:12.311 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:30:12 localhost ceph-osd[32665]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Dec 6 05:30:12 localhost ceph-osd[32665]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 9000.2 total, 600.0 interval#012Cumulative writes: 22K writes, 87K keys, 22K commit groups, 1.0 writes per commit group, ingest: 0.08 GB, 0.01 MB/s#012Cumulative WAL: 22K writes, 8052 syncs, 2.84 writes per sync, written: 0.08 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 12K writes, 48K keys, 12K commit groups, 1.0 writes per commit group, ingest: 44.21 MB, 0.07 MB/s#012Interval WAL: 12K writes, 5039 syncs, 2.50 writes per sync, written: 0.04 GB, 0.07 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Dec 6 05:30:13 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:30:15 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 6 05:30:15 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' Dec 6 05:30:16 localhost openstack_network_exporter[243110]: ERROR 10:30:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:30:16 localhost openstack_network_exporter[243110]: ERROR 10:30:16 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 6 05:30:16 localhost openstack_network_exporter[243110]: ERROR 10:30:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:30:16 
localhost openstack_network_exporter[243110]: ERROR 10:30:16 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 6 05:30:16 localhost openstack_network_exporter[243110]: Dec 6 05:30:16 localhost openstack_network_exporter[243110]: ERROR 10:30:16 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 6 05:30:16 localhost openstack_network_exporter[243110]: Dec 6 05:30:17 localhost nova_compute[282193]: 2025-12-06 10:30:17.312 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:30:17 localhost nova_compute[282193]: 2025-12-06 10:30:17.314 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:30:17 localhost nova_compute[282193]: 2025-12-06 10:30:17.314 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 6 05:30:17 localhost nova_compute[282193]: 2025-12-06 10:30:17.315 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:30:17 localhost nova_compute[282193]: 2025-12-06 10:30:17.339 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:30:17 localhost nova_compute[282193]: 2025-12-06 10:30:17.340 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:30:17 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e281 e281: 6 total, 6 up, 6 in Dec 6 05:30:18 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' Dec 6 05:30:18 localhost ceph-mon[298582]: 
mon.np0005548789@1(peon).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:30:20 localhost ovn_metadata_agent[160504]: 2025-12-06 10:30:20.769 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=25, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:6c:02', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:a8:2f:0c:cb:a1'}, ipsec=False) old=SB_Global(nb_cfg=24) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:30:20 localhost nova_compute[282193]: 2025-12-06 10:30:20.769 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:30:20 localhost ovn_metadata_agent[160504]: 2025-12-06 10:30:20.772 160509 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Dec 6 05:30:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999. Dec 6 05:30:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941. 
Dec 6 05:30:21 localhost podman[338179]: 2025-12-06 10:30:21.935335294 +0000 UTC m=+0.090484188 container health_status 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 6 05:30:21 localhost podman[338179]: 2025-12-06 10:30:21.94627967 +0000 UTC 
m=+0.101428564 container exec_died 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent) Dec 6 05:30:21 localhost systemd[1]: 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.service: Deactivated successfully. 
Dec 6 05:30:22 localhost podman[338180]: 2025-12-06 10:30:22.032139154 +0000 UTC m=+0.187114842 container health_status b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Dec 6 05:30:22 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:30:22.038 263652 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:30:21Z, description=, device_id=574c884b-be04-4d2b-b229-29068bb8b5d2, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=2227d8ce-3e8d-404d-ba2e-378142bee228, ip_allocation=immediate, mac_address=fa:16:3e:cb:28:78, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T08:43:55Z, description=, dns_domain=, id=8e238f59-5792-4ff4-95af-f993c8e9e14f, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, 
port_security_enabled=True, project_id=3d603431c0bb4967bafc7a0aa6108bfe, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['be7ff830-a3e2-49cd-a05f-63bc528d6d1c'], tags=[], tenant_id=3d603431c0bb4967bafc7a0aa6108bfe, updated_at=2025-12-06T08:44:01Z, vlan_transparent=None, network_id=8e238f59-5792-4ff4-95af-f993c8e9e14f, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3949, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-06T10:30:21Z on network 8e238f59-5792-4ff4-95af-f993c8e9e14f#033[00m Dec 6 05:30:22 localhost podman[338180]: 2025-12-06 10:30:22.043142791 +0000 UTC m=+0.198118479 container exec_died b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 6 05:30:22 localhost systemd[1]: b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941.service: Deactivated successfully. 
Dec 6 05:30:22 localhost podman[338237]: 2025-12-06 10:30:22.232939565 +0000 UTC m=+0.055050301 container kill dd70518caed84fde347015325d6b35481123abc4a39df99f8bc9422c2468322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS) Dec 6 05:30:22 localhost dnsmasq[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 2 addresses Dec 6 05:30:22 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:30:22 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:30:22 localhost systemd[1]: tmp-crun.rmEZw9.mount: Deactivated successfully. Dec 6 05:30:22 localhost nova_compute[282193]: 2025-12-06 10:30:22.369 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:30:22 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:30:22.540 263652 INFO neutron.agent.dhcp.agent [None req-4fd6c565-f23c-4500-a7d2-8713d8462566 - - - - - -] DHCP configuration for ports {'2227d8ce-3e8d-404d-ba2e-378142bee228'} is completed#033[00m Dec 6 05:30:22 localhost nova_compute[282193]: 2025-12-06 10:30:22.865 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:30:22 localhost ceph-mon[298582]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #70. Immutable memtables: 0. 
Dec 6 05:30:22 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:30:22.976052) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Dec 6 05:30:22 localhost ceph-mon[298582]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 70 Dec 6 05:30:22 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765017022976092, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 716, "num_deletes": 251, "total_data_size": 682557, "memory_usage": 695392, "flush_reason": "Manual Compaction"} Dec 6 05:30:22 localhost ceph-mon[298582]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #71: started Dec 6 05:30:22 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765017022980012, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 71, "file_size": 346964, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 40377, "largest_seqno": 41088, "table_properties": {"data_size": 344004, "index_size": 879, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 8408, "raw_average_key_size": 21, "raw_value_size": 337544, "raw_average_value_size": 843, "num_data_blocks": 39, "num_entries": 400, "num_filter_entries": 400, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; 
zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765016987, "oldest_key_time": 1765016987, "file_creation_time": 1765017022, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1ec2a3a6-c7d0-4f1e-9923-5a3b782725ae", "db_session_id": "JMBO5KX1IJCJ8FWC64EX", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}} Dec 6 05:30:22 localhost ceph-mon[298582]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 3995 microseconds, and 1250 cpu microseconds. Dec 6 05:30:22 localhost ceph-mon[298582]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Dec 6 05:30:22 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:30:22.980046) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #71: 346964 bytes OK Dec 6 05:30:22 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:30:22.980066) [db/memtable_list.cc:519] [default] Level-0 commit table #71 started Dec 6 05:30:22 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:30:22.982131) [db/memtable_list.cc:722] [default] Level-0 commit table #71: memtable #1 done Dec 6 05:30:22 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:30:22.982143) EVENT_LOG_v1 {"time_micros": 1765017022982139, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Dec 6 05:30:22 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:30:22.982154) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Dec 6 05:30:22 localhost ceph-mon[298582]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 678674, prev total WAL file size 678998, number of live 
WAL files 2. Dec 6 05:30:22 localhost ceph-mon[298582]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548789/store.db/000067.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 6 05:30:22 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:30:22.984287) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740034323535' seq:72057594037927935, type:22 .. '6D6772737461740034353036' seq:0, type:0; will stop at (end) Dec 6 05:30:22 localhost ceph-mon[298582]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00 Dec 6 05:30:22 localhost ceph-mon[298582]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [71(338KB)], [69(20MB)] Dec 6 05:30:22 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765017022984330, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [71], "files_L6": [69], "score": -1, "input_data_size": 21655238, "oldest_snapshot_seqno": -1} Dec 6 05:30:23 localhost ceph-mon[298582]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #72: 14539 keys, 19629687 bytes, temperature: kUnknown Dec 6 05:30:23 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765017023086356, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 72, "file_size": 19629687, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 19547811, "index_size": 44463, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 36357, "raw_key_size": 391770, "raw_average_key_size": 26, "raw_value_size": 19302194, "raw_average_value_size": 
1327, "num_data_blocks": 1635, "num_entries": 14539, "num_filter_entries": 14539, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765015623, "oldest_key_time": 0, "file_creation_time": 1765017022, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1ec2a3a6-c7d0-4f1e-9923-5a3b782725ae", "db_session_id": "JMBO5KX1IJCJ8FWC64EX", "orig_file_number": 72, "seqno_to_time_mapping": "N/A"}} Dec 6 05:30:23 localhost ceph-mon[298582]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Dec 6 05:30:23 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:30:23.086702) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 19629687 bytes Dec 6 05:30:23 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:30:23.088362) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 212.0 rd, 192.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 20.3 +0.0 blob) out(18.7 +0.0 blob), read-write-amplify(119.0) write-amplify(56.6) OK, records in: 15050, records dropped: 511 output_compression: NoCompression Dec 6 05:30:23 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:30:23.088385) EVENT_LOG_v1 {"time_micros": 1765017023088373, "job": 42, "event": "compaction_finished", "compaction_time_micros": 102144, "compaction_time_cpu_micros": 52111, "output_level": 6, "num_output_files": 1, "total_output_size": 19629687, "num_input_records": 15050, "num_output_records": 14539, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Dec 6 05:30:23 localhost ceph-mon[298582]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548789/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 6 05:30:23 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765017023088560, "job": 42, "event": "table_file_deletion", "file_number": 71} Dec 6 05:30:23 localhost ceph-mon[298582]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548789/store.db/000069.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 6 05:30:23 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765017023090886, "job": 
42, "event": "table_file_deletion", "file_number": 69} Dec 6 05:30:23 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:30:22.984233) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:30:23 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:30:23.090938) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:30:23 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:30:23.090945) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:30:23 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:30:23.090947) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:30:23 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:30:23.090948) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:30:23 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:30:23.090950) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:30:23 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:30:23 localhost podman[241090]: time="2025-12-06T10:30:23Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 6 05:30:23 localhost podman[241090]: @ - - [06/Dec/2025:10:30:23 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156104 "" "Go-http-client/1.1" Dec 6 05:30:23 localhost podman[241090]: @ - - [06/Dec/2025:10:30:23 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19281 "" "Go-http-client/1.1" Dec 6 05:30:27 localhost nova_compute[282193]: 2025-12-06 10:30:27.428 282197 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:30:27 localhost nova_compute[282193]: 2025-12-06 10:30:27.947 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:30:28 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:30:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e. Dec 6 05:30:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094. Dec 6 05:30:28 localhost systemd[1]: tmp-crun.PU9Kej.mount: Deactivated successfully. Dec 6 05:30:28 localhost podman[338259]: 2025-12-06 10:30:28.925849355 +0000 UTC m=+0.082597935 container health_status bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, managed_by=edpm_ansible) Dec 6 05:30:28 localhost podman[338259]: 2025-12-06 10:30:28.938149133 +0000 UTC m=+0.094897723 container exec_died bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible) Dec 6 05:30:28 localhost systemd[1]: bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094.service: Deactivated successfully. Dec 6 05:30:29 localhost podman[338258]: 2025-12-06 10:30:29.032467986 +0000 UTC m=+0.187843054 container health_status 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.6, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., io.buildah.version=1.33.7, distribution-scope=public, vcs-type=git, architecture=x86_64, config_id=edpm, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, name=ubi9-minimal, maintainer=Red Hat, Inc., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers) Dec 6 05:30:29 localhost podman[338258]: 2025-12-06 10:30:29.048170088 +0000 UTC m=+0.203545256 container exec_died 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e 
(image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.buildah.version=1.33.7, version=9.6, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, build-date=2025-08-20T13:12:41, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, vendor=Red Hat, Inc., name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Dec 6 05:30:29 localhost systemd[1]: 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e.service: Deactivated successfully. 
Dec 6 05:30:29 localhost ovn_metadata_agent[160504]: 2025-12-06 10:30:29.774 160509 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=b142a5ef-fbed-4e92-aa78-e3ad080c6370, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '25'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Dec 6 05:30:31 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:30:31.963 263652 INFO neutron.agent.linux.ip_lib [None req-8e4c2764-24e6-4cd8-b12e-c26a555e62ed - - - - - -] Device tap1edce767-a0 cannot be used as it has no MAC address#033[00m Dec 6 05:30:31 localhost nova_compute[282193]: 2025-12-06 10:30:31.989 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:30:31 localhost kernel: device tap1edce767-a0 entered promiscuous mode Dec 6 05:30:31 localhost nova_compute[282193]: 2025-12-06 10:30:31.998 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:30:31 localhost ovn_controller[154851]: 2025-12-06T10:30:31Z|00529|binding|INFO|Claiming lport 1edce767-a07b-4181-9f27-a5630d291dd5 for this chassis. Dec 6 05:30:32 localhost NetworkManager[5973]: [1765017032.0008] manager: (tap1edce767-a0): new Generic device (/org/freedesktop/NetworkManager/Devices/83) Dec 6 05:30:32 localhost ovn_controller[154851]: 2025-12-06T10:30:31Z|00530|binding|INFO|1edce767-a07b-4181-9f27-a5630d291dd5: Claiming unknown Dec 6 05:30:32 localhost systemd-udevd[338307]: Network interface NamePolicy= disabled on kernel command line. 
Dec 6 05:30:32 localhost ovn_metadata_agent[160504]: 2025-12-06 10:30:32.015 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-3ce5dfa7-fde4-49df-8cf8-aff98cb18adf', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3ce5dfa7-fde4-49df-8cf8-aff98cb18adf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7d2c3fc1d605488db2b4af2af7696c67', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4dcc78c6-bb1e-494e-a935-28e9e1421181, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=1edce767-a07b-4181-9f27-a5630d291dd5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:30:32 localhost ovn_metadata_agent[160504]: 2025-12-06 10:30:32.016 160509 INFO neutron.agent.ovn.metadata.agent [-] Port 1edce767-a07b-4181-9f27-a5630d291dd5 in datapath 3ce5dfa7-fde4-49df-8cf8-aff98cb18adf bound to our chassis#033[00m Dec 6 05:30:32 localhost ovn_metadata_agent[160504]: 2025-12-06 10:30:32.020 160509 DEBUG neutron.agent.ovn.metadata.agent [-] Port 6199ee5b-335e-4714-8d26-80e7969ccc10 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Dec 6 05:30:32 localhost ovn_metadata_agent[160504]: 2025-12-06 10:30:32.020 160509 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3ce5dfa7-fde4-49df-8cf8-aff98cb18adf, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 6 05:30:32 localhost ovn_metadata_agent[160504]: 2025-12-06 10:30:32.021 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[aa2e3e7f-e601-42da-821c-db3a5e9b86e4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:30:32 localhost journal[230404]: ethtool ioctl error on tap1edce767-a0: No such device Dec 6 05:30:32 localhost journal[230404]: ethtool ioctl error on tap1edce767-a0: No such device Dec 6 05:30:32 localhost ovn_controller[154851]: 2025-12-06T10:30:32Z|00531|binding|INFO|Setting lport 1edce767-a07b-4181-9f27-a5630d291dd5 ovn-installed in OVS Dec 6 05:30:32 localhost ovn_controller[154851]: 2025-12-06T10:30:32Z|00532|binding|INFO|Setting lport 1edce767-a07b-4181-9f27-a5630d291dd5 up in Southbound Dec 6 05:30:32 localhost journal[230404]: ethtool ioctl error on tap1edce767-a0: No such device Dec 6 05:30:32 localhost journal[230404]: ethtool ioctl error on tap1edce767-a0: No such device Dec 6 05:30:32 localhost nova_compute[282193]: 2025-12-06 10:30:32.043 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:30:32 localhost journal[230404]: ethtool ioctl error on tap1edce767-a0: No such device Dec 6 05:30:32 localhost journal[230404]: ethtool ioctl error on tap1edce767-a0: No such device Dec 6 05:30:32 localhost journal[230404]: ethtool ioctl error on tap1edce767-a0: No such device Dec 6 05:30:32 localhost journal[230404]: ethtool ioctl error on tap1edce767-a0: No such device Dec 6 05:30:32 
localhost nova_compute[282193]: 2025-12-06 10:30:32.081 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:30:32 localhost nova_compute[282193]: 2025-12-06 10:30:32.109 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:30:32 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:30:32.241 263652 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:30:31Z, description=, device_id=994662bc-2fe4-420e-80b2-94ab1d759954, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=47c772c4-13e0-4bd1-bfd3-8aae8031c96d, ip_allocation=immediate, mac_address=fa:16:3e:2c:7a:f1, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T08:43:55Z, description=, dns_domain=, id=8e238f59-5792-4ff4-95af-f993c8e9e14f, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=3d603431c0bb4967bafc7a0aa6108bfe, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['be7ff830-a3e2-49cd-a05f-63bc528d6d1c'], tags=[], tenant_id=3d603431c0bb4967bafc7a0aa6108bfe, updated_at=2025-12-06T08:44:01Z, vlan_transparent=None, network_id=8e238f59-5792-4ff4-95af-f993c8e9e14f, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3963, status=DOWN, 
tags=[], tenant_id=, updated_at=2025-12-06T10:30:31Z on network 8e238f59-5792-4ff4-95af-f993c8e9e14f#033[00m Dec 6 05:30:32 localhost nova_compute[282193]: 2025-12-06 10:30:32.471 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:30:32 localhost dnsmasq[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 3 addresses Dec 6 05:30:32 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:30:32 localhost podman[338365]: 2025-12-06 10:30:32.487349017 +0000 UTC m=+0.091999303 container kill dd70518caed84fde347015325d6b35481123abc4a39df99f8bc9422c2468322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Dec 6 05:30:32 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:30:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6. Dec 6 05:30:32 localhost systemd[1]: tmp-crun.3DMVdk.mount: Deactivated successfully. 
Dec 6 05:30:32 localhost podman[338383]: 2025-12-06 10:30:32.567676722 +0000 UTC m=+0.063797909 container health_status 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image) Dec 6 05:30:32 localhost podman[338383]: 2025-12-06 10:30:32.582094354 +0000 UTC m=+0.078215611 container exec_died 
44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2) Dec 6 05:30:32 localhost systemd[1]: 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6.service: Deactivated successfully. 
Dec 6 05:30:32 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:30:32.750 263652 INFO neutron.agent.dhcp.agent [None req-b4f64022-b44e-4bbb-b798-f5ee31f8a2a6 - - - - - -] DHCP configuration for ports {'47c772c4-13e0-4bd1-bfd3-8aae8031c96d'} is completed#033[00m Dec 6 05:30:32 localhost systemd[1]: tmp-crun.HH2HRV.mount: Deactivated successfully. Dec 6 05:30:32 localhost podman[338434]: Dec 6 05:30:33 localhost podman[338434]: 2025-12-06 10:30:33.001519143 +0000 UTC m=+0.088645191 container create d68a3aac5ad9280c33e780fc50141c75b969f9713ad15986c7963925dcbf4e3a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3ce5dfa7-fde4-49df-8cf8-aff98cb18adf, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS) Dec 6 05:30:33 localhost systemd[1]: Started libpod-conmon-d68a3aac5ad9280c33e780fc50141c75b969f9713ad15986c7963925dcbf4e3a.scope. Dec 6 05:30:33 localhost systemd[1]: Started libcrun container. 
Dec 6 05:30:33 localhost podman[338434]: 2025-12-06 10:30:32.959635908 +0000 UTC m=+0.046761946 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Dec 6 05:30:33 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ec5a68b272409c0780e9b34c01467356b790bdd39419926a1b87579a37fc65f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Dec 6 05:30:33 localhost podman[338434]: 2025-12-06 10:30:33.071602073 +0000 UTC m=+0.158728061 container init d68a3aac5ad9280c33e780fc50141c75b969f9713ad15986c7963925dcbf4e3a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3ce5dfa7-fde4-49df-8cf8-aff98cb18adf, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Dec 6 05:30:33 localhost podman[338434]: 2025-12-06 10:30:33.084246292 +0000 UTC m=+0.171372280 container start d68a3aac5ad9280c33e780fc50141c75b969f9713ad15986c7963925dcbf4e3a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3ce5dfa7-fde4-49df-8cf8-aff98cb18adf, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:30:33 localhost dnsmasq[338452]: started, version 2.85 cachesize 150 Dec 6 05:30:33 localhost dnsmasq[338452]: DNS service limited to local subnets Dec 6 05:30:33 localhost dnsmasq[338452]: compile time options: IPv6 GNU-getopt DBus no-UBus 
no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Dec 6 05:30:33 localhost dnsmasq[338452]: warning: no upstream servers configured Dec 6 05:30:33 localhost dnsmasq-dhcp[338452]: DHCP, static leases only on 10.100.0.0, lease time 1d Dec 6 05:30:33 localhost dnsmasq[338452]: read /var/lib/neutron/dhcp/3ce5dfa7-fde4-49df-8cf8-aff98cb18adf/addn_hosts - 0 addresses Dec 6 05:30:33 localhost dnsmasq-dhcp[338452]: read /var/lib/neutron/dhcp/3ce5dfa7-fde4-49df-8cf8-aff98cb18adf/host Dec 6 05:30:33 localhost dnsmasq-dhcp[338452]: read /var/lib/neutron/dhcp/3ce5dfa7-fde4-49df-8cf8-aff98cb18adf/opts Dec 6 05:30:33 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:30:33 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:30:33.513 263652 INFO neutron.agent.dhcp.agent [None req-00b19acc-fd4b-4c0b-95cf-0187f8f6f303 - - - - - -] DHCP configuration for ports {'e6bf6217-acc3-4b45-a780-fd5e44cdc315'} is completed#033[00m Dec 6 05:30:33 localhost nova_compute[282193]: 2025-12-06 10:30:33.590 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:30:35 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:30:35.062 263652 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:30:34Z, description=, device_id=994662bc-2fe4-420e-80b2-94ab1d759954, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=902f33d2-09bd-47a8-8ed4-f71d87316d90, ip_allocation=immediate, mac_address=fa:16:3e:f7:32:ca, name=, network=admin_state_up=True, availability_zone_hints=[], 
availability_zones=[], created_at=2025-12-06T10:30:28Z, description=, dns_domain=, id=3ce5dfa7-fde4-49df-8cf8-aff98cb18adf, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PrometheusGabbiTest-1706947745-network, port_security_enabled=True, project_id=7d2c3fc1d605488db2b4af2af7696c67, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=48346, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=3956, status=ACTIVE, subnets=['b1a7cf81-c0c7-4652-b7ea-4819206fe79a'], tags=[], tenant_id=7d2c3fc1d605488db2b4af2af7696c67, updated_at=2025-12-06T10:30:29Z, vlan_transparent=None, network_id=3ce5dfa7-fde4-49df-8cf8-aff98cb18adf, port_security_enabled=False, project_id=7d2c3fc1d605488db2b4af2af7696c67, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3964, status=DOWN, tags=[], tenant_id=7d2c3fc1d605488db2b4af2af7696c67, updated_at=2025-12-06T10:30:34Z on network 3ce5dfa7-fde4-49df-8cf8-aff98cb18adf#033[00m Dec 6 05:30:35 localhost dnsmasq[338452]: read /var/lib/neutron/dhcp/3ce5dfa7-fde4-49df-8cf8-aff98cb18adf/addn_hosts - 1 addresses Dec 6 05:30:35 localhost dnsmasq-dhcp[338452]: read /var/lib/neutron/dhcp/3ce5dfa7-fde4-49df-8cf8-aff98cb18adf/host Dec 6 05:30:35 localhost podman[338471]: 2025-12-06 10:30:35.27705554 +0000 UTC m=+0.064789629 container kill d68a3aac5ad9280c33e780fc50141c75b969f9713ad15986c7963925dcbf4e3a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3ce5dfa7-fde4-49df-8cf8-aff98cb18adf, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, 
org.label-schema.schema-version=1.0) Dec 6 05:30:35 localhost dnsmasq-dhcp[338452]: read /var/lib/neutron/dhcp/3ce5dfa7-fde4-49df-8cf8-aff98cb18adf/opts Dec 6 05:30:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538. Dec 6 05:30:35 localhost systemd[1]: tmp-crun.1FuqXw.mount: Deactivated successfully. Dec 6 05:30:35 localhost podman[338484]: 2025-12-06 10:30:35.392908625 +0000 UTC m=+0.088468395 container health_status d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Dec 6 05:30:35 localhost podman[338484]: 2025-12-06 10:30:35.408514244 +0000 UTC 
m=+0.104074014 container exec_died d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Dec 6 05:30:35 localhost systemd[1]: d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538.service: Deactivated successfully. 
Dec 6 05:30:35 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:30:35.558 263652 INFO neutron.agent.dhcp.agent [None req-7d0c375b-c979-4247-bd8f-b550d5a21af3 - - - - - -] DHCP configuration for ports {'902f33d2-09bd-47a8-8ed4-f71d87316d90'} is completed#033[00m Dec 6 05:30:36 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:30:36.676 263652 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:30:34Z, description=, device_id=994662bc-2fe4-420e-80b2-94ab1d759954, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=902f33d2-09bd-47a8-8ed4-f71d87316d90, ip_allocation=immediate, mac_address=fa:16:3e:f7:32:ca, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:30:28Z, description=, dns_domain=, id=3ce5dfa7-fde4-49df-8cf8-aff98cb18adf, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PrometheusGabbiTest-1706947745-network, port_security_enabled=True, project_id=7d2c3fc1d605488db2b4af2af7696c67, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=48346, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=3956, status=ACTIVE, subnets=['b1a7cf81-c0c7-4652-b7ea-4819206fe79a'], tags=[], tenant_id=7d2c3fc1d605488db2b4af2af7696c67, updated_at=2025-12-06T10:30:29Z, vlan_transparent=None, network_id=3ce5dfa7-fde4-49df-8cf8-aff98cb18adf, port_security_enabled=False, project_id=7d2c3fc1d605488db2b4af2af7696c67, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3964, status=DOWN, tags=[], tenant_id=7d2c3fc1d605488db2b4af2af7696c67, updated_at=2025-12-06T10:30:34Z 
on network 3ce5dfa7-fde4-49df-8cf8-aff98cb18adf#033[00m Dec 6 05:30:36 localhost dnsmasq[338452]: read /var/lib/neutron/dhcp/3ce5dfa7-fde4-49df-8cf8-aff98cb18adf/addn_hosts - 1 addresses Dec 6 05:30:36 localhost podman[338531]: 2025-12-06 10:30:36.889880335 +0000 UTC m=+0.058755944 container kill d68a3aac5ad9280c33e780fc50141c75b969f9713ad15986c7963925dcbf4e3a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3ce5dfa7-fde4-49df-8cf8-aff98cb18adf, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:30:36 localhost dnsmasq-dhcp[338452]: read /var/lib/neutron/dhcp/3ce5dfa7-fde4-49df-8cf8-aff98cb18adf/host Dec 6 05:30:36 localhost dnsmasq-dhcp[338452]: read /var/lib/neutron/dhcp/3ce5dfa7-fde4-49df-8cf8-aff98cb18adf/opts Dec 6 05:30:37 localhost nova_compute[282193]: 2025-12-06 10:30:37.509 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:30:37 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:30:37.524 263652 INFO neutron.agent.dhcp.agent [None req-a29facaf-6969-4596-802f-917779c911a5 - - - - - -] DHCP configuration for ports {'902f33d2-09bd-47a8-8ed4-f71d87316d90'} is completed#033[00m Dec 6 05:30:38 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:30:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5. 
Dec 6 05:30:39 localhost podman[338551]: 2025-12-06 10:30:39.926013519 +0000 UTC m=+0.088278470 container health_status 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3) Dec 6 05:30:40 localhost podman[338551]: 2025-12-06 10:30:40.01439398 +0000 UTC m=+0.176658921 container exec_died 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Dec 6 05:30:40 localhost systemd[1]: 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.service: Deactivated successfully. 
Dec 6 05:30:42 localhost nova_compute[282193]: 2025-12-06 10:30:42.510 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:30:42 localhost nova_compute[282193]: 2025-12-06 10:30:42.512 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:30:42 localhost nova_compute[282193]: 2025-12-06 10:30:42.513 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 6 05:30:42 localhost nova_compute[282193]: 2025-12-06 10:30:42.513 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:30:42 localhost nova_compute[282193]: 2025-12-06 10:30:42.555 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:30:42 localhost nova_compute[282193]: 2025-12-06 10:30:42.556 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:30:43 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:30:44 localhost nova_compute[282193]: 2025-12-06 10:30:44.177 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:30:45 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e282 e282: 6 total, 6 up, 6 in Dec 6 05:30:45 localhost nova_compute[282193]: 2025-12-06 
10:30:45.180 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:30:45 localhost nova_compute[282193]: 2025-12-06 10:30:45.366 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 05:30:45 localhost nova_compute[282193]: 2025-12-06 10:30:45.367 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 05:30:45 localhost nova_compute[282193]: 2025-12-06 10:30:45.367 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 05:30:45 localhost nova_compute[282193]: 2025-12-06 10:30:45.368 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Auditing locally available compute resources for np0005548789.localdomain (node: np0005548789.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 6 05:30:45 localhost nova_compute[282193]: 2025-12-06 10:30:45.368 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running cmd (subprocess): ceph df --format=json 
--id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 6 05:30:45 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 6 05:30:45 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/510296205' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 6 05:30:45 localhost nova_compute[282193]: 2025-12-06 10:30:45.825 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 6 05:30:45 localhost nova_compute[282193]: 2025-12-06 10:30:45.894 282197 DEBUG nova.virt.libvirt.driver [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 6 05:30:45 localhost nova_compute[282193]: 2025-12-06 10:30:45.895 282197 DEBUG nova.virt.libvirt.driver [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 6 05:30:46 localhost nova_compute[282193]: 2025-12-06 10:30:46.100 282197 WARNING nova.virt.libvirt.driver [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 6 05:30:46 localhost nova_compute[282193]: 2025-12-06 10:30:46.102 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Hypervisor/Node resource view: name=np0005548789.localdomain free_ram=11094MB free_disk=41.83699035644531GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": 
"1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 6 05:30:46 localhost nova_compute[282193]: 2025-12-06 10:30:46.102 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 05:30:46 localhost nova_compute[282193]: 2025-12-06 10:30:46.103 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 05:30:46 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e283 e283: 6 total, 6 up, 6 in Dec 6 05:30:46 localhost nova_compute[282193]: 2025-12-06 10:30:46.205 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Instance b7ed0a2e-9350-4933-9334-4e5e08d3e6aa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 6 05:30:46 localhost nova_compute[282193]: 2025-12-06 10:30:46.206 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 6 05:30:46 localhost nova_compute[282193]: 2025-12-06 10:30:46.206 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Final resource view: name=np0005548789.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 6 05:30:46 localhost nova_compute[282193]: 2025-12-06 10:30:46.246 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 6 05:30:46 localhost openstack_network_exporter[243110]: ERROR 10:30:46 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 6 05:30:46 localhost openstack_network_exporter[243110]: ERROR 10:30:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:30:46 localhost openstack_network_exporter[243110]: ERROR 10:30:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:30:46 localhost openstack_network_exporter[243110]: ERROR 10:30:46 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 6 05:30:46 localhost openstack_network_exporter[243110]: Dec 6 05:30:46 localhost 
openstack_network_exporter[243110]: ERROR 10:30:46 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 6 05:30:46 localhost openstack_network_exporter[243110]: Dec 6 05:30:46 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 6 05:30:46 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/495746363' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 6 05:30:46 localhost nova_compute[282193]: 2025-12-06 10:30:46.672 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 6 05:30:46 localhost nova_compute[282193]: 2025-12-06 10:30:46.679 282197 DEBUG nova.compute.provider_tree [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Inventory has not changed in ProviderTree for provider: 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 6 05:30:46 localhost nova_compute[282193]: 2025-12-06 10:30:46.699 282197 DEBUG nova.scheduler.client.report [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Inventory has not changed for provider 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 6 05:30:46 localhost 
nova_compute[282193]: 2025-12-06 10:30:46.700 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Compute_service record updated for np0005548789.localdomain:np0005548789.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 6 05:30:46 localhost nova_compute[282193]: 2025-12-06 10:30:46.700 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.597s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 05:30:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:30:47.347 160509 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 05:30:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:30:47.347 160509 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 05:30:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:30:47.348 160509 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 05:30:47 localhost nova_compute[282193]: 2025-12-06 10:30:47.587 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4996-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:30:47 localhost nova_compute[282193]: 2025-12-06 10:30:47.589 
282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:30:47 localhost nova_compute[282193]: 2025-12-06 10:30:47.589 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5033 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 6 05:30:47 localhost nova_compute[282193]: 2025-12-06 10:30:47.590 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:30:47 localhost nova_compute[282193]: 2025-12-06 10:30:47.591 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:30:47 localhost nova_compute[282193]: 2025-12-06 10:30:47.595 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:30:47 localhost nova_compute[282193]: 2025-12-06 10:30:47.701 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:30:47 localhost nova_compute[282193]: 2025-12-06 10:30:47.702 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 6 05:30:47 localhost nova_compute[282193]: 2025-12-06 10:30:47.702 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 6 05:30:48 
localhost sshd[338621]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:30:48 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:30:48 localhost nova_compute[282193]: 2025-12-06 10:30:48.517 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 6 05:30:48 localhost nova_compute[282193]: 2025-12-06 10:30:48.518 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquired lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 6 05:30:48 localhost nova_compute[282193]: 2025-12-06 10:30:48.518 282197 DEBUG nova.network.neutron [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 6 05:30:48 localhost nova_compute[282193]: 2025-12-06 10:30:48.519 282197 DEBUG nova.objects.instance [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b7ed0a2e-9350-4933-9334-4e5e08d3e6aa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 6 05:30:49 localhost nova_compute[282193]: 2025-12-06 10:30:49.115 282197 DEBUG nova.network.neutron [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Updating instance_info_cache with network_info: [{"id": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "address": "fa:16:3e:64:77:f3", "network": {"id": "652b6bdc-40ce-45b7-8aa5-3bca79987993", "bridge": "br-int", 
"label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.162", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "3d603431c0bb4967bafc7a0aa6108bfe", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86fc0b7a-fb", "ovs_interfaceid": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 6 05:30:49 localhost nova_compute[282193]: 2025-12-06 10:30:49.140 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Releasing lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 6 05:30:49 localhost nova_compute[282193]: 2025-12-06 10:30:49.140 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 6 05:30:49 localhost nova_compute[282193]: 2025-12-06 10:30:49.181 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks 
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:30:49 localhost nova_compute[282193]: 2025-12-06 10:30:49.181 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:30:52 localhost nova_compute[282193]: 2025-12-06 10:30:52.183 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:30:52 localhost nova_compute[282193]: 2025-12-06 10:30:52.596 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:30:52 localhost nova_compute[282193]: 2025-12-06 10:30:52.598 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:30:52 localhost nova_compute[282193]: 2025-12-06 10:30:52.598 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 6 05:30:52 localhost nova_compute[282193]: 2025-12-06 10:30:52.598 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:30:52 localhost nova_compute[282193]: 2025-12-06 10:30:52.608 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:30:52 localhost nova_compute[282193]: 2025-12-06 10:30:52.609 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition 
/usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:30:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999. Dec 6 05:30:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941. Dec 6 05:30:52 localhost podman[338623]: 2025-12-06 10:30:52.91263296 +0000 UTC m=+0.076246421 container health_status 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, 
org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent) Dec 6 05:30:52 localhost podman[338623]: 2025-12-06 10:30:52.921091309 +0000 UTC m=+0.084704760 container exec_died 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent) Dec 6 05:30:52 localhost systemd[1]: 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.service: Deactivated successfully. Dec 6 05:30:52 localhost podman[338624]: 2025-12-06 10:30:52.965905545 +0000 UTC m=+0.127611507 container health_status b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Dec 6 05:30:53 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e284 e284: 6 total, 6 up, 6 in Dec 6 05:30:53 localhost podman[338624]: 2025-12-06 10:30:53.008226083 +0000 UTC m=+0.169932095 container exec_died b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, 
config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Dec 6 05:30:53 localhost systemd[1]: b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941.service: Deactivated successfully. Dec 6 05:30:53 localhost nova_compute[282193]: 2025-12-06 10:30:53.181 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:30:53 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:30:53 localhost ovn_controller[154851]: 2025-12-06T10:30:53Z|00533|binding|INFO|Releasing lport 4fb81ffd-e198-4628-9bd0-0c0f0c89c33a from this chassis (sb_readonly=0) Dec 6 05:30:53 localhost nova_compute[282193]: 2025-12-06 10:30:53.733 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:30:53 localhost dnsmasq[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 2 addresses Dec 6 05:30:53 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:30:53 localhost dnsmasq-dhcp[314636]: read 
/var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:30:53 localhost podman[338682]: 2025-12-06 10:30:53.761859725 +0000 UTC m=+0.084395690 container kill dd70518caed84fde347015325d6b35481123abc4a39df99f8bc9422c2468322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Dec 6 05:30:53 localhost podman[241090]: time="2025-12-06T10:30:53Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 6 05:30:53 localhost podman[241090]: @ - - [06/Dec/2025:10:30:53 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 157928 "" "Go-http-client/1.1" Dec 6 05:30:53 localhost podman[241090]: @ - - [06/Dec/2025:10:30:53 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19756 "" "Go-http-client/1.1" Dec 6 05:30:54 localhost nova_compute[282193]: 2025-12-06 10:30:54.181 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:30:54 localhost nova_compute[282193]: 2025-12-06 10:30:54.182 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Dec 6 05:30:55 localhost nova_compute[282193]: 2025-12-06 10:30:55.183 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:30:55 localhost dnsmasq[338452]: read /var/lib/neutron/dhcp/3ce5dfa7-fde4-49df-8cf8-aff98cb18adf/addn_hosts - 0 addresses Dec 6 05:30:55 localhost dnsmasq-dhcp[338452]: read /var/lib/neutron/dhcp/3ce5dfa7-fde4-49df-8cf8-aff98cb18adf/host Dec 6 05:30:55 localhost dnsmasq-dhcp[338452]: read /var/lib/neutron/dhcp/3ce5dfa7-fde4-49df-8cf8-aff98cb18adf/opts Dec 6 05:30:55 localhost podman[338722]: 2025-12-06 10:30:55.686848327 +0000 UTC m=+0.042017381 container kill d68a3aac5ad9280c33e780fc50141c75b969f9713ad15986c7963925dcbf4e3a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3ce5dfa7-fde4-49df-8cf8-aff98cb18adf, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:30:55 localhost nova_compute[282193]: 2025-12-06 10:30:55.867 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:30:55 localhost ovn_controller[154851]: 2025-12-06T10:30:55Z|00534|binding|INFO|Releasing lport 1edce767-a07b-4181-9f27-a5630d291dd5 from this chassis (sb_readonly=0) Dec 6 05:30:55 localhost ovn_controller[154851]: 2025-12-06T10:30:55Z|00535|binding|INFO|Setting lport 1edce767-a07b-4181-9f27-a5630d291dd5 down in Southbound Dec 6 
05:30:55 localhost kernel: device tap1edce767-a0 left promiscuous mode Dec 6 05:30:55 localhost ovn_metadata_agent[160504]: 2025-12-06 10:30:55.878 160509 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548789.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcpc2183f4b-9cdb-5853-8c79-53b0285a0de7-3ce5dfa7-fde4-49df-8cf8-aff98cb18adf', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3ce5dfa7-fde4-49df-8cf8-aff98cb18adf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7d2c3fc1d605488db2b4af2af7696c67', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005548789.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4dcc78c6-bb1e-494e-a935-28e9e1421181, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=1edce767-a07b-4181-9f27-a5630d291dd5) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Dec 6 05:30:55 localhost ovn_metadata_agent[160504]: 2025-12-06 10:30:55.880 160509 INFO neutron.agent.ovn.metadata.agent [-] Port 1edce767-a07b-4181-9f27-a5630d291dd5 in datapath 3ce5dfa7-fde4-49df-8cf8-aff98cb18adf unbound from our chassis#033[00m Dec 6 05:30:55 localhost ovn_metadata_agent[160504]: 2025-12-06 10:30:55.882 160509 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 
3ce5dfa7-fde4-49df-8cf8-aff98cb18adf, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Dec 6 05:30:55 localhost ovn_metadata_agent[160504]: 2025-12-06 10:30:55.882 160674 DEBUG oslo.privsep.daemon [-] privsep: reply[ac9c1cf7-5e51-47a1-a94d-91553f4c24e2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Dec 6 05:30:55 localhost nova_compute[282193]: 2025-12-06 10:30:55.892 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:30:57 localhost ovn_controller[154851]: 2025-12-06T10:30:57Z|00536|binding|INFO|Releasing lport 4fb81ffd-e198-4628-9bd0-0c0f0c89c33a from this chassis (sb_readonly=0) Dec 6 05:30:57 localhost nova_compute[282193]: 2025-12-06 10:30:57.535 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:30:57 localhost dnsmasq[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/addn_hosts - 1 addresses Dec 6 05:30:57 localhost dnsmasq-dhcp[314636]: read /var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/host Dec 6 05:30:57 localhost podman[338760]: 2025-12-06 10:30:57.550832616 +0000 UTC m=+0.047209260 container kill dd70518caed84fde347015325d6b35481123abc4a39df99f8bc9422c2468322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8e238f59-5792-4ff4-95af-f993c8e9e14f, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Dec 6 05:30:57 localhost dnsmasq-dhcp[314636]: read 
/var/lib/neutron/dhcp/8e238f59-5792-4ff4-95af-f993c8e9e14f/opts Dec 6 05:30:57 localhost nova_compute[282193]: 2025-12-06 10:30:57.610 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:30:58 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:30:59 localhost dnsmasq[338452]: exiting on receipt of SIGTERM Dec 6 05:30:59 localhost podman[338796]: 2025-12-06 10:30:59.463564992 +0000 UTC m=+0.060951112 container kill d68a3aac5ad9280c33e780fc50141c75b969f9713ad15986c7963925dcbf4e3a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3ce5dfa7-fde4-49df-8cf8-aff98cb18adf, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true) Dec 6 05:30:59 localhost systemd[1]: libpod-d68a3aac5ad9280c33e780fc50141c75b969f9713ad15986c7963925dcbf4e3a.scope: Deactivated successfully. Dec 6 05:30:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e. Dec 6 05:30:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094. 
Dec 6 05:30:59 localhost podman[338811]: 2025-12-06 10:30:59.512304537 +0000 UTC m=+0.040880065 container died d68a3aac5ad9280c33e780fc50141c75b969f9713ad15986c7963925dcbf4e3a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3ce5dfa7-fde4-49df-8cf8-aff98cb18adf, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:30:59 localhost systemd[1]: tmp-crun.1GOr4b.mount: Deactivated successfully. Dec 6 05:30:59 localhost podman[338828]: 2025-12-06 10:30:59.560902188 +0000 UTC m=+0.068818363 container health_status bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', 
'/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible) Dec 6 05:30:59 localhost podman[338828]: 2025-12-06 10:30:59.571219704 +0000 UTC m=+0.079135899 container exec_died bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Dec 6 05:30:59 localhost systemd[1]: bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094.service: Deactivated successfully. Dec 6 05:30:59 localhost podman[338811]: 2025-12-06 10:30:59.598482641 +0000 UTC m=+0.127058129 container cleanup d68a3aac5ad9280c33e780fc50141c75b969f9713ad15986c7963925dcbf4e3a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3ce5dfa7-fde4-49df-8cf8-aff98cb18adf, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd) Dec 6 05:30:59 localhost systemd[1]: libpod-conmon-d68a3aac5ad9280c33e780fc50141c75b969f9713ad15986c7963925dcbf4e3a.scope: Deactivated successfully. 
Dec 6 05:30:59 localhost podman[338818]: 2025-12-06 10:30:59.661282678 +0000 UTC m=+0.176835647 container remove d68a3aac5ad9280c33e780fc50141c75b969f9713ad15986c7963925dcbf4e3a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3ce5dfa7-fde4-49df-8cf8-aff98cb18adf, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Dec 6 05:30:59 localhost podman[338819]: 2025-12-06 10:30:59.729637435 +0000 UTC m=+0.237026244 container health_status 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, distribution-scope=public, io.openshift.tags=minimal rhel9, name=ubi9-minimal, vendor=Red Hat, Inc., vcs-type=git, io.buildah.version=1.33.7, managed_by=edpm_ansible, container_name=openstack_network_exporter, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container) Dec 6 05:30:59 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:30:59.744 263652 INFO neutron.agent.dhcp.agent [None req-0ba61d5b-4921-4b96-a54b-dc004b1b0cee - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:30:59 localhost podman[338819]: 2025-12-06 10:30:59.751260639 +0000 UTC m=+0.258649438 container exec_died 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, 
name=openstack_network_exporter, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, architecture=x86_64, io.buildah.version=1.33.7, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, version=9.6, maintainer=Red Hat, Inc., config_id=edpm, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, distribution-scope=public, build-date=2025-08-20T13:12:41, name=ubi9-minimal, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}) Dec 6 05:30:59 localhost neutron_dhcp_agent[263648]: 2025-12-06 10:30:59.760 263652 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Dec 6 05:30:59 localhost systemd[1]: 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e.service: Deactivated successfully. Dec 6 05:31:00 localhost systemd[1]: var-lib-containers-storage-overlay-7ec5a68b272409c0780e9b34c01467356b790bdd39419926a1b87579a37fc65f-merged.mount: Deactivated successfully. Dec 6 05:31:00 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d68a3aac5ad9280c33e780fc50141c75b969f9713ad15986c7963925dcbf4e3a-userdata-shm.mount: Deactivated successfully. Dec 6 05:31:00 localhost systemd[1]: run-netns-qdhcp\x2d3ce5dfa7\x2dfde4\x2d49df\x2d8cf8\x2daff98cb18adf.mount: Deactivated successfully. 
Dec 6 05:31:01 localhost nova_compute[282193]: 2025-12-06 10:31:01.177 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:31:02 localhost nova_compute[282193]: 2025-12-06 10:31:02.613 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:31:02 localhost nova_compute[282193]: 2025-12-06 10:31:02.615 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:31:02 localhost nova_compute[282193]: 2025-12-06 10:31:02.615 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 6 05:31:02 localhost nova_compute[282193]: 2025-12-06 10:31:02.616 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:31:02 localhost nova_compute[282193]: 2025-12-06 10:31:02.656 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:31:02 localhost nova_compute[282193]: 2025-12-06 10:31:02.658 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:31:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6. 
Dec 6 05:31:02 localhost podman[338877]: 2025-12-06 10:31:02.918507335 +0000 UTC m=+0.082022977 container health_status 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125) Dec 6 05:31:02 localhost podman[338877]: 2025-12-06 10:31:02.934050262 +0000 UTC m=+0.097565844 container exec_died 
44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:31:02 localhost systemd[1]: 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6.service: Deactivated successfully. 
Dec 6 05:31:03 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:31:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538. Dec 6 05:31:05 localhost podman[338897]: 2025-12-06 10:31:05.924919378 +0000 UTC m=+0.085719152 container health_status d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Dec 6 05:31:05 localhost podman[338897]: 2025-12-06 10:31:05.933717458 +0000 UTC m=+0.094517272 container exec_died 
d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Dec 6 05:31:05 localhost systemd[1]: d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538.service: Deactivated successfully. 
Dec 6 05:31:07 localhost nova_compute[282193]: 2025-12-06 10:31:07.659 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.916 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'name': 'test', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'np0005548789.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'hostId': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228 Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.917 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.924 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.924 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.925 12 ERROR oslo_messaging.notify.messaging [-] 
Could not send notification to notifications. Payload={'message_id': '775f0c41-2cc3-4cdb-a824-8d04d062e89d', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:31:07.917121', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'ad10f3d2-d28e-11f0-aaf2-fa163e118844', 'monotonic_time': 13486.166449224, 'message_signature': '4a2f7a942ed623cdf75db3c2fd61369b2ae835cdf3e50f5f94f06ad13db67764'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:31:07.917121', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 
'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'ad10ffa8-d28e-11f0-aaf2-fa163e118844', 'monotonic_time': 13486.166449224, 'message_signature': '9e245cde37c4cc0fe03e2a0a1f4aa247497e91bcddf4d44a21618b2b5cf39d19'}]}, 'timestamp': '2025-12-06 10:31:07.924806', '_unique_id': '03f9762598d44edbb0f6558e9a72baff'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.925 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.925 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.925 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.925 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.925 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.925 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.925 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.925 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.925 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:31:07 
localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.925 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.925 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.925 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.925 12 ERROR oslo_messaging.notify.messaging Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.925 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.925 12 ERROR oslo_messaging.notify.messaging Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.925 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.925 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 
10:31:07.925 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.925 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.925 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.925 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.925 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:31:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.925 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.925 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.925 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.925 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.925 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.925 12 
ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.925 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.925 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.925 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.925 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.925 12 ERROR oslo_messaging.notify.messaging Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.926 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.929 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.incoming.packets volume: 60 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.929 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '560337ff-c0de-4966-b5da-08cf7248ca7f', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 60, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:31:07.926507', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': 'ad11aebc-d28e-11f0-aaf2-fa163e118844', 'monotonic_time': 13486.175849412, 'message_signature': 'f802640ec9976e1be59a82994db3b72676983db61dbab4c028e85bae5654b3ae'}]}, 'timestamp': '2025-12-06 10:31:07.929276', '_unique_id': 'a523e61d035e4eb6935ae68c8a15e620'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.929 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:31:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.929 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.929 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.929 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.929 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.929 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.929 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.929 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.929 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.929 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.929 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.929 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.929 12 ERROR oslo_messaging.notify.messaging Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.929 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.929 12 ERROR oslo_messaging.notify.messaging Dec 6 05:31:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.929 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.929 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.929 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.929 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.929 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 
05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.929 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.929 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.929 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.929 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.929 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:31:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.929 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.929 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.929 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.929 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.929 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.929 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.929 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.929 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.930 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.930 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.930 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '98409ea8-9698-43e0-8116-fa953a9370e4', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:31:07.930359', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': 'ad11e1ca-d28e-11f0-aaf2-fa163e118844', 'monotonic_time': 13486.175849412, 'message_signature': '6b09e2340de317ccf32f72013fa52a78b6cf81f03f714da5ddbfc55f029f54ec'}]}, 'timestamp': '2025-12-06 10:31:07.930570', '_unique_id': 'e2d6432194c149f68c4472cf1fe3bf86'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.930 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.930 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.930 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.930 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.930 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.930 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.930 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.930 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.930 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.930 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.930 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.930 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.930 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.930 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.930 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.930 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.930 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.930 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.930 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.930 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.930 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.930 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.930 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.930 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.930 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.930 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.930 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.930 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.930 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.930 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.930 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.930 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.931 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.931 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.932 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '09f7a8bf-fccc-410e-82f6-58ad00fe69ba', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:31:07.931554', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': 'ad121078-d28e-11f0-aaf2-fa163e118844', 'monotonic_time': 13486.175849412, 'message_signature': 'eab5c8ce225f0c8ee2c577810aff05269a32c19cd9a776734debeec2dded61a2'}]}, 'timestamp': '2025-12-06 10:31:07.931786', '_unique_id': 'f7321eab08314cfb971a82c84c4f9555'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.932 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.932 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.932 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.932 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.932 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.932 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.932 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.932 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.932 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.932 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.932 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.932 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.932 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.932 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.932 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.932 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.932 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.932 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.932 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.932 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.932 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.932 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.932 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.932 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.932 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.932 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.932 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.932 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.932 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.932 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.932 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.932 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.932 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.932 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.incoming.bytes volume: 6809 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.933 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'f96c3131-9897-4b68-918e-490016eaf7c9', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 6809, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:31:07.932822', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': 'ad124214-d28e-11f0-aaf2-fa163e118844', 'monotonic_time': 13486.175849412, 'message_signature': '52fcb853fd3ecc5d41f85ff03caa13410c3dd55b196985e50c385e32623e1523'}]}, 'timestamp': '2025-12-06 10:31:07.933038', '_unique_id': '003e551d0dd04fb89557ba53410f8c51'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.933 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.933 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.933 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.933 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.933 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.933 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.933 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.933 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.933 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.933 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.933 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.933 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.933 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.933 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.933 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.933 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.933 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.933 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.933 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.933 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.933 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.933 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.933 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.933 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.933 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.933 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.933 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.933 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.933 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.933 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.933 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.933 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.933 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.934 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.934 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications.
Payload={'message_id': 'e8503cda-83c1-4529-9ce9-ade6b682c23c', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:31:07.934013', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': 'ad127086-d28e-11f0-aaf2-fa163e118844', 'monotonic_time': 13486.175849412, 'message_signature': '2da75099d3340401c766a38484881ebe79a2f8ed8d0f76919353395c22491636'}]}, 'timestamp': '2025-12-06 10:31:07.934251', '_unique_id': '177a191b6f164a88b53ebd5b6c5df656'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.934 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:31:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.934 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.934 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.934 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.934 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.934 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.934 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.934 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.934 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.934 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.934 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.934 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.934 12 ERROR oslo_messaging.notify.messaging Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.934 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.934 12 ERROR oslo_messaging.notify.messaging Dec 6 05:31:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.934 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.934 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.934 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.934 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.934 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 
05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.934 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.934 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.934 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.934 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.934 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:31:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.934 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.934 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.934 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.934 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.934 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.934 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.934 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.934 12 ERROR oslo_messaging.notify.messaging Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:31:07.935 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.954 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.bytes volume: 35560448 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.955 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.bytes volume: 2154496 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.956 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '5bafb5fa-a862-4fe2-a9a7-0ff7c4e74a97', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 35560448, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:31:07.935214', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 
'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'ad159d7e-d28e-11f0-aaf2-fa163e118844', 'monotonic_time': 13486.184549049, 'message_signature': 'b4e34862be15baeb471edc43e474f55a7813785999c9e868ac9d6c78dd24b6a3'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 2154496, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:31:07.935214', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'ad15a7a6-d28e-11f0-aaf2-fa163e118844', 'monotonic_time': 13486.184549049, 'message_signature': '34a22aeb3263196390e206e9e020b06f9dfe5680a6ea2c97d56ae5c80f88ce82'}]}, 'timestamp': '2025-12-06 10:31:07.955301', '_unique_id': '568c04d7587144948e96d852d14cfcbc'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.956 12 ERROR oslo_messaging.notify.messaging 
Traceback (most recent call last): Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.956 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.956 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.956 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.956 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.956 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.956 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.956 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.956 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.956 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.956 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.956 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.956 12 ERROR oslo_messaging.notify.messaging Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.956 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.956 12 ERROR oslo_messaging.notify.messaging Dec 6 05:31:07 
localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.956 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.956 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.956 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.956 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.956 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection 
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.956 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.956 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.956 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.956 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.956 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:31:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.956 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.956 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.956 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.956 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.956 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.956 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.956 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.956 12 ERROR oslo_messaging.notify.messaging Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:31:07.956 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.957 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.957 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.957 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.957 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.958 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': 'b82d3ff0-cde2-4054-880f-5a1f38888134', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:31:07.957179', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'ad15f986-d28e-11f0-aaf2-fa163e118844', 'monotonic_time': 13486.166449224, 'message_signature': 'a78e772cf5fa37dccbc771c60973b3c0fc65658b67db31aa6163539119412739'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:31:07.957179', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'ad16012e-d28e-11f0-aaf2-fa163e118844', 'monotonic_time': 13486.166449224, 'message_signature': '07b5212ab75e826d956b818d14422060752650946c07b820144425369b3f0e1f'}]}, 'timestamp': '2025-12-06 10:31:07.957576', '_unique_id': '6a2571aa804a4c848a44c60dc325237c'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.958 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.958 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.958 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.958 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.958 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.958 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.958 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.958 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.958 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.958 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.958 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.958 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.958 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.958 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.958 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.958 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.958 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.958 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.958 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.958 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.958 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.958 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.958 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.958 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.958 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.958 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.958 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.958 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.958 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.958 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.958 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.958 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.958 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.958 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.959 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'd516ff0d-fc9c-4320-91aa-4603792aa29f', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:31:07.958624', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': 'ad16322a-d28e-11f0-aaf2-fa163e118844', 'monotonic_time': 13486.175849412, 'message_signature': '8e1f9dcb5733f4e11832ddda1362653999bfb3d63f71b776dee05ca39c93392b'}]}, 'timestamp': '2025-12-06 10:31:07.958860', '_unique_id': '152a4ea525dc4da0bcace3060a73b78a'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.959 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.959 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.959 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.959 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.959 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.959 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.959 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.959 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.959 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.959 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.959 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.959 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.959 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.959 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.959 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.959 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.959 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.959 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.959 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.959 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.959 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.959 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.959 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.959 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.959 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.959 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.959 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.959 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.959 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.959 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.959 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.959 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.959 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.970 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/cpu volume: 21340000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.972 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '6560707c-b25d-4a0a-9b63-f9586c00cc47', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 21340000000, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'timestamp': '2025-12-06T10:31:07.959833', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'cpu_number': 1}, 'message_id': 'ad180c44-d28e-11f0-aaf2-fa163e118844', 'monotonic_time': 13486.219738069, 'message_signature': 'd3a9c44c4bfced9ae22049d2e1c0db3842cc83257dd0cf134303ff55c33d2da6'}]}, 'timestamp': '2025-12-06 10:31:07.971143', '_unique_id': '7701cd13a78b42cfa261842e8644de6c'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.972 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.972 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.972 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.972 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.972 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.972 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.972 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.972 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.972 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.972 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.972 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.972 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.972 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.972 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.972 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.972 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.972 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.972 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.972 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.972 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.972 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.972 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.972 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.972 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.972 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.972 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.972 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.972 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.972 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.972 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.972 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.972 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.972 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.972 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.972 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.972 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.972 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.972 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.972 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.972 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.972 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.972 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.972 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.972 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.972 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.972 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.972 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.972 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.972 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.972 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.972 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.972 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.972 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.972 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.973 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.973 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.outgoing.bytes volume: 9770 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.975 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '2c0ee22f-9807-49cd-b78d-b6a1cbd2d69a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 9770, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:31:07.973619', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status':
'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': 'ad1881ec-d28e-11f0-aaf2-fa163e118844', 'monotonic_time': 13486.175849412, 'message_signature': '81aae15694f8dd36807140bccf4f9f484b6be743597bfaa396235eb87f27bda6'}]}, 'timestamp': '2025-12-06 10:31:07.974124', '_unique_id': 'db3396b47f4b4177a94cd4a90989b559'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.975 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.975 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.975 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.975 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.975 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.975 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.975 12 ERROR oslo_messaging.notify.messaging return 
fun(*args, **kwargs) Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.975 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.975 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.975 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.975 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.975 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.975 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.975 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.975 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.975 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.975 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.975 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.975 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.975 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.975 12 ERROR oslo_messaging.notify.messaging Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.975 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.975 12 ERROR oslo_messaging.notify.messaging Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.975 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.975 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.975 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.975 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.975 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.975 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.975 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.975 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.975 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.975 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.975 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.975 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.975 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.975 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.975 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.975 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.975 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.975 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.975 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.975 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.975 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.975 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.975 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.975 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.975 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.975 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 
05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.975 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.975 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.975 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.975 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.975 12 ERROR oslo_messaging.notify.messaging Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.976 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.976 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.outgoing.packets volume: 114 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.977 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '65ed5a95-3678-4d98-a268-2b2bfdcd33fd', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 114, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:31:07.976373', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': 'ad18ee8e-d28e-11f0-aaf2-fa163e118844', 'monotonic_time': 13486.175849412, 'message_signature': 'c12ce71cec64f2cc688ea1625deba56ccda66a92ba96f4f2ef222d9540dd996d'}]}, 'timestamp': '2025-12-06 10:31:07.976952', '_unique_id': 'aad346e3f153439a9ab96c94de14fb01'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.977 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:31:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.977 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.977 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.977 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.977 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.977 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.977 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.977 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.977 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.977 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.977 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.977 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.977 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.977 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.977 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.977 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.977 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.977 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.977 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.977 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.977 12 ERROR oslo_messaging.notify.messaging Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.977 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.977 12 ERROR oslo_messaging.notify.messaging Dec 6 05:31:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.977 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.977 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.977 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.977 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.977 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.977 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.977 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.977 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.977 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.977 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 
05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.977 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.977 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.977 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.977 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.977 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.977 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.977 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.977 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.977 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.977 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:31:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.977 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.977 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.977 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.977 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.977 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.977 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.977 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.977 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.977 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.977 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.977 12 ERROR oslo_messaging.notify.messaging Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:31:07.979 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.979 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.981 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '834a7dbc-2251-4cb5-b700-6413d6a5f914', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:31:07.979432', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 
'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': 'ad196440-d28e-11f0-aaf2-fa163e118844', 'monotonic_time': 13486.175849412, 'message_signature': 'e84ffeebf070043a762a7a14525ea7b314260fe2993593b54c30d80a9591387f'}]}, 'timestamp': '2025-12-06 10:31:07.979960', '_unique_id': '9a3294d9e59248c9a49bcb6fb27e560d'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.981 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.981 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.981 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.981 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.981 12 ERROR oslo_messaging.notify.messaging self._connection 
= self._establish_connection() Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.981 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.981 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.981 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.981 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.981 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.981 12 ERROR oslo_messaging.notify.messaging 
ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.981 12 ERROR oslo_messaging.notify.messaging Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.981 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.981 12 ERROR oslo_messaging.notify.messaging Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.981 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.981 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.981 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.981 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.981 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.981 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.981 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.981 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.981 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.981 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.981 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.981 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.981 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.981 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.981 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.981 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.981 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:31:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.981 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.981 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.981 12 ERROR oslo_messaging.notify.messaging Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.982 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.982 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.requests volume: 47 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.982 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.984 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '77d360b0-e76f-45ee-a2e1-bf0d2fd9d4ce', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 47, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:31:07.982504', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'ad19db78-d28e-11f0-aaf2-fa163e118844', 'monotonic_time': 13486.184549049, 'message_signature': '88752c2cbacfc900ff45a5238397248be4f2e8cc320b1bff3914a33ecb9cf4a8'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:31:07.982504', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'ad19ed3e-d28e-11f0-aaf2-fa163e118844', 'monotonic_time': 13486.184549049, 'message_signature': '7a8daa934729653b8ebaa7355700cb7691728b6a0029521da6938601ace1b2fb'}]}, 'timestamp': '2025-12-06 10:31:07.983410', '_unique_id': '04b41d4edd9143a6b232cce47674ab0a'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.984 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.984 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.984 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.984 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.984 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.984 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.984 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.984 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.984 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.984 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.984 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.984 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.984 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.984 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.984 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.984 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:31:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.984 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.984 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.984 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.984 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.984 12 ERROR oslo_messaging.notify.messaging Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.984 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.984 12 ERROR oslo_messaging.notify.messaging Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.984 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.984 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.984 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.984 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.984 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.984 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.984 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.984 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.984 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.984 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.984 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.984 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.984 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.984 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:31:07.984 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.984 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.984 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.984 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.984 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.984 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.984 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.984 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.984 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.984 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.984 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.984 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.984 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.984 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.984 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.984 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.984 12 ERROR oslo_messaging.notify.messaging Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.985 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.985 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.latency volume: 1252245154 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.986 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.latency volume: 27668224 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.987 12 ERROR oslo_messaging.notify.messaging 
[-] Could not send notification to notifications. Payload={'message_id': '800f285e-8c47-474e-ad42-9c446b5795c2', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1252245154, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:31:07.985658', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'ad1a5792-d28e-11f0-aaf2-fa163e118844', 'monotonic_time': 13486.184549049, 'message_signature': '9fb376f7d7c6514497f7a8b16642e553e9aa7c103d6a3906d3d4f848093e9e70'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 27668224, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:31:07.985658', 'resource_metadata': {'display_name': 'test', 'name': 
'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'ad1a69a8-d28e-11f0-aaf2-fa163e118844', 'monotonic_time': 13486.184549049, 'message_signature': '360ec3341ee197bca6dc928a7e5173e4087c429fd89b2c1a391c024735e2a949'}]}, 'timestamp': '2025-12-06 10:31:07.986576', '_unique_id': '90661cd375d043e78bb137a28739254b'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.987 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.987 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.987 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.987 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.987 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.987 12 
ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.987 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.987 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.987 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.987 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.987 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.987 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.987 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.987 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.987 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.987 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 
05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.987 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.987 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.987 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.987 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.987 12 ERROR oslo_messaging.notify.messaging Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.987 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.987 12 ERROR oslo_messaging.notify.messaging Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.987 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.987 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.987 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.987 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:31:07.987 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.987 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.987 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.987 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.987 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.987 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.987 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.987 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.987 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.987 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.987 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.987 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.987 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.987 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.987 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.987 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.987 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.987 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.987 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.987 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.987 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.987 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.987 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.987 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.987 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.987 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.987 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.988 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.988 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.requests volume: 1272 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.989 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.990 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '74061940-5d7a-46f4-a732-f1cd1a24b58b', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1272, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:31:07.988797', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'ad1ad122-d28e-11f0-aaf2-fa163e118844', 'monotonic_time': 13486.184549049, 'message_signature': '2e5028532a5ea915b0885b6e17411a5b4e65e9b67c719b04e63e15b5f176db35'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 124, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:31:07.988797', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'ad1ae0f4-d28e-11f0-aaf2-fa163e118844', 'monotonic_time': 13486.184549049, 'message_signature': 'e0f49567bc45426630b3b91b5ccfac970de1511017f4c262871d548d5d86c15f'}]}, 'timestamp': '2025-12-06 10:31:07.989660', '_unique_id': 'd3cc75a9dad8444789c0f6fe28850d52'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.990 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.990 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.990 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.990 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.990 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.990 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.990 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.990 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.990 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.990 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.990 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.990 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.990 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.990 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.990 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.990 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.990 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.990 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.990 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.990 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.990 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.990 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.990 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.990 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.990 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.990 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.990 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.990 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.990 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.990 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.990 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.990 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.990 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.990 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.990 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.990 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.990 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.990 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.990 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.990 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.990 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.990 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.990 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.990 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.990 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.990 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.990 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.990 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.990 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.990 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.990 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.990 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.990 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.990 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.991 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.991 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.992 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.993 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '913ec8f8-d78d-4b42-9d4f-5859a2be69bd', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'instance-00000002-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-tap86fc0b7a-fb', 'timestamp': '2025-12-06T10:31:07.992062', 'resource_metadata': {'display_name': 'test', 'name': 'tap86fc0b7a-fb', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'mac': 'fa:16:3e:64:77:f3', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tap86fc0b7a-fb'}, 'message_id': 'ad1b5084-d28e-11f0-aaf2-fa163e118844', 'monotonic_time': 13486.175849412, 'message_signature': '3a4b90f6df8b649ea6f51f121badd7ca8660e5e7b1c7ca3c146ab46bd55b55a1'}]}, 'timestamp': '2025-12-06 10:31:07.992543', '_unique_id': '342d8da976354a3f9ab60fe265bd70ef'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.993 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.993 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.993 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.993 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.993 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.993 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.993 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.993 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.993 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.993 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.993 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.993 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.993 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.993 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.993 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.993 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.993 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version,
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.993 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message,
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.993 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.993 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool,
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.993 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry)
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.993 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry)
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.993 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.993 12 ERROR oslo_messaging.notify.messaging self.ensure_connection()
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.993 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection(
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.993 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs)
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.993 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.993 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback)
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.993 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.993 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.993 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.993 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.994 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.994 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.latency volume: 1525105336 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.994 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.read.latency volume: 106716064 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.995 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'a8906853-560d-405a-b492-ea5c8cbee71d', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1525105336, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:31:07.994141', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'ad1b9ec2-d28e-11f0-aaf2-fa163e118844', 'monotonic_time': 13486.184549049, 'message_signature': 'e33f624d0764844c1081e0453b363408b8f8845708d008281faf306ca18b2b40'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 106716064, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:31:07.994141', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'ad1ba962-d28e-11f0-aaf2-fa163e118844', 'monotonic_time': 13486.184549049, 'message_signature': '2c8813068f80d363ae88c3b1724e02ba9c08dfd214afa8ee91dff01cb2bd0839'}]}, 'timestamp': '2025-12-06 10:31:07.994692', '_unique_id': '87a989f534f94596bdee918bb10fa6cf'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.995 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.995 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.995 12 ERROR oslo_messaging.notify.messaging yield
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.995 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.995 12 ERROR oslo_messaging.notify.messaging return retry_over_time(
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.995 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.995 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs)
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.995 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.995 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection()
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.995 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.995 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection()
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.995 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.995 12 ERROR oslo_messaging.notify.messaging conn.connect()
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.995 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.995 12 ERROR oslo_messaging.notify.messaging self.transport.connect()
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.995 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.995 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout)
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.995 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.995 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa)
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.995 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.995 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.995 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.995 12 ERROR oslo_messaging.notify.messaging
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.995 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.995 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.995 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message,
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.995 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.995 12
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.995 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.995 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.995 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.995 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.995 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.995 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.995 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.995 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.995 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:31:07.995 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.995 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.995 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.995 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.995 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.995 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.995 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.995 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.995 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.995 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.995 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.995 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.995 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.995 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.995 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.995 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.995 12 ERROR oslo_messaging.notify.messaging Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.996 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.996 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.bytes volume: 389120 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.996 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.997 12 ERROR oslo_messaging.notify.messaging [-] Could not 
send notification to notifications. Payload={'message_id': '0d721893-7405-48c5-8f3e-9bc5f85d52cf', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 389120, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:31:07.996132', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'ad1bec7e-d28e-11f0-aaf2-fa163e118844', 'monotonic_time': 13486.184549049, 'message_signature': '2b6472bbab95cf07745e2628f724758a96e65abdec8a81607ede934f08bedb20'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 512, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:31:07.996132', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'ad1bf7a0-d28e-11f0-aaf2-fa163e118844', 'monotonic_time': 13486.184549049, 'message_signature': '5b359e0a731d09a126d82fc5edfdab8076fababb920bb7f3dfa33e7d4324c0cd'}]}, 'timestamp': '2025-12-06 10:31:07.996694', '_unique_id': 'cb6044a385e14408ac61c4d02f1ac1f2'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.997 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.997 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.997 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.997 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.997 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.997 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.997 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.997 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.997 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:31:07 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.997 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.997 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.997 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.997 12 ERROR oslo_messaging.notify.messaging Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.997 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.997 12 ERROR oslo_messaging.notify.messaging Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.997 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.997 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.997 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.997 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.997 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.997 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.997 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:31:07.997 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.997 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.997 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.997 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.997 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.997 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.997 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.997 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.997 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.997 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.997 12 ERROR oslo_messaging.notify.messaging Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.998 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.998 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:31:07 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.998 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.999 12 ERROR oslo_messaging.notify.messaging [-] Could not send 
notification to notifications. Payload={'message_id': '1c19f284-44e6-43ed-912a-ed0c6dac6b19', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vda', 'timestamp': '2025-12-06T10:31:07.998173', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vda'}, 'message_id': 'ad1c3c60-d28e-11f0-aaf2-fa163e118844', 'monotonic_time': 13486.166449224, 'message_signature': 'fdb3f0e55fffad2ef706b97334e4b263e91b0b13799adf32cfe1d5c8c049febf'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa-vdb', 'timestamp': '2025-12-06T10:31:07.998173', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 
'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0, 'disk_name': 'vdb'}, 'message_id': 'ad1c46ec-d28e-11f0-aaf2-fa163e118844', 'monotonic_time': 13486.166449224, 'message_signature': '599781b5c2daf5ad1b64665fe7479bced5e286260736c54c757ceb042c732f3e'}]}, 'timestamp': '2025-12-06 10:31:07.998739', '_unique_id': '0e46dcb292514842a95662aa4ee0b7b0'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.999 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.999 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.999 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.999 12 ERROR oslo_messaging.notify.messaging 
File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.999 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.999 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.999 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.999 12 ERROR oslo_messaging.notify.messaging conn.connect() Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.999 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:31:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.999 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.999 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.999 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.999 12 ERROR oslo_messaging.notify.messaging Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.999 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.999 12 ERROR oslo_messaging.notify.messaging Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.999 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.999 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.999 12 
ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.999 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.999 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.999 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.999 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 
2025-12-06 10:31:07.999 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.999 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.999 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.999 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.999 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.999 12 ERROR oslo_messaging.notify.messaging 
return retry_over_time( Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.999 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.999 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.999 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.999 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:07.999 12 ERROR oslo_messaging.notify.messaging Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:08.000 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no new resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:08.000 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:08.000 12 DEBUG ceilometer.compute.pollsters [-] b7ed0a2e-9350-4933-9334-4e5e08d3e6aa/memory.usage volume: 51.80859375 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:08.001 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '139f1451-f2c3-4024-8e60-ed2d8d93dc6e', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 51.80859375, 'user_id': 'ff0049f3313348bdb67886d170c1c765', 'user_name': None, 'project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'project_name': None, 'resource_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'timestamp': '2025-12-06T10:31:08.000448', 'resource_metadata': {'display_name': 'test', 'name': 'instance-00000002', 'instance_id': 'b7ed0a2e-9350-4933-9334-4e5e08d3e6aa', 'instance_type': 'm1.small', 'host': '2da4d478d8e46de98c9126bc3117f8e5f52968a2b8b7dab159eddaf1', 'instance_host': 'np0005548789.localdomain', 'flavor': {'id': '3b9dcd46-fa1b-4714-ba2b-665da2f67af6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'e0d06706-da90-478a-9829-34b75a3ce049'}, 'image_ref': 'e0d06706-da90-478a-9829-34b75a3ce049', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 512, 'disk_gb': 1, 'ephemeral_gb': 1, 'root_gb': 0}, 'message_id': 'ad1c9836-d28e-11f0-aaf2-fa163e118844', 'monotonic_time': 13486.219738069, 'message_signature': '604c18ee5858edf9bf6324a73166860d0faf061c24bb0e36f3607aea3f0734d1'}]}, 'timestamp': '2025-12-06 10:31:08.000841', '_unique_id': 'ec2ff55762174195a96ba994f57a89b8'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:08.001 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in 
_reraise_as_library_errors Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:08.001 12 ERROR oslo_messaging.notify.messaging yield Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:08.001 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:08.001 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:08.001 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:08.001 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:08.001 12 ERROR 
oslo_messaging.notify.messaging conn.connect() Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:08.001 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:08.001 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:08.001 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:08.001 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:08.001 12 ERROR oslo_messaging.notify.messaging Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:08.001 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:08.001 12 ERROR oslo_messaging.notify.messaging Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:08.001 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:08.001 
12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:08.001 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:08.001 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:08.001 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:08.001 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:08.001 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Dec 6 05:31:08 localhost 
ceilometer_agent_compute[238351]: 2025-12-06 10:31:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:08.001 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:08.001 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:08.001 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:08.001 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:08.001 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 
10:31:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:08.001 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:08.001 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:08.001 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:08.001 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:08.001 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:08.001 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Dec 6 05:31:08 localhost ceilometer_agent_compute[238351]: 2025-12-06 10:31:08.001 12 ERROR oslo_messaging.notify.messaging Dec 6 05:31:08 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:31:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5. Dec 6 05:31:10 localhost podman[338920]: 2025-12-06 10:31:10.918217479 +0000 UTC m=+0.079019485 container health_status 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3) Dec 6 05:31:10 localhost podman[338920]: 2025-12-06 10:31:10.981925264 +0000 UTC m=+0.142727230 container exec_died 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, 
org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Dec 6 05:31:10 localhost systemd[1]: 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.service: Deactivated successfully. 
Dec 6 05:31:12 localhost nova_compute[282193]: 2025-12-06 10:31:12.662 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:31:12 localhost nova_compute[282193]: 2025-12-06 10:31:12.664 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:31:12 localhost nova_compute[282193]: 2025-12-06 10:31:12.664 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 6 05:31:12 localhost nova_compute[282193]: 2025-12-06 10:31:12.664 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:31:12 localhost nova_compute[282193]: 2025-12-06 10:31:12.695 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:31:12 localhost nova_compute[282193]: 2025-12-06 10:31:12.696 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:31:13 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:31:16 localhost openstack_network_exporter[243110]: ERROR 10:31:16 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 6 05:31:16 localhost openstack_network_exporter[243110]: ERROR 10:31:16 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:31:16 localhost openstack_network_exporter[243110]: ERROR 10:31:16 appctl.go:144: Failed to get PID for ovn-northd: no control 
socket files found for ovn-northd Dec 6 05:31:16 localhost openstack_network_exporter[243110]: ERROR 10:31:16 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 6 05:31:16 localhost openstack_network_exporter[243110]: Dec 6 05:31:16 localhost openstack_network_exporter[243110]: ERROR 10:31:16 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 6 05:31:16 localhost openstack_network_exporter[243110]: Dec 6 05:31:17 localhost ceph-mon[298582]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Dec 6 05:31:17 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' Dec 6 05:31:17 localhost nova_compute[282193]: 2025-12-06 10:31:17.697 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:31:17 localhost nova_compute[282193]: 2025-12-06 10:31:17.735 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:31:17 localhost nova_compute[282193]: 2025-12-06 10:31:17.736 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5039 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 6 05:31:17 localhost nova_compute[282193]: 2025-12-06 10:31:17.736 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:31:17 localhost nova_compute[282193]: 2025-12-06 10:31:17.737 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:31:17 localhost nova_compute[282193]: 2025-12-06 10:31:17.738 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] 
on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:31:18 localhost ceph-mon[298582]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' Dec 6 05:31:18 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:31:22 localhost nova_compute[282193]: 2025-12-06 10:31:22.739 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:31:22 localhost nova_compute[282193]: 2025-12-06 10:31:22.741 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:31:22 localhost nova_compute[282193]: 2025-12-06 10:31:22.741 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 6 05:31:22 localhost nova_compute[282193]: 2025-12-06 10:31:22.741 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:31:22 localhost nova_compute[282193]: 2025-12-06 10:31:22.770 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:31:22 localhost nova_compute[282193]: 2025-12-06 10:31:22.771 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:31:23 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:31:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999. Dec 6 05:31:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941. Dec 6 05:31:23 localhost podman[241090]: time="2025-12-06T10:31:23Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Dec 6 05:31:23 localhost systemd[1]: tmp-crun.zR8VFh.mount: Deactivated successfully. Dec 6 05:31:23 localhost podman[339031]: 2025-12-06 10:31:23.925546898 +0000 UTC m=+0.080017805 container health_status 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_managed=true, org.label-schema.build-date=20251125, managed_by=edpm_ansible) Dec 6 05:31:24 localhost podman[339031]: 2025-12-06 10:31:24.011158956 +0000 UTC m=+0.165629843 container exec_died 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, 
io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_managed=true) Dec 6 05:31:24 localhost systemd[1]: 5a95a31a3d61d2f948a3383baa17e8fe9e0336cbbe8cd1193992f8830dfe5999.service: Deactivated successfully. Dec 6 05:31:24 localhost podman[339032]: 2025-12-06 10:31:23.980135224 +0000 UTC m=+0.134306462 container health_status b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Dec 6 05:31:24 localhost podman[241090]: @ - - [06/Dec/2025:10:31:23 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156104 "" "Go-http-client/1.1" Dec 6 05:31:24 localhost podman[339032]: 2025-12-06 10:31:24.114677392 +0000 UTC m=+0.268848680 container exec_died b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941 
(image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Dec 6 05:31:24 localhost podman[241090]: @ - - [06/Dec/2025:10:31:24 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19269 "" "Go-http-client/1.1" Dec 6 05:31:24 localhost systemd[1]: b72b7f6308415cf6475230cb9b0cbf85f87f74051929730ede4e25c3d87f5941.service: Deactivated successfully. 
Dec 6 05:31:27 localhost nova_compute[282193]: 2025-12-06 10:31:27.772 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:31:27 localhost nova_compute[282193]: 2025-12-06 10:31:27.774 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:31:27 localhost nova_compute[282193]: 2025-12-06 10:31:27.774 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 6 05:31:27 localhost nova_compute[282193]: 2025-12-06 10:31:27.775 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:31:27 localhost nova_compute[282193]: 2025-12-06 10:31:27.823 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:31:27 localhost nova_compute[282193]: 2025-12-06 10:31:27.824 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:31:28 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:31:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e. Dec 6 05:31:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094. 
Dec 6 05:31:29 localhost podman[339072]: 2025-12-06 10:31:29.917624036 +0000 UTC m=+0.076910960 container health_status bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Dec 6 05:31:29 localhost podman[339072]: 2025-12-06 10:31:29.927284953 +0000 UTC m=+0.086571877 container exec_died 
bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3) Dec 6 05:31:29 localhost systemd[1]: bb7a9bcbc6b5241be8139843cc74262124ff60e2712fcc5adc78da15e3143094.service: Deactivated successfully. 
Dec 6 05:31:29 localhost ovn_controller[154851]: 2025-12-06T10:31:29Z|00537|memory_trim|INFO|Detected inactivity (last active 30014 ms ago): trimming memory Dec 6 05:31:29 localhost podman[339071]: 2025-12-06 10:31:29.968485847 +0000 UTC m=+0.129906047 container health_status 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, distribution-scope=public, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, release=1755695350, io.openshift.expose-services=, vcs-type=git, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, managed_by=edpm_ansible, config_id=edpm) Dec 6 05:31:29 localhost podman[339071]: 2025-12-06 10:31:29.981237108 +0000 UTC m=+0.142657338 container exec_died 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, build-date=2025-08-20T13:12:41, release=1755695350, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, vcs-type=git, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, config_id=edpm, distribution-scope=public, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}) Dec 6 05:31:29 localhost systemd[1]: 10ccb8cb18db7849aa21674e6bb5b9d3988304a3a1061b356419125284408a6e.service: Deactivated successfully. 
Dec 6 05:31:32 localhost nova_compute[282193]: 2025-12-06 10:31:32.825 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:31:32 localhost nova_compute[282193]: 2025-12-06 10:31:32.827 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:31:32 localhost nova_compute[282193]: 2025-12-06 10:31:32.827 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 6 05:31:32 localhost nova_compute[282193]: 2025-12-06 10:31:32.827 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:31:32 localhost nova_compute[282193]: 2025-12-06 10:31:32.862 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:31:32 localhost nova_compute[282193]: 2025-12-06 10:31:32.863 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:31:33 localhost sshd[339111]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:31:33 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:31:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6. Dec 6 05:31:33 localhost systemd-logind[766]: New session 79 of user zuul. Dec 6 05:31:33 localhost systemd[1]: Started Session 79 of User zuul. 
Dec 6 05:31:33 localhost podman[339113]: 2025-12-06 10:31:33.607956649 +0000 UTC m=+0.092836239 container health_status 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd) Dec 6 05:31:33 localhost podman[339113]: 2025-12-06 10:31:33.619796763 +0000 UTC m=+0.104676393 container exec_died 
44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Dec 6 05:31:33 localhost systemd[1]: 44e8baf5fd6841d0aefb15fe2359de24b840fa2fbe07d09ba5f718006ae6ace6.service: Deactivated successfully. 
Dec 6 05:31:33 localhost python3[339153]: ansible-ansible.legacy.command Invoked with _raw_params=subscription-manager unregister#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-7de2-0762-00000000000c-1-overcloudnovacompute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Dec 6 05:31:34 localhost sshd[339156]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:31:34 localhost sshd[339158]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:31:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538. Dec 6 05:31:36 localhost podman[339160]: 2025-12-06 10:31:36.284856282 +0000 UTC m=+0.087471915 container health_status d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck 
node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Dec 6 05:31:36 localhost podman[339160]: 2025-12-06 10:31:36.298309085 +0000 UTC m=+0.100924718 container exec_died d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Dec 6 05:31:36 localhost systemd[1]: d25404de82bf33c2d8ac6199349771ccea807f639a7289be7b7e445bb6613538.service: Deactivated successfully. Dec 6 05:31:37 localhost ceph-mon[298582]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #73. 
Immutable memtables: 0. Dec 6 05:31:37 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:31:37.581944) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Dec 6 05:31:37 localhost ceph-mon[298582]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 73 Dec 6 05:31:37 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765017097581988, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 1135, "num_deletes": 252, "total_data_size": 1870586, "memory_usage": 2068016, "flush_reason": "Manual Compaction"} Dec 6 05:31:37 localhost ceph-mon[298582]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #74: started Dec 6 05:31:37 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765017097591866, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 74, "file_size": 1214849, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 41093, "largest_seqno": 42223, "table_properties": {"data_size": 1210299, "index_size": 2149, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10698, "raw_average_key_size": 20, "raw_value_size": 1200818, "raw_average_value_size": 2291, "num_data_blocks": 96, "num_entries": 524, "num_filter_entries": 524, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; 
strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765017022, "oldest_key_time": 1765017022, "file_creation_time": 1765017097, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1ec2a3a6-c7d0-4f1e-9923-5a3b782725ae", "db_session_id": "JMBO5KX1IJCJ8FWC64EX", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}} Dec 6 05:31:37 localhost ceph-mon[298582]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 9974 microseconds, and 4412 cpu microseconds. Dec 6 05:31:37 localhost ceph-mon[298582]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Dec 6 05:31:37 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:31:37.591917) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #74: 1214849 bytes OK Dec 6 05:31:37 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:31:37.591946) [db/memtable_list.cc:519] [default] Level-0 commit table #74 started Dec 6 05:31:37 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:31:37.594161) [db/memtable_list.cc:722] [default] Level-0 commit table #74: memtable #1 done Dec 6 05:31:37 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:31:37.594185) EVENT_LOG_v1 {"time_micros": 1765017097594178, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Dec 6 05:31:37 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:31:37.594207) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Dec 6 05:31:37 localhost ceph-mon[298582]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 1865029, prev total WAL 
file size 1865029, number of live WAL files 2. Dec 6 05:31:37 localhost ceph-mon[298582]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548789/store.db/000070.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 6 05:31:37 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:31:37.595118) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003133333033' seq:72057594037927935, type:22 .. '7061786F73003133353535' seq:0, type:0; will stop at (end) Dec 6 05:31:37 localhost ceph-mon[298582]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00 Dec 6 05:31:37 localhost ceph-mon[298582]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [74(1186KB)], [72(18MB)] Dec 6 05:31:37 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765017097595176, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [74], "files_L6": [72], "score": -1, "input_data_size": 20844536, "oldest_snapshot_seqno": -1} Dec 6 05:31:37 localhost ceph-mon[298582]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #75: 14535 keys, 19226756 bytes, temperature: kUnknown Dec 6 05:31:37 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765017097697591, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 75, "file_size": 19226756, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 19145609, "index_size": 43734, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 36357, "raw_key_size": 392178, "raw_average_key_size": 26, "raw_value_size": 
18900741, "raw_average_value_size": 1300, "num_data_blocks": 1601, "num_entries": 14535, "num_filter_entries": 14535, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765015623, "oldest_key_time": 0, "file_creation_time": 1765017097, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1ec2a3a6-c7d0-4f1e-9923-5a3b782725ae", "db_session_id": "JMBO5KX1IJCJ8FWC64EX", "orig_file_number": 75, "seqno_to_time_mapping": "N/A"}} Dec 6 05:31:37 localhost ceph-mon[298582]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Dec 6 05:31:37 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:31:37.698021) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 19226756 bytes Dec 6 05:31:37 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:31:37.699748) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 203.3 rd, 187.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 18.7 +0.0 blob) out(18.3 +0.0 blob), read-write-amplify(33.0) write-amplify(15.8) OK, records in: 15063, records dropped: 528 output_compression: NoCompression Dec 6 05:31:37 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:31:37.699814) EVENT_LOG_v1 {"time_micros": 1765017097699797, "job": 44, "event": "compaction_finished", "compaction_time_micros": 102507, "compaction_time_cpu_micros": 51879, "output_level": 6, "num_output_files": 1, "total_output_size": 19226756, "num_input_records": 15063, "num_output_records": 14535, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Dec 6 05:31:37 localhost ceph-mon[298582]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548789/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 6 05:31:37 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765017097700197, "job": 44, "event": "table_file_deletion", "file_number": 74} Dec 6 05:31:37 localhost ceph-mon[298582]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548789/store.db/000072.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Dec 6 05:31:37 localhost ceph-mon[298582]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765017097702951, "job": 
44, "event": "table_file_deletion", "file_number": 72} Dec 6 05:31:37 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:31:37.595004) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:31:37 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:31:37.702986) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:31:37 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:31:37.702993) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:31:37 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:31:37.702996) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:31:37 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:31:37.702999) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:31:37 localhost ceph-mon[298582]: rocksdb: (Original Log Time 2025/12/06-10:31:37.703002) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Dec 6 05:31:37 localhost nova_compute[282193]: 2025-12-06 10:31:37.864 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:31:37 localhost nova_compute[282193]: 2025-12-06 10:31:37.901 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:31:37 localhost nova_compute[282193]: 2025-12-06 10:31:37.902 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5038 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 6 05:31:37 localhost nova_compute[282193]: 2025-12-06 10:31:37.902 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition 
/usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:31:37 localhost nova_compute[282193]: 2025-12-06 10:31:37.903 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:31:37 localhost nova_compute[282193]: 2025-12-06 10:31:37.903 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:31:38 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:31:39 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Dec 6 05:31:39 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1455173782' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Dec 6 05:31:39 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Dec 6 05:31:39 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1455173782' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Dec 6 05:31:39 localhost systemd[1]: session-79.scope: Deactivated successfully. Dec 6 05:31:39 localhost systemd-logind[766]: Session 79 logged out. Waiting for processes to exit. Dec 6 05:31:39 localhost systemd-logind[766]: Removed session 79. Dec 6 05:31:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5. Dec 6 05:31:41 localhost systemd[1]: tmp-crun.6YDqm6.mount: Deactivated successfully. 
Dec 6 05:31:41 localhost podman[339184]: 2025-12-06 10:31:41.93118899 +0000 UTC m=+0.096171321 container health_status 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_controller, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Dec 6 05:31:41 localhost podman[339184]: 2025-12-06 10:31:41.965224915 +0000 UTC m=+0.130207306 container exec_died 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Dec 6 05:31:41 localhost systemd[1]: 0ecfa85b1bac9a037a1ec468c5f8efd50bfc5b1966b6d96b971519ffca6a66f5.service: Deactivated successfully. 
Dec 6 05:31:42 localhost nova_compute[282193]: 2025-12-06 10:31:42.904 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4996-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:31:42 localhost nova_compute[282193]: 2025-12-06 10:31:42.906 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:31:42 localhost nova_compute[282193]: 2025-12-06 10:31:42.906 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 6 05:31:42 localhost nova_compute[282193]: 2025-12-06 10:31:42.906 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:31:42 localhost nova_compute[282193]: 2025-12-06 10:31:42.925 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:31:42 localhost nova_compute[282193]: 2025-12-06 10:31:42.926 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:31:43 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:31:44 localhost nova_compute[282193]: 2025-12-06 10:31:44.216 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:31:45 localhost nova_compute[282193]: 2025-12-06 10:31:45.182 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - 
- - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:31:45 localhost nova_compute[282193]: 2025-12-06 10:31:45.222 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 05:31:45 localhost nova_compute[282193]: 2025-12-06 10:31:45.223 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 05:31:45 localhost nova_compute[282193]: 2025-12-06 10:31:45.223 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 05:31:45 localhost nova_compute[282193]: 2025-12-06 10:31:45.223 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Auditing locally available compute resources for np0005548789.localdomain (node: np0005548789.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Dec 6 05:31:45 localhost nova_compute[282193]: 2025-12-06 10:31:45.224 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute 
/usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 6 05:31:45 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 6 05:31:45 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/1627710630' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 6 05:31:45 localhost nova_compute[282193]: 2025-12-06 10:31:45.644 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 6 05:31:45 localhost nova_compute[282193]: 2025-12-06 10:31:45.726 282197 DEBUG nova.virt.libvirt.driver [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 6 05:31:45 localhost nova_compute[282193]: 2025-12-06 10:31:45.726 282197 DEBUG nova.virt.libvirt.driver [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m Dec 6 05:31:45 localhost nova_compute[282193]: 2025-12-06 10:31:45.922 282197 WARNING nova.virt.libvirt.driver [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Dec 6 05:31:45 localhost nova_compute[282193]: 2025-12-06 10:31:45.924 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Hypervisor/Node resource view: name=np0005548789.localdomain free_ram=11097MB free_disk=41.83699035644531GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": 
"1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Dec 6 05:31:45 localhost nova_compute[282193]: 2025-12-06 10:31:45.924 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 05:31:45 localhost nova_compute[282193]: 2025-12-06 10:31:45.924 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 05:31:46 localhost nova_compute[282193]: 2025-12-06 10:31:46.028 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Instance b7ed0a2e-9350-4933-9334-4e5e08d3e6aa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m Dec 6 05:31:46 localhost nova_compute[282193]: 2025-12-06 10:31:46.029 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Dec 6 05:31:46 localhost nova_compute[282193]: 2025-12-06 10:31:46.029 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Final resource view: name=np0005548789.localdomain phys_ram=15738MB used_ram=1024MB phys_disk=41GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Dec 6 05:31:46 localhost nova_compute[282193]: 2025-12-06 10:31:46.087 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Dec 6 05:31:46 localhost ceph-mon[298582]: mon.np0005548789@1(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Dec 6 05:31:46 localhost ceph-mon[298582]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.107:0/3484471563' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Dec 6 05:31:46 localhost nova_compute[282193]: 2025-12-06 10:31:46.551 282197 DEBUG oslo_concurrency.processutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Dec 6 05:31:46 localhost nova_compute[282193]: 2025-12-06 10:31:46.555 282197 DEBUG nova.compute.provider_tree [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Inventory has not changed in ProviderTree for provider: 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Dec 6 05:31:46 localhost nova_compute[282193]: 2025-12-06 10:31:46.574 282197 DEBUG nova.scheduler.client.report [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Inventory has not changed for provider 0d33e88e-6335-4a94-8f21-32ba5b8bb7ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Dec 6 05:31:46 localhost nova_compute[282193]: 2025-12-06 10:31:46.575 282197 DEBUG nova.compute.resource_tracker [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Compute_service record updated for np0005548789.localdomain:np0005548789.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Dec 6 05:31:46 localhost nova_compute[282193]: 2025-12-06 10:31:46.575 282197 DEBUG 
oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.651s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 05:31:46 localhost openstack_network_exporter[243110]: ERROR 10:31:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:31:46 localhost openstack_network_exporter[243110]: ERROR 10:31:46 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Dec 6 05:31:46 localhost openstack_network_exporter[243110]: ERROR 10:31:46 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Dec 6 05:31:46 localhost openstack_network_exporter[243110]: ERROR 10:31:46 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Dec 6 05:31:46 localhost openstack_network_exporter[243110]: Dec 6 05:31:46 localhost openstack_network_exporter[243110]: ERROR 10:31:46 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Dec 6 05:31:46 localhost openstack_network_exporter[243110]: Dec 6 05:31:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:31:47.347 160509 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Dec 6 05:31:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:31:47.348 160509 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Dec 6 05:31:47 localhost ovn_metadata_agent[160504]: 2025-12-06 10:31:47.348 
160509 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Dec 6 05:31:47 localhost nova_compute[282193]: 2025-12-06 10:31:47.927 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4996-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:31:47 localhost nova_compute[282193]: 2025-12-06 10:31:47.930 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Dec 6 05:31:47 localhost nova_compute[282193]: 2025-12-06 10:31:47.930 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m Dec 6 05:31:47 localhost nova_compute[282193]: 2025-12-06 10:31:47.930 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:31:47 localhost nova_compute[282193]: 2025-12-06 10:31:47.943 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Dec 6 05:31:47 localhost nova_compute[282193]: 2025-12-06 10:31:47.944 282197 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Dec 6 05:31:48 localhost ceph-mon[298582]: mon.np0005548789@1(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Dec 6 05:31:48 localhost nova_compute[282193]: 2025-12-06 10:31:48.575 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task 
ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:31:48 localhost nova_compute[282193]: 2025-12-06 10:31:48.576 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Dec 6 05:31:48 localhost nova_compute[282193]: 2025-12-06 10:31:48.576 282197 DEBUG nova.compute.manager [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Dec 6 05:31:48 localhost nova_compute[282193]: 2025-12-06 10:31:48.885 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquiring lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Dec 6 05:31:48 localhost nova_compute[282193]: 2025-12-06 10:31:48.886 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Acquired lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Dec 6 05:31:48 localhost nova_compute[282193]: 2025-12-06 10:31:48.886 282197 DEBUG nova.network.neutron [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m Dec 6 05:31:48 localhost nova_compute[282193]: 2025-12-06 10:31:48.887 282197 DEBUG nova.objects.instance [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b7ed0a2e-9350-4933-9334-4e5e08d3e6aa obj_load_attr 
/usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Dec 6 05:31:49 localhost nova_compute[282193]: 2025-12-06 10:31:49.326 282197 DEBUG nova.network.neutron [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Updating instance_info_cache with network_info: [{"id": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "address": "fa:16:3e:64:77:f3", "network": {"id": "652b6bdc-40ce-45b7-8aa5-3bca79987993", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.162", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.20", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "192.168.0.3"}}], "meta": {"injected": false, "tenant_id": "3d603431c0bb4967bafc7a0aa6108bfe", "mtu": 1292, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86fc0b7a-fb", "ovs_interfaceid": "86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Dec 6 05:31:49 localhost nova_compute[282193]: 2025-12-06 10:31:49.343 282197 DEBUG oslo_concurrency.lockutils [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Releasing lock "refresh_cache-b7ed0a2e-9350-4933-9334-4e5e08d3e6aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Dec 6 05:31:49 localhost nova_compute[282193]: 2025-12-06 10:31:49.344 282197 DEBUG nova.compute.manager [None 
req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] [instance: b7ed0a2e-9350-4933-9334-4e5e08d3e6aa] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m Dec 6 05:31:51 localhost nova_compute[282193]: 2025-12-06 10:31:51.181 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:31:51 localhost nova_compute[282193]: 2025-12-06 10:31:51.181 282197 DEBUG oslo_service.periodic_task [None req-fd40418a-ae44-4351-b144-9b83228b14b4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Dec 6 05:31:52 localhost sshd[339253]: main: sshd: ssh-rsa algorithm is disabled Dec 6 05:31:52 localhost systemd-logind[766]: New session 80 of user zuul. Dec 6 05:31:52 localhost systemd[1]: Started Session 80 of User zuul.